Package ‘OSNMTF’
October 12, 2022
Type Package
Title Orthogonal Sparse Non-Negative Matrix Tri-Factorization
Version 0.1.0
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description A novel method to implement cancer subtyping and subtype-specific drug target identification
via non-negative matrix tri-factorization. To improve the interpretability, we introduce an orthogonal
constraint to the row coefficient matrix and the column coefficient matrix. To meet the prior knowledge
that each subtype should be strongly associated with few gene sets, we introduce a sparsity constraint
to the association sub-matrix. The average residue was introduced to evaluate the row and column cluster
numbers. This is part of the work ``Liver Cancer Analysis via Orthogonal Sparse Non-Negative Matrix
Tri-Factorization'' which will be submitted to BBRC.
Imports dplyr, MASS, stats
Depends R (>= 3.4.4)
License GPL (>= 2)
Encoding UTF-8
LazyData true
RoxygenNote 6.0.1
Suggests knitr, rmarkdown
VignetteBuilder knitr
NeedsCompilation no
Repository CRAN
Date/Publication 2019-11-28 13:50:02 UTC
R topics documented:
affinityMatrix
ASR
cost
dist2eu
initialization
MSR
OSNMTF
simu_data_generation
Standard_Normalization
update_B
update_C
update_L
update_R
affinityMatrix Calculate the similarity matrix
Description
To calculate the similarity matrix with the same method as in the package M2SMF, for the asymmetric case
Usage
affinityMatrix(Diff, K = 20, sigma = 0.5)
Arguments
Diff The distance matrix used to calculate the similarity
K The number of neighbours used to calculate the similarity
sigma A hyper-parameter used to calculate the similarity
Value
The similarity matrix
Author(s)
<NAME>
Examples
data1 <- matrix(0,100,100)
data2 <- matrix(0,80,100)
for (i in 1:20)
{
data1[i,] <- rnorm(100,10,1)
}
for (i in 21:40)
{
data1[i,] <- rnorm(100,20,1)
}
for (i in 41:60)
{
data1[i,] <- rnorm(100,30,1)
}
for (i in 61:80)
{
data1[i,] <- rnorm(100,40,1)
}
for (i in 81:100)
{
data1[i,] <- rnorm(100,50,1)
}
for (i in 1:20)
{
data2[i,] <- rnorm(100,5,1)
}
for (i in 21:40)
{
data2[i,] <- rnorm(100,10,1)
}
for (i in 41:60)
{
data2[i,] <- rnorm(100,15,1)
}
for (i in 61:80)
{
data2[i,] <- rnorm(100,20,1)
}
new_data1 <- Standard_Normalization(data1)
new_data2 <- Standard_Normalization(data2)
Diff <- dist2eu(new_data1,new_data2)
simi_matr1 <- affinityMatrix(Diff, K = 20, sigma = 0.5)
ASR Average Residue
Description
To calculate average residues of the bi-clustering results
Usage
ASR(row_cluster,col_cluster,W)
Arguments
row_cluster The cluster results of the rows of W, this value should be a vector whose length
is the same as the number of rows in W
col_cluster The cluster results of the columns of W, this value should be a vector whose
length is the same as the number of columns in W
W The matrix to be factorized
Value
The average residues of the bi-clustering results
Author(s)
<NAME>
Examples
W <- simu_data_generation()
OSNMTF_res <- OSNMTF(W,k=5,l=4)
row_cluster <- OSNMTF_res[[2]][[1]]
column_cluster <- OSNMTF_res[[2]][[2]]
ASR_value <- ASR(row_cluster,column_cluster,W)
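The ASR value can also be used to choose the cluster numbers. The following is an illustrative sketch (not from the package manual) that compares a few candidate values of k and l and keeps the pair with the smallest average residue, assuming, as is usual for residue-based criteria, that smaller is better; running OSNMTF repeatedly may be slow.
W <- simu_data_generation()
candidates <- expand.grid(k = 4:6, l = 3:5)
candidates$ASR <- apply(candidates, 1, function(p) {
  res <- OSNMTF(W, k = p[["k"]], l = p[["l"]])
  ASR(res[[2]][[1]], res[[2]][[2]], W)
})
best <- candidates[which.min(candidates$ASR), ] # candidate with the smallest average residue
best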
cost Calculate the cost
Description
A function to calculate the cost of the objective function
Usage
cost(W,init_list,lambda=0.2)
Arguments
W The matrix to be factorized
init_list A list containing the updated results in this iteration
lambda A parameter to set the relative weight of the sparsity constraint
Value
A number indicating the total cost of the objective function
Author(s)
<NAME>
Examples
W <- simu_data_generation()
init_list <- initialization(W,k=5,l=4)
update_L_list <- update_L(W,init_list)
update_B_list <- update_B(W,update_L_list)
update_R_list <- update_R(W,update_B_list)
update_C_list <- update_C(W,update_R_list,lambda=0.2,rho=1.1)
temp_cost <- cost(W,init_list,lambda=0.2)
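The update functions and the cost can also be composed into a manual iteration loop. The following sketch (not from the package manual) is one plausible way to do so, stopping when the change in cost falls below a tolerance playing the role of the theta parameter of OSNMTF, which handles this internally.
W <- simu_data_generation()
cur <- initialization(W, k = 5, l = 4)
old_cost <- cost(W, cur, lambda = 0.2)
for (iter in 1:100) {
  cur <- update_L(W, cur)
  cur <- update_B(W, cur)
  cur <- update_R(W, cur)
  cur <- update_C(W, cur, lambda = 0.2, rho = 1.1)
  new_cost <- cost(W, cur, lambda = 0.2)
  if (abs(old_cost - new_cost) < 1e-4) break # converged
  old_cost <- new_cost
}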
dist2eu Euclidean Distance
Description
The distance matrix of the two groups of samples
Usage
dist2eu(X,C)
Arguments
X The first samples matrix
C The second samples matrix
Value
The distance matrix
Author(s)
<NAME>
Examples
data1 <- matrix(0,100,100)
data2 <- matrix(0,80,100)
for (i in 1:20)
{
data1[i,] <- rnorm(100,10,1)
}
for (i in 21:40)
{
data1[i,] <- rnorm(100,20,1)
}
for (i in 41:60)
{
data1[i,] <- rnorm(100,30,1)
}
for (i in 61:80)
{
data1[i,] <- rnorm(100,40,1)
}
for (i in 81:100)
{
data1[i,] <- rnorm(100,50,1)
}
for (i in 1:20)
{
data2[i,] <- rnorm(100,5,1)
}
for (i in 21:40)
{
data2[i,] <- rnorm(100,10,1)
}
for (i in 41:60)
{
data2[i,] <- rnorm(100,15,1)
}
for (i in 61:80)
{
data2[i,] <- rnorm(100,20,1)
}
new_data1 <- Standard_Normalization(data1)
new_data2 <- Standard_Normalization(data2)
dist1 <- dist2eu(new_data1,new_data2)
initialization initialize the values used in NMTFOSC
Description
initialize the values which will be updated in NMTFOSC
Usage
initialization(W,k,l)
Arguments
W The matrix to be factorized
k A parameter to specify the row cluster number
l A parameter to specify the column cluster number
Value
A list with 6 elements, corresponding to the matrices L,C,R,B,Y and the penalty parameter miu
Author(s)
<NAME>
Examples
W <- simu_data_generation()
init_list <- initialization(W,k=5,l=4)
MSR Mean Residue
Description
To calculate the mean residue of a sub-matrix block of W, indexed by a row cluster and a column cluster
Usage
MSR(Block)
Arguments
Block The sub-matrix block of W, indexed by a row cluster and a column cluster
Value
The mean residue of the block
Author(s)
<NAME>
Examples
W <- simu_data_generation()
OSNMTF_res <- OSNMTF(W,k=5,l=4)
row_cluster <- OSNMTF_res[[2]][[1]]
column_cluster <- OSNMTF_res[[2]][[2]]
temp_rows <- which(row_cluster==1,TRUE)
temp_cols <- which(column_cluster==1,TRUE)
MSR_value <- MSR(W[temp_rows,temp_cols])
OSNMTF The algorithm OSNMTF
Description
Factorize the matrix W into the product of L, C and R, with L and R being orthogonal and C being sparse.
The row cluster results and column cluster results are then obtained from L and R.
Usage
OSNMTF(W,lambda=0.2,theta=10^-4,k,l)
Arguments
W The matrix to be factorized
lambda A parameter to set the relative weight of the sparsity constraints
theta A parameter to determine the convergence
k A parameter to specify the row cluster number
l A parameter to specify the column cluster number
Value
A list containing the clustering results:
sub_matrices a list containing the matrices L, C, R
cluster_results a list containing the row cluster results and the column cluster results
Author(s)
<NAME>
Examples
W <- simu_data_generation()
OSNMTF_res <- OSNMTF(W,k=5,l=4)
simu_data_generation Generate simulation data
Description
To generate the simulation data matrix
Usage
simu_data_generation()
Value
The simulated data matrix
Author(s)
<NAME>
Examples
simu_data <- simu_data_generation()
Standard_Normalization Standard Normalization
Description
To normalize the data matrix by column
Usage
Standard_Normalization(x)
Arguments
x The data matrix to be normalized
Value
The normalized matrix
Author(s)
<NAME>
Examples
data1 <- matrix(0,100,100)
data2 <- matrix(0,80,100)
for (i in 1:20)
{
data1[i,] <- rnorm(100,10,1)
}
for (i in 21:40)
{
data1[i,] <- rnorm(100,20,1)
}
for (i in 41:60)
{
data1[i,] <- rnorm(100,30,1)
}
for (i in 61:80)
{
data1[i,] <- rnorm(100,40,1)
}
for (i in 81:100)
{
data1[i,] <- rnorm(100,50,1)
}
new_data1 <- Standard_Normalization(data1)
update_B Update sub-matrix B
Description
Update sub-matrix B
Usage
update_B(W,update_L_list)
Arguments
W The matrix to be factorized
update_L_list A list containing the updated results in this iteration after running the function
update_L
Value
A list the same as update_L_list with the matrix B updated
Author(s)
<NAME>
Examples
W <- simu_data_generation()
init_list <- initialization(W,k=5,l=4)
update_L_list <- update_L(W,init_list)
update_B_list <- update_B(W,update_L_list)
update_C Update sub-matrix C
Description
Update sub-matrix C
Usage
update_C(W,update_R_list,lambda=0.2,rho=1.1)
Arguments
W The matrix to be factorized
update_R_list A list containing the updated results in this iteration after running the function
update_R
lambda A parameter to set the relative weight of the sparsity constraints
rho A parameter used in the augmented lagrange multiplier method
Value
A list the same as update_R_list with the matrix C updated
Author(s)
<NAME>
Examples
W <- simu_data_generation()
init_list <- initialization(W,k=5,l=4)
update_L_list <- update_L(W,init_list)
update_B_list <- update_B(W,update_L_list)
update_R_list <- update_R(W,update_B_list)
update_C_list <- update_C(W,update_R_list,lambda=0.2,rho=1.1)
update_L Update sub-matrix L
Description
Update sub-matrix L
Usage
update_L(W,init_list)
Arguments
W The matrix to be factorized
init_list A list containing the updated results in this iteration
Value
A list the same as init_list with the matrix L updated
Examples
W <- simu_data_generation()
init_list <- initialization(W,k=5,l=4)
update_L_list <- update_L(W,init_list)
update_R Update sub-matrix R
Description
Update sub-matrix R
Usage
update_R(W,update_B_list)
Arguments
W The matrix to be factorized
update_B_list A list containing the updated results in this iteration after running the function
update_B
Value
A list the same as update_B_list with the matrix R updated
Examples
W <- simu_data_generation()
init_list <- initialization(W,k=5,l=4)
update_L_list <- update_L(W,init_list)
update_B_list <- update_B(W,update_L_list)
update_R_list <- update_R(W,update_B_list)
github.com/instana/go-sensor/instrumentation/instaecho
README
[¶](#section-readme)
---
### Instana instrumentation for Echo framework
This module contains middleware to instrument HTTP services written with [`github.com/labstack/echo`](https://github.com/labstack/echo).
[![PkgGoDev](https://pkg.go.dev/badge/github.com/instana/go-sensor/instrumentation/instaecho)](https://pkg.go.dev/github.com/instana/go-sensor/instrumentation/instaecho)
#### Installation
To add the module to your `go.mod` file run the following command in your project directory:
```
$ go get github.com/instana/go-sensor/instrumentation/instaecho
```
#### Usage
```
// create a sensor
sensor := instana.NewSensor("echo-sensor")

// init instrumented Echo
e := instaecho.New(sensor)

// define API
e.GET("/foo", func(c echo.Context) error { /* ... */ })
// ...
```
[Full example](https://pkg.go.dev/github.com/instana/go-sensor/instrumentation/instaecho#example-package)
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Example [¶](#example-package)
This example shows how to instrument an HTTP server that uses github.com/labstack/echo with Instana
```
sensor := instana.NewSensor("my-web-server")
// Use instaecho.New() to create a new instance of Echo. The returned instance is instrumented
// with Instana and will create an entry HTTP span for each incoming request.
engine := instaecho.New(sensor)
// Use the instrumented instance as usual
engine.GET("/myendpoint", func(c echo.Context) error {
return c.JSON(200, map[string]string{
"message": "pong",
})
})
log.Fatalln(engine.Start(":0"))
```
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [func Middleware(sensor instana.TracerLogger) echo.MiddlewareFunc](#Middleware)
* [func New(sensor instana.TracerLogger) *echo.Echo](#New)
#### Examples [¶](#pkg-examples)
* [Package](#example-package)
### Constants [¶](#pkg-constants)
```
const Version = "1.11.0"
```
Version is the instrumentation module semantic version
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
#### func [Middleware](https://github.com/instana/go-sensor/blob/instrumentation/instaecho/v1.11.0/instrumentation/instaecho/handler.go#L26) [¶](#Middleware)
```
func Middleware(sensor [instana](/github.com/instana/go-sensor).[TracerLogger](/github.com/instana/go-sensor#TracerLogger)) echo.MiddlewareFunc
```
Middleware wraps Echo's handlers execution. Adds tracing context and handles entry span.
It should be added as a first Middleware to the Echo, before defining handlers.
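Below is a brief sketch (not from the package docs) of wiring the middleware into an Echo instance constructed directly rather than via `instaecho.New`; the Echo import path (`v4` here) and the `:8080` address are assumptions:
```
package main

import (
	instana "github.com/instana/go-sensor"
	"github.com/instana/go-sensor/instrumentation/instaecho"
	"github.com/labstack/echo/v4"
)

func main() {
	sensor := instana.NewSensor("my-web-server")

	// Register the Instana middleware first, before defining any handlers.
	e := echo.New()
	e.Use(instaecho.Middleware(sensor))

	e.GET("/ping", func(c echo.Context) error {
		return c.String(200, "pong")
	})

	e.Logger.Fatal(e.Start(":8080"))
}
```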
#### func [New](https://github.com/instana/go-sensor/blob/instrumentation/instaecho/v1.11.0/instrumentation/instaecho/handler.go#L17) [¶](#New)
```
func New(sensor [instana](/github.com/instana/go-sensor).[TracerLogger](/github.com/instana/go-sensor#TracerLogger)) *echo.Echo
```
New returns an instrumented Echo.
### Types [¶](#pkg-types)
This section is empty.
Package ‘thorn’
October 14, 2022
Type Package
Title 'HTMLwidgets' Displaying Some 'WebGL' Shaders
Version 0.2.0
Description Creates some 'WebGL' shaders. They can be used as the background of a 'Shiny' app.
They can also be visualized in the 'RStudio' viewer pane or included in 'Rmd' documents, but this
is pretty useless, besides contemplating them.
License GPL-3
Encoding UTF-8
LazyData true
Imports htmlwidgets
Suggests shiny, htmltools
URL https://github.com/stla/thorn
BugReports https://github.com/stla/thorn/issues
RoxygenNote 7.1.1
NeedsCompilation no
Author <NAME> [aut, cre],
<NAME> [ctb, cph] ('Hamster.js' library),
<NAME> [ctb, cph] ('PixiJS' library),
<NAME> [ctb, cph] ('PixiJS' library)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-11-12 19:30:02 UTC
R topics documented:
thorn
thorn-shiny
thorn HTML widget displaying a shader
Description
Creates an HTML widget displaying a shader.
Usage
thorn(shader, width = NULL, height = NULL, elementId = NULL)
Arguments
shader the name of the shader, one of "thorn", "thorn-color", "ikeda", "sweet",
"biomorph1", "biomorph2", "biomorph3", "apollony", "smoke", "plasma"
width, height a valid CSS measurement (like "100%", "400px", "auto") or a number, which
will be coerced to a string and have "px" appended
elementId an HTML id for the widget
Examples
library(thorn)
thorn("ikeda") # click on the shader to animate it
thorn("thorn") # you can also use the mouse wheel on this one
# four shaders ####
library(htmltools)
hw1 <- thorn("thorn-color", width = "50vw", height = "50vh")
hw2 <- thorn("ikeda", width = "50vw", height = "50vh")
hw3 <- thorn("sweet", width = "50vw", height = "50vh")
hw4 <- thorn("biomorph3", width = "50vw", height = "50vh")
if(interactive()){
browsable(
withTags(
div(
div(
style = "position:absolute; top:0;",
div(hw1, style="position:fixed; left:0;"),
div(hw2, style="position:fixed; left:50vw;")
),
div(
style = "position:absolute; top:50vh;",
div(hw3, style="position:fixed; left:0;"),
div(hw4, style="position:fixed; left:50vw;")
)
)
)
)
}
thorn-shiny Shiny bindings for thorn
Description
Output and render functions for using thorn within Shiny applications and interactive Rmd documents.
Usage
thornOutput(outputId, width = "100%", height = "100%")
renderThorn(expr, env = parent.frame(), quoted = FALSE)
Arguments
outputId output variable to read from
width, height a valid CSS measurement (like "100%", "400px", "auto") or a number, which
will be coerced to a string and have "px" appended
expr an expression that generates a shader created with thorn
env the environment in which to evaluate expr
quoted logical, whether expr is a quoted expression
Examples
# use a shader as the background of a Shiny app ####
library(thorn)
library(shiny)
ui <- fluidPage(
thornOutput("thorn", width = "100%", height = "100%"),
br(),
sidebarLayout(
sidebarPanel(
sliderInput(
"slider", "Slide me",
value = 10, min = 0, max = 20
),
selectInput(
"select", "Select me", choices = c("Choice 1", "Choice 2")
)
),
mainPanel()
)
)
server <- function(input, output){
output[["thorn"]] <- renderThorn({
thorn("biomorph2")
})
}
if(interactive()){
shinyApp(ui, server)
}
# all available shaders ####
library(thorn)
library(shiny)
ui <- fluidPage(
br(),
sidebarLayout(
sidebarPanel(
wellPanel(
radioButtons(
"shader", "Shader",
choices = c(
"thorn",
"thorn-color",
"ikeda",
"biomorph1",
"biomorph2",
"biomorph3",
"sweet",
"apollony",
"smoke"
)
)
)
),
mainPanel(
thornOutput("shader", width = "calc(100% - 15px)", height = "400px")
)
)
)
server <- function(input, output){
output[["shader"]] <- renderThorn({
thorn(input[["shader"]])
})
}
if(interactive()){
shinyApp(ui, server)
}
Tio
===
A Promise-based HTTP library that supports all JavaScript runtimes. Tio is built on top of [axios](https://github.com/axios/axios) and offers the following main features:
1. Compatible with the [axios](https://github.com/axios/axios) API.
2. Supports all JavaScript runtimes (mini programs, Weex, etc.).
3. Supports request synchronization.
4. Request redirection: in an app's WebView, network requests can be redirected automatically to the native layer.
Installation
--
Using npm (recommended):
```
$ npm install tt-tio
```
Using bower:
```
$ bower install tt-tio
```
For source integration, see [Tio integration: using the CDN or the dist directory files](https://github.com/tio/tio/blob/HEAD/doc/use-cdn-or-dist.md)
Introduction
--
### Compatible with the axios API
Tio is compatible with the axios API; usage follows [axios](https://github.com/axios/axios), except that `axios` in the examples is replaced with `tio`, for example:
```
const tio = require('tt-tio');

// Make a request for a user with a given ID
tio.get('/user?ID=12345')
  .then(function (response) {
    // handle success
    console.log(response);
  })
  .catch(function (error) {
    // handle error
    console.log(error);
  })
```
### Supports all JavaScript runtimes
Axios currently supports only browsers and Node, and one of tio's goals is to fill this gap. Through adapters, Tio can support all JavaScript runtimes. Tio's architecture is layered: the upper layer provides a standard, platform-independent API, and the lower layer provides a different adapter for each platform.
Currently tio supports [Toutiao mini programs](https://developer.toutiao.com/docs/api/request.html), [WeChat mini programs](https://developers.weixin.qq.com/miniprogram/dev/api/wx.request.html), [Alipay mini programs](https://docs.alipay.com/mini/api/network), [quick apps](https://www.quickapp.cn/), [Weex](https://weex.apache.org/zh/guide/introduction.html), as well as browsers and [Node](https://nodejs.org/en/). Usage on each platform is shown below.
First import tio:
```
const tio = require('tt-tio');
```
Then import the adapter for your platform:
> **Note**: the examples import via npm. If a platform's development tools do not support npm package management, use [source integration](https://github.com/tio/tio/blob/HEAD/doc/use-cdn-or-dist.md).
#### Toutiao mini program
```
const adapter = require('tt-tio/lib/adapters/mp/tt');
tio.defaults.adapter = adapter;
```
#### WeChat mini program
```
const adapter = require('tt-tio/lib/adapters/mp/wx');
tio.defaults.adapter = adapter;
```
#### Alipay mini program
```
const adapter = require('tt-tio/lib/adapters/mp/al');
tio.defaults.adapter = adapter;
```
#### Quick app
```
const adapter = require('tt-tio/lib/adapters/hap');
tio.defaults.adapter = adapter;
```
#### Weex
```
const adapter = require('tt-tio/lib/adapters/weex');
tio.defaults.adapter = adapter;
```
#### Node
```
const adapter = require('tt-tio/lib/adapters/http');
tio.defaults.adapter = adapter;
```
#### Browser
```
const adapter = require('tt-tio/lib/adapters/xhr');
tio.defaults.adapter = adapter;
```
> Note: unlike axios, tio must explicitly set the xhr adapter in the browser environment, because to reduce bundle size tio does not ship the xhr adapter as the built-in default adapter.
Now you can use tio to make network requests; usage is the same as with axios, for example:
```
tio.get('/user?ID=12345')
  .then(function (response) {
    // handle success
    console.log(response);
  })
  .catch(function (error) {
    // handle error
    console.log(error);
  })
  .then(function () {
    // always executed
  });
```
### Supports request synchronization
An axios interceptor can return a promise to perform asynchronous work, but axios interceptors have no way to synchronize multiple requests. What does that mean? Consider a scenario:
For security reasons, every request must carry a `csrfToken` in its headers. If the `csrfToken` does not exist yet, we must first request a `csrfToken` and only then issue the actual network request.
In this scenario, to guarantee that every request carries a `csrfToken`, we would have to check the token before every request. Obviously we cannot do that by hand for each request, so how do we solve this? One approach is to check in the request interceptor and fetch the token if it is missing. With axios, the flow looks roughly like this:
```
var csrfToken="";tio.interceptors.request.use(function (config) { if(!csrfToken){ //csrfToken不存在, 先获取 return fetchTocken().then((data)=>{ config.headers.csrfToken=csrfToken=data.token; return config; }) }else{ config.headers.csrfToken=csrfToken=data.token; return config; }});
```
The above looks almost perfect, but it has a serious flaw: **if the page fires several network requests at the same time during initialization, the csrfToken will be requested multiple times**. Every request enters the request interceptor while csrfToken is still empty, so every one of them ends up calling `fetchTocken`. What we actually want is for only the first request to fetch the csrfToken while the other requests wait; once the csrfToken has been obtained, the other requests continue. This is very similar to a multi-threaded synchronization problem, so we can think of it as needing "synchronized requests" (rather than concurrent ones). To solve this "synchronization" problem, **tio introduces a mechanism: interceptors can be locked**. Here is how tio solves the problem:
```
var csrfToken="";tio.interceptors.request.use(function (config) { if(!csrfToken){ //csrfToken不存在, 先获取 this.lock(); //锁定请求拦截器,之后,其它请求将在请求拦截器外面等待, return fetchTocken().then((data)=>{ config.headers.csrfToken=csrfToken=data.token; this.unlock(); //解锁请求拦截器 return config; }).catch(()=>this.unlock()) //解锁请求拦截器 }else{ config.headers.csrfToken=csrfToken=data.token return config; }});
```
Explanation:
1. Once the request interceptor is locked (by calling `lock`), no other request can enter it; those requests go into a waiting queue. When the interceptor is unlocked (by calling `unlock`), the queued requests enter the interceptor. In the code above we lock the request interceptor before requesting the csrfToken, so even with multiple concurrent requests, the others wait in the queue. Once we obtain the csrfToken we unlock, the waiting requests resume, and since the csrfToken now exists they do not need to request it again.
2. If you want to cancel all requests in the waiting queue (for example, when an error occurs while requesting the csrfToken), call the `clear(reason)` method. The queued requests are then aborted and fall through to the upstream catch handler, with `reason` passed as the callback argument of catch. For example:
```
tio.interceptors.request.use(function (config) {
  ... // irrelevant code omitted
  this.clear("error test")
  ...
});
```
Issue the requests:
```
tio.all([getUserAccount(), getUserPermissions()])
  .then(tio.spread(function (acct, perms) {
    ...
  }))
  .catch(function (e) {
    console.log(e); // > "error test"
  });
```
3. The request interceptor and the response interceptor are different objects; each has the three methods lock/unlock/clear, and locking, unlocking, and clearing only apply to that interceptor. For example, if you lock only the response interceptor, the request interceptor remains unlocked, so all concurrent requests still enter the request interceptor; only when they are about to enter the response interceptor do they queue up outside it.
Note: when executing an interceptor callback, tio calls it with the current interceptor object as `this`. Taking the request interceptor as an example:
```
tio.interceptors.request.use(function (config) {
  this.lock(); // equivalent to tio.interceptors.request.lock()
});
```
Also note that if the interceptor callback is an arrow function, you cannot use `this` inside it to refer to the interceptor object.
4. In the example above, `fetchTocken()` must not use `tio` itself to request the csrfToken: `tio`'s interceptor queue is already locked before `fetchTocken()` is called, so using it would wait on itself forever (a deadlock). The correct approach is simple: create a new tio instance for that request, for example:
```
function fetchTocken() {
  return tio.create().get("/token");
}
```
Besides the csrfToken scenario above, request synchronization is useful in many other situations. Another common example is automatic renewal of a login token: a successful login returns a token, but the token has an expiration time, and once it expires a new token must be requested before other calls can proceed. A sketch of that pattern follows.
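The following is an illustrative sketch only; `refreshToken()` and `isExpired()` are hypothetical helpers, not part of tio, and `refreshToken()` should use a fresh tio instance just like `fetchTocken()` above.
```
var token = "";

tio.interceptors.request.use(function (config) {
  if (token && !isExpired(token)) {
    config.headers.token = token;
    return config;
  }
  // Token missing or expired: lock the interceptor so concurrent
  // requests queue up instead of each triggering a refresh.
  this.lock();
  return refreshToken().then((data) => {
    token = data.token;
    config.headers.token = token;
    this.unlock();
    return config;
  }).catch((e) => {
    this.clear("token refresh failed"); // abort queued requests
    this.unlock();
    throw e;
  });
});
```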
#### Summary
Finally, here is the overall flow of a network request issued by Tio:
### Request redirection
In H5 pages embedded in an app, network requests should be issued through the app whenever possible; issuing them from the WebView has the following problems:
1. Our TTNet library cannot be used
2. Cookie synchronization is difficult
3. API security
4. Access control
5. Performance
6. Caching
For a detailed analysis, see [Why should network requests in app-embedded H5 pages be issued through the native layer whenever possible?](https://github.com/tio/tio/blob/HEAD/doc/redirect.md)
**So how do we issue network requests through the app?**
Currently all of our apps have a JsBridge, and the app can expose a JsBridge method `fetch` for network requests that H5 pages can call; the front end can then issue requests from embedded H5 pages directly through that `fetch` method. But calling `fetch` by hand is not only painful, it also makes code migration very hard: some H5 pages are opened both in external browsers and inside the app, and for those pages we want to use `fetch` when inside the app and the browser's own networking when in a browser.
Now the good news: with tio this all becomes very simple. We only need to define an adapter that forwards requests to the native layer, use that native adapter in the app environment, and keep using the xhr adapter (the built-in implementation) in the browser environment. So how do we define a native adapter? Taking project F as an example, the `fetch` method implemented by the app is consistent with [the main app's fetch](https://wiki.bytedance.net/pages/viewpage.action?pageId=173666100), and the native adapter code is as follows (file name fAppAdapter.js):
```
require('byted-ttfe-jsbridge');
const settle = require('tio/lib/core/settle');
const createError = require('tio/lib/core/createError');

module.exports = function (config) {
  return new Promise(function (resolve, reject) {
    config.header = config.headers;
    config.data = config.body;
    window.ToutiaoJSBridge.call("fetch", config, function (res) {
      if (res.code === 1) {
        var response = {
          status: res.status,
          config: config,
          data: res.response,
          headers: res.headers
        };
        settle(resolve, reject, response);
      } else {
        reject(createError("Network Error!", config, res.status || 0));
      }
    });
  });
};
```
Usage:
```
const tio = require('tt-tio');
var adapter = require('./fAppAdapter');
tio.defaults.adapter = adapter;
```
After that you can issue requests as usual. If you need to detect at run time whether the page is inside the app, the code looks like this:
```
const tio = require('tt-tio');
var nativeAdapter = require('./fAppAdapter');
var xhrAdapter = require('tt-tio/lib/adapters/xhr');
tio.defaults.adapter = utils.isInAPP() ? nativeAdapter : xhrAdapter;
```
> Note: if the page is only ever opened inside the app but you still need to test it in a browser, do not use this approach, because it bundles both adapters. Instead, use DefinePlugin to bundle different code depending on the build parameters, as sketched below.
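A minimal sketch of that idea, assuming webpack and a build-time `PLATFORM` environment variable (both are assumptions, not part of tio):
```
// webpack.config.js: replace process.env.IN_APP with a literal at build time
const webpack = require('webpack');

module.exports = {
  // ...
  plugins: [
    new webpack.DefinePlugin({
      'process.env.IN_APP': JSON.stringify(process.env.PLATFORM === 'app'),
    }),
  ],
};
```
```
// application code: after DefinePlugin substitution the unused branch is
// dead code, so the minifier drops the corresponding adapter from the bundle
const tio = require('tt-tio');
tio.defaults.adapter = process.env.IN_APP
  ? require('./fAppAdapter')
  : require('tt-tio/lib/adapters/xhr');
```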
#### The built-in ttAppAdapter
As long as the `fetch` method implemented by the app is also consistent with [the main app's fetch](https://wiki.bytedance.net/pages/viewpage.action?pageId=173666100), the adapter above can be used. For convenience, tio ships this adapter built in under the name "ttAppAdapter", so it can be used directly:
```
const adapter = require('tt-tio/lib/adapters/ttAppAdapter');
tio.defaults.adapter = adapter;
```
Success stories
---
All front-end projects related to project F currently use it, including the mobile site, the H5 projects embedded in the consumer app, the H5 projects embedded in the business app, the mini programs, the business back office, the middle platform, and more.
If your project is also using it, please let us know (search for 杜文 on Lark). Thank you.
FAQ
---
**My page is a pure web page that will never be opened inside the app. Do I need tio?**
> If you need Tio's request synchronization feature, then yes; otherwise it does not matter, since in that case tio and axios offer the same functionality.
**How big is the Tio package compared with axios?**
> Tio and axios are roughly the same size. In the browser environment, Tio plus the xhr adapter is 13.8K while axios is 13.3K; after gzip both are roughly 5K. Also note that in mini-program environments, Tio plus the adapter averages about 12K before gzip, smaller than in the browser environment.
**Can tio be used to upload files in mini programs?**
> No. Mini programs have their own dedicated file-upload APIs; use those instead. Tio supports file uploads in browsers because it relies on the browser's built-in FormData object, which does not exist in mini programs.
**Are Tio's request cancellation and timeouts supported on every platform?**
> Request cancellation and timeouts depend on whether the platform's native API supports them. Browsers, Node, and the various mini-program platforms all do, so Tio supports them there. For app-embedded pages it depends on the appAdapter implementation: if the app's JsBridge does not implement or support a cancellation mechanism, Tio's cancellation will not work.
Readme
---
### Keywords
* xhr
* http
* ajax
* promise
* node
Crate aws_sdk_sms
===
**Please Note: The SDK is currently in Developer Preview and is intended strictly for feedback purposes only. Do not use this SDK for production workloads.**
**Product update**
We recommend Amazon Web Services Application Migration Service (Amazon Web Services MGN) as the primary migration service for lift-and-shift migrations. If Amazon Web Services MGN is unavailable in a specific Amazon Web Services Region, you can use the Server Migration Service APIs through March 2023.
Server Migration Service (Server Migration Service) makes it easier and faster for you to migrate your on-premises workloads to Amazon Web Services. To learn more about Server Migration Service, see the following resources:
* Server Migration Service product page
* Server Migration Service User Guide
### Getting Started
> Examples are available for many services and operations, check out the
> examples folder in GitHub.
The SDK provides one crate per AWS service. You must add Tokio as a dependency within your Rust project to execute asynchronous code. To add `aws-sdk-sms` to your project, add the following to your **Cargo.toml** file:
```
[dependencies]
aws-config = "0.56.1"
aws-sdk-sms = "0.33.0"
tokio = { version = "1", features = ["full"] }
```
Then in code, a client can be created with the following:
```
use aws_sdk_sms as sms;
#[::tokio::main]
async fn main() -> Result<(), sms::Error> {
let config = aws_config::load_from_env().await;
let client = aws_sdk_sms::Client::new(&config);
// ... make some calls with the client
Ok(())
}
```
See the client documentation for information on what calls can be made, and the inputs and outputs for each of those calls.
### Using the SDK
Until the SDK is released, we will be adding information about using the SDK to the Developer Guide. Feel free to suggest additional sections for the guide by opening an issue and describing what you are trying to do.
### Getting Help
* GitHub discussions - For ideas, RFCs & general questions
* GitHub issues - For bug reports & feature requests
* Generated Docs (latest version)
* Usage examples
Crate Organization
---
The entry point for most customers will be `Client`, which exposes one method for each API offered by AWS Server Migration Service. The return value of each of these methods is a “fluent builder”,
where the different inputs for that API are added by builder-style function call chaining,
followed by calling `send()` to get a `Future` that will result in either a successful output or a `SdkError`.
Some of these API inputs may be structs or enums to provide more complex structured information.
These structs and enums live in `types`. There are some simpler types for representing data such as date times or binary blobs that live in `primitives`.
All types required to configure a client via the `Config` struct live in `config`.
The `operation` module has a submodule for every API, and in each submodule is the input, output, and error type for that API, as well as builders to construct each of those.
There is a top-level `Error` type that encompasses all the errors that the client can return. Any other error type can be converted to this `Error` type via the
`From` trait.
The other modules within this crate are not required for normal usage.
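As a rough illustration of the flow described above (not an example from the crate docs), the following sketch calls one API via its fluent builder and lets `?` convert the operation's `SdkError` into the top-level `Error`; the Option-returning output accessors are assumptions based on this SDK generation's conventions:
```
use aws_sdk_sms as sms;

async fn first_app_name(client: &sms::Client) -> Result<Option<String>, sms::Error> {
    let output = client
        .list_apps()      // fluent builder for the ListApps operation
        .max_results(10)  // builder-style input
        .send()           // dispatch the request
        .await?;          // SdkError<ListAppsError> converts into sms::Error via From

    // Pull the name of the first application summary, if any.
    Ok(output
        .apps()
        .unwrap_or_default()
        .first()
        .and_then(|app| app.name().map(str::to_string)))
}
```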
Modules
---
* client: Client for calling AWS Server Migration Service.
* config: Configuration for AWS Server Migration Service.
* error: Common errors and error handling utilities.
* meta: Information about this crate.
* operation: All operations that this crate can perform.
* primitives: Primitives such as `Blob` or `DateTime` used by other types.
* types: Data structures used by operation inputs/outputs.
Structs
---
* Client: Client for AWS Server Migration Service.
* Config: Configuration for an aws_sdk_sms service client.
Enums
---
* Error: All possible error types for this service.
Struct aws_sdk_sms::client::Client
===
```
pub struct Client { /* private fields */ }
```
Client for AWS Server Migration Service
Client for invoking operations on AWS Server Migration Service. Each operation on AWS Server Migration Service is a method on this struct. `.send()` MUST be invoked on the generated operations to dispatch the request to the service.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_sms::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific settings that can be set on the `Config` that are absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_sms::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `CreateApp` operation has a `Client::create_app` function, which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.create_app()
.name("example")
.send()
.await;
```
The underlying HTTP requests that get made by this can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
Implementations
---
### impl Client
#### pub fn create_app(&self) -> CreateAppFluentBuilder
Constructs a fluent builder for the `CreateApp` operation.
* The fluent builder is configurable:
+ `name(impl Into<String>)` / `set_name(Option<String>)`: The name of the new application.
+ `description(impl Into<String>)` / `set_description(Option<String>)`: The description of the new application
+ `role_name(impl Into<String>)` / `set_role_name(Option<String>)`: The name of the service role in the customer’s account to be used by Server Migration Service.
+ `client_token(impl Into<String>)` / `set_client_token(Option<String>)`: A unique, case-sensitive identifier that you provide to ensure the idempotency of application creation.
+ `server_groups(ServerGroup)` / `set_server_groups(Option<Vec<ServerGroup>>)`: The server groups to include in the application.
+ `tags(Tag)` / `set_tags(Option<Vec<Tag>>)`: The tags to be associated with the application.
* On success, responds with `CreateAppOutput` with field(s):
+ `app_summary(Option<AppSummary>)`: A summary description of the application.
+ `server_groups(Option<Vec<ServerGroup>>)`: The server groups included in the application.
+ `tags(Option<Vec<Tag>>)`: The tags associated with the application.
* On failure, responds with `SdkError<CreateAppError>`
### impl Client
#### pub fn create_replication_job(&self) -> CreateReplicationJobFluentBuilder
Constructs a fluent builder for the `CreateReplicationJob` operation.
* The fluent builder is configurable:
+ `server_id(impl Into<String>)` / `set_server_id(Option<String>)`: The ID of the server.
+ `seed_replication_time(DateTime)` / `set_seed_replication_time(Option<DateTime>)`: The seed replication time.
+ `frequency(i32)` / `set_frequency(Option<i32>)`: The time between consecutive replication runs, in hours.
+ `run_once(bool)` / `set_run_once(Option<bool>)`: Indicates whether to run the replication job one time.
+ `license_type(LicenseType)` / `set_license_type(Option<LicenseType>)`: The license type to be used for the AMI created by a successful replication run.
+ `role_name(impl Into<String>)` / `set_role_name(Option<String>)`: The name of the IAM role to be used by the Server Migration Service.
+ `description(impl Into<String>)` / `set_description(Option<String>)`: The description of the replication job.
+ `number_of_recent_amis_to_keep(i32)` / `set_number_of_recent_amis_to_keep(Option<i32>)`: The maximum number of SMS-created AMIs to retain. The oldest is deleted after the maximum number is reached and a new AMI is created.
+ `encrypted(bool)` / `set_encrypted(Option<bool>)`: Indicates whether the replication job produces encrypted AMIs.
+ `kms_key_id(impl Into<String>)` / `set_kms_key_id(Option<String>)`: The ID of the KMS key for replication jobs that produce encrypted AMIs. This value can be any of the following:
- KMS key ID
- KMS key alias
- ARN referring to the KMS key ID
- ARN referring to the KMS key alias
If encrypted is *true* but a KMS key ID is not specified, the customer’s default KMS key for Amazon EBS is used.
* On success, responds with `CreateReplicationJobOutput` with field(s):
+ `replication_job_id(Option<String>)`: The unique identifier of the replication job.
* On failure, responds with `SdkError<CreateReplicationJobError>`
### impl Client
#### pub fn delete_app(&self) -> DeleteAppFluentBuilder
Constructs a fluent builder for the `DeleteApp` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
+ `force_stop_app_replication(bool)` / `set_force_stop_app_replication(Option<bool>)`: Indicates whether to stop all replication jobs corresponding to the servers in the application while deleting the application.
+ `force_terminate_app(bool)` / `set_force_terminate_app(Option<bool>)`: Indicates whether to terminate the stack corresponding to the application while deleting the application.
* On success, responds with `DeleteAppOutput`
* On failure, responds with `SdkError<DeleteAppError>`
### impl Client
#### pub fn delete_app_launch_configuration(
&self
) -> DeleteAppLaunchConfigurationFluentBuilder
Constructs a fluent builder for the `DeleteAppLaunchConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `DeleteAppLaunchConfigurationOutput`
* On failure, responds with `SdkError<DeleteAppLaunchConfigurationError>`
### impl Client
#### pub fn delete_app_replication_configuration(
&self
) -> DeleteAppReplicationConfigurationFluentBuilder
Constructs a fluent builder for the `DeleteAppReplicationConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `DeleteAppReplicationConfigurationOutput`
* On failure, responds with `SdkError<DeleteAppReplicationConfigurationError>`
### impl Client
#### pub fn delete_app_validation_configuration(
&self
) -> DeleteAppValidationConfigurationFluentBuilder
Constructs a fluent builder for the `DeleteAppValidationConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `DeleteAppValidationConfigurationOutput`
* On failure, responds with `SdkError<DeleteAppValidationConfigurationError>`
### impl Client
#### pub fn delete_replication_job(&self) -> DeleteReplicationJobFluentBuilder
Constructs a fluent builder for the `DeleteReplicationJob` operation.
* The fluent builder is configurable:
+ `replication_job_id(impl Into<String>)` / `set_replication_job_id(Option<String>)`: The ID of the replication job.
* On success, responds with `DeleteReplicationJobOutput`
* On failure, responds with `SdkError<DeleteReplicationJobError>`
### impl Client
#### pub fn delete_server_catalog(&self) -> DeleteServerCatalogFluentBuilder
Constructs a fluent builder for the `DeleteServerCatalog` operation.
* The fluent builder takes no input, just `send` it.
* On success, responds with `DeleteServerCatalogOutput`
* On failure, responds with `SdkError<DeleteServerCatalogError>`
### impl Client
#### pub fn disassociate_connector(&self) -> DisassociateConnectorFluentBuilder
Constructs a fluent builder for the `DisassociateConnector` operation.
* The fluent builder is configurable:
+ `connector_id(impl Into<String>)` / `set_connector_id(Option<String>)`: The ID of the connector.
* On success, responds with `DisassociateConnectorOutput`
* On failure, responds with `SdkError<DisassociateConnectorError>`
### impl Client
#### pub fn generate_change_set(&self) -> GenerateChangeSetFluentBuilder
Constructs a fluent builder for the `GenerateChangeSet` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application associated with the change set.
+ `changeset_format(OutputFormat)` / `set_changeset_format(Option<OutputFormat>)`: The format for the change set.
* On success, responds with `GenerateChangeSetOutput` with field(s):
+ `s3_location(Option<S3Location>)`: The location of the Amazon S3 object.
* On failure, responds with `SdkError<GenerateChangeSetError>`
### impl Client
#### pub fn generate_template(&self) -> GenerateTemplateFluentBuilder
Constructs a fluent builder for the `GenerateTemplate` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application associated with the CloudFormation template.
+ `template_format(OutputFormat)` / `set_template_format(Option<OutputFormat>)`: The format for generating the CloudFormation template.
* On success, responds with `GenerateTemplateOutput` with field(s):
+ `s3_location(Option<S3Location>)`: The location of the Amazon S3 object.
* On failure, responds with `SdkError<GenerateTemplateError>`
### impl Client
#### pub fn get_app(&self) -> GetAppFluentBuilder
Constructs a fluent builder for the `GetApp` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `GetAppOutput` with field(s):
+ `app_summary(Option<AppSummary>)`: Information about the application.
+ `server_groups(Option<Vec<ServerGroup>>)`: The server groups that belong to the application.
+ `tags(Option<Vec<Tag>>)`: The tags associated with the application.
* On failure, responds with `SdkError<GetAppError>`
### impl Client
#### pub fn get_app_launch_configuration(
&self
) -> GetAppLaunchConfigurationFluentBuilder
Constructs a fluent builder for the `GetAppLaunchConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `GetAppLaunchConfigurationOutput` with field(s):
+ `app_id(Option<String>)`: The ID of the application.
+ `role_name(Option<String>)`: The name of the service role in the customer’s account that CloudFormation uses to launch the application.
+ `auto_launch(Option<bool>)`: Indicates whether the application is configured to launch automatically after replication is complete.
+ `server_group_launch_configurations(Option<Vec<ServerGroupLaunchConfiguration>>)`: The launch configurations for server groups in this application.
* On failure, responds with `SdkError<GetAppLaunchConfigurationError>`
### impl Client
#### pub fn get_app_replication_configuration(
&self
) -> GetAppReplicationConfigurationFluentBuilder
Constructs a fluent builder for the `GetAppReplicationConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `GetAppReplicationConfigurationOutput` with field(s):
+ `server_group_replication_configurations(Option<Vec<ServerGroupReplicationConfiguration>>)`: The replication configurations associated with server groups in this application.
* On failure, responds with `SdkError<GetAppReplicationConfigurationError>`
### impl Client
#### pub fn get_app_validation_configuration(
&self
) -> GetAppValidationConfigurationFluentBuilder
Constructs a fluent builder for the `GetAppValidationConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `GetAppValidationConfigurationOutput` with field(s):
+ `app_validation_configurations(Option<Vec<AppValidationConfiguration>>)`: The configuration for application validation.
+ `server_group_validation_configurations(Option<Vec<ServerGroupValidationConfiguration>>)`: The configuration for instance validation.
* On failure, responds with `SdkError<GetAppValidationConfigurationError>`
### impl Client
#### pub fn get_app_validation_output(&self) -> GetAppValidationOutputFluentBuilder
Constructs a fluent builder for the `GetAppValidationOutput` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `GetAppValidationOutputOutput` with field(s):
+ `validation_output_list(Option<Vec<ValidationOutput>>)`: The validation output.
* On failure, responds with `SdkError<GetAppValidationOutputError>`
### impl Client
#### pub fn get_connectors(&self) -> GetConnectorsFluentBuilder
Constructs a fluent builder for the `GetConnectors` operation.
This operation supports pagination; see `into_paginator()` and the sketch after this entry.
* The fluent builder is configurable:
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token for the next set of results.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. The default value is 50. To retrieve the remaining results, make another call with the returned `NextToken` value.
* On success, responds with `GetConnectorsOutput` with field(s):
+ `connector_list(Option<Vec<Connector>>)`: Information about the registered connectors.
+ `next_token(Option<String>)`: The token required to retrieve the next set of results. This value is null when there are no more results to return.
* On failure, responds with `SdkError<GetConnectorsError>`
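As a rough sketch of the pagination support mentioned above (not an example from the crate docs; the paginator stream API and the Option-returning accessors are assumptions based on this SDK generation's conventions):
```
use aws_sdk_sms as sms;

async fn list_all_connectors(client: &sms::Client) -> Result<(), sms::Error> {
    // into_paginator() follows NextToken automatically, one page per next() call.
    let mut pages = client
        .get_connectors()
        .max_results(50)
        .into_paginator()
        .send();

    while let Some(page) = pages.next().await {
        let page = page?;
        for connector in page.connector_list().unwrap_or_default() {
            println!("connector: {:?}", connector.connector_id());
        }
    }
    Ok(())
}
```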
### impl Client
#### pub fn get_replication_jobs(&self) -> GetReplicationJobsFluentBuilder
Constructs a fluent builder for the `GetReplicationJobs` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `replication_job_id(impl Into<String>)` / `set_replication_job_id(Option<String>)`: The ID of the replication job.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token for the next set of results.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. The default value is 50. To retrieve the remaining results, make another call with the returned `NextToken` value.
* On success, responds with `GetReplicationJobsOutput` with field(s):
+ `replication_job_list(Option<Vec<ReplicationJob>>)`: Information about the replication jobs.
+ `next_token(Option<String>)`: The token required to retrieve the next set of results. This value is null when there are no more results to return.
* On failure, responds with `SdkError<GetReplicationJobsError>`
### impl Client
#### pub fn get_replication_runs(&self) -> GetReplicationRunsFluentBuilder
Constructs a fluent builder for the `GetReplicationRuns` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `replication_job_id(impl Into<String>)` / `set_replication_job_id(Option<String>)`: The ID of the replication job.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token for the next set of results.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. The default value is 50. To retrieve the remaining results, make another call with the returned `NextToken` value.
* On success, responds with `GetReplicationRunsOutput` with field(s):
+ `replication_job(Option<ReplicationJob>)`: Information about the replication job.
+ `replication_run_list(Option<Vec<ReplicationRun>>)`: Information about the replication runs.
+ `next_token(Option<String>)`: The token required to retrieve the next set of results. This value is null when there are no more results to return.
* On failure, responds with `SdkError<GetReplicationRunsError>`
### impl Client
#### pub fn get_servers(&self) -> GetServersFluentBuilder
Constructs a fluent builder for the `GetServers` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token for the next set of results.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. The default value is 50. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `vm_server_address_list(VmServerAddress)` / `set_vm_server_address_list(Option<Vec<VmServerAddress>>)`: The server addresses.
* On success, responds with `GetServersOutput` with field(s):
+ `last_modified_on(Option<DateTime>)`: The time when the server was last modified.
+ `server_catalog_status(Option<ServerCatalogStatus>)`: The status of the server catalog.
+ `server_list(Option<Vec<Server>>)`: Information about the servers.
+ `next_token(Option<String>)`: The token required to retrieve the next set of results. This value is null when there are no more results to return.
* On failure, responds with `SdkError<GetServersError>`
### impl Client
#### pub fn import_app_catalog(&self) -> ImportAppCatalogFluentBuilder
Constructs a fluent builder for the `ImportAppCatalog` operation.
* The fluent builder is configurable:
+ `role_name(impl Into<String>)` / `set_role_name(Option<String>)`: The name of the service role. If you omit this parameter, we create a service-linked role for Migration Hub in your account. Otherwise, the role that you provide must have the policy and trust policy described in the *Migration Hub User Guide*.
* On success, responds with `ImportAppCatalogOutput`
* On failure, responds with `SdkError<ImportAppCatalogError>`
### impl Client
#### pub fn import_server_catalog(&self) -> ImportServerCatalogFluentBuilder
Constructs a fluent builder for the `ImportServerCatalog` operation.
* The fluent builder takes no input, just `send` it.
* On success, responds with `ImportServerCatalogOutput`
* On failure, responds with `SdkError<ImportServerCatalogError>`
### impl Client
#### pub fn launch_app(&self) -> LaunchAppFluentBuilder
Constructs a fluent builder for the `LaunchApp` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `LaunchAppOutput`
* On failure, responds with `SdkError<LaunchAppError>`
### impl Client
#### pub fn list_apps(&self) -> ListAppsFluentBuilder
Constructs a fluent builder for the `ListApps` operation.
* The fluent builder is configurable:
+ `app_ids(impl Into<String>)` / `set_app_ids(Option<Vec<String>>)`: The unique application IDs.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token for the next set of results.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. The default value is 100. To retrieve the remaining results, make another call with the returned `NextToken` value.
* On success, responds with `ListAppsOutput` with field(s):
+ `apps(Option<Vec<AppSummary>>)`: The application summaries.
+ `next_token(Option<String>)`: The token required to retrieve the next set of results. This value is null when there are no more results to return.
* On failure, responds with `SdkError<ListAppsError>`
### impl Client
#### pub fn notify_app_validation_output(
&self
) -> NotifyAppValidationOutputFluentBuilder
Constructs a fluent builder for the `NotifyAppValidationOutput` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
+ `notification_context(NotificationContext)` / `set_notification_context(Option<NotificationContext>)`: The notification information.
* On success, responds with `NotifyAppValidationOutputOutput`
* On failure, responds with `SdkError<NotifyAppValidationOutputError>`
### impl Client
#### pub fn put_app_launch_configuration(
&self
) -> PutAppLaunchConfigurationFluentBuilder
Constructs a fluent builder for the `PutAppLaunchConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
+ `role_name(impl Into<String>)` / `set_role_name(Option<String>)`: The name of service role in the customer’s account that CloudFormation uses to launch the application.
Struct aws_sdk_sms::Client
===
```
pub struct Client { /* private fields */ }
```
Client for AWS Server Migration Service
Client for invoking operations on AWS Server Migration Service. Each operation on AWS Server Migration Service is a method on this struct. `.send()` MUST be invoked on the generated operations to dispatch the request to the service.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_sms::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific settings that can be set on the `Config` that are absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_sms::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `CreateApp` operation has a `Client::create_app` function, which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.create_app()
.name("example")
.send()
.await;
```
The underlying HTTP requests that get made by this can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
Implementations
---
### impl Client
#### pub fn create_app(&self) -> CreateAppFluentBuilder
Constructs a fluent builder for the `CreateApp` operation.
* The fluent builder is configurable:
+ `name(impl Into<String>)` / `set_name(Option<String>)`: The name of the new application.
+ `description(impl Into<String>)` / `set_description(Option<String>)`: The description of the new application
+ `role_name(impl Into<String>)` / `set_role_name(Option<String>)`: The name of the service role in the customer’s account to be used by Server Migration Service.
+ `client_token(impl Into<String>)` / `set_client_token(Option<String>)`: A unique, case-sensitive identifier that you provide to ensure the idempotency of application creation.
+ `server_groups(ServerGroup)` / `set_server_groups(Option<Vec<ServerGroup>>)`: The server groups to include in the application.
+ `tags(Tag)` / `set_tags(Option<Vec<Tag>>)`: The tags to be associated with the application.
* On success, responds with `CreateAppOutput` with field(s):
+ `app_summary(Option<AppSummary>)`: A summary description of the application.
+ `server_groups(Option<Vec<ServerGroup>>)`: The server groups included in the application.
+ `tags(Option<Vec<Tag>>)`: The tags associated with the application.
* On failure, responds with `SdkError<CreateAppError>`
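A minimal sketch of creating an application and reading back the returned summary might look like the following; the helper function, name, and description are illustrative, not part of the generated API:
```
async fn create_example_app(client: &aws_sdk_sms::Client) -> Result<(), aws_sdk_sms::Error> {
    // Build and send the CreateApp request; all field values are placeholders.
    let output = client
        .create_app()
        .name("my-app")
        .description("example application")
        .send()
        .await?;
    // Each output field has an Option-returning accessor.
    if let Some(summary) = output.app_summary() {
        println!("created application: {:?}", summary.app_id());
    }
    Ok(())
}
```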
### impl Client
#### pub fn create_replication_job(&self) -> CreateReplicationJobFluentBuilder
Constructs a fluent builder for the `CreateReplicationJob` operation.
* The fluent builder is configurable:
+ `server_id(impl Into<String>)` / `set_server_id(Option<String>)`: The ID of the server.
+ `seed_replication_time(DateTime)` / `set_seed_replication_time(Option<DateTime>)`: The seed replication time.
+ `frequency(i32)` / `set_frequency(Option<i32>)`: The time between consecutive replication runs, in hours.
+ `run_once(bool)` / `set_run_once(Option<bool>)`: Indicates whether to run the replication job one time.
+ `license_type(LicenseType)` / `set_license_type(Option<LicenseType>)`: The license type to be used for the AMI created by a successful replication run.
+ `role_name(impl Into<String>)` / `set_role_name(Option<String>)`: The name of the IAM role to be used by the Server Migration Service.
+ `description(impl Into<String>)` / `set_description(Option<String>)`: The description of the replication job.
+ `number_of_recent_amis_to_keep(i32)` / `set_number_of_recent_amis_to_keep(Option<i32>)`: The maximum number of SMS-created AMIs to retain. The oldest is deleted after the maximum number is reached and a new AMI is created.
+ `encrypted(bool)` / `set_encrypted(Option<bool>)`: Indicates whether the replication job produces encrypted AMIs.
+ `kms_key_id(impl Into<String>)` / `set_kms_key_id(Option<String>)`: The ID of the KMS key for replication jobs that produce encrypted AMIs. This value can be any of the following:
- KMS key ID
- KMS key alias
- ARN referring to the KMS key ID
- ARN referring to the KMS key alias. If encrypted is *true* but a KMS key ID is not specified, the customer’s default KMS key for Amazon EBS is used.
* On success, responds with `CreateReplicationJobOutput` with field(s):
+ `replication_job_id(Option<String>)`: The unique identifier of the replication job.
* On failure, responds with `SdkError<CreateReplicationJobError>`
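As a hedged sketch, a replication job for a catalogued server could be created as follows; the seed time, frequency, and retention count are placeholders:
```
use aws_sdk_sms::primitives::DateTime;

async fn create_job(client: &aws_sdk_sms::Client, server_id: &str) -> Result<(), aws_sdk_sms::Error> {
    let output = client
        .create_replication_job()
        .server_id(server_id)
        .seed_replication_time(DateTime::from_secs(0)) // placeholder seed time
        .frequency(12)                                  // replicate every 12 hours
        .number_of_recent_amis_to_keep(3)
        .send()
        .await?;
    println!("replication job: {:?}", output.replication_job_id());
    Ok(())
}
```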
### impl Client
#### pub fn delete_app(&self) -> DeleteAppFluentBuilder
Constructs a fluent builder for the `DeleteApp` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
+ `force_stop_app_replication(bool)` / `set_force_stop_app_replication(Option<bool>)`: Indicates whether to stop all replication jobs corresponding to the servers in the application while deleting the application.
+ `force_terminate_app(bool)` / `set_force_terminate_app(Option<bool>)`: Indicates whether to terminate the stack corresponding to the application while deleting the application.
* On success, responds with `DeleteAppOutput`
* On failure, responds with `SdkError<DeleteAppError>`
### impl Client
#### pub fn delete_app_launch_configuration(
&self
) -> DeleteAppLaunchConfigurationFluentBuilder
Constructs a fluent builder for the `DeleteAppLaunchConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `DeleteAppLaunchConfigurationOutput`
* On failure, responds with `SdkError<DeleteAppLaunchConfigurationError>`
### impl Client
#### pub fn delete_app_replication_configuration(
&self
) -> DeleteAppReplicationConfigurationFluentBuilder
Constructs a fluent builder for the `DeleteAppReplicationConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `DeleteAppReplicationConfigurationOutput`
* On failure, responds with `SdkError<DeleteAppReplicationConfigurationError>`
### impl Client
#### pub fn delete_app_validation_configuration(
&self
) -> DeleteAppValidationConfigurationFluentBuilder
Constructs a fluent builder for the `DeleteAppValidationConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `DeleteAppValidationConfigurationOutput`
* On failure, responds with `SdkError<DeleteAppValidationConfigurationError>`
### impl Client
#### pub fn delete_replication_job(&self) -> DeleteReplicationJobFluentBuilder
Constructs a fluent builder for the `DeleteReplicationJob` operation.
* The fluent builder is configurable:
+ `replication_job_id(impl Into<String>)` / `set_replication_job_id(Option<String>)`: The ID of the replication job.
* On success, responds with `DeleteReplicationJobOutput`
* On failure, responds with `SdkError<DeleteReplicationJobError>`
### impl Client
#### pub fn delete_server_catalog(&self) -> DeleteServerCatalogFluentBuilder
Constructs a fluent builder for the `DeleteServerCatalog` operation.
* The fluent builder takes no input, just `send` it.
* On success, responds with `DeleteServerCatalogOutput`
* On failure, responds with `SdkError<DeleteServerCatalogError>`
### impl Client
#### pub fn disassociate_connector(&self) -> DisassociateConnectorFluentBuilder
Constructs a fluent builder for the `DisassociateConnector` operation.
* The fluent builder is configurable:
+ `connector_id(impl Into<String>)` / `set_connector_id(Option<String>)`: The ID of the connector.
* On success, responds with `DisassociateConnectorOutput`
* On failure, responds with `SdkError<DisassociateConnectorError>`
### impl Client
#### pub fn generate_change_set(&self) -> GenerateChangeSetFluentBuilder
Constructs a fluent builder for the `GenerateChangeSet` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application associated with the change set.
+ `changeset_format(OutputFormat)` / `set_changeset_format(Option<OutputFormat>)`: The format for the change set.
* On success, responds with `GenerateChangeSetOutput` with field(s):
+ `s3_location(Option<S3Location>)`: The location of the Amazon S3 object.
* On failure, responds with `SdkError<GenerateChangeSetError>`
### impl Client
#### pub fn generate_template(&self) -> GenerateTemplateFluentBuilder
Constructs a fluent builder for the `GenerateTemplate` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application associated with the CloudFormation template.
+ `template_format(OutputFormat)` / `set_template_format(Option<OutputFormat>)`: The format for generating the CloudFormation template.
* On success, responds with `GenerateTemplateOutput` with field(s):
+ `s3_location(Option<S3Location>)`: The location of the Amazon S3 object.
* On failure, responds with `SdkError<GenerateTemplateError>`
### impl Client
#### pub fn get_app(&self) -> GetAppFluentBuilder
Constructs a fluent builder for the `GetApp` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `GetAppOutput` with field(s):
+ `app_summary(Option<AppSummary>)`: Information about the application.
+ `server_groups(Option<Vec<ServerGroup>>)`: The server groups that belong to the application.
+ `tags(Option<Vec<Tag>>)`: The tags associated with the application.
* On failure, responds with `SdkError<GetAppError>`
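A short sketch of fetching an application and walking its server groups; the helper function and printed fields are illustrative, and the accessors are assumed to return `Option`s as in this SDK version:
```
async fn show_app(client: &aws_sdk_sms::Client, app_id: &str) -> Result<(), aws_sdk_sms::Error> {
    let output = client.get_app().app_id(app_id).send().await?;
    if let Some(summary) = output.app_summary() {
        println!("application: {:?}", summary.name());
    }
    // server_groups() returns an optional slice; treat a missing list as empty.
    for group in output.server_groups().unwrap_or_default() {
        println!("server group: {:?}", group.name());
    }
    Ok(())
}
```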
### impl Client
#### pub fn get_app_launch_configuration(
&self
) -> GetAppLaunchConfigurationFluentBuilder
Constructs a fluent builder for the `GetAppLaunchConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `GetAppLaunchConfigurationOutput` with field(s):
+ `app_id(Option<String>)`: The ID of the application.
+ `role_name(Option<String>)`: The name of the service role in the customer’s account that CloudFormation uses to launch the application.
+ `auto_launch(Option<bool>)`: Indicates whether the application is configured to launch automatically after replication is complete.
+ `server_group_launch_configurations(Option<Vec<ServerGroupLaunchConfiguration>>)`: The launch configurations for server groups in this application.
* On failure, responds with `SdkError<GetAppLaunchConfigurationError>`
### impl Client
#### pub fn get_app_replication_configuration(
&self
) -> GetAppReplicationConfigurationFluentBuilder
Constructs a fluent builder for the `GetAppReplicationConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `GetAppReplicationConfigurationOutput` with field(s):
+ `server_group_replication_configurations(Option<Vec<ServerGroupReplicationConfiguration>>)`: The replication configurations associated with server groups in this application.
* On failure, responds with `SdkError<GetAppReplicationConfigurationError>`
### impl Client
#### pub fn get_app_validation_configuration(
&self
) -> GetAppValidationConfigurationFluentBuilder
Constructs a fluent builder for the `GetAppValidationConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `GetAppValidationConfigurationOutput` with field(s):
+ `app_validation_configurations(Option<Vec<AppValidationConfiguration>>)`: The configuration for application validation.
+ `server_group_validation_configurations(Option<Vec<ServerGroupValidationConfiguration>>)`: The configuration for instance validation.
* On failure, responds with `SdkError<GetAppValidationConfigurationError>`
### impl Client
#### pub fn get_app_validation_output(&self) -> GetAppValidationOutputFluentBuilder
Constructs a fluent builder for the `GetAppValidationOutput` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `GetAppValidationOutputOutput` with field(s):
+ `validation_output_list(Option<Vec<ValidationOutput>>)`: The validation output.
* On failure, responds with `SdkError<GetAppValidationOutputError>`
### impl Client
#### pub fn get_connectors(&self) -> GetConnectorsFluentBuilder
Constructs a fluent builder for the `GetConnectors` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token for the next set of results.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. The default value is 50. To retrieve the remaining results, make another call with the returned `NextToken` value.
* On success, responds with `GetConnectorsOutput` with field(s):
+ `connector_list(Option<Vec<Connector>>)`: Information about the registered connectors.
+ `next_token(Option<String>)`: The token required to retrieve the next set of results. This value is null when there are no more results to return.
* On failure, responds with `SdkError<GetConnectorsError>`
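A sketch of draining the paginator, assuming the pagination stream returned by `send()` exposes an async `next()` method as in recent SDK versions:
```
async fn list_connectors(client: &aws_sdk_sms::Client) -> Result<(), aws_sdk_sms::Error> {
    // into_paginator() follows NextToken automatically; send() yields one page at a time.
    let mut pages = client
        .get_connectors()
        .max_results(50)
        .into_paginator()
        .send();
    while let Some(page) = pages.next().await {
        let page = page?;
        for connector in page.connector_list().unwrap_or_default() {
            println!("connector: {:?}", connector.connector_id());
        }
    }
    Ok(())
}
```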
### impl Client
#### pub fn get_replication_jobs(&self) -> GetReplicationJobsFluentBuilder
Constructs a fluent builder for the `GetReplicationJobs` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `replication_job_id(impl Into<String>)` / `set_replication_job_id(Option<String>)`: The ID of the replication job.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token for the next set of results.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. The default value is 50. To retrieve the remaining results, make another call with the returned `NextToken` value.
* On success, responds with `GetReplicationJobsOutput` with field(s):
+ `replication_job_list(Option<Vec<ReplicationJob>>)`: Information about the replication jobs.
+ `next_token(Option<String>)`: The token required to retrieve the next set of results. This value is null when there are no more results to return.
* On failure, responds with `SdkError<GetReplicationJobsError>`
### impl Client
#### pub fn get_replication_runs(&self) -> GetReplicationRunsFluentBuilder
Constructs a fluent builder for the `GetReplicationRuns` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `replication_job_id(impl Into<String>)` / `set_replication_job_id(Option<String>)`: The ID of the replication job.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token for the next set of results.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. The default value is 50. To retrieve the remaining results, make another call with the returned `NextToken` value.
* On success, responds with `GetReplicationRunsOutput` with field(s):
+ `replication_job(Option<ReplicationJob>)`: Information about the replication job.
+ `replication_run_list(Option<Vec<ReplicationRun>>)`: Information about the replication runs.
+ `next_token(Option<String>)`: The token required to retrieve the next set of results. This value is null when there are no more results to return.
* On failure, responds with `SdkError<GetReplicationRunsError>`
### impl Client
#### pub fn get_servers(&self) -> GetServersFluentBuilder
Constructs a fluent builder for the `GetServers` operation.
This operation supports pagination; See `into_paginator()`.
* The fluent builder is configurable:
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token for the next set of results.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. The default value is 50. To retrieve the remaining results, make another call with the returned `NextToken` value.
+ `vm_server_address_list(VmServerAddress)` / `set_vm_server_address_list(Option<Vec<VmServerAddress>>)`: The server addresses.
* On success, responds with `GetServersOutput` with field(s):
+ `last_modified_on(Option<DateTime>)`: The time when the server was last modified.
+ `server_catalog_status(Option<ServerCatalogStatus>)`: The status of the server catalog.
+ `server_list(Option<Vec<Server>>)`: Information about the servers.
+ `next_token(Option<String>)`: The token required to retrieve the next set of results. This value is null when there are no more results to return.
* On failure, responds with `SdkError<GetServersError>`
### impl Client
#### pub fn import_app_catalog(&self) -> ImportAppCatalogFluentBuilder
Constructs a fluent builder for the `ImportAppCatalog` operation.
* The fluent builder is configurable:
+ `role_name(impl Into<String>)` / `set_role_name(Option<String>)`: The name of the service role. If you omit this parameter, we create a service-linked role for Migration Hub in your account. Otherwise, the role that you provide must have the policy and trust policy described in the *Migration Hub User Guide*.
* On success, responds with `ImportAppCatalogOutput`
* On failure, responds with `SdkError<ImportAppCatalogError>`
### impl Client
#### pub fn import_server_catalog(&self) -> ImportServerCatalogFluentBuilder
Constructs a fluent builder for the `ImportServerCatalog` operation.
* The fluent builder takes no input, just `send` it.
* On success, responds with `ImportServerCatalogOutput`
* On failure, responds with `SdkError<ImportServerCatalogError>`
### impl Client
#### pub fn launch_app(&self) -> LaunchAppFluentBuilder
Constructs a fluent builder for the `LaunchApp` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `LaunchAppOutput`
* On failure, responds with `SdkError<LaunchAppError>`
### impl Client
#### pub fn list_apps(&self) -> ListAppsFluentBuilder
Constructs a fluent builder for the `ListApps` operation.
* The fluent builder is configurable:
+ `app_ids(impl Into<String>)` / `set_app_ids(Option<Vec<String>>)`: The unique application IDs.
+ `next_token(impl Into<String>)` / `set_next_token(Option<String>)`: The token for the next set of results.
+ `max_results(i32)` / `set_max_results(Option<i32>)`: The maximum number of results to return in a single call. The default value is 100. To retrieve the remaining results, make another call with the returned `NextToken` value.
* On success, responds with `ListAppsOutput` with field(s):
+ `apps(Option<Vec<AppSummary>>)`: The application summaries.
+ `next_token(Option<String>)`: The token required to retrieve the next set of results. This value is null when there are no more results to return.
* On failure, responds with `SdkError<ListAppsError>`
### impl Client
#### pub fn notify_app_validation_output(
&self
) -> NotifyAppValidationOutputFluentBuilder
Constructs a fluent builder for the `NotifyAppValidationOutput` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
+ `notification_context(NotificationContext)` / `set_notification_context(Option<NotificationContext>)`: The notification information.
* On success, responds with `NotifyAppValidationOutputOutput`
* On failure, responds with `SdkError<NotifyAppValidationOutputError>`
### impl Client
#### pub fn put_app_launch_configuration(
&self
) -> PutAppLaunchConfigurationFluentBuilder
Constructs a fluent builder for the `PutAppLaunchConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
+ `role_name(impl Into<String>)` / `set_role_name(Option<String>)`: The name of the service role in the customer’s account that CloudFormation uses to launch the application.
+ `auto_launch(bool)` / `set_auto_launch(Option<bool>)`: Indicates whether the application is configured to launch automatically after replication is complete.
+ `server_group_launch_configurations(ServerGroupLaunchConfiguration)` / `set_server_group_launch_configurations(Option<Vec<ServerGroupLaunchConfiguration>>)`: Information about the launch configurations for server groups in the application.
* On success, responds with `PutAppLaunchConfigurationOutput`
* On failure, responds with `SdkError<PutAppLaunchConfigurationError>`
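For example, a launch configuration could be written back with a sketch like the one below; the role name is a placeholder and the server group configurations are omitted:
```
async fn configure_launch(client: &aws_sdk_sms::Client, app_id: &str) -> Result<(), aws_sdk_sms::Error> {
    client
        .put_app_launch_configuration()
        .app_id(app_id)
        .role_name("sms-launch-role") // placeholder CloudFormation launch role
        .auto_launch(true)            // launch automatically once replication completes
        .send()
        .await?;
    Ok(())
}
```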
### impl Client
#### pub fn put_app_replication_configuration(
&self
) -> PutAppReplicationConfigurationFluentBuilder
Constructs a fluent builder for the `PutAppReplicationConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
+ `server_group_replication_configurations(ServerGroupReplicationConfiguration)` / `set_server_group_replication_configurations(Option<Vec<ServerGroupReplicationConfiguration>>)`: Information about the replication configurations for server groups in the application.
* On success, responds with `PutAppReplicationConfigurationOutput`
* On failure, responds with `SdkError<PutAppReplicationConfigurationError>`
### impl Client
#### pub fn put_app_validation_configuration(
&self
) -> PutAppValidationConfigurationFluentBuilder
Constructs a fluent builder for the `PutAppValidationConfiguration` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
+ `app_validation_configurations(AppValidationConfiguration)` / `set_app_validation_configurations(Option<Vec<AppValidationConfiguration>>)`: The configuration for application validation.
+ `server_group_validation_configurations(ServerGroupValidationConfiguration)` / `set_server_group_validation_configurations(Option<Vec<ServerGroupValidationConfiguration>>)`: The configuration for instance validation.
* On success, responds with `PutAppValidationConfigurationOutput`
* On failure, responds with `SdkError<PutAppValidationConfigurationError>`
### impl Client
#### pub fn start_app_replication(&self) -> StartAppReplicationFluentBuilder
Constructs a fluent builder for the `StartAppReplication` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `StartAppReplicationOutput`
* On failure, responds with `SdkError<StartAppReplicationError>`
### impl Client
#### pub fn start_on_demand_app_replication(
&self
) -> StartOnDemandAppReplicationFluentBuilder
Constructs a fluent builder for the `StartOnDemandAppReplication` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
+ `description(impl Into<String>)` / `set_description(Option<String>)`: The description of the replication run.
* On success, responds with `StartOnDemandAppReplicationOutput`
* On failure, responds with `SdkError<StartOnDemandAppReplicationError>`
### impl Client
#### pub fn start_on_demand_replication_run(
&self
) -> StartOnDemandReplicationRunFluentBuilder
Constructs a fluent builder for the `StartOnDemandReplicationRun` operation.
* The fluent builder is configurable:
+ `replication_job_id(impl Into<String>)` / `set_replication_job_id(Option<String>)`: The ID of the replication job.
+ `description(impl Into<String>)` / `set_description(Option<String>)`: The description of the replication run.
* On success, responds with `StartOnDemandReplicationRunOutput` with field(s):
+ `replication_run_id(Option<String>)`: The ID of the replication run.
* On failure, responds with `SdkError<StartOnDemandReplicationRunError>`
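A small sketch of kicking off an on-demand run and reading the returned run ID (the description text is a placeholder):
```
async fn run_now(client: &aws_sdk_sms::Client, job_id: &str) -> Result<(), aws_sdk_sms::Error> {
    let output = client
        .start_on_demand_replication_run()
        .replication_job_id(job_id)
        .description("ad-hoc replication run") // placeholder description
        .send()
        .await?;
    println!("started run: {:?}", output.replication_run_id());
    Ok(())
}
```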
### impl Client
#### pub fn stop_app_replication(&self) -> StopAppReplicationFluentBuilder
Constructs a fluent builder for the `StopAppReplication` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `StopAppReplicationOutput`
* On failure, responds with `SdkError<StopAppReplicationError>`
### impl Client
#### pub fn terminate_app(&self) -> TerminateAppFluentBuilder
Constructs a fluent builder for the `TerminateApp` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
* On success, responds with `TerminateAppOutput`
* On failure, responds with `SdkError<TerminateAppError>`
### impl Client
#### pub fn update_app(&self) -> UpdateAppFluentBuilder
Constructs a fluent builder for the `UpdateApp` operation.
* The fluent builder is configurable:
+ `app_id(impl Into<String>)` / `set_app_id(Option<String>)`: The ID of the application.
+ `name(impl Into<String>)` / `set_name(Option<String>)`: The new name of the application.
+ `description(impl Into<String>)` / `set_description(Option<String>)`: The new description of the application.
+ `role_name(impl Into<String>)` / `set_role_name(Option<String>)`: The name of the service role in the customer’s account used by Server Migration Service.
+ `server_groups(ServerGroup)` / `set_server_groups(Option<Vec<ServerGroup>>)`: The server groups in the application to update.
+ `tags(Tag)` / `set_tags(Option<Vec<Tag>>)`: The tags to associate with the application.
* On success, responds with `UpdateAppOutput` with field(s):
+ `app_summary(Option<AppSummary>)`: A summary description of the application.
+ `server_groups(Option<Vec<ServerGroup>>)`: The updated server groups in the application.
+ `tags(Option<Vec<Tag>>)`: The tags associated with the application.
* On failure, responds with `SdkError<UpdateAppError>`
### impl Client
#### pub fn update_replication_job(&self) -> UpdateReplicationJobFluentBuilder
Constructs a fluent builder for the `UpdateReplicationJob` operation.
* The fluent builder is configurable:
+ `replication_job_id(impl Into<String>)` / `set_replication_job_id(Option<String>)`: The ID of the replication job.
+ `frequency(i32)` / `set_frequency(Option<i32>)`: The time between consecutive replication runs, in hours.
+ `next_replication_run_start_time(DateTime)` / `set_next_replication_run_start_time(Option<DateTime>)`: The start time of the next replication run.
+ `license_type(LicenseType)` / `set_license_type(Option<LicenseType>)`: The license type to be used for the AMI created by a successful replication run.
+ `role_name(impl Into<String>)` / `set_role_name(Option<String>)`: The name of the IAM role to be used by Server Migration Service.
+ `description(impl Into<String>)` / `set_description(Option<String>)`: The description of the replication job.
+ `number_of_recent_amis_to_keep(i32)` / `set_number_of_recent_amis_to_keep(Option<i32>)`: The maximum number of SMS-created AMIs to retain. The oldest is deleted after the maximum number is reached and a new AMI is created.
+ `encrypted(bool)` / `set_encrypted(Option<bool>)`: When true, the replication job produces encrypted AMIs. For more information, see `KmsKeyId`.
+ `kms_key_id(impl Into<String>)` / `set_kms_key_id(Option<String>)`: The ID of the KMS key for replication jobs that produce encrypted AMIs. This value can be any of the following:
- KMS key ID
- KMS key alias
- ARN referring to the KMS key ID
- ARN referring to the KMS key alias. If encrypted is enabled but a KMS key ID is not specified, the customer’s default KMS key for Amazon EBS is used.
* On success, responds with `UpdateReplicationJobOutput`
* On failure, responds with `SdkError<UpdateReplicationJobError>`
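A hedged sketch of updating a job's schedule, AMI retention, and encryption settings; the values are illustrative, and omitting `kms_key_id` falls back to the default Amazon EBS KMS key:
```
async fn update_job(client: &aws_sdk_sms::Client, job_id: &str) -> Result<(), aws_sdk_sms::Error> {
    client
        .update_replication_job()
        .replication_job_id(job_id)
        .frequency(24)                    // run once a day
        .number_of_recent_amis_to_keep(5) // retain the five most recent AMIs
        .encrypted(true)                  // no kms_key_id set, so the default EBS key is used
        .send()
        .await?;
    Ok(())
}
```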
### impl Client
#### pub fn from_conf(conf: Config) -> Self
Creates a new client from the service `Config`.
##### Panics
This method will panic if the `conf` has retry or timeouts enabled without a `sleep_impl`.
If you experience this panic, it can be fixed by setting the `sleep_impl`, or by disabling retries and timeouts.
#### pub fn config(&self) -> &Config
Returns the client’s configuration.
### impl Client
#### pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
##### Panics
* This method will panic if the `sdk_config` is missing an async sleep implementation. If you experience this panic, set the `sleep_impl` on the Config passed into this function to fix it.
* This method will panic if the `sdk_config` is missing an HTTP connector. If you experience this panic, set the
`http_connector` on the Config passed into this function to fix it.
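Both constructors are shown below as a sketch; the shared config is resolved with `aws-config` as in the examples above:
```
// From a shared SdkConfig (panics if an async sleep implementation or HTTP connector is missing):
let sdk_config = aws_config::load_from_env().await;
let client = aws_sdk_sms::Client::new(&sdk_config);

// Or from a service-specific Config derived from the same shared config:
let config = aws_sdk_sms::config::Builder::from(&sdk_config).build();
let client = aws_sdk_sms::Client::from_conf(config);
```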
Trait Implementations
---
### impl Clone for Client
#### fn clone(&self) -> Client
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Client
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Client
### impl Send for Client
### impl Sync for Client
### impl Unpin for Client
### impl !UnwindSafe for Client
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Type Alias aws_sdk_sms::error::SdkError
===
```
pub type SdkError<E, R = HttpResponse> = SdkError<E, R>;
```
Error type returned by the client.
Aliased Type
---
```
enum SdkError<E, R = HttpResponse> {
ConstructionFailure(ConstructionFailure),
TimeoutError(TimeoutError),
DispatchFailure(DispatchFailure),
ResponseError(ResponseError<R>),
ServiceError(ServiceError<E, R>),
}
```
Variants
---
### ConstructionFailure(ConstructionFailure)
The request failed during construction. It was not dispatched over the network.
### TimeoutError(TimeoutError)
The request failed due to a timeout. The request MAY have been sent and received.
### DispatchFailure(DispatchFailure)
The request failed during dispatch. An HTTP response was not received. The request MAY have been sent.
### ResponseError(ResponseError<R>)
A response was received but it was not parseable according to the protocol (for example, the server hung up without sending a complete response)
### ServiceError(ServiceError<E, R>)
An error response was received from the service
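A sketch of matching on these variants for a single operation; `LaunchApp` is used here only as an example, and because the enum is non-exhaustive a catch-all arm is kept:
```
use aws_sdk_sms::error::SdkError;

async fn launch(client: &aws_sdk_sms::Client, app_id: &str) {
    match client.launch_app().app_id(app_id).send().await {
        Ok(_) => println!("launch requested"),
        // The service returned a modeled error; the inner error is available via err().
        Err(SdkError::ServiceError(context)) => eprintln!("service error: {:?}", context.err()),
        // Construction, dispatch, timeout, or response-parsing failures.
        Err(other) => eprintln!("request failed: {other}"),
    }
}
```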
Trait Implementations
---
### impl<E, R> ProvideErrorMetadata for SdkError<E, R> where E: ProvideErrorMetadata
#### fn meta(&self) -> &ErrorMetadata
Returns error metadata, which includes the error code, message, request ID, and potentially additional information.
#### fn code(&self) -> Option<&str>
Returns the error code if it’s available.
#### fn message(&self) -> Option<&str>
Returns the error message, if there is one.
### impl<E, R> RequestId for SdkError<E, R> where R: HttpHeaders
#### fn request_id(&self) -> Option<&str>
Returns the request ID, or `None` if the service could not be reached.
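A sketch of pulling the error code, message, and request ID out of an `SdkError`, assuming the `ProvideErrorMetadata` and `RequestId` traits are re-exported at the paths shown; the `GetAppError` type parameter is just an example:
```
use aws_sdk_sms::error::{ProvideErrorMetadata, SdkError};
use aws_sdk_sms::operation::get_app::GetAppError;
use aws_sdk_sms::operation::RequestId;

fn report(err: &SdkError<GetAppError>) {
    // Both traits are implemented for SdkError when the inner error carries metadata.
    eprintln!(
        "code={:?} message={:?} request_id={:?}",
        err.code(),
        err.message(),
        err.request_id()
    );
}
```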
Module aws_sdk_sms::types
===
Data structures used by operation inputs/outputs.
Modules
---
* `builders`: Builders
* `error`: Error types that AWS Server Migration Service can respond with.
Structs
---
* `AppSummary`: Information about the application.
* `AppValidationConfiguration`: Configuration for validating an application.
* `AppValidationOutput`: Output from validating an application.
* `Connector`: Represents a connector.
* `LaunchDetails`: Details about the latest launch of an application.
* `NotificationContext`: Contains the status of validating an application.
* `ReplicationJob`: Represents a replication job.
* `ReplicationRun`: Represents a replication run.
* `ReplicationRunStageDetails`: Details of the current stage of a replication run.
* `S3Location`: Location of an Amazon S3 object.
* `Server`: Represents a server.
* `ServerGroup`: Logical grouping of servers.
* `ServerGroupLaunchConfiguration`: Launch configuration for a server group.
* `ServerGroupReplicationConfiguration`: Replication configuration for a server group.
* `ServerGroupValidationConfiguration`: Configuration for validating an instance.
* `ServerLaunchConfiguration`: Launch configuration for a server.
* `ServerReplicationConfiguration`: Replication configuration of a server.
* `ServerReplicationParameters`: The replication parameters for replicating a server.
* `ServerValidationConfiguration`: Configuration for validating an instance.
* `ServerValidationOutput`: Contains output from validating an instance.
* `Source`: Contains the location of a validation script.
* `SsmOutput`: Contains the location of validation output.
* `SsmValidationParameters`: Contains validation parameters.
* `Tag`: Key/value pair that can be assigned to an application.
* `UserData`: A script that runs on first launch of an Amazon EC2 instance. Used for configuring the server during launch.
* `UserDataValidationParameters`: Contains validation parameters.
* `ValidationOutput`: Contains validation output.
* `VmServer`: Represents a VM server.
* `VmServerAddress`: Represents a VM server location.
Enums
---
* `AppLaunchConfigurationStatus`: When writing a match expression against `AppLaunchConfigurationStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `AppLaunchStatus`: When writing a match expression against `AppLaunchStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `AppReplicationConfigurationStatus`: When writing a match expression against `AppReplicationConfigurationStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `AppReplicationStatus`: When writing a match expression against `AppReplicationStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `AppStatus`: When writing a match expression against `AppStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `AppValidationStrategy`: When writing a match expression against `AppValidationStrategy`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ConnectorCapability`: When writing a match expression against `ConnectorCapability`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ConnectorStatus`: When writing a match expression against `ConnectorStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `LicenseType`: When writing a match expression against `LicenseType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `OutputFormat`: When writing a match expression against `OutputFormat`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ReplicationJobState`: When writing a match expression against `ReplicationJobState`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ReplicationRunState`: When writing a match expression against `ReplicationRunState`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ReplicationRunType`: When writing a match expression against `ReplicationRunType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ScriptType`: When writing a match expression against `ScriptType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ServerCatalogStatus`: When writing a match expression against `ServerCatalogStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ServerType`: When writing a match expression against `ServerType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ServerValidationStrategy`: When writing a match expression against `ServerValidationStrategy`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `ValidationStatus`: When writing a match expression against `ValidationStatus`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
* `VmManagerType`: When writing a match expression against `VmManagerType`, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in a current version of SDK, your code should continue to work when you upgrade SDK to a future version in which the enum does include a variant for that feature.
Module aws_sdk_sms::primitives
===
Primitives such as `Blob` or `DateTime` used by other types.
Structs
---
* `DateTime`: DateTime in time.
* `UnknownVariantValue`: Opaque struct used as inner data for the `Unknown` variant defined in enums in the crate
Enums
---
* `DateTimeFormat`: Formats for representing a `DateTime` in the Smithy protocols.
Struct aws_sdk_sms::Config
===
```
pub struct Config { /* private fields */ }
```
Configuration for an aws_sdk_sms service client.
Service configuration allows for customization of endpoints, region, credentials providers,
and retry configuration. Generally, it is constructed automatically for you from a shared configuration loaded by the `aws-config` crate. For example:
```
// Load a shared config from the environment
let shared_config = aws_config::from_env().load().await;
// The client constructor automatically converts the shared config into the service config
let client = Client::new(&shared_config);
```
The service config can also be constructed manually using its builder.
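A minimal hand-built config might look like the following sketch; only the region is set here, and in practice credentials and other settings would normally come from a shared `SdkConfig`:
```
use aws_sdk_sms::config::{Config, Region};

// Every value here is illustrative; this config has no credentials provider of its own.
let config = Config::builder()
    .region(Region::new("us-east-1"))
    .build();
let client = aws_sdk_sms::Client::from_conf(config);
```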
Implementations
---
### impl Config
#### pub fn builder() -> Builder
Constructs a config builder.
#### pub fn to_builder(&self) -> Builder
Converts this config back into a builder so that it can be tweaked.
#### pub fn http_connector(&self) -> Option<SharedHttpConnector>
Return the `SharedHttpConnector` to use when making requests, if any.
#### pub fn endpoint_resolver(&self) -> SharedEndpointResolver
Returns the endpoint resolver.
#### pub fn retry_config(&self) -> Option<&RetryConfig>
Return a reference to the retry configuration contained in this config, if any.
#### pub fn sleep_impl(&self) -> Option<SharedAsyncSleep>
Return a cloned shared async sleep implementation from this config, if any.
#### pub fn timeout_config(&self) -> Option<&TimeoutConfig>
Return a reference to the timeout configuration contained in this config, if any.
#### pub fn interceptors(&self) -> impl Iterator<Item = SharedInterceptor> + '_
Returns interceptors currently registered by the user.
#### pub fn time_source(&self) -> Option<SharedTimeSource>
Return time source used for this service.
#### pub fn app_name(&self) -> Option<&AppName>
Returns the name of the app that is using the client, if it was provided.
This *optional* name is used to identify the application in the user agent that gets sent along with requests.
#### pub fn invocation_id_generator(&self) -> Option<SharedInvocationIdGenerator>
Returns the invocation ID generator if one was given in config.
The invocation ID generator generates ID values for the `amz-sdk-invocation-id` header. By default, this will be a random UUID. Overriding it may be useful in tests that examine the HTTP request and need to be deterministic.
#### pub fn new(config: &SdkConfig) -> Self
Creates a new service config from a shared `config`.
#### pub fn signing_service(&self) -> &'static str
The signature version 4 service signing name to use in the credential scope when signing requests.
The signing service may be overridden by the `Endpoint`, or by specifying a custom `SigningService` during operation construction.
#### pub fn region(&self) -> Option<&Region>
Returns the AWS region, if it was provided.
#### pub fn credentials_cache(&self) -> Option<SharedCredentialsCache>
Returns the credentials cache.
Trait Implementations
---
### impl Clone for Config
#### fn clone(&self) -> Config
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Config
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl From<&SdkConfig> for Config
#### fn from(sdk_config: &SdkConfig) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Config
### impl Send for Config
### impl Sync for Config
### impl Unpin for Config
### impl !UnwindSafe for Config
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Module aws_sdk_sms::config
===
Configuration for AWS Server Migration Service.
Modules
---
* endpointTypes needed to configure endpoint resolution.
* interceptorsTypes needed to implement `Interceptor`.
* retryRetry configuration.
* timeoutTimeout configuration.
Structs
---
* AppName: App name that can be configured with an AWS SDK client to become part of the user agent string.
* Builder: Builder for creating a `Config`.
* Config: Configuration for a aws_sdk_sms service client.
* ConfigBag: Layered configuration structure.
* Credentials: AWS SDK Credentials.
* Region: The region to send requests to.
* RuntimeComponents: Components that can only be set in runtime plugins that the orchestrator uses directly to call an operation.
* SharedAsyncSleep: Wrapper type for sharable `AsyncSleep`.
* SharedInterceptor: Interceptor wrapper that may be shared.
* Sleep: Future returned by `AsyncSleep`.
Traits
---
* AsyncSleep: Async trait with a `sleep` function.
* Interceptor: An interceptor allows injecting code into the SDK’s request execution pipeline.
Module aws_sdk_sms::operation
===
All operations that this crate can perform.
Modules
---
* create_app: Types for the `CreateApp` operation.
* create_replication_job: Types for the `CreateReplicationJob` operation.
* delete_app: Types for the `DeleteApp` operation.
* delete_app_launch_configuration: Types for the `DeleteAppLaunchConfiguration` operation.
* delete_app_replication_configuration: Types for the `DeleteAppReplicationConfiguration` operation.
* delete_app_validation_configuration: Types for the `DeleteAppValidationConfiguration` operation.
* delete_replication_job: Types for the `DeleteReplicationJob` operation.
* delete_server_catalog: Types for the `DeleteServerCatalog` operation.
* disassociate_connector: Types for the `DisassociateConnector` operation.
* generate_change_set: Types for the `GenerateChangeSet` operation.
* generate_template: Types for the `GenerateTemplate` operation.
* get_app: Types for the `GetApp` operation.
* get_app_launch_configuration: Types for the `GetAppLaunchConfiguration` operation.
* get_app_replication_configuration: Types for the `GetAppReplicationConfiguration` operation.
* get_app_validation_configuration: Types for the `GetAppValidationConfiguration` operation.
* get_app_validation_output: Types for the `GetAppValidationOutput` operation.
* get_connectors: Types for the `GetConnectors` operation.
* get_replication_jobs: Types for the `GetReplicationJobs` operation.
* get_replication_runs: Types for the `GetReplicationRuns` operation.
* get_servers: Types for the `GetServers` operation.
* import_app_catalog: Types for the `ImportAppCatalog` operation.
* import_server_catalog: Types for the `ImportServerCatalog` operation.
* launch_app: Types for the `LaunchApp` operation.
* list_apps: Types for the `ListApps` operation.
* notify_app_validation_output: Types for the `NotifyAppValidationOutput` operation.
* put_app_launch_configuration: Types for the `PutAppLaunchConfiguration` operation.
* put_app_replication_configuration: Types for the `PutAppReplicationConfiguration` operation.
* put_app_validation_configuration: Types for the `PutAppValidationConfiguration` operation.
* start_app_replication: Types for the `StartAppReplication` operation.
* start_on_demand_app_replication: Types for the `StartOnDemandAppReplication` operation.
* start_on_demand_replication_run: Types for the `StartOnDemandReplicationRun` operation.
* stop_app_replication: Types for the `StopAppReplication` operation.
* terminate_app: Types for the `TerminateApp` operation.
* update_app: Types for the `UpdateApp` operation.
* update_replication_job: Types for the `UpdateReplicationJob` operation.
Traits
---
* RequestId: Implementers add a function to return an AWS request ID.
Enum aws_sdk_sms::Error
===
```
#[non_exhaustive]pub enum Error {
DryRunOperationException(DryRunOperationException),
InternalError(InternalError),
InvalidParameterException(InvalidParameterException),
MissingRequiredParameterException(MissingRequiredParameterException),
NoConnectorsAvailableException(NoConnectorsAvailableException),
OperationNotPermittedException(OperationNotPermittedException),
ReplicationJobAlreadyExistsException(ReplicationJobAlreadyExistsException),
ReplicationJobNotFoundException(ReplicationJobNotFoundException),
ReplicationRunLimitExceededException(ReplicationRunLimitExceededException),
ServerCannotBeReplicatedException(ServerCannotBeReplicatedException),
TemporarilyUnavailableException(TemporarilyUnavailableException),
UnauthorizedOperationException(UnauthorizedOperationException),
Unhandled(Unhandled),
}
```
All possible error types for this service.
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
### DryRunOperationException(DryRunOperationException)
The user has the required permissions, so the request would have succeeded, but a dry run was performed.
### InternalError(InternalError)
An internal error occurred.
### InvalidParameterException(InvalidParameterException)
A specified parameter is not valid.
### MissingRequiredParameterException(MissingRequiredParameterException)
A required parameter is missing.
### NoConnectorsAvailableException(NoConnectorsAvailableException)
There are no connectors available.
### OperationNotPermittedException(OperationNotPermittedException)
This operation is not allowed.
### ReplicationJobAlreadyExistsException(ReplicationJobAlreadyExistsException)
The specified replication job already exists.
### ReplicationJobNotFoundException(ReplicationJobNotFoundException)
The specified replication job does not exist.
### ReplicationRunLimitExceededException(ReplicationRunLimitExceededException)
You have exceeded the number of on-demand replication runs you can request in a 24-hour period.
### ServerCannotBeReplicatedException(ServerCannotBeReplicatedException)
The specified server cannot be replicated.
### TemporarilyUnavailableException(TemporarilyUnavailableException)
The service is temporarily unavailable.
### UnauthorizedOperationException(UnauthorizedOperationException)
You lack permissions needed to perform this operation. Check your IAM policies, and ensure that you are using the correct access keys.
### Unhandled(Unhandled)
An unexpected error occurred (e.g., invalid JSON returned by the service or an unknown error code).
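Since `Error` is marked `#[non_exhaustive]`, any `match` over it needs the trailing wildcard arm described above. The following is a minimal sketch (not taken from the crate docs) that handles a few of the variants listed here and falls back to a wildcard for everything else.
```
// Minimal sketch: matching on the non-exhaustive service error enum.
// The final wildcard arm is required because future SDK versions may add variants.
fn describe(err: &aws_sdk_sms::Error) -> &'static str {
    use aws_sdk_sms::Error;
    match err {
        Error::InvalidParameterException(_) => "a specified parameter is not valid",
        Error::MissingRequiredParameterException(_) => "a required parameter is missing",
        Error::ReplicationJobNotFoundException(_) => "the specified replication job does not exist",
        Error::TemporarilyUnavailableException(_) => "the service is temporarily unavailable",
        Error::Unhandled(_) => "an unexpected error occurred",
        // Required: accounts for variants added in future SDK versions.
        _ => "another AWS Server Migration Service error",
    }
}
```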
Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for Error
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, request: &mut Request<'a>)
🔬This is a nightly-only experimental API (`error_generic_member_access`). Provides type-based access to context intended for error reports.
#### fn from(err: CreateAppError) -> Self
Converts to this type from the input type.### impl From<CreateReplicationJobError> for Error
#### fn from(err: CreateReplicationJobError) -> Self
Converts to this type from the input type.### impl From<DeleteAppError> for Error
#### fn from(err: DeleteAppError) -> Self
Converts to this type from the input type.### impl From<DeleteAppLaunchConfigurationError> for Error
#### fn from(err: DeleteAppLaunchConfigurationError) -> Self
Converts to this type from the input type.### impl From<DeleteAppReplicationConfigurationError> for Error
#### fn from(err: DeleteAppReplicationConfigurationError) -> Self
Converts to this type from the input type.### impl From<DeleteAppValidationConfigurationError> for Error
#### fn from(err: DeleteAppValidationConfigurationError) -> Self
Converts to this type from the input type.### impl From<DeleteReplicationJobError> for Error
#### fn from(err: DeleteReplicationJobError) -> Self
Converts to this type from the input type.### impl From<DeleteServerCatalogError> for Error
#### fn from(err: DeleteServerCatalogError) -> Self
Converts to this type from the input type.### impl From<DisassociateConnectorError> for Error
#### fn from(err: DisassociateConnectorError) -> Self
Converts to this type from the input type.### impl From<GenerateChangeSetError> for Error
#### fn from(err: GenerateChangeSetError) -> Self
Converts to this type from the input type.### impl From<GenerateTemplateError> for Error
#### fn from(err: GenerateTemplateError) -> Self
Converts to this type from the input type.### impl From<GetAppError> for Error
#### fn from(err: GetAppError) -> Self
Converts to this type from the input type.### impl From<GetAppLaunchConfigurationError> for Error
#### fn from(err: GetAppLaunchConfigurationError) -> Self
Converts to this type from the input type.### impl From<GetAppReplicationConfigurationError> for Error
#### fn from(err: GetAppReplicationConfigurationError) -> Self
Converts to this type from the input type.### impl From<GetAppValidationConfigurationError> for Error
#### fn from(err: GetAppValidationConfigurationError) -> Self
Converts to this type from the input type.### impl From<GetAppValidationOutputError> for Error
#### fn from(err: GetAppValidationOutputError) -> Self
Converts to this type from the input type.### impl From<GetConnectorsError> for Error
#### fn from(err: GetConnectorsError) -> Self
Converts to this type from the input type.### impl From<GetReplicationJobsError> for Error
#### fn from(err: GetReplicationJobsError) -> Self
Converts to this type from the input type.### impl From<GetReplicationRunsError> for Error
#### fn from(err: GetReplicationRunsError) -> Self
Converts to this type from the input type.### impl From<GetServersError> for Error
#### fn from(err: GetServersError) -> Self
Converts to this type from the input type.### impl From<ImportAppCatalogError> for Error
#### fn from(err: ImportAppCatalogError) -> Self
Converts to this type from the input type.### impl From<ImportServerCatalogError> for Error
#### fn from(err: ImportServerCatalogError) -> Self
Converts to this type from the input type.### impl From<LaunchAppError> for Error
#### fn from(err: LaunchAppError) -> Self
Converts to this type from the input type.### impl From<ListAppsError> for Error
#### fn from(err: ListAppsError) -> Self
Converts to this type from the input type.### impl From<NotifyAppValidationOutputError> for Error
#### fn from(err: NotifyAppValidationOutputError) -> Self
Converts to this type from the input type.### impl From<PutAppLaunchConfigurationError> for Error
#### fn from(err: PutAppLaunchConfigurationError) -> Self
Converts to this type from the input type.### impl From<PutAppReplicationConfigurationError> for Error
#### fn from(err: PutAppReplicationConfigurationError) -> Self
Converts to this type from the input type.### impl From<PutAppValidationConfigurationError> for Error
#### fn from(err: PutAppValidationConfigurationError) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<CreateAppError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<CreateAppError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<CreateReplicationJobError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<CreateReplicationJobError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DeleteAppError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DeleteAppError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DeleteAppLaunchConfigurationError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DeleteAppLaunchConfigurationError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DeleteAppReplicationConfigurationError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DeleteAppReplicationConfigurationError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DeleteAppValidationConfigurationError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DeleteAppValidationConfigurationError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DeleteReplicationJobError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DeleteReplicationJobError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DeleteServerCatalogError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DeleteServerCatalogError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DisassociateConnectorError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DisassociateConnectorError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<GenerateChangeSetError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<GenerateChangeSetError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<GenerateTemplateError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<GenerateTemplateError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<GetAppError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<GetAppError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<GetAppLaunchConfigurationError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<GetAppLaunchConfigurationError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<GetAppReplicationConfigurationError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<GetAppReplicationConfigurationError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<GetAppValidationConfigurationError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<GetAppValidationConfigurationError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<GetAppValidationOutputError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<GetAppValidationOutputError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<GetConnectorsError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<GetConnectorsError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<GetReplicationJobsError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<GetReplicationJobsError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<GetReplicationRunsError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<GetReplicationRunsError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<GetServersError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<GetServersError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<ImportAppCatalogError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<ImportAppCatalogError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<ImportServerCatalogError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<ImportServerCatalogError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<LaunchAppError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<LaunchAppError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<ListAppsError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<ListAppsError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<NotifyAppValidationOutputError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<NotifyAppValidationOutputError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<PutAppLaunchConfigurationError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<PutAppLaunchConfigurationError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<PutAppReplicationConfigurationError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<PutAppReplicationConfigurationError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<PutAppValidationConfigurationError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<PutAppValidationConfigurationError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<StartAppReplicationError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<StartAppReplicationError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<StartOnDemandAppReplicationError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<StartOnDemandAppReplicationError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<StartOnDemandReplicationRunError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<StartOnDemandReplicationRunError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<StopAppReplicationError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<StopAppReplicationError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<TerminateAppError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<TerminateAppError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<UpdateAppError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<UpdateAppError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<UpdateReplicationJobError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<UpdateReplicationJobError, R>) -> Self
Converts to this type from the input type.### impl From<StartAppReplicationError> for Error
#### fn from(err: StartAppReplicationError) -> Self
Converts to this type from the input type.### impl From<StartOnDemandAppReplicationError> for Error
#### fn from(err: StartOnDemandAppReplicationError) -> Self
Converts to this type from the input type.### impl From<StartOnDemandReplicationRunError> for Error
#### fn from(err: StartOnDemandReplicationRunError) -> Self
Converts to this type from the input type.### impl From<StopAppReplicationError> for Error
#### fn from(err: StopAppReplicationError) -> Self
Converts to this type from the input type.### impl From<TerminateAppError> for Error
#### fn from(err: TerminateAppError) -> Self
Converts to this type from the input type.### impl From<UpdateAppError> for Error
#### fn from(err: UpdateAppError) -> Self
Converts to this type from the input type.### impl From<UpdateReplicationJobError> for Error
#### fn from(err: UpdateReplicationJobError) -> Self
Converts to this type from the input type.### impl RequestId for Error
#### fn request_id(&self) -> Option<&str>
Returns the request ID, or `None` if the service could not be reached.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl !UnwindSafe for Error
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToString for T where
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
Module aws_sdk_sms::client
===
Client for calling AWS Server Migration Service.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_sms::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific settings that can be set on the `Config` and that are absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_sms::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `CreateApp` operation has a `Client::create_app` function, which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.create_app()
.name("example")
.send()
.await;
```
The underlying HTTP requests that get made by this can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
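As a hedged sketch (not from the crate docs), the result of `send()` can be handled as follows; the `?` operator converts the operation-specific `SdkError` into the crate-wide `aws_sdk_sms::Error` via the `From` implementations listed earlier.
```
// Minimal sketch: calling an operation and propagating its error as the
// crate-wide Error type. `client` is assumed to be an aws_sdk_sms::Client.
async fn create_example_app(
    client: &aws_sdk_sms::Client,
) -> Result<(), aws_sdk_sms::Error> {
    let output = client
        .create_app()
        .name("example")
        .send()
        .await?; // SdkError<CreateAppError> -> aws_sdk_sms::Error via From
    println!("CreateApp returned: {output:?}");
    Ok(())
}
```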
Modules
---
* customize: Operation customization and supporting types.
Structs
---
* Client: Client for AWS Server Migration Service.
Module aws_sdk_sms::error
===
Common errors and error handling utilities.
Structs
---
* DisplayErrorContext: Provides a `Display` impl for an `Error` that outputs the full error context.
Traits
---
* ProvideErrorMetadata: Trait to retrieve error metadata from a result.
Type Aliases
---
* BoxError: A boxed error that is `Send` and `Sync`.
* SdkError: Error type returned by the client.
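A minimal, hedged sketch of how `DisplayErrorContext` from this module can be used (the helper function and its name are illustrative, not part of the crate):
```
// Minimal sketch: print an error together with its full source chain.
use aws_sdk_sms::error::DisplayErrorContext;

fn report_failure<E: std::error::Error>(err: E) {
    // DisplayErrorContext's Display impl walks the error's sources.
    eprintln!("operation failed: {}", DisplayErrorContext(err));
}
```
A typical call site would pass the `SdkError` returned by a failed `send()`, or the crate-wide `Error` after conversion.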
Module aws_sdk_sms::meta
===
Information about this crate.
Statics
---
* PKG_VERSION: Crate version number.
Package ‘lterpalettefinder’
January 20, 2023
Type Package
Title Extract Color Palettes from Photos and Pick Official LTER
Palettes
Version 1.1.0
Maintainer <NAME> <<EMAIL>>
Description Allows identification of palettes derived from LTER (Long Term Ecological Research)
photographs based on user criteria.
Also facilitates extraction of palettes from users' photos directly.
License BSD_3_clause + file LICENSE
Encoding UTF-8
LazyData true
Language en-US
BugReports https://github.com/lter/lterpalettefinder/issues
RoxygenNote 7.2.3
Depends R (>= 3.5)
Imports dplyr, ggplot2, graphics, grDevices, jpeg, magick, magrittr,
png, stats, tools, tidyr, tiff
Suggests knitr, rmarkdown, testthat (>= 3.0.0)
VignetteBuilder knitr
Config/testthat/edition 3
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-3905-1078>),
<NAME> [ctb],
<NAME> [ctb] (<https://orcid.org/0000-0002-7751-6238>),
National Science Foundation [fnd] (NSF 1929393, 09/01/2019 -
08/31/2024)
Repository CRAN
Date/Publication 2023-01-20 17:00:02 UTC
R topics documented:
palette_check
palette_demo
palette_extract
palette_find
palette_ggdemo
palette_options
palette_sort
palette_subsample
palette_check Check Hexadecimal Code Formatting
Description
Accepts the hexadecimal code vector and tests if it is formatted correctly
Usage
palette_check(palette)
Arguments
palette (character) Vector of hexadecimal codes returned by ‘palette_extract()‘, ‘..._sort()‘,
or ‘..._subsample()‘
Value
An error message or nothing
Examples
# Check for misformatted hexcodes
palette_check(palette = c("#8e847a", "#9fc7f2"))
palette_demo Demonstrate Extracted Palette with HEX Labels - Base Plot Edition
Description
Accepts the hexadecimal code vector returned by ‘palette_extract()‘, ‘..._sort()‘, or ‘..._subsample()‘ and creates a base plot of all returned colors labeled with their HEX codes. This will facilitate (hopefully) your selection of which of the 25 colors you would like to use in a given context.
Usage
palette_demo(
palette,
export = FALSE,
export_name = "my_palette",
export_path = getwd()
)
Arguments
palette (character) Vector of hexadecimal codes like that returned by ‘palette_extract()‘,
‘..._sort()‘, or ‘..._subsample()‘
export (logical) Whether or not to export the demo plot
export_name (character) Name for exported plot
export_path (character) File path to save exported plot (defaults to working directory)
Value
A plot of the retrieved colors labeled with their HEX codes
Examples
# Extract colors from a supplied image
my_colors <- palette_extract(
image = system.file("extdata", "lyon-fire.png", package = "lterpalettefinder")
)
# Plot that result
palette_demo(palette = my_colors)
palette_extract Extract Hexadecimal Codes from an Image
Description
Retrieves hexadecimal codes for the colors in an image file. Currently only PNG, JPEG, TIFF, and
HEIC files are supported. The function automatically removes dark colors and removes ’similar’
colors to yield 25 colors from which you can select the subset that works best for your visualization
needs. Note that photos that are very dark may return few viable colors.
Usage
palette_extract(image, sort = FALSE, progress_bar = TRUE)
Arguments
image (character) Name/path to PNG, JPEG, TIFF, or HEIC file from which to extract
colors
sort (logical) Whether extracted HEX codes should be sorted by hue and saturation
progress_bar (logical) Whether to ‘message‘ a progress bar
Value
(character) Vector containing all hexadecimal codes remaining after extraction and removal of
’dark’ and ’similar’ colors
Examples
# Extract colors from a supplied image
my_colors <- palette_extract(image = system.file("extdata", "lyon-fire.png",
package = "lterpalettefinder"), sort = TRUE, progress_bar = FALSE)
# Plot that result
palette_demo(palette = my_colors)
palette_find Find a Long Term Ecological Research (LTER) Site-Derived Palette
Description
From a dataframe of all possible palettes (updated periodically so check back!) specify the characteristics of the palette you want and retrieve the palettes that match those criteria. Can specify by number of colors in the palette, type of palette (e.g., qualitative, sequential, etc.), or which LTER site the palette came from.
Usage
palette_find(site = "all", name = "all", type = "all")
Arguments
site (character) Vector of three-letter LTER site abbreviations for which to return
palettes or "all" or "LTER" for the LTER logo colors
name (character) Vector of palette names (if known) for which to return palettes
type (character) Vector of palette types (i.e., qualitative, tricolor, sequential, or diverging) for which to return palettes
Value
(dataframe / character) If more than one palette, a dataframe is returned; if exactly one palette, a
character vector is returned
Examples
# Look at all palette options by calling the function without specifying arguments
lterpalettefinder::palette_find()
# What if our query returns NO options?
palette_find(name = "no such name")
# What if our query returns MULTIPLE options?
palette_find(site = "sbc")
# What if our query returns JUST ONE option? (this is desirable)
palette_find(name = "salamander")
palette_ggdemo Demonstrate Extracted Palette with HEX Labels - ggplot2 Edition
Description
Accepts the hexadecimal code vector returned by ‘palette_extract()‘, ‘..._sort()‘, or ‘..._subsample()‘ and creates a simple plot of all returned colors labeled with their HEX codes. This will facilitate (hopefully) your selection of which of the 25 colors you would like to use in a given context.
Usage
palette_ggdemo(palette)
Arguments
palette (character) Vector of hexadecimal codes returned by ‘palette_extract()‘, ‘..._sort()‘,
or ‘..._subsample()‘
Value
A ggplot2 plot
Examples
# Extract colors from a supplied image
my_colors <- palette_extract(image = system.file("extdata", "lyon-fire.png",
package = "lterpalettefinder"))
# Plot that result
palette_ggdemo(palette = my_colors)
palette_options LTER Palette Options
Description
For each palette, data includes the photographer, LTER site, number of included colors, and the
hexadecimal codes for each color (data are in ’wide’ format)
Usage
palette_options
Format
A dataframe with 14 variables and one row per palette (currently 14 rows)
photographer name of the photographer who took the picture
palette_full_name concatenation of LTER site and palette name, separated by a hyphen
lter_site three-letter LTER site name abbreviation
palette_name a unique-within site-name for each palette based on the picture’s content
palette_type either "qualitative", "sequential", or "diverging" depending on the pattern of colors in
the palette
color_... the hexadecimal code for colors 1 through n for each palette
Source
<NAME>., <NAME>, G. 2022.
palette_sort Sort Hexadecimal Codes by Hue and Saturation
Description
Sorts hexadecimal codes retrieved by ‘palette_extract()‘ by hue and saturation. This allows for reasonably good identification of ’similar’ colors in the way that a human eye would perceive them (as opposed to a computer’s representation of colors).
Usage
palette_sort(palette)
Arguments
palette (character) Vector returned by ‘palette_extract()‘
Value
(character) Vector containing all hexadecimal codes returned by ‘palette_extract()‘
Examples
# Extract colors from a supplied image
my_colors <- palette_extract(image = system.file("extdata", "lyon-fire.png",
package = "lterpalettefinder"))
# Plot that result
palette_demo(palette = my_colors)
# Now sort
sort_colors <- palette_sort(palette = my_colors)
# And plot again to show change
palette_demo(palette = sort_colors)
palette_subsample Randomly Subsample HEX Codes
Description
Randomly subsample the HEX codes returned by ‘palette_extract()‘ or ‘palette_sort()‘ to desired
length. Can also set random seed for reproducibility.
Usage
palette_subsample(palette, wanted = 5, random_seed = 36)
Arguments
palette (character) Vector of hexadecimal codes like those returned by ‘palette_extract()‘
or ‘palette_sort()‘
wanted (numeric) Integer for how many colors should be returned
random_seed (numeric) Integer for ‘base::set.seed()‘
Value
(character) Vector of hexadecimal codes of user-specified length
Examples
# Extract colors from a supplied image
my_colors <- palette_extract(image = system.file("extdata", "lyon-fire.png",
package = "lterpalettefinder"))
# Plot that result
palette_ggdemo(palette = my_colors)
# Now randomly subsample
random_colors <- palette_subsample(palette = my_colors, wanted = 5)
# And plot again to show change
palette_ggdemo(palette = random_colors)
github.com/cloudflare/goflow/v3
README
---
### GoFlow
This application is a NetFlow/IPFIX/sFlow collector in Go.
It gathers network information (IP, interfaces, routers) from different flow protocols,
serializes it in a protobuf format and sends the messages to Kafka using Sarama's library.
#### Why
The diversity of devices and the amount of network samples at Cloudflare required its own pipeline.
We focused on building tools that could be easily monitored and maintained.
The main goal is to have full visibility of a network while allowing other teams to develop on it.
##### Modularity
In order to enable load-balancing and optimizations, the GoFlow library has a `decoder` which converts the payload of a flow packet into a Go structure.
The `producer` functions (one per protocol) then converts those structures into a protobuf (`pb/flow.pb`)
which contains the fields a network engineer is interested in.
The flow packets usually contain multiple samples. This acts as an abstraction of a sample.
The `transport` provides different way of processing the protobuf. Either sending it via Kafka or print it on the console.
Finally, `utils` provide functions that are directly used by the CLI utils.
GoFlow is a wrapper of all these functions and chains them together to produce bytes into Kafka.
There is also one CLI tool per protocol.
You can build your own collector using this base and replace parts:
* Use different transport (eg: RabbitMQ instead of Kafka)
* Convert to another format (eg: Cap'n Proto, Avro, instead of protobuf)
* Decode different samples (eg: not only IP networks, add MPLS)
* Different metrics system (eg: use [expvar](https://golang.org/pkg/expvar/) instead of Prometheus)
##### Protocol difference
The sampling protocols can be very different:
**sFlow** is a stateless protocol which sends the full header of a packet with router information
(interfaces, destination AS) while **NetFlow/IPFIX** rely on templates that contain fields (eg: source IPv6).
The sampling rate in NetFlow/IPFIX is provided by **Option Data Sets**. This is why it can take a few minutes for the packets to be decoded until all the templates are received (**Option Template** and **Data Template**).
Both of these protocols bundle multiple samples (**Data Set** in NetFlow/IPFIX and **Flow Sample** in sFlow)
in one packet.
The advantage of using an abstract network flow format, such as protobuf, is that it enables summing over the protocols (eg: per ASN or per port, rather than per (ASN, router) and (port, router)).
#### Features
Collection:
* NetFlow v5
* IPFIX/NetFlow v9
+ Handles sampling rate provided by the Option Data Set
* sFlow v5: RAW, IPv4, IPv6, Ethernet samples, Gateway data, router data, switch data
Production:
* Convert to protobuf
* Sends to Kafka producer
* Prints to the console
Monitoring:
* Prometheus metrics
* Time to decode
* Samples rates
* Payload information
* NetFlow Templates
#### Run
Download the latest release and just run the following command:
```
./goflow -h
```
Enable or disable a protocol using `-nf=false` or `-sflow=false`.
Define the port and addresses of the protocols using `-nf.addr`, `-nf.port` for NetFlow and `-sflow.addr`, `-sflow.port` for sFlow.
Set the brokers or the Kafka brokers SRV record using: `-kafka.brokers 127.0.0.1:9092,[::1]:9092` or `-kafka.srv`.
Disable Kafka sending `-kafka=false`.
You can hash the protobuf by key when you send it to Kafka.
You can collect NetFlow/IPFIX, NetFlow v5 and sFlow using the same collector or use the single-protocol collectors.
You can define the number of workers per protocol using `-workers` .
#### Docker
We also provide an all-in-one Docker container. To run it in debug mode without sending into Kafka:
```
$ sudo docker run --net=host -ti cloudflare/goflow:latest -kafka=false
```
#### Environment
To get an example of pipeline, check out [flow-pipeline](https://github.com/cloudflare/flow-pipeline)
##### How is it used at Cloudflare
The samples flowing into Kafka are **processed** and special fields are inserted using other databases:
* User plan
* Country
* ASN and BGP information
The extended protobuf has the same base of the one in this repo. The **compatibility** with other software is preserved when adding new fields (thus the fields will be lost if re-serialized).
Once the updated flows are back into Kafka, they are **consumed** by **database inserters** (Clickhouse, Amazon Redshift, Google BigTable...)
to allow for static analysis. Other teams access the network data just like any other log (SQL query).
##### Output format
If you want to develop applications, build `pb/flow.proto` into the language you want:
Example in Go:
```
PROTOCPATH=$HOME/go/bin/ make proto
```
Example in Java:
```
export SRC_DIR="path/to/goflow-pb"
export DST_DIR="path/to/java/app/src/main/java"
protoc -I=$SRC_DIR --java_out=$DST_DIR $SRC_DIR/flow.proto
```
The fields are listed in the following table.
You can find information on how they are populated from the original source:
* For [sFlow](https://sflow.org/developers/specifications.php)
* For [NetFlow v5](https://www.cisco.com/c/en/us/td/docs/net_mgmt/netflow_collection_engine/3-6/user/guide/format.html)
* For [NetFlow v9](https://www.cisco.com/en/US/technologies/tk648/tk362/technologies_white_paper09186a00800a3db9.html)
* For [IPFIX](https://www.iana.org/assignments/ipfix/ipfix.xhtml)
| Field | Description | NetFlow v5 | sFlow | NetFlow v9 | IPFIX |
| --- | --- | --- | --- | --- | --- |
| Type | Type of flow message | NETFLOW_V5 | SFLOW_5 | NETFLOW_V9 | IPFIX |
| TimeReceived | Timestamp of when the message was received | Included | Included | Included | Included |
| SequenceNum | Sequence number of the flow packet | Included | Included | Included | Included |
| SamplingRate | Sampling rate of the flow | Included | Included | Included | Included |
| FlowDirection | Direction of the flow | | | DIRECTION (61) | flowDirection (61) |
| SamplerAddress | Address of the device that generated the packet | IP source of packet | Agent IP | IP source of packet | IP source of packet |
| TimeFlowStart | Time the flow started | System uptime and first | =TimeReceived | System uptime and FIRST_SWITCHED (22) | flowStartXXX (150, 152, 154, 156) |
| TimeFlowEnd | Time the flow ended | System uptime and last | =TimeReceived | System uptime and LAST_SWITCHED (23) | flowEndXXX (151, 153, 155, 157) |
| Bytes | Number of bytes in flow | dOctets | Length of sample | IN_BYTES (1) OUT_BYTES (23) | octetDeltaCount (1) postOctetDeltaCount (23) |
| Packets | Number of packets in flow | dPkts | =1 | IN_PKTS (2) OUT_PKTS (24) | packetDeltaCount (1) postPacketDeltaCount (24) |
| SrcAddr | Source address (IP) | srcaddr (IPv4 only) | Included | Included | IPV4_SRC_ADDR (8) IPV6_SRC_ADDR (27) |
| DstAddr | Destination address (IP) | dstaddr (IPv4 only) | Included | Included | IPV4_DST_ADDR (12) IPV6_DST_ADDR (28) |
| Etype | Ethernet type (0x86dd for IPv6...) | IPv4 | Included | Included | Included |
| Proto | Protocol (UDP, TCP, ICMP...) | prot | Included | PROTOCOL (4) | protocolIdentifier (4) |
| SrcPort | Source port (when UDP/TCP/SCTP) | srcport | Included | L4_SRC_PORT (7) | sourceTransportPort (7) |
| DstPort | Destination port (when UDP/TCP/SCTP) | dstport | Included | L4_DST_PORT (11) | destinationTransportPort (11) |
| InIf | Input interface | input | Included | INPUT_SNMP (10) | ingressInterface (10) |
| OutIf | Output interface | output | Included | OUTPUT_SNMP (14) | egressInterface (14) |
| SrcMac | Source mac address | | Included | IN_SRC_MAC (56) | sourceMacAddress (56) |
| DstMac | Destination mac address | | Included | OUT_DST_MAC (57) | postDestinationMacAddress (57) |
| SrcVlan | Source VLAN ID | | From ExtendedSwitch | SRC_VLAN (59) | vlanId (58) |
| DstVlan | Destination VLAN ID | | From ExtendedSwitch | DST_VLAN (59) | postVlanId (59) |
| VlanId | 802.11q VLAN ID | | Included | SRC_VLAN (59) | postVlanId (59) |
| IngressVrfID | VRF ID | | | | ingressVRFID (234) |
| EgressVrfID | VRF ID | | | | egressVRFID (235) |
| IPTos | IP Type of Service | tos | Included | SRC_TOS (5) | ipClassOfService (5) |
| ForwardingStatus | Forwarding status | | | FORWARDING_STATUS (89) | forwardingStatus (89) |
| IPTTL | IP Time to Live | | Included | IPTTL (52) | minimumTTL (52) |
| TCPFlags | TCP flags | tcp_flags | Included | TCP_FLAGS (6) | tcpControlBits (6) |
| IcmpType | ICMP Type | | Included | ICMP_TYPE (32) | icmpTypeXXX (176, 178) icmpTypeCodeXXX (32, 139) |
| IcmpCode | ICMP Code | | Included | ICMP_TYPE (32) | icmpCodeXXX (177, 179) icmpTypeCodeXXX (32, 139) |
| IPv6FlowLabel | IPv6 Flow Label | | Included | IPV6_FLOW_LABEL (31) | flowLabelIPv6 (31) |
| FragmentId | IP Fragment ID | | Included | IPV4_IDENT (54) | fragmentIdentification (54) |
| FragmentOffset | IP Fragment Offset | | Included | FRAGMENT_OFFSET (88) | fragmentOffset (88) and fragmentFlags (197) |
| BiFlowDirection | BiFlow Identification | | | | biflowDirection (239) |
| SrcAS | Source AS number | src_as | From ExtendedGateway | SRC_AS (16) | bgpSourceAsNumber (16) |
| DstAS | Destination AS number | dst_as | From ExtendedGateway | DST_AS (17) | bgpDestinationAsNumber (17) |
| NextHop | Nexthop address | nexthop | From ExtendedGateway | IPV4_NEXT_HOP (15) BGP_IPV4_NEXT_HOP (18) IPV6_NEXT_HOP (62) BGP_IPV6_NEXT_HOP (63) | ipNextHopIPv4Address (15) bgpNextHopIPv4Address (18) ipNextHopIPv6Address (62) bgpNextHopIPv6Address (63) |
| NextHopAS | Nexthop AS number | | From ExtendedGateway | | |
| SrcNet | Source address mask | src_mask | From ExtendedRouter | SRC_MASK (9) IPV6_SRC_MASK (29) | sourceIPv4PrefixLength (9) sourceIPv6PrefixLength (29) |
| DstNet | Destination address mask | dst_mask | From ExtendedRouter | DST_MASK (13) IPV6_DST_MASK (30) | destinationIPv4PrefixLength (13) destinationIPv6PrefixLength (30) |
| HasEncap | Indicates if has GRE encapsulation | | Included | | |
| xxxEncap fields | Same as field but inside GRE | | Included | | |
| HasMPLS | Indicates the presence of MPLS header | | Included | | |
| MPLSCount | Count of MPLS layers | | Included | | |
| MPLSxTTL | TTL of the MPLS label | | Included | | |
| MPLSxLabel | MPLS label | | Included | | |
If you are implementing flow processors to add more data to the protobuf,
we suggest you use field IDs ≥ 1000.
##### Implementation notes
The pipeline at Cloudflare is connecting collectors with flow processors that will add more information: with IP address, add country, ASN, etc.
For aggregation, we are using Materialized tables in Clickhouse.
Dictionaries help correlating flows with country and ASNs.
A few collectors can handle hundreds of thousands of samples.
We also successfully experimented with flow aggregation in Flink using a
[Keyed Session Window](https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/windows.html#session-windows):
this sums the `Bytes x SamplingRate` and `Packets x SamplingRate` received during a 5 minutes **window** while allowing 2 more minutes in the case where some flows were delayed before closing the **session**.
The BGP information provided by routers can be unreliable (if the router does not have a BGP full-table or it is a static route).
You can use Maxmind [prefix to ASN](https://dev.maxmind.com/geoip/geoip2/geolite2/) in order to solve this issue.
#### License
Licensed under the BSD 3 License.
Struct aws_sdk_elasticloadbalancing::Client
===
```
pub struct Client { /* private fields */ }
```
Client for Elastic Load Balancing
Client for invoking operations on Elastic Load Balancing. Each operation on Elastic Load Balancing is a method on this struct. `.send()` MUST be invoked on the generated operations to dispatch the request to the service.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_elasticloadbalancing::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific settings that can be set on the `Config` and that are absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_elasticloadbalancing::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `ApplySecurityGroupsToLoadBalancer` operation has a `Client::apply_security_groups_to_load_balancer` function, which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.apply_security_groups_to_load_balancer()
.load_balancer_name("example")
.send()
.await;
```
The underlying HTTP requests that get made by this can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
Implementations
---
### impl Client
#### pub fn add_tags(&self) -> AddTagsFluentBuilder
Constructs a fluent builder for the `AddTags` operation.
* The fluent builder is configurable:
+ `load_balancer_names(impl Into<String>)` / `set_load_balancer_names(Option<Vec<String>>)`: The name of the load balancer. You can specify one load balancer only.
+ `tags(Tag)` / `set_tags(Option<Vec<Tag>>)`: The tags.
* On success, responds with `AddTagsOutput`
* On failure, responds with `SdkError<AddTagsError>`
### impl Client
#### pub fn apply_security_groups_to_load_balancer(
&self
) -> ApplySecurityGroupsToLoadBalancerFluentBuilder
Constructs a fluent builder for the `ApplySecurityGroupsToLoadBalancer` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `security_groups(impl Into<String>)` / `set_security_groups(Option<Vec<String>>)`: The IDs of the security groups to associate with the load balancer. Note that you cannot specify the name of the security group.
* On success, responds with `ApplySecurityGroupsToLoadBalancerOutput` with field(s):
+ `security_groups(Option<Vec<String>>)`: The IDs of the security groups associated with the load balancer.
* On failure, responds with `SdkError<ApplySecurityGroupsToLoadBalancerError>`
### impl Client
#### pub fn attach_load_balancer_to_subnets(
&self
) -> AttachLoadBalancerToSubnetsFluentBuilder
Constructs a fluent builder for the `AttachLoadBalancerToSubnets` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `subnets(impl Into<String>)` / `set_subnets(Option<Vec<String>>)`: The IDs of the subnets to add. You can add only one subnet per Availability Zone.
* On success, responds with `AttachLoadBalancerToSubnetsOutput` with field(s):
+ `subnets(Option<Vec<String>>)`: The IDs of the subnets attached to the load balancer.
* On failure, responds with `SdkError<AttachLoadBalancerToSubnetsError>`
### impl Client
#### pub fn configure_health_check(&self) -> ConfigureHealthCheckFluentBuilder
Constructs a fluent builder for the `ConfigureHealthCheck` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `health_check(HealthCheck)` / `set_health_check(Option<HealthCheck>)`: The configuration information.
* On success, responds with `ConfigureHealthCheckOutput` with field(s):
+ `health_check(Option<HealthCheck>)`: The updated health check.
* On failure, responds with `SdkError<ConfigureHealthCheckError>`
### impl Client
#### pub fn create_app_cookie_stickiness_policy(
&self
) -> CreateAppCookieStickinessPolicyFluentBuilder
Constructs a fluent builder for the `CreateAppCookieStickinessPolicy` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `policy_name(impl Into<String>)` / `set_policy_name(Option<String>)`: The name of the policy being created. Policy names must consist of alphanumeric characters and dashes (-). This name must be unique within the set of policies for this load balancer.
+ `cookie_name(impl Into<String>)` / `set_cookie_name(Option<String>)`: The name of the application cookie used for stickiness.
* On success, responds with `CreateAppCookieStickinessPolicyOutput`
* On failure, responds with `SdkError<CreateAppCookieStickinessPolicyError>`
### impl Client
#### pub fn create_lb_cookie_stickiness_policy(
&self
) -> CreateLBCookieStickinessPolicyFluentBuilder
Constructs a fluent builder for the `CreateLBCookieStickinessPolicy` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `policy_name(impl Into<String>)` / `set_policy_name(Option<String>)`: The name of the policy being created. Policy names must consist of alphanumeric characters and dashes (-). This name must be unique within the set of policies for this load balancer.
+ `cookie_expiration_period(i64)` / `set_cookie_expiration_period(Option<i64>)`: The time period, in seconds, after which the cookie should be considered stale. If you do not specify this parameter, the default value is 0, which indicates that the sticky session should last for the duration of the browser session.
* On success, responds with `CreateLbCookieStickinessPolicyOutput`
* On failure, responds with `SdkError<CreateLBCookieStickinessPolicyError>`
### impl Client
#### pub fn create_load_balancer(&self) -> CreateLoadBalancerFluentBuilder
Constructs a fluent builder for the `CreateLoadBalancer` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
This name must be unique within your set of load balancers for the region, must have a maximum of 32 characters, must contain only alphanumeric characters or hyphens, and cannot begin or end with a hyphen.
+ `listeners(Listener)` / `set_listeners(Option<Vec<Listener>>)`: The listeners.
For more information, see Listeners for Your Classic Load Balancer in the *Classic Load Balancers Guide*.
+ `availability_zones(impl Into<String>)` / `set_availability_zones(Option<Vec<String>>)`: One or more Availability Zones from the same region as the load balancer.
You must specify at least one Availability Zone.
You can add more Availability Zones after you create the load balancer using `EnableAvailabilityZonesForLoadBalancer`.
+ `subnets(impl Into<String>)` / `set_subnets(Option<Vec<String>>)`: The IDs of the subnets in your VPC to attach to the load balancer. Specify one subnet per Availability Zone specified in `AvailabilityZones`.
+ `security_groups(impl Into<String>)` / `set_security_groups(Option<Vec<String>>)`: The IDs of the security groups to assign to the load balancer.
+ `scheme(impl Into<String>)` / `set_scheme(Option<String>)`: The type of a load balancer. Valid only for load balancers in a VPC.
By default, Elastic Load Balancing creates an Internet-facing load balancer with a DNS name that resolves to public IP addresses. For more information about Internet-facing and Internal load balancers, see Load Balancer Scheme in the *Elastic Load Balancing User Guide*.
Specify `internal` to create a load balancer with a DNS name that resolves to private IP addresses.
+ `tags(Tag)` / `set_tags(Option<Vec<Tag>>)`: A list of tags to assign to the load balancer.
For more information about tagging your load balancer, see Tag Your Classic Load Balancer in the *Classic Load Balancers Guide*.
* On success, responds with `CreateLoadBalancerOutput` with field(s):
+ `dns_name(Option<String>)`: The DNS name of the load balancer.
* On failure, responds with `SdkError<CreateLoadBalancerError>`
### impl Client
#### pub fn create_load_balancer_listeners(
&self
) -> CreateLoadBalancerListenersFluentBuilder
Constructs a fluent builder for the `CreateLoadBalancerListeners` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `listeners(Listener)` / `set_listeners(Option<Vec<Listener>>)`: The listeners.
* On success, responds with `CreateLoadBalancerListenersOutput`
* On failure, responds with `SdkError<CreateLoadBalancerListenersError>`
### impl Client
#### pub fn create_load_balancer_policy(
&self
) -> CreateLoadBalancerPolicyFluentBuilder
Constructs a fluent builder for the `CreateLoadBalancerPolicy` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `policy_name(impl Into<String>)` / `set_policy_name(Option<String>)`: The name of the load balancer policy to be created. This name must be unique within the set of policies for this load balancer.
+ `policy_type_name(impl Into<String>)` / `set_policy_type_name(Option<String>)`: The name of the base policy type. To get the list of policy types, use `DescribeLoadBalancerPolicyTypes`.
+ `policy_attributes(PolicyAttribute)` / `set_policy_attributes(Option<Vec<PolicyAttribute>>)`: The policy attributes.
* On success, responds with `CreateLoadBalancerPolicyOutput`
* On failure, responds with `SdkError<CreateLoadBalancerPolicyError>`
### impl Client
#### pub fn delete_load_balancer(&self) -> DeleteLoadBalancerFluentBuilder
Constructs a fluent builder for the `DeleteLoadBalancer` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
* On success, responds with `DeleteLoadBalancerOutput`
* On failure, responds with `SdkError<DeleteLoadBalancerError>`
### impl Client
#### pub fn delete_load_balancer_listeners(
&self
) -> DeleteLoadBalancerListenersFluentBuilder
Constructs a fluent builder for the `DeleteLoadBalancerListeners` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `load_balancer_ports(i32)` / `set_load_balancer_ports(Option<Vec<i32>>)`: The client port numbers of the listeners.
* On success, responds with `DeleteLoadBalancerListenersOutput`
* On failure, responds with `SdkError<DeleteLoadBalancerListenersError>`
### impl Client
#### pub fn delete_load_balancer_policy(
&self
) -> DeleteLoadBalancerPolicyFluentBuilder
Constructs a fluent builder for the `DeleteLoadBalancerPolicy` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `policy_name(impl Into<String>)` / `set_policy_name(Option<String>)`: The name of the policy.
* On success, responds with `DeleteLoadBalancerPolicyOutput`
* On failure, responds with `SdkError<DeleteLoadBalancerPolicyError>`
### impl Client
#### pub fn deregister_instances_from_load_balancer(
&self
) -> DeregisterInstancesFromLoadBalancerFluentBuilder
Constructs a fluent builder for the `DeregisterInstancesFromLoadBalancer` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `instances(Instance)` / `set_instances(Option<Vec<Instance>>)`: The IDs of the instances.
* On success, responds with `DeregisterInstancesFromLoadBalancerOutput` with field(s):
+ `instances(Option<Vec<Instance>>)`: The remaining instances registered with the load balancer.
* On failure, responds with `SdkError<DeregisterInstancesFromLoadBalancerError>`
### impl Client
#### pub fn describe_account_limits(&self) -> DescribeAccountLimitsFluentBuilder
Constructs a fluent builder for the `DescribeAccountLimits` operation.
* The fluent builder is configurable:
+ `marker(impl Into<String>)` / `set_marker(Option<String>)`: The marker for the next set of results. (You received this marker from a previous call.)
+ `page_size(i32)` / `set_page_size(Option<i32>)`: The maximum number of results to return with this call.
* On success, responds with `DescribeAccountLimitsOutput` with field(s):
+ `limits(Option<Vec<Limit>>)`: Information about the limits.
+ `next_marker(Option<String>)`: The marker to use when requesting the next set of results. If there are no additional results, the string is empty.
* On failure, responds with `SdkError<DescribeAccountLimitsError>`
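A hedged sketch of calling this operation and walking the documented output fields; it assumes the generated output struct exposes `limits` as a public field (mirroring the field listing above) and simply debug-prints each `Limit`.
```
async fn print_limits(
    client: &aws_sdk_elasticloadbalancing::Client,
) -> Result<(), aws_sdk_elasticloadbalancing::Error> {
    let resp = client.describe_account_limits().send().await?;
    // Assumes `limits` is a public Option<Vec<Limit>> field on the output.
    for limit in resp.limits.unwrap_or_default() {
        println!("{limit:?}");
    }
    Ok(())
}
```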
### impl Client
#### pub fn describe_instance_health(&self) -> DescribeInstanceHealthFluentBuilder
Constructs a fluent builder for the `DescribeInstanceHealth` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `instances(Instance)` / `set_instances(Option<Vec<Instance>>)`: The IDs of the instances.
* On success, responds with `DescribeInstanceHealthOutput` with field(s):
+ `instance_states(Option<Vec<InstanceState>>)`: Information about the health of the instances.
* On failure, responds with `SdkError<DescribeInstanceHealthError>`
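A similar hedged sketch for instance health; the load balancer name is a placeholder, and omitting `instances` asks the service to report on every registered instance.
```
async fn check_health(
    client: &aws_sdk_elasticloadbalancing::Client,
) -> Result<(), aws_sdk_elasticloadbalancing::Error> {
    let resp = client
        .describe_instance_health()
        .load_balancer_name("my-load-balancer") // placeholder name
        .send()
        .await?;
    // Assumes `instance_states` is a public Option<Vec<InstanceState>> field.
    for state in resp.instance_states.unwrap_or_default() {
        println!("{state:?}");
    }
    Ok(())
}
```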
### impl Client
#### pub fn describe_load_balancer_attributes(
&self
) -> DescribeLoadBalancerAttributesFluentBuilder
Constructs a fluent builder for the `DescribeLoadBalancerAttributes` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
* On success, responds with `DescribeLoadBalancerAttributesOutput` with field(s):
+ `load_balancer_attributes(Option<LoadBalancerAttributes>)`: Information about the load balancer attributes.
* On failure, responds with `SdkError<DescribeLoadBalancerAttributesError>`
### impl Client
#### pub fn describe_load_balancer_policies(
&self
) -> DescribeLoadBalancerPoliciesFluentBuilder
Constructs a fluent builder for the `DescribeLoadBalancerPolicies` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `policy_names(impl Into<String>)` / `set_policy_names(Option<Vec<String>>)`: The names of the policies.
* On success, responds with `DescribeLoadBalancerPoliciesOutput` with field(s):
+ `policy_descriptions(Option<Vec<PolicyDescription>>)`: Information about the policies.
* On failure, responds with `SdkError<DescribeLoadBalancerPoliciesError>`
### impl Client
#### pub fn describe_load_balancer_policy_types(
&self
) -> DescribeLoadBalancerPolicyTypesFluentBuilder
Constructs a fluent builder for the `DescribeLoadBalancerPolicyTypes` operation.
* The fluent builder is configurable:
+ `policy_type_names(impl Into<String>)` / `set_policy_type_names(Option<Vec<String>>)`: The names of the policy types. If no names are specified, describes all policy types defined by Elastic Load Balancing.
* On success, responds with `DescribeLoadBalancerPolicyTypesOutput` with field(s):
+ `policy_type_descriptions(Option<Vec<PolicyTypeDescription>>)`: Information about the policy types.
* On failure, responds with `SdkError<DescribeLoadBalancerPolicyTypesError>`
### impl Client
#### pub fn describe_load_balancers(&self) -> DescribeLoadBalancersFluentBuilder
Constructs a fluent builder for the `DescribeLoadBalancers` operation.
This operation supports pagination; see `into_paginator()`. A paginator usage sketch follows this operation's summary.
* The fluent builder is configurable:
+ `load_balancer_names(impl Into<String>)` / `set_load_balancer_names(Option<Vec<String>>)`: The names of the load balancers.
+ `marker(impl Into<String>)` / `set_marker(Option<String>)`: The marker for the next set of results. (You received this marker from a previous call.)
+ `page_size(i32)` / `set_page_size(Option<i32>)`: The maximum number of results to return with this call (a number from 1 to 400). The default is 400.
* On success, responds with `DescribeLoadBalancersOutput` with field(s):
+ `load_balancer_descriptions(Option<Vec<LoadBalancerDescription>>)`: Information about the load balancers.
+ `next_marker(Option<String>)`: The marker to use when requesting the next set of results. If there are no additional results, the string is empty.
* On failure, responds with `SdkError<DescribeLoadBalancersError>`
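Since this operation supports pagination, the following hedged sketch drains the paginator rather than following `next_marker` by hand. The exact `PaginationStream` surface can vary between SDK releases, so treat the `next().await` loop as a sketch rather than a guaranteed signature; field names mirror the listing above.
```
async fn list_all(
    client: &aws_sdk_elasticloadbalancing::Client,
) -> Result<(), aws_sdk_elasticloadbalancing::Error> {
    let mut pages = client
        .describe_load_balancers()
        .page_size(100)
        .into_paginator()
        .send();
    while let Some(page) = pages.next().await {
        let page = page?;
        for lb in page.load_balancer_descriptions.unwrap_or_default() {
            println!("{:?}", lb.load_balancer_name);
        }
    }
    Ok(())
}
```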
### impl Client
#### pub fn describe_tags(&self) -> DescribeTagsFluentBuilder
Constructs a fluent builder for the `DescribeTags` operation.
* The fluent builder is configurable:
+ `load_balancer_names(impl Into<String>)` / `set_load_balancer_names(Option<Vec<String>>)`: The names of the load balancers.
* On success, responds with `DescribeTagsOutput` with field(s):
+ `tag_descriptions(Option<Vec<TagDescription>>)`: Information about the tags.
* On failure, responds with `SdkError<DescribeTagsError>`
### impl Client
#### pub fn detach_load_balancer_from_subnets(
&self
) -> DetachLoadBalancerFromSubnetsFluentBuilder
Constructs a fluent builder for the `DetachLoadBalancerFromSubnets` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `subnets(impl Into<String>)` / `set_subnets(Option<Vec<String>>)`: The IDs of the subnets.
* On success, responds with `DetachLoadBalancerFromSubnetsOutput` with field(s):
+ `subnets(Option<Vec<String>>)`: The IDs of the remaining subnets for the load balancer.
* On failure, responds with `SdkError<DetachLoadBalancerFromSubnetsError>`
### impl Client
#### pub fn disable_availability_zones_for_load_balancer(
&self
) -> DisableAvailabilityZonesForLoadBalancerFluentBuilder
Constructs a fluent builder for the `DisableAvailabilityZonesForLoadBalancer` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `availability_zones(impl Into<String>)` / `set_availability_zones(Option<Vec<String>>)`: The Availability Zones.
* On success, responds with `DisableAvailabilityZonesForLoadBalancerOutput` with field(s):
+ `availability_zones(Option<Vec<String>>)`: The remaining Availability Zones for the load balancer.
* On failure, responds with `SdkError<DisableAvailabilityZonesForLoadBalancerError>`
### impl Client
#### pub fn enable_availability_zones_for_load_balancer(
&self
) -> EnableAvailabilityZonesForLoadBalancerFluentBuilder
Constructs a fluent builder for the `EnableAvailabilityZonesForLoadBalancer` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `availability_zones(impl Into<String>)` / `set_availability_zones(Option<Vec<String>>)`: The Availability Zones. These must be in the same region as the load balancer.
* On success, responds with `EnableAvailabilityZonesForLoadBalancerOutput` with field(s):
+ `availability_zones(Option<Vec<String>>)`: The updated list of Availability Zones for the load balancer.
* On failure, responds with `SdkError<EnableAvailabilityZonesForLoadBalancerError>`
### impl Client
#### pub fn modify_load_balancer_attributes(
&self
) -> ModifyLoadBalancerAttributesFluentBuilder
Constructs a fluent builder for the `ModifyLoadBalancerAttributes` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `load_balancer_attributes(LoadBalancerAttributes)` / `set_load_balancer_attributes(Option<LoadBalancerAttributes>)`: The attributes for the load balancer.
* On success, responds with `ModifyLoadBalancerAttributesOutput` with field(s):
+ `load_balancer_name(Option<String>)`: The name of the load balancer.
+ `load_balancer_attributes(Option<LoadBalancerAttributes>)`: Information about the load balancer attributes.
* On failure, responds with `SdkError<ModifyLoadBalancerAttributesError>`
### impl Client
#### pub fn register_instances_with_load_balancer(
&self
) -> RegisterInstancesWithLoadBalancerFluentBuilder
Constructs a fluent builder for the `RegisterInstancesWithLoadBalancer` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `instances(Instance)` / `set_instances(Option<Vec<Instance>>)`: The IDs of the instances.
* On success, responds with `RegisterInstancesWithLoadBalancerOutput` with field(s):
+ `instances(Option<Vec<Instance>>)`: The updated list of instances for the load balancer.
* On failure, responds with `SdkError<RegisterInstancesWithLoadBalancerError>`
### impl Client
#### pub fn remove_tags(&self) -> RemoveTagsFluentBuilder
Constructs a fluent builder for the `RemoveTags` operation.
* The fluent builder is configurable:
+ `load_balancer_names(impl Into<String>)` / `set_load_balancer_names(Option<Vec<String>>)`: The name of the load balancer. You can specify a maximum of one load balancer name.
+ `tags(TagKeyOnly)` / `set_tags(Option<Vec<TagKeyOnly>>)`: The list of tag keys to remove.
* On success, responds with `RemoveTagsOutput`
* On failure, responds with `SdkError<RemoveTagsError>`
### impl Client
#### pub fn set_load_balancer_listener_ssl_certificate(
&self
) -> SetLoadBalancerListenerSSLCertificateFluentBuilder
Constructs a fluent builder for the `SetLoadBalancerListenerSSLCertificate` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `load_balancer_port(i32)` / `set_load_balancer_port(Option<i32>)`: The port that uses the specified SSL certificate.
+ `ssl_certificate_id(impl Into<String>)` / `set_ssl_certificate_id(Option<String>)`: The Amazon Resource Name (ARN) of the SSL certificate.
* On success, responds with `SetLoadBalancerListenerSslCertificateOutput`
* On failure, responds with `SdkError<SetLoadBalancerListenerSSLCertificateError>`
### impl Client
#### pub fn set_load_balancer_policies_for_backend_server(
&self
) -> SetLoadBalancerPoliciesForBackendServerFluentBuilder
Constructs a fluent builder for the `SetLoadBalancerPoliciesForBackendServer` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `instance_port(i32)` / `set_instance_port(Option<i32>)`: The port number associated with the EC2 instance.
+ `policy_names(impl Into<String>)` / `set_policy_names(Option<Vec<String>>)`: The names of the policies. If the list is empty, then all current polices are removed from the EC2 instance.
* On success, responds with `SetLoadBalancerPoliciesForBackendServerOutput`
* On failure, responds with `SdkError<SetLoadBalancerPoliciesForBackendServerError>`
### impl Client
#### pub fn set_load_balancer_policies_of_listener(
&self
) -> SetLoadBalancerPoliciesOfListenerFluentBuilder
Constructs a fluent builder for the `SetLoadBalancerPoliciesOfListener` operation.
* The fluent builder is configurable:
+ `load_balancer_name(impl Into<String>)` / `set_load_balancer_name(Option<String>)`: The name of the load balancer.
+ `load_balancer_port(i32)` / `set_load_balancer_port(Option<i32>)`: The external port of the load balancer.
+ `policy_names(impl Into<String>)` / `set_policy_names(Option<Vec<String>>)`: The names of the policies. This list must include all policies to be enabled. If you omit a policy that is currently enabled, it is disabled. If the list is empty, all current policies are disabled.
* On success, responds with `SetLoadBalancerPoliciesOfListenerOutput`
* On failure, responds with `SdkError<SetLoadBalancerPoliciesOfListenerError>`
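A hedged sketch of replacing the policy set on a listener; the names and port are placeholders, and note (per the field description above) that the list must contain every policy that should remain enabled.
```
async fn set_listener_policies(
    client: &aws_sdk_elasticloadbalancing::Client,
) -> Result<(), aws_sdk_elasticloadbalancing::Error> {
    client
        .set_load_balancer_policies_of_listener()
        .load_balancer_name("my-load-balancer") // placeholder name
        .load_balancer_port(443)
        .policy_names("my-ssl-negotiation-policy") // placeholder policy
        .send()
        .await?;
    Ok(())
}
```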
### impl Client
#### pub fn from_conf(conf: Config) -> Self
Creates a new client from the service `Config`.
##### Panics
This method will panic if the `conf` has retry or timeouts enabled without a `sleep_impl`.
If you experience this panic, it can be fixed by setting the `sleep_impl`, or by disabling retries and timeouts.
#### pub fn config(&self) -> &Config
Returns the client’s configuration.
### impl Client
#### pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
##### Panics
* This method will panic if the `sdk_config` is missing an async sleep implementation. If you experience this panic, set the `sleep_impl` on the Config passed into this function to fix it.
* This method will panic if the `sdk_config` is missing an HTTP connector. If you experience this panic, set the
`http_connector` on the Config passed into this function to fix it.
Trait Implementations
---
### impl Clone for Client
#### fn clone(&self) -> Client
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Client
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Client
### impl Send for Client
### impl Sync for Client
### impl Unpin for Client
### impl !UnwindSafe for Client
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Type Alias aws_sdk_elasticloadbalancing::error::SdkError
===
```
pub type SdkError<E, R = HttpResponse> = SdkError<E, R>;
```
Error type returned by the client.
Aliased Type
---
```
enum SdkError<E, R = HttpResponse> {
ConstructionFailure(ConstructionFailure),
TimeoutError(TimeoutError),
DispatchFailure(DispatchFailure),
ResponseError(ResponseError<R>),
ServiceError(ServiceError<E, R>),
}
```
Variants
---
### ConstructionFailure(ConstructionFailure)
The request failed during construction. It was not dispatched over the network.
### TimeoutError(TimeoutError)
The request failed due to a timeout. The request MAY have been sent and received.
### DispatchFailure(DispatchFailure)
The request failed during dispatch. An HTTP response was not received. The request MAY have been sent.
### ResponseError(ResponseError<R>)
A response was received but it was not parseable according to the protocol (for example, the server hung up without sending a complete response)
### ServiceError(ServiceError<E, R>)
An error response was received from the service
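A hedged sketch of inspecting these variants on a concrete operation error; `DescribeTagsError` is just an example operation error type, and the wildcard arm is required because the enum is non-exhaustive.
```
use aws_sdk_elasticloadbalancing::error::SdkError;
use aws_sdk_elasticloadbalancing::operation::describe_tags::DescribeTagsError;

fn classify(err: SdkError<DescribeTagsError>) {
    match err {
        SdkError::ServiceError(context) => {
            // The modeled error returned by Elastic Load Balancing.
            println!("service error: {:?}", context.err());
        }
        SdkError::TimeoutError(_) | SdkError::DispatchFailure(_) => {
            println!("transient transport problem; consider retrying");
        }
        other => println!("other failure: {other:?}"),
    }
}
```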
Trait Implementations
---
### impl<E, R> ProvideErrorMetadata for SdkError<E, R> where E: ProvideErrorMetadata,
#### fn meta(&self) -> &ErrorMetadata
Returns error metadata, which includes the error code, message, request ID, and potentially additional information.
#### fn code(&self) -> Option<&str>
Returns the error code if it’s available.
#### fn message(&self) -> Option<&str>
Returns the error message, if there is one.
### impl<E, R> RequestId for SdkError<E, R> where R: HttpHeaders,
#### fn request_id(&self) -> Option<&str>
Returns the request ID, or `None` if the service could not be reached.
Module aws_sdk_elasticloadbalancing::primitives
===
Primitives such as `Blob` or `DateTime` used by other types.
Structs
---
* DateTimeDateTime in time.
Enums
---
* DateTimeFormatFormats for representing a `DateTime` in the Smithy protocols.
Struct aws_sdk_elasticloadbalancing::Config
===
```
pub struct Config { /* private fields */ }
```
Configuration for a aws_sdk_elasticloadbalancing service client.
Service configuration allows for customization of endpoints, region, credentials providers,
and retry configuration. Generally, it is constructed automatically for you from a shared configuration loaded by the `aws-config` crate. For example:
```
// Load a shared config from the environment
let shared_config = aws_config::from_env().load().await;
// The client constructor automatically converts the shared config into the service config
let client = Client::new(&shared_config);
```
The service config can also be constructed manually using its builder.
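A minimal, hedged sketch of the builder route; only the region is set here, so a real application would also supply credentials (for example via a credentials provider or cache) before sending requests.
```
use aws_sdk_elasticloadbalancing::{config::Region, Client, Config};

fn manual_client() -> Client {
    // Placeholder region; a production config would also configure credentials.
    let config = Config::builder()
        .region(Region::new("us-east-1"))
        .build();
    Client::from_conf(config)
}
```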
Implementations
---
### impl Config
#### pub fn builder() -> Builder
Constructs a config builder.
#### pub fn to_builder(&self) -> Builder
Converts this config back into a builder so that it can be tweaked.
#### pub fn http_connector(&self) -> Option<SharedHttpConnector>
Returns the `SharedHttpConnector` to use when making requests, if any.
#### pub fn endpoint_resolver(&self) -> SharedEndpointResolver
Returns the endpoint resolver.
#### pub fn retry_config(&self) -> Option<&RetryConfig>
Returns a reference to the retry configuration contained in this config, if any.
#### pub fn sleep_impl(&self) -> Option<SharedAsyncSleep>
Returns a cloned shared async sleep implementation from this config, if any.
#### pub fn timeout_config(&self) -> Option<&TimeoutConfig>
Returns a reference to the timeout configuration contained in this config, if any.
#### pub fn interceptors(&self) -> impl Iterator<Item = SharedInterceptor> + '_
Returns interceptors currently registered by the user.
#### pub fn time_source(&self) -> Option<SharedTimeSource>
Returns the time source used for this service.
#### pub fn app_name(&self) -> Option<&AppName>
Returns the name of the app that is using the client, if it was provided.
This *optional* name is used to identify the application in the user agent that gets sent along with requests.
#### pub fn invocation_id_generator(&self) -> Option<SharedInvocationIdGenerator>
Returns the invocation ID generator if one was given in config.
The invocation ID generator generates ID values for the `amz-sdk-invocation-id` header. By default, this will be a random UUID. Overriding it may be useful in tests that examine the HTTP request and need to be deterministic.
#### pub fn new(config: &SdkConfig) -> Self
Creates a new service config from a shared `config`.
#### pub fn signing_service(&self) -> &'static str
The signature version 4 service signing name to use in the credential scope when signing requests.
The signing service may be overridden by the `Endpoint`, or by specifying a custom `SigningService` during operation construction.
#### pub fn region(&self) -> Option<&Region>
Returns the AWS region, if it was provided.
#### pub fn credentials_cache(&self) -> Option<SharedCredentialsCache>
Returns the credentials cache.
Trait Implementations
---
### impl Clone for Config
#### fn clone(&self) -> Config
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Config
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl From<&SdkConfig> for Config
#### fn from(sdk_config: &SdkConfig) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Config
### impl Send for Config
### impl Sync for Config
### impl Unpin for Config
### impl !UnwindSafe for Config
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Module aws_sdk_elasticloadbalancing::config
===
Configuration for Elastic Load Balancing.
Modules
---
* endpointTypes needed to configure endpoint resolution.
* interceptorsTypes needed to implement `Interceptor`.
* retryRetry configuration.
* timeoutTimeout configuration.
Structs
---
* AppNameApp name that can be configured with an AWS SDK client to become part of the user agent string.
* BuilderBuilder for creating a `Config`.
* ConfigConfiguration for a aws_sdk_elasticloadbalancing service client.
* ConfigBagLayered configuration structure
* CredentialsAWS SDK Credentials
* RegionThe region to send requests to.
* RuntimeComponentsComponents that can only be set in runtime plugins that the orchestrator uses directly to call an operation.
* SharedAsyncSleepWrapper type for sharable `AsyncSleep`
* SharedInterceptorInterceptor wrapper that may be shared
* SleepFuture returned by `AsyncSleep`.
Traits
---
* AsyncSleepAsync trait with a `sleep` function.
* InterceptorAn interceptor allows injecting code into the SDK’s request execution pipeline.
Module aws_sdk_elasticloadbalancing::operation
===
All operations that this crate can perform.
Modules
---
* add_tagsTypes for the `AddTags` operation.
* apply_security_groups_to_load_balancerTypes for the `ApplySecurityGroupsToLoadBalancer` operation.
* attach_load_balancer_to_subnetsTypes for the `AttachLoadBalancerToSubnets` operation.
* configure_health_checkTypes for the `ConfigureHealthCheck` operation.
* create_app_cookie_stickiness_policyTypes for the `CreateAppCookieStickinessPolicy` operation.
* create_lb_cookie_stickiness_policyTypes for the `CreateLBCookieStickinessPolicy` operation.
* create_load_balancerTypes for the `CreateLoadBalancer` operation.
* create_load_balancer_listenersTypes for the `CreateLoadBalancerListeners` operation.
* create_load_balancer_policyTypes for the `CreateLoadBalancerPolicy` operation.
* delete_load_balancerTypes for the `DeleteLoadBalancer` operation.
* delete_load_balancer_listenersTypes for the `DeleteLoadBalancerListeners` operation.
* delete_load_balancer_policyTypes for the `DeleteLoadBalancerPolicy` operation.
* deregister_instances_from_load_balancerTypes for the `DeregisterInstancesFromLoadBalancer` operation.
* describe_account_limitsTypes for the `DescribeAccountLimits` operation.
* describe_instance_healthTypes for the `DescribeInstanceHealth` operation.
* describe_load_balancer_attributesTypes for the `DescribeLoadBalancerAttributes` operation.
* describe_load_balancer_policiesTypes for the `DescribeLoadBalancerPolicies` operation.
* describe_load_balancer_policy_typesTypes for the `DescribeLoadBalancerPolicyTypes` operation.
* describe_load_balancersTypes for the `DescribeLoadBalancers` operation.
* describe_tagsTypes for the `DescribeTags` operation.
* detach_load_balancer_from_subnetsTypes for the `DetachLoadBalancerFromSubnets` operation.
* disable_availability_zones_for_load_balancerTypes for the `DisableAvailabilityZonesForLoadBalancer` operation.
* enable_availability_zones_for_load_balancerTypes for the `EnableAvailabilityZonesForLoadBalancer` operation.
* modify_load_balancer_attributesTypes for the `ModifyLoadBalancerAttributes` operation.
* register_instances_with_load_balancerTypes for the `RegisterInstancesWithLoadBalancer` operation.
* remove_tagsTypes for the `RemoveTags` operation.
* set_load_balancer_listener_ssl_certificateTypes for the `SetLoadBalancerListenerSSLCertificate` operation.
* set_load_balancer_policies_for_backend_serverTypes for the `SetLoadBalancerPoliciesForBackendServer` operation.
* set_load_balancer_policies_of_listenerTypes for the `SetLoadBalancerPoliciesOfListener` operation.
Traits
---
* RequestIdImplementers add a function to return an AWS request ID
Enum aws_sdk_elasticloadbalancing::Error
===
```
#[non_exhaustive]pub enum Error {
AccessPointNotFoundException(AccessPointNotFoundException),
CertificateNotFoundException(CertificateNotFoundException),
DependencyThrottleException(DependencyThrottleException),
DuplicateAccessPointNameException(DuplicateAccessPointNameException),
DuplicateListenerException(DuplicateListenerException),
DuplicatePolicyNameException(DuplicatePolicyNameException),
DuplicateTagKeysException(DuplicateTagKeysException),
InvalidConfigurationRequestException(InvalidConfigurationRequestException),
InvalidEndPointException(InvalidEndPointException),
InvalidSchemeException(InvalidSchemeException),
InvalidSecurityGroupException(InvalidSecurityGroupException),
InvalidSubnetException(InvalidSubnetException),
ListenerNotFoundException(ListenerNotFoundException),
LoadBalancerAttributeNotFoundException(LoadBalancerAttributeNotFoundException),
OperationNotPermittedException(OperationNotPermittedException),
PolicyNotFoundException(PolicyNotFoundException),
PolicyTypeNotFoundException(PolicyTypeNotFoundException),
SubnetNotFoundException(SubnetNotFoundException),
TooManyAccessPointsException(TooManyAccessPointsException),
TooManyPoliciesException(TooManyPoliciesException),
TooManyTagsException(TooManyTagsException),
UnsupportedProtocolException(UnsupportedProtocolException),
Unhandled(Unhandled),
}
```
All possible error types for this service.
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in the future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants (a matching sketch follows the variant listing below).
### AccessPointNotFoundException(AccessPointNotFoundException)
The specified load balancer does not exist.
### CertificateNotFoundException(CertificateNotFoundException)
The specified ARN does not refer to a valid SSL certificate in AWS Identity and Access Management (IAM) or AWS Certificate Manager (ACM). Note that if you recently uploaded the certificate to IAM, this error might indicate that the certificate is not fully available yet.
### DependencyThrottleException(DependencyThrottleException)
A request made by Elastic Load Balancing to another service exceeds the maximum request rate permitted for your account.
### DuplicateAccessPointNameException(DuplicateAccessPointNameException)
The specified load balancer name already exists for this account.
### DuplicateListenerException(DuplicateListenerException)
A listener already exists for the specified load balancer name and port, but with a different instance port, protocol, or SSL certificate.
### DuplicatePolicyNameException(DuplicatePolicyNameException)
A policy with the specified name already exists for this load balancer.
### DuplicateTagKeysException(DuplicateTagKeysException)
A tag key was specified more than once.
### InvalidConfigurationRequestException(InvalidConfigurationRequestException)
The requested configuration change is not valid.
### InvalidEndPointException(InvalidEndPointException)
The specified endpoint is not valid.
### InvalidSchemeException(InvalidSchemeException)
The specified value for the schema is not valid. You can only specify a scheme for load balancers in a VPC.
### InvalidSecurityGroupException(InvalidSecurityGroupException)
One or more of the specified security groups do not exist.
### InvalidSubnetException(InvalidSubnetException)
The specified VPC has no associated Internet gateway.
### ListenerNotFoundException(ListenerNotFoundException)
The load balancer does not have a listener configured at the specified port.
### LoadBalancerAttributeNotFoundException(LoadBalancerAttributeNotFoundException)
The specified load balancer attribute does not exist.
### OperationNotPermittedException(OperationNotPermittedException)
This operation is not allowed.
### PolicyNotFoundException(PolicyNotFoundException)
One or more of the specified policies do not exist.
### PolicyTypeNotFoundException(PolicyTypeNotFoundException)
One or more of the specified policy types do not exist.
### SubnetNotFoundException(SubnetNotFoundException)
One or more of the specified subnets do not exist.
### TooManyAccessPointsException(TooManyAccessPointsException)
The quota for the number of load balancers has been reached.
### TooManyPoliciesException(TooManyPoliciesException)
The quota for the number of policies for this load balancer has been reached.
### TooManyTagsException(TooManyTagsException)
The quota for the number of tags that can be assigned to a load balancer has been reached.
### UnsupportedProtocolException(UnsupportedProtocolException)
The specified protocol or signature version is not supported.
### Unhandled(Unhandled)
An unexpected error occurred (e.g., invalid JSON returned by the service or an unknown error code).
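As noted above, a hedged sketch of matching on this enum; only a few variants are handled explicitly, and the trailing wildcard arm satisfies the non-exhaustive requirement.
```
use aws_sdk_elasticloadbalancing::Error;

fn describe(err: &Error) -> &'static str {
    match err {
        Error::AccessPointNotFoundException(_) => "load balancer does not exist",
        Error::TooManyAccessPointsException(_) => "load balancer quota reached",
        Error::InvalidConfigurationRequestException(_) => "requested configuration change is not valid",
        // Required: the enum is #[non_exhaustive].
        _ => "other Elastic Load Balancing error",
    }
}
```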
Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for Error
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, request: &mut Request<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`) Provides type based access to context intended for error reports.
### impl From<AddTagsError> for Error
#### fn from(err: AddTagsError) -> Self
Converts to this type from the input type.### impl From<ApplySecurityGroupsToLoadBalancerError> for Error
#### fn from(err: ApplySecurityGroupsToLoadBalancerError) -> Self
Converts to this type from the input type.### impl From<AttachLoadBalancerToSubnetsError> for Error
#### fn from(err: AttachLoadBalancerToSubnetsError) -> Self
Converts to this type from the input type.### impl From<ConfigureHealthCheckError> for Error
#### fn from(err: ConfigureHealthCheckError) -> Self
Converts to this type from the input type.### impl From<CreateAppCookieStickinessPolicyError> for Error
#### fn from(err: CreateAppCookieStickinessPolicyError) -> Self
Converts to this type from the input type.### impl From<CreateLBCookieStickinessPolicyError> for Error
#### fn from(err: CreateLBCookieStickinessPolicyError) -> Self
Converts to this type from the input type.### impl From<CreateLoadBalancerError> for Error
#### fn from(err: CreateLoadBalancerError) -> Self
Converts to this type from the input type.### impl From<CreateLoadBalancerListenersError> for Error
#### fn from(err: CreateLoadBalancerListenersError) -> Self
Converts to this type from the input type.### impl From<CreateLoadBalancerPolicyError> for Error
#### fn from(err: CreateLoadBalancerPolicyError) -> Self
Converts to this type from the input type.### impl From<DeleteLoadBalancerError> for Error
#### fn from(err: DeleteLoadBalancerError) -> Self
Converts to this type from the input type.### impl From<DeleteLoadBalancerListenersError> for Error
#### fn from(err: DeleteLoadBalancerListenersError) -> Self
Converts to this type from the input type.### impl From<DeleteLoadBalancerPolicyError> for Error
#### fn from(err: DeleteLoadBalancerPolicyError) -> Self
Converts to this type from the input type.### impl From<DeregisterInstancesFromLoadBalancerError> for Error
#### fn from(err: DeregisterInstancesFromLoadBalancerError) -> Self
Converts to this type from the input type.### impl From<DescribeAccountLimitsError> for Error
#### fn from(err: DescribeAccountLimitsError) -> Self
Converts to this type from the input type.### impl From<DescribeInstanceHealthError> for Error
#### fn from(err: DescribeInstanceHealthError) -> Self
Converts to this type from the input type.### impl From<DescribeLoadBalancerAttributesError> for Error
#### fn from(err: DescribeLoadBalancerAttributesError) -> Self
Converts to this type from the input type.### impl From<DescribeLoadBalancerPoliciesError> for Error
#### fn from(err: DescribeLoadBalancerPoliciesError) -> Self
Converts to this type from the input type.### impl From<DescribeLoadBalancerPolicyTypesError> for Error
#### fn from(err: DescribeLoadBalancerPolicyTypesError) -> Self
Converts to this type from the input type.### impl From<DescribeLoadBalancersError> for Error
#### fn from(err: DescribeLoadBalancersError) -> Self
Converts to this type from the input type.### impl From<DescribeTagsError> for Error
#### fn from(err: DescribeTagsError) -> Self
Converts to this type from the input type.### impl From<DetachLoadBalancerFromSubnetsError> for Error
#### fn from(err: DetachLoadBalancerFromSubnetsError) -> Self
Converts to this type from the input type.### impl From<DisableAvailabilityZonesForLoadBalancerError> for Error
#### fn from(err: DisableAvailabilityZonesForLoadBalancerError) -> Self
Converts to this type from the input type.### impl From<EnableAvailabilityZonesForLoadBalancerError> for Error
#### fn from(err: EnableAvailabilityZonesForLoadBalancerError) -> Self
Converts to this type from the input type.### impl From<ModifyLoadBalancerAttributesError> for Error
#### fn from(err: ModifyLoadBalancerAttributesError) -> Self
Converts to this type from the input type.### impl From<RegisterInstancesWithLoadBalancerError> for Error
#### fn from(err: RegisterInstancesWithLoadBalancerError) -> Self
Converts to this type from the input type.### impl From<RemoveTagsError> for Error
#### fn from(err: RemoveTagsError) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<AddTagsError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<AddTagsError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<ApplySecurityGroupsToLoadBalancerError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<ApplySecurityGroupsToLoadBalancerError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<AttachLoadBalancerToSubnetsError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<AttachLoadBalancerToSubnetsError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<ConfigureHealthCheckError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<ConfigureHealthCheckError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<CreateAppCookieStickinessPolicyError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<CreateAppCookieStickinessPolicyError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<CreateLBCookieStickinessPolicyError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<CreateLBCookieStickinessPolicyError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<CreateLoadBalancerError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<CreateLoadBalancerError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<CreateLoadBalancerListenersError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<CreateLoadBalancerListenersError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<CreateLoadBalancerPolicyError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<CreateLoadBalancerPolicyError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DeleteLoadBalancerError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DeleteLoadBalancerError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DeleteLoadBalancerListenersError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DeleteLoadBalancerListenersError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DeleteLoadBalancerPolicyError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DeleteLoadBalancerPolicyError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DeregisterInstancesFromLoadBalancerError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DeregisterInstancesFromLoadBalancerError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DescribeAccountLimitsError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DescribeAccountLimitsError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DescribeInstanceHealthError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DescribeInstanceHealthError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DescribeLoadBalancerAttributesError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DescribeLoadBalancerAttributesError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DescribeLoadBalancerPoliciesError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DescribeLoadBalancerPoliciesError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DescribeLoadBalancerPolicyTypesError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DescribeLoadBalancerPolicyTypesError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DescribeLoadBalancersError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DescribeLoadBalancersError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DescribeTagsError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DescribeTagsError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DetachLoadBalancerFromSubnetsError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DetachLoadBalancerFromSubnetsError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<DisableAvailabilityZonesForLoadBalancerError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<DisableAvailabilityZonesForLoadBalancerError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<EnableAvailabilityZonesForLoadBalancerError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<EnableAvailabilityZonesForLoadBalancerError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<ModifyLoadBalancerAttributesError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<ModifyLoadBalancerAttributesError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<RegisterInstancesWithLoadBalancerError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<RegisterInstancesWithLoadBalancerError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<RemoveTagsError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<RemoveTagsError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<SetLoadBalancerListenerSSLCertificateError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<SetLoadBalancerListenerSSLCertificateError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<SetLoadBalancerPoliciesForBackendServerError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<SetLoadBalancerPoliciesForBackendServerError, R>) -> Self
Converts to this type from the input type.### impl<R> From<SdkError<SetLoadBalancerPoliciesOfListenerError, R>> for Errorwhere
R: Send + Sync + Debug + 'static,
#### fn from(err: SdkError<SetLoadBalancerPoliciesOfListenerError, R>) -> Self
Converts to this type from the input type.### impl From<SetLoadBalancerListenerSSLCertificateError> for Error
#### fn from(err: SetLoadBalancerListenerSSLCertificateError) -> Self
Converts to this type from the input type.### impl From<SetLoadBalancerPoliciesForBackendServerError> for Error
#### fn from(err: SetLoadBalancerPoliciesForBackendServerError) -> Self
Converts to this type from the input type.### impl From<SetLoadBalancerPoliciesOfListenerError> for Error
#### fn from(err: SetLoadBalancerPoliciesOfListenerError) -> Self
Converts to this type from the input type.
### impl RequestId for Error
#### fn request_id(&self) -> Option<&str>
Returns the request ID, or `None` if the service could not be reached.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl !UnwindSafe for Error
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Module aws_sdk_elasticloadbalancing::client
===
Client for calling Elastic Load Balancing.
### Constructing a `Client`
A `Config` is required to construct a client. For most use cases, the `aws-config`
crate should be used to automatically resolve this config using
`aws_config::load_from_env()`, since this will resolve an `SdkConfig` which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling `aws_config::from_env()` instead, which returns a `ConfigLoader` that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
```
let config = aws_config::load_from_env().await;
let client = aws_sdk_elasticloadbalancing::Client::new(&config);
```
Occasionally, SDKs may have additional service-specific settings that can be set on the `Config` but are absent from `SdkConfig`, or slightly different settings for a specific client may be desired.
The `Config` struct implements `From<&SdkConfig>`, so setting these specific settings can be done as follows:
```
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_elasticloadbalancing::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
```
See the `aws-config` docs and `Config` for more information on customizing configuration.
*Note:* Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the `Client`
---
A client has a function for every operation that can be performed by the service.
For example, the `ApplySecurityGroupsToLoadBalancer` operation has a `Client::apply_security_groups_to_load_balancer` function, which returns a builder for that operation.
The fluent builder ultimately has a `send()` function that returns an async future that returns a result, as illustrated below:
```
let result = client.apply_security_groups_to_load_balancer()
.load_balancer_name("example")
.send()
.await;
```
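Continuing that example, the returned `Result` can be inspected directly. This is a hedged sketch: `DisplayErrorContext` (re-exported from the crate's `error` module) is used here only to print the full error chain.
```
use aws_sdk_elasticloadbalancing::error::DisplayErrorContext;

match result {
    Ok(output) => println!("{output:?}"),
    Err(err) => eprintln!("request failed: {}", DisplayErrorContext(err)),
}
```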
The underlying HTTP requests that get made by this can be modified with the `customize_operation`
function on the fluent builder. See the `customize` module for more information.
Modules
---
* customizeOperation customization and supporting types.
Structs
---
* ClientClient for Elastic Load Balancing
Module aws_sdk_elasticloadbalancing::error
===
Common errors and error handling utilities.
Structs
---
* DisplayErrorContextProvides a `Display` impl for an `Error` that outputs the full error context
Traits
---
* ProvideErrorMetadataTrait to retrieve error metadata from a result
Type Aliases
---
* BoxErrorA boxed error that is `Send` and `Sync`.
* SdkErrorError type returned by the client.
Module aws_sdk_elasticloadbalancing::meta
===
Information about this crate.
Statics
---
* PKG_VERSIONCrate version number.
tda-api documentation
---
`tda-api`: An Unofficial TD Ameritrade Client[¶](#tda-api-an-unofficial-td-ameritrade-client)
===
Getting Started[¶](#getting-started)
---
Welcome to `tda-api`! Read this page to learn how to install and configure your first TD Ameritrade Python application.
### TD Ameritrade API Access[¶](#td-ameritrade-api-access)
All API calls to the TD Ameritrade API require an API key. Before we do anything with `tda-api`, you’ll need to create a developer account with TD Ameritrade and register an application. By the end of this section, you’ll have accomplished the three prerequisites for using `tda-api`:
1. Create an application.
2. Choose and save the callback URL (important for authenticating).
3. Receive an API key.
You can create a developer account [here](https://developer.tdameritrade.com/user/register). The instructions from here on out assume you’re logged in,
so make sure you log into the developer site after you’ve created your account.
Next, you’ll want to [create an application](https://developer.tdameritrade.com/user/me/apps/add). The app name and purpose aren’t particularly important right now, but the callback URL is. In a nutshell, the [OAuth login flow](https://requests-oauthlib.readthedocs.io/en/latest/oauth2_workflow.html#web-application-flow) that TD Ameritrade uses works by opening a TD Ameritrade login page, securely collecting credentials on their domain, and then sending an HTTP request to the callback URL with the token in the URL query.
How you choose your callback URL depends on whether and how you plan on distributing your app. If you’re writing an app for your own personal use, and plan to run entirely on your own machine, use `https://localhost`. If you plan on running on a server and having users send requests to you, use a URL you own, such as a dedicated endpoint on your domain.
Once your app is created and approved, you will receive your API key, also known as the Client ID. This will be visible in TDA’s [app listing page](https://developer.tdameritrade.com/user/me/apps). Record this key, since it is necessary to access most API endpoints.
### Installing `tda-api`[¶](#installing-tda-api)
This section outlines the installation process for client users. For developers,
check out [Contributing to tda-api](index.html#contributing).
The recommended method of installing `tda-api` is using `pip` from
[PyPi](https://pypi.org/project/tda-api/) in a [virtualenv](https://virtualenv.pypa.io/en/latest/). First create a virtualenv in your project directory. Here we assume your virtualenv is called `my-venv`:
```
pip install virtualenv
virtualenv -v my-venv
source my-venv/bin/activate
```
You are now ready to install `tda-api`:
```
pip install tda-api
```
That’s it! You’re done! You can verify the install succeeded by importing the package:
```
import tda
```
If this succeeded, you’re ready to move on to [Authentication and Client Creation](index.html#auth).
Note that if you are using a virtual environment and switch to a new terminal, the virtual environment will not be active in the new terminal; you will need to run the activate command again.
If you want to disable the loaded virtual environment in the same terminal window,
use the command:
```
deactivate
```
### Getting Help[¶](#getting-help)
If you are ever stuck, feel free to [join our Discord server](https://discord.gg/M3vjtHj) to ask questions, get advice, and chat with like-minded people. If you feel you’ve found a bug, you can [fill out a bug report](index.html#help).
Authentication and Client Creation[¶](#authentication-and-client-creation)
---
By now, you should have followed the instructions in [Getting Started](index.html#getting-started) and are ready to start making API calls. Read this page to learn how to get over the last remaining hurdle: OAuth authentication.
Before we begin, however, note that this guide is meant for users who want to run applications on their own machines, without distributing them to others. If you plan on distributing your app, or if you plan on running it on a server and allowing access to other users, this login flow is not for you.
### OAuth Refresher[¶](#oauth-refresher)
*This section is purely for the curious. If you already understand OAuth (wow,
congrats) or if you don’t care and just want to use this package as fast as possible, feel free to skip this section. If you encounter any weird behavior,
this section may help you understand what’s going on.*
Webapp authentication is a complex beast. The OAuth protocol was created to allow applications to access one anothers’ APIs securely and with the minimum level of trust possible. A full treatise on this topic is well beyond the scope of this guide, but in order to alleviate
[some](https://www.reddit.com/r/algotrading/comments/brohdx/td_ameritrade_api_auth_error/)
[of](https://www.reddit.com/r/algotrading/comments/alk7yh/tdameritrade_api_works/)
[the](https://www.reddit.com/r/algotrading/comments/914q22/successful_access_to_td_ameritrade_api/)
[confusion](https://www.reddit.com/r/algotrading/comments/c81vzq/td_ameritrade_api_access_2019_guide/)
[and](https://www.reddit.com/r/algotrading/comments/a588l1/td_ameritrade_restful_api_beginner_questions/)
[complexity](https://www.reddit.com/r/algotrading/comments/brsnsm/how_to_automate_td_ameritrade_api_auth_code_for/)
that seems to surround this part of the API, let’s give a quick explanation of how OAuth works in the context of TD Ameritrade’s API.
The first thing to understand is that the OAuth webapp flow was created to allow client-side applications consisting of a webapp frontend and a remotely hosted backend to interact with a third party API. Unlike the [backend application flow](https://requests-oauthlib.readthedocs.io/en/latest/oauth2_workflow.html#backend-application-flow), in which the remotely hosted backend has a secret which allows it to access the API on its own behalf, the webapp flow allows either the webapp frontend or the remotely hosted backend to access the API *on behalf of its users*.
If you’ve ever installed a GitHub, Facebook, Twitter, GMail, etc. app, you’ve seen this flow. You click on the “install” link, a login window pops up, you enter your password, and you’re presented with a page that asks whether you want to grant the app access to your account.
Here’s what’s happening under the hood. The window that pops up is the
[authentication URL](https://developer.tdameritrade.com/content/simple-auth-local-apps), which opens a login page for the target API. The aim is to allow the user to input their username and password without the webapp frontend or the remotely hosted backend seeing it. On web browsers, this is accomplished using the browser’s refusal to send credentials from one domain to another.
Once login here is successful, the API replies with a redirect to a URL that the remotely hosted backend controls. This is the callback URL. This redirect will contain a code which securely identifies the user to the API, embedded in the query of the request.
You might think that code is enough to access the API, and it would be if the API author were willing to sacrifice long-term security. The exact reasons why it doesn’t work involve some deep security topics like robustness against replay attacks and session duration limitation, but we’ll skip them here.
This code is useful only for [fetching a token from the authentication endpoint](https://developer.tdameritrade.com/authentication/apis/post/token-0). *This token* is what we want: a secure secret which the client can use to access API endpoints, and can be refreshed over time.
If you’ve gotten this far and your head isn’t spinning, you haven’t been paying attention. Security-sensitive protocols can be very complicated, and you should
**never** build your own implementation. Fortunately there exist very robust implementations of this flow, and `tda-api`’s authentication module makes using them easy.
### Fetching a Token and Creating a Client[¶](#fetching-a-token-and-creating-a-client)
`tda-api` provides an easy implementation of the client-side login flow in the
`auth` package. It uses a [selenium](https://selenium-python.readthedocs.io/) webdriver to open the TD Ameritrade authentication URL, take your login credentials, catch the post-login redirect,
and fetch a reusable token. It returns a fully-configured [HTTP Client](index.html#client), ready to send API calls. It also handles token refreshing, and writes updated tokens to the token file.
These functions are webdriver-agnostic, meaning you can use whatever webdriver-supported browser you have available on your system. You can find information about available webdriver on the [Selenium documentation](https://www.selenium.dev/documentation/en/getting_started_with_webdriver/browsers/).
`tda.auth.``client_from_login_flow`(*webdriver*, *api_key*, *redirect_url*, *token_path*, *redirect_wait_time_seconds=0.1*, *max_waits=3000*, *asyncio=False*, *token_write_func=None*)[¶](#tda.auth.client_from_login_flow)
Uses the webdriver to perform an OAuth webapp login flow and creates a client wrapped around the resulting token. The client will be configured to refresh the token as necessary, writing each updated version to
`token_path`.
| Parameters: | * **webdriver** – [selenium](https://selenium-python.readthedocs.io)
webdriver which will be used to perform the login flow.
* **api_key** – Your TD Ameritrade application’s API key, also known as the client ID.
* **redirect_url** – Your TD Ameritrade application’s redirect URL. Note this must *exactly* match the value you’ve entered in your application configuration, otherwise login will fail with a security error.
* **token_path** – Path to which the new token will be written. If the token file already exists, it will be overwritten with a new one. Updated tokens will be written to this path as well.
|
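For instance, here is a minimal sketch of the browser-based flow using the Chrome webdriver (any selenium-supported browser works the same way; the API key, redirect URL, and token path below are placeholders):

```
from selenium import webdriver
from tda import auth

with webdriver.Chrome() as driver:
    c = auth.client_from_login_flow(
        driver,
        api_key='APIKEY',                  # your application's client ID
        redirect_url='https://localhost',  # must exactly match your app config
        token_path='/tmp/token.pickle')    # token is written and refreshed here
```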
If for some reason you cannot open a web browser, such as when running in a cloud environment, the following function will guide you through the process of manually creating a token by copy-pasting relevant URLs.
`tda.auth.``client_from_manual_flow`(*api_key*, *redirect_url*, *token_path*, *asyncio=False*, *token_write_func=None*)[¶](#tda.auth.client_from_manual_flow)
Walks the user through performing an OAuth login flow by manually copy-pasting URLs, and returns a client wrapped around the resulting token.
The client will be configured to refresh the token as necessary, writing each updated version to `token_path`.
Note this method is more complicated and error prone, and should be avoided in favor of [`client_from_login_flow()`](#tda.auth.client_from_login_flow) wherever possible.
| Parameters: | * **api_key** – Your TD Ameritrade application’s API key, also known as the client ID.
* **redirect_url** – Your TD Ameritrade application’s redirect URL. Note this must *exactly* match the value you’ve entered in your application configuration, otherwise login will fail with a security error.
* **token_path** – Path to which the new token will be written. If the token file already exists, it will be overwritten with a new one. Updated tokens will be written to this path as well.
|
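For example, a minimal sketch of the manual flow (all values are placeholders); the function walks you through copy-pasting the login and redirect URLs in your terminal:

```
from tda import auth

c = auth.client_from_manual_flow(
    api_key='APIKEY',
    redirect_url='https://localhost',
    token_path='/tmp/token.pickle')
```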
Once you have a token written on disk, you can reuse it without going through the login flow again.
`tda.auth.``client_from_token_file`(*token_path*, *api_key*, *asyncio=False*)[¶](#tda.auth.client_from_token_file)
Returns a session from an existing token file. The session will perform an auth refresh as needed. It will also update the token on disk whenever appropriate.
| Parameters: | * **token_path** – Path to an existing token. Updated tokens will be written to this path. If you do not yet have a token, use
[`client_from_login_flow()`](#tda.auth.client_from_login_flow) or
[`easy_client()`](#tda.auth.easy_client) to create one.
* **api_key** – Your TD Ameritrade application’s API key, also known as the client ID.
|
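For example, a minimal sketch of reusing an existing token (the path and key are placeholders, and the token at that path must have been created by one of the login-flow helpers above):

```
from tda import auth

c = auth.client_from_token_file('/tmp/token.pickle', 'APIKEY')
```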
The following is a convenient wrapper around these two methods, calling each when appropriate:
`tda.auth.``easy_client`(*api_key*, *redirect_uri*, *token_path*, *webdriver_func=None*, *asyncio=False*)[¶](#tda.auth.easy_client)
Convenient wrapper around [`client_from_login_flow()`](#tda.auth.client_from_login_flow) and
[`client_from_token_file()`](#tda.auth.client_from_token_file). If `token_path` exists, loads the token from it. Otherwise open a login flow to fetch a new token. Returns a client configured to refresh the token to `token_path`.
*Reminder:* You should never create the token file yourself or modify it in any way. If `token_path` refers to an existing file, this method will assume that file is a valid token and will attempt to parse it.
| Parameters: | * **api_key** – Your TD Ameritrade application’s API key, also known as the client ID.
* **redirect_url** – Your TD Ameritrade application’s redirect URL. Note this must *exactly* match the value you’ve entered in your application configuration, otherwise login will fail with a security error.
* **token_path** – Path that the token will be read from and written to. If this file exists, this method will assume it’s valid and will attempt to parse it as a token. If it does not,
this method will create a new one using
[`client_from_login_flow()`](#tda.auth.client_from_login_flow). Updated tokens will be written to this path as well.
* **webdriver_func** – Function that returns a webdriver for use in fetching a new token. Will only be called if the token file cannot be found.
|
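Here is a minimal sketch of the common pattern: reuse the token file if it exists, otherwise fall back to a login flow with a lazily created webdriver. The Chrome webdriver and all values here are placeholder assumptions:

```
from selenium import webdriver
from tda import auth

def make_webdriver():
    # Only called if the token file cannot be found
    return webdriver.Chrome()

c = auth.easy_client(
    api_key='APIKEY',
    redirect_uri='https://localhost',
    token_path='/tmp/token.pickle',
    webdriver_func=make_webdriver)
```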
If you don’t want to create a client and just want to fetch a token, you can use the `tda-generate-token.py` script that’s installed with the library. This method is particularly useful if you want to create your token on one machine and use it on another. The script will attempt to open a web browser and perform the login flow. If it fails, it will fall back to the manual login flow:
```
# Notice we don't prefix this with "python" because this is a script that was
# installed by pip when you installed tda-api
> tda-generate-token.py --help
usage: tda-generate-token.py [-h] --token_file TOKEN_FILE --api_key API_KEY --redirect_uri REDIRECT_URI
Fetch a new token and write it to a file
optional arguments:
-h, --help show this help message and exit
required arguments:
--token_file TOKEN_FILE
Path to token file. Any existing file will be overwritten
--api_key API_KEY
--redirect_uri REDIRECT_URI
```
This script is installed by `pip`, and will only be accessible if you’ve added pip’s executable locations to your `$PATH`. If you’re having a hard time, feel free to ask for help on our [Discord server](https://discord.gg/nfrd9gh).
### Advanced Functionality[¶](#advanced-functionality)
The default token fetcher functions are designed for ease of use. They make some common assumptions, most notably a writable filesystem, which are valid for 99%
of users. However, some very specialized users, for instance those hoping to deploy `tda-api` in serverless settings, require some more advanced functionality. This method provides the most flexible facility for fetching tokens possible.
**Important:** This is an extremely advanced method. If you read the documentation and think anything other than “oh wow, this is exactly what I’ve been looking for,” you don’t need this function. Please use the other helpers instead.
`tda.auth.``client_from_access_functions`(*api_key*, *token_read_func*, *token_write_func*, *asyncio=False*)[¶](#tda.auth.client_from_access_functions)
Returns a session from an existing token file, using the accessor methods to read and write the token. This is an advanced method for users who do not have access to a standard writable filesystem, such as users of AWS Lambda and other serverless products who must persist token updates on non-filesystem places, such as S3. 99.9% of users should not use this function.
Users are free to customize how they represent the token file. In theory,
since they have direct access to the token, they can get creative about how they store it and fetch it. In practice, it is *highly* recommended to simply accept the token object and use `pickle` to serialize and deserialize it, without inspecting it in any way.
| Parameters: | * **api_key** – Your TD Ameritrade application’s API key, also known as the client ID.
* **token_read_func** – Function that takes no arguments and returns a token object.
* **token_write_func** – Function that takes a token object and writes it. Will be called whenever the token is updated, such as when it is refreshed.
|
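As an illustration only, here is a rough sketch in which an in-memory dict stands in for a non-filesystem store such as S3. It assumes a token was already created elsewhere (for example via the login flow) and placed in `store['token']`; the names here are hypothetical:

```
import pickle

from tda import auth

store = {}  # populate store['token'] with a pickled token before use

def read_token():
    return pickle.loads(store['token'])

def write_token(token):
    # Called whenever the token is created or refreshed
    store['token'] = pickle.dumps(token)

c = auth.client_from_access_functions(
    'APIKEY', token_read_func=read_token, token_write_func=write_token)
```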
### Troubleshooting[¶](#troubleshooting)
As simple as it seems, this process is complex and mistakes are easy to make.
This section outlines some of the more common issues you might encounter. If you find yourself dealing with something that isn’t listed here, or if you try the suggested remedies and are still seeing issues, see the [Getting Help](index.html#help) page. You can also [join our Discord server](https://discord.gg/M3vjtHj) to ask questions.
#### “A third-party application may be attempting to make unauthorized access to your account”[¶](#a-third-party-application-may-be-attempting-to-make-unauthorized-access-to-your-account)
One attack on improperly implemented OAuth login flows involves tricking a user into submitting their credentials for a real app and then redirecting to a malicious web server (remember the `GET` request to the redirect URI contains all credentials required to access the user’s account). This is especially pernicious because from the user’s perspective, they see a real login window and probably never realize they’ve been sent to a malicious server, especially if the landing page is designed to resemble the target API’s landing page.
TD Ameritrade correctly prevents this attack by refusing to allow a login unless the client ID/API key and redirect URI **exactly** match those with which the app is configured. If you make *any* mistake in setting your API key or redirect URI, you’ll see this error instead of a login page:
If this happens, you almost certainly copied your API key or redirect URI incorrectly. Go back to your [application list](https://developer.tdameritrade.com/user/me/apps) and copy-paste the information again. Don’t manually type it out, don’t visually spot-check it.
Copy-paste it. Make sure to include details like trailing slashes, `https` protocol specifications, and port numbers.
Note `tda-api` *does not* require you to suffix your client ID with
`@AMER.OAUTHAP`. It will accept it if you do so, but if you make even the
*slightest* mistake without noticing, you will end up seeing this error and will be very confused. We recommend simply passing the “Client ID” field in as the API key parameter without any embellishment, and letting the library handle the rest.
#### `tda-api` Hangs After Successful Login[¶](#tda-api-hangs-after-successful-login)
After opening the login window, `tda-api` loops and waits until the webdriver’s current URL starts with the given redirect URI:
```
callback_url = ''
while not callback_url.startswith(redirect_url):
    callback_url = webdriver.current_url
    time.sleep(redirect_wait_time_seconds)
```
Usually, it would be impossible for a successful post-login callback to not start with the callback URI, but there’s one major exception: when the callback URI starts with `http`. Behavior varies by browser and app configuration, but a callback URI starting with `http` can sometimes be redirected to one starting with `https`, in which case `tda-api` will never notice the redirect.
If this is happening to you, consider changing your callback URI to use
`https` instead of `http`. Not only will it make your life easier here, but it is *extremely* bad practice to send credentials like this over an unencrypted channel like that provided by `http`.
#### Token Parsing Failures[¶](#token-parsing-failures)
`tda-api` handles creating and refreshing tokens. Simply put, *the user should never create or modify the token file*. If you are experiencing parse errors when accessing the token file or getting exceptions when accessing it, it’s probably because you created it yourself or modified it. If you’re experiencing token parsing issues, remember that:
1. You should never create the token file yourself. If you don’t already have a token, you should pass a nonexistent file path to
[`client_from_login_flow()`](#tda.auth.client_from_login_flow) or [`easy_client()`](#tda.auth.easy_client).
If the file already exists, these methods assume it’s a valid token file. If the file does not exist, they will go through the login flow to create one.
2. You should never modify the token file. The token file is automatically managed by `tda-api`, and modifying it will almost certainly break it.
3. You should never share the token file. If the token file is shared between applications, one of them will beat the other to refreshing, locking the slower one out of using `tda-api`.
If you didn’t do any of this and are still seeing issues using a token file that you’re confident is valid, please [file a ticket](https://github.com/alexgolec/tda-api/issues). Just remember, **never share your token file, not even with** `tda-api` **developers**. Sharing the token file is as dangerous as sharing your TD Ameritrade username and password.
#### What If I Can’t Use a Browser?[¶](#what-if-i-can-t-use-a-browser)
Launching a browser can be inconvenient in some situations, most notably in containerized applications running on a cloud provider. `tda-api` supports two alternatives to creating tokens by opening a web browser.
Firstly, the [manual login flow](#manual-login) flow allows you to go through the login flow on a different machine than the one on which `tda-api`
is running. Instead of starting the web browser and automatically opening the relevant URLs, this flow allows you to manually copy-paste around the URLs. It’s a little more cumbersome, but it has no dependency on selenium.
Alternately, you can take advantage of the fact that token files are portable.
Once you create a token on one machine, such as one where you can open a web browser, you can easily copy that token file to another machine, such as your application in the cloud. However, make sure you don’t use the same token on two machines. It is recommended to delete the token created on the browser-capable machine as soon as it is copied to its destination.
HTTP Client[¶](#http-client)
---
A naive, unopinionated wrapper around the
[TD Ameritrade HTTP API](https://developer.tdameritrade.com/apis). This client provides access to all endpoints of the API in as easy and direct a way as possible. For example, here is how you can fetch the past 20 years of data for Apple stock:
**Do not attempt to use more than one Client object per token file, as this will likely cause issues with the underlying OAuth2 session management**
```
import httpx

from tda.auth import easy_client
from tda.client import Client

c = easy_client(
        api_key='APIKEY',
        redirect_uri='https://localhost',
        token_path='/tmp/token.pickle')

resp = c.get_price_history('AAPL',
        period_type=Client.PriceHistory.PeriodType.YEAR,
        period=Client.PriceHistory.Period.TWENTY_YEARS,
        frequency_type=Client.PriceHistory.FrequencyType.DAILY,
        frequency=Client.PriceHistory.Frequency.DAILY)
assert resp.status_code == httpx.codes.OK
history = resp.json()
```
Note we create a new client using the `auth` package as described in
[Authentication and Client Creation](index.html#auth). Creating a client directly is possible, but not recommended.
### Asyncio Support[¶](#asyncio-support)
An asynchronous variant is available through a keyword to the client constructor. This allows for higher-performance API usage, at the cost of slightly increased application complexity.
```
import asyncio

import httpx

from tda.auth import easy_client
from tda.client import Client

async def main():
    c = easy_client(
        api_key='APIKEY',
        redirect_uri='https://localhost',
        token_path='/tmp/token.pickle',
        asyncio=True)

    resp = await c.get_price_history('AAPL',
            period_type=Client.PriceHistory.PeriodType.YEAR,
            period=Client.PriceHistory.Period.TWENTY_YEARS,
            frequency_type=Client.PriceHistory.FrequencyType.DAILY,
            frequency=Client.PriceHistory.Frequency.DAILY)
    assert resp.status_code == httpx.codes.OK
    history = resp.json()

if __name__ == '__main__':
    asyncio.run(main())
```
For more examples, please see the `examples/async` directory in GitHub.
### Calling Conventions[¶](#calling-conventions)
Function parameters are categorized as either required or optional.
Required parameters, such as `'AAPL'` in the example above, are passed as positional arguments. Optional parameters, like `period_type` and the rest,
are passed as keyword arguments.
Parameters which have special values recognized by the API are represented by [Python enums](https://docs.python.org/3/library/enum.html).
This is because the API rejects requests which pass unrecognized values, and this enum wrapping is provided as a convenient mechanism to avoid consternation caused by accidentally passing an unrecognized value.
By default, passing values other than the required enums will raise a
`ValueError`. If you believe the API accepts a value that isn’t supported here, you can use `set_enforce_enums` to disable this behavior at your own risk. If you *do* find a supported value that isn’t listed here, please open an issue describing it or submit a PR adding the new functionality.
### Return Values[¶](#return-values)
All methods return a response object generated under the hood by the
[HTTPX](https://www.python-httpx.org/quickstart/#response-content) module.
For a full listing of what’s possible, read that module’s documentation. Most if not all users can simply use the following pattern:
```
r = client.some_endpoint()
assert r.status_code == httpx.codes.OK, r.raise_for_status()
data = r.json()
```
The API indicates errors using the response status code, and this pattern will raise the appropriate exception if the response is not a success. The data can be fetched by calling the `.json()` method.
This data will be pure python data structures which can be directly accessed.
You can also use your favorite data analysis library’s dataframe format using the appropriate library. For instance you can create a [pandas](https://pandas.pydata.org/) dataframe using [its conversion method](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.from_dict.html).
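For example, here is a minimal sketch of loading daily candles into a dataframe. It assumes `c` is a client created as shown above and that the price history response contains a `candles` list, per the official schema; check the official documentation if your response differs:

```
import httpx
import pandas as pd

r = c.get_price_history('AAPL',
        period_type=Client.PriceHistory.PeriodType.YEAR,
        period=Client.PriceHistory.Period.ONE_YEAR,
        frequency_type=Client.PriceHistory.FrequencyType.DAILY,
        frequency=Client.PriceHistory.Frequency.DAILY)
assert r.status_code == httpx.codes.OK, r.raise_for_status()
df = pd.DataFrame(r.json()['candles'])
```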
**Note:** Because the author has no relationship whatsoever with TD Ameritrade,
this document makes no effort to describe the structure of the returned JSON objects. TDA might change them at any time, at which point this document will become silently out of date. Instead, each of the methods described below contains a link to the official documentation. For endpoints that return meaningful JSON objects, it includes a JSON schema which describes the return value. Please use that documentation or your own experimentation when figuring out how to use the data returned by this API.
### Creating a New Client[¶](#creating-a-new-client)
99.9% of users should not create their own clients, and should instead follow the instructions outlined in [Authentication and Client Creation](index.html#auth). For those brave enough to build their own, the constructor looks like this:
`Client.``__init__`(*api_key*, *session*, ***, *enforce_enums=True*, *token_metadata=None*)[¶](#tda.client.Client.__init__)
Create a new client with the given API key and session. Set enforce_enums=False to disable strict input type checking.
### Orders[¶](#orders)
#### Placing New Orders[¶](#placing-new-orders)
Placing new orders can be a complicated task. The [`Client.place_order()`](#tda.client.Client.place_order) method is used to create all orders, from equities to options. The precise order type is defined by a complex order spec. TDA provides some [example order specs](https://developer.tdameritrade.com/content/place-order-samples) to illustrate the process and provides a schema in the [place order documentation](https://developer.tdameritrade.com/account-access/apis/post/accounts/%7BaccountId%7D/orders-0), but beyond that we’re on our own.
`tda-api` includes some helpers, described in [Order Templates](index.html#order-templates), which provide an incomplete utility for creating various order types. While it only scratches the surface of what’s possible, we encourage you to use that module instead of creating your own order specs.
`Client.``place_order`(*account_id*, *order_spec*)[¶](#tda.client.Client.place_order)
Place an order for a specific account. If order creation was successful, the response will contain the ID of the generated order. See
[`tda.utils.Utils.extract_order_id()`](index.html#tda.utils.Utils.extract_order_id) for more details. Note unlike most methods in this library, responses for successful calls to this method typically do not contain `json()` data, and attempting to extract it will likely result in an exception.
[Official documentation](https://developer.tdameritrade.com/account-access/apis/post/accounts/%7BaccountId%7D/orders-0).
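As a rough sketch, here is how placing a simple order might look using the `equity_buy_market` helper from the order templates module and then recovering the order ID with `Utils.extract_order_id()`. It assumes `c` is a client created as above; the account ID and symbol are placeholders:

```
from tda.orders.equities import equity_buy_market
from tda.utils import Utils

account_id = 123456789  # placeholder

r = c.place_order(account_id, equity_buy_market('AAPL', 1))
r.raise_for_status()  # successful placement returns a 2XX with no JSON body
order_id = Utils(c, account_id).extract_order_id(r)
```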
#### Accessing Existing Orders[¶](#accessing-existing-orders)
`Client.``get_orders_by_path`(*account_id*, ***, *max_results=None*, *from_entered_datetime=None*, *to_entered_datetime=None*, *status=None*, *statuses=None*)[¶](#tda.client.Client.get_orders_by_path)
Orders for a specific account. At most one of `status` and
`statuses` may be set. [Official documentation](https://developer.tdameritrade.com/account-access/apis/get/accounts/%7BaccountId%7D/orders-0).
| Parameters: | * **max_results** – The maximum number of orders to retrieve.
* **from_entered_datetime** – Specifies that no orders entered before this time should be returned. Date must be within 60 days from today’s date.
`toEnteredTime` must also be set.
* **to_entered_datetime** – Specifies that no orders entered after this time should be returned. `fromEnteredTime`
must also be set.
* **status** – Restrict query to orders with this status. See
[`Order.Status`](#tda.client.Client.Order.Status) for options.
* **statuses** – Restrict query to orders with any of these statuses.
See [`Order.Status`](#tda.client.Client.Order.Status) for options.
|
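For instance, here is a minimal sketch of fetching the past week’s filled orders for an account, assuming `c` is a client created as above and `account_id` holds your account’s ID (dates are placeholders):

```
import datetime
import httpx

r = c.get_orders_by_path(
        account_id,
        from_entered_datetime=datetime.datetime.now() - datetime.timedelta(days=7),
        to_entered_datetime=datetime.datetime.now(),
        status=Client.Order.Status.FILLED)
assert r.status_code == httpx.codes.OK, r.raise_for_status()
orders = r.json()
```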
`Client.``get_orders_by_query`(***, *max_results=None*, *from_entered_datetime=None*, *to_entered_datetime=None*, *status=None*, *statuses=None*)[¶](#tda.client.Client.get_orders_by_query)
Orders for all linked accounts. At most one of `status` and
`statuses` may be set.
[Official documentation](https://developer.tdameritrade.com/account-access/apis/get/orders-0).
| Parameters: | * **max_results** – The maximum number of orders to retrieve.
* **from_entered_datetime** – Specifies that no orders entered before this time should be returned. Date must be within 60 days from today’s date.
`toEnteredTime` must also be set.
* **to_entered_datetime** – Specifies that no orders entered after this time should be returned. `fromEnteredTime`
must also be set.
* **status** – Restrict query to orders with this status. See
[`Order.Status`](#tda.client.Client.Order.Status) for options.
* **statuses** – Restrict query to orders with any of these statuses.
See [`Order.Status`](#tda.client.Client.Order.Status) for options.
|
`Client.``get_order`(*order_id*, *account_id*)[¶](#tda.client.Client.get_order)
Get a specific order for a specific account by its order ID.
[Official documentation](https://developer.tdameritrade.com/account-access/apis/get/accounts/%7BaccountId%7D/orders/%7BorderId%7D-0).
*class* `tda.client.Client.``Order`[¶](#tda.client.Client.Order)
*class* `Status`[¶](#tda.client.Client.Order.Status)
Order statuses passed to [`get_orders_by_path()`](#tda.client.Client.get_orders_by_path) and
[`get_orders_by_query()`](#tda.client.Client.get_orders_by_query)
`ACCEPTED` *= 'ACCEPTED'*[¶](#tda.client.Client.Order.Status.ACCEPTED)
`AWAITING_CONDITION` *= 'AWAITING_CONDITION'*[¶](#tda.client.Client.Order.Status.AWAITING_CONDITION)
`AWAITING_MANUAL_REVIEW` *= 'AWAITING_MANUAL_REVIEW'*[¶](#tda.client.Client.Order.Status.AWAITING_MANUAL_REVIEW)
`AWAITING_PARENT_ORDER` *= 'AWAITING_PARENT_ORDER'*[¶](#tda.client.Client.Order.Status.AWAITING_PARENT_ORDER)
`AWAITING_UR_OUR` *= 'AWAITING_UR_OUR'*[¶](#tda.client.Client.Order.Status.AWAITING_UR_OUR)
`CANCELED` *= 'CANCELED'*[¶](#tda.client.Client.Order.Status.CANCELED)
`EXPIRED` *= 'EXPIRED'*[¶](#tda.client.Client.Order.Status.EXPIRED)
`FILLED` *= 'FILLED'*[¶](#tda.client.Client.Order.Status.FILLED)
`PENDING_ACTIVATION` *= 'PENDING_ACTIVATION'*[¶](#tda.client.Client.Order.Status.PENDING_ACTIVATION)
`PENDING_CANCEL` *= 'PENDING_CANCEL'*[¶](#tda.client.Client.Order.Status.PENDING_CANCEL)
`PENDING_REPLACE` *= 'PENDING_REPLACE'*[¶](#tda.client.Client.Order.Status.PENDING_REPLACE)
`QUEUED` *= 'QUEUED'*[¶](#tda.client.Client.Order.Status.QUEUED)
`REJECTED` *= 'REJECTED'*[¶](#tda.client.Client.Order.Status.REJECTED)
`REPLACED` *= 'REPLACED'*[¶](#tda.client.Client.Order.Status.REPLACED)
`WORKING` *= 'WORKING'*[¶](#tda.client.Client.Order.Status.WORKING)
#### Editing Existing Orders[¶](#editing-existing-orders)
Endpoints for canceling and replacing existing orders.
Annoyingly, while these endpoints require an order ID, it seems that when placing new orders the API does not return any metadata about the new order. As a result, if you want to cancel or replace an order after you’ve created it, you must search for it using the methods described in [Accessing Existing Orders](#accessing-existing-orders).
`Client.``cancel_order`(*order_id*, *account_id*)[¶](#tda.client.Client.cancel_order)
Cancel a specific order for a specific account.
[Official documentation](https://developer.tdameritrade.com/account-access/apis/delete/accounts/%7BaccountId%7D/orders/%7BorderId%7D-0).
`Client.``replace_order`(*account_id*, *order_id*, *order_spec*)[¶](#tda.client.Client.replace_order)
Replace an existing order for an account. The existing order will be replaced by the new order. Once replaced, the old order will be canceled and a new order will be created.
[Official documentation](https://developer.tdameritrade.com/account-access/apis/put/accounts/%7BaccountId%7D/orders/%7BorderId%7D-0).
### Account Info[¶](#account-info)
These methods provide access to useful information about accounts. An incomplete list of the most interesting bits:
* Account balances, including available trading balance
* Positions
* Order history
See the official documentation for each method for a complete response schema.
`Client.``get_account`(*account_id*, ***, *fields=None*)[¶](#tda.client.Client.get_account)
Account balances, positions, and orders for a specific account.
[Official documentation](https://developer.tdameritrade.com/account-access/apis/get/accounts/%7BaccountId%7D-0).
| Parameters: | **fields** – Balances displayed by default, additional fields can be added here by adding values from [`Account.Fields`](#tda.client.Client.Account.Fields). |
`Client.``get_accounts`(***, *fields=None*)[¶](#tda.client.Client.get_accounts)
Account balances, positions, and orders for all linked accounts.
[Official documentation](https://developer.tdameritrade.com/account-access/apis/get/accounts-0).
| Parameters: | **fields** – Balances displayed by default, additional fields can be added here by adding values from [`Account.Fields`](#tda.client.Client.Account.Fields). |
*class* `tda.client.Client.``Account`[¶](#tda.client.Client.Account)
*class* `Fields`[¶](#tda.client.Client.Account.Fields)
Account fields passed to [`get_account()`](#tda.client.Client.get_account) and
[`get_accounts()`](#tda.client.Client.get_accounts)
`ORDERS` *= 'orders'*[¶](#tda.client.Client.Account.Fields.ORDERS)
`POSITIONS` *= 'positions'*[¶](#tda.client.Client.Account.Fields.POSITIONS)
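For example, here is a minimal sketch of pulling balances along with positions and orders for a single account. It assumes `c` is a client created as above, `account_id` holds your account’s ID, and that `fields` accepts an iterable of `Account.Fields` values:

```
import httpx

r = c.get_account(account_id,
                  fields=[Client.Account.Fields.POSITIONS,
                          Client.Account.Fields.ORDERS])
assert r.status_code == httpx.codes.OK, r.raise_for_status()
account = r.json()
```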
### Instrument Info[¶](#instrument-info)
Note: symbol fundamentals (P/E ratios, number of shares outstanding, dividend yield, etc.) are available using the `Instrument.Projection.FUNDAMENTAL`
projection.
`Client.``search_instruments`(*symbols*, *projection*)[¶](#tda.client.Client.search_instruments)
Search or retrieve instrument data, including fundamental data.
[Official documentation](https://developer.tdameritrade.com/instruments/apis/get/instruments).
| Parameters: | **projection** – Query type. See [`Instrument.Projection`](#tda.client.Client.Instrument.Projection) for options. |
`Client.``get_instrument`(*cusip*)[¶](#tda.client.Client.get_instrument)
Get an instrument by CUSIP.
[Official documentation](https://developer.tdameritrade.com/instruments/apis/get/instruments/%7Bcusip%7D).
*class* `tda.client.Client.``Instrument`[¶](#tda.client.Client.Instrument)
*class* `Projection`[¶](#tda.client.Client.Instrument.Projection)
Search query type for [`search_instruments()`](#tda.client.Client.search_instruments). See the
[official documentation](https://developer.tdameritrade.com/instruments/apis/get/instruments) for details on the semantics of each.
`DESC_REGEX` *= 'desc-regex'*[¶](#tda.client.Client.Instrument.Projection.DESC_REGEX)
`DESC_SEARCH` *= 'desc-search'*[¶](#tda.client.Client.Instrument.Projection.DESC_SEARCH)
`FUNDAMENTAL` *= 'fundamental'*[¶](#tda.client.Client.Instrument.Projection.FUNDAMENTAL)
`SYMBOL_REGEX` *= 'symbol-regex'*[¶](#tda.client.Client.Instrument.Projection.SYMBOL_REGEX)
`SYMBOL_SEARCH` *= 'symbol-search'*[¶](#tda.client.Client.Instrument.Projection.SYMBOL_SEARCH)
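For instance, here is a minimal sketch of fetching fundamental data for a couple of symbols, assuming `c` is a client created as above:

```
import httpx

r = c.search_instruments(['AAPL', 'MSFT'],
                         Client.Instrument.Projection.FUNDAMENTAL)
assert r.status_code == httpx.codes.OK, r.raise_for_status()
fundamentals = r.json()
```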
### Option Chain[¶](#option-chain)
Unfortunately, option chains are well beyond the ability of your humble author.
You are encouraged to read the official API documentation to learn more.
If you *are* knowledgeable enough to write something more substantive here,
please follow the instructions in [Contributing to tda-api](index.html#contributing) to send in a patch.
`Client.``get_option_chain`(*symbol*, ***, *contract_type=None*, *strike_count=None*, *include_quotes=None*, *strategy=None*, *interval=None*, *strike=None*, *strike_range=None*, *from_date=None*, *to_date=None*, *volatility=None*, *underlying_price=None*, *interest_rate=None*, *days_to_expiration=None*, *exp_month=None*, *option_type=None*)[¶](#tda.client.Client.get_option_chain)
Get option chain for an optionable Symbol.
[Official documentation](https://developer.tdameritrade.com/option-chains/apis/get/marketdata/chains).
| Parameters: | * **contract_type** – Type of contracts to return in the chain. See
[`Options.ContractType`](#tda.client.Client.Options.ContractType) for choices.
* **strike_count** – The number of strikes to return above and below the at-the-money price.
* **include_quotes** – Include quotes for options in the option chain?
* **strategy** – If passed, returns a Strategy Chain. See
[`Options.Strategy`](#tda.client.Client.Options.Strategy) for choices.
* **interval** – Strike interval for spread strategy chains (see
`strategy` param).
* **strike** – Return options only at this strike price.
* **strike_range** – Return options for the given range. See
[`Options.StrikeRange`](#tda.client.Client.Options.StrikeRange) for choices.
* **from_date** – Only return expirations after this date. For strategies, expiration refers to the nearest term expiration in the strategy. Accepts `datetime.date`
and `datetime.datetime`.
* **to_date** – Only return expirations before this date. For strategies, expiration refers to the nearest term expiration in the strategy. Accepts `datetime.date`
and `datetime.datetime`.
* **volatility** – Volatility to use in calculations. Applies only to
`ANALYTICAL` strategy chains.
* **underlying_price** – Underlying price to use in calculations.
Applies only to `ANALYTICAL` strategy chains.
* **interest_rate** – Interest rate to use in calculations. Applies only to `ANALYTICAL` strategy chains.
* **days_to_expiration** – Days to expiration to use in calculations.
Applies only to `ANALYTICAL` strategy chains
* **exp_month** – Return only options expiring in the specified month. See
[`Options.ExpirationMonth`](#tda.client.Client.Options.ExpirationMonth) for choices.
* **option_type** – Types of options to return. See
[`Options.Type`](#tda.client.Client.Options.Type) for choices.
|
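As a rough sketch, here is one way to fetch near-the-money calls expiring over roughly the next month, assuming `c` is a client created as above (dates are placeholders; consult the official documentation for valid parameter combinations):

```
import datetime
import httpx

r = c.get_option_chain(
        'AAPL',
        contract_type=Client.Options.ContractType.CALL,
        strike_range=Client.Options.StrikeRange.NEAR_THE_MONEY,
        from_date=datetime.date.today(),
        to_date=datetime.date.today() + datetime.timedelta(days=30))
assert r.status_code == httpx.codes.OK, r.raise_for_status()
chain = r.json()
```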
*class* `tda.client.Client.``Options`[¶](#tda.client.Client.Options)
*class* `ContractType`[¶](#tda.client.Client.Options.ContractType)
An enumeration.
`ALL` *= 'ALL'*[¶](#tda.client.Client.Options.ContractType.ALL)
`CALL` *= 'CALL'*[¶](#tda.client.Client.Options.ContractType.CALL)
`PUT` *= 'PUT'*[¶](#tda.client.Client.Options.ContractType.PUT)
*class* `ExpirationMonth`[¶](#tda.client.Client.Options.ExpirationMonth)
An enumeration.
`APRIL` *= 'APR'*[¶](#tda.client.Client.Options.ExpirationMonth.APRIL)
`AUGUST` *= 'AUG'*[¶](#tda.client.Client.Options.ExpirationMonth.AUGUST)
`DECEMBER` *= 'DEC'*[¶](#tda.client.Client.Options.ExpirationMonth.DECEMBER)
`FEBRUARY` *= 'FEB'*[¶](#tda.client.Client.Options.ExpirationMonth.FEBRUARY)
`JANUARY` *= 'JAN'*[¶](#tda.client.Client.Options.ExpirationMonth.JANUARY)
`JULY` *= 'JUL'*[¶](#tda.client.Client.Options.ExpirationMonth.JULY)
`JUNE` *= 'JUN'*[¶](#tda.client.Client.Options.ExpirationMonth.JUNE)
`MARCH` *= 'MAR'*[¶](#tda.client.Client.Options.ExpirationMonth.MARCH)
`MAY` *= 'MAY'*[¶](#tda.client.Client.Options.ExpirationMonth.MAY)
`NOVEMBER` *= 'NOV'*[¶](#tda.client.Client.Options.ExpirationMonth.NOVEMBER)
`OCTOBER` *= 'OCT'*[¶](#tda.client.Client.Options.ExpirationMonth.OCTOBER)
`SEPTEMBER` *= 'SEP'*[¶](#tda.client.Client.Options.ExpirationMonth.SEPTEMBER)
*class* `Strategy`[¶](#tda.client.Client.Options.Strategy)
An enumeration.
`ANALYTICAL` *= 'ANALYTICAL'*[¶](#tda.client.Client.Options.Strategy.ANALYTICAL)
`BUTTERFLY` *= 'BUTTERFLY'*[¶](#tda.client.Client.Options.Strategy.BUTTERFLY)
`CALENDAR` *= 'CALENDAR'*[¶](#tda.client.Client.Options.Strategy.CALENDAR)
`COLLAR` *= 'COLLAR'*[¶](#tda.client.Client.Options.Strategy.COLLAR)
`CONDOR` *= 'CONDOR'*[¶](#tda.client.Client.Options.Strategy.CONDOR)
`COVERED` *= 'COVERED'*[¶](#tda.client.Client.Options.Strategy.COVERED)
`DIAGONAL` *= 'DIAGONAL'*[¶](#tda.client.Client.Options.Strategy.DIAGONAL)
`ROLL` *= 'ROLL'*[¶](#tda.client.Client.Options.Strategy.ROLL)
`SINGLE` *= 'SINGLE'*[¶](#tda.client.Client.Options.Strategy.SINGLE)
`STRADDLE` *= 'STRADDLE'*[¶](#tda.client.Client.Options.Strategy.STRADDLE)
`STRANGLE` *= 'STRANGLE'*[¶](#tda.client.Client.Options.Strategy.STRANGLE)
`VERTICAL` *= 'VERTICAL'*[¶](#tda.client.Client.Options.Strategy.VERTICAL)
*class* `StrikeRange`[¶](#tda.client.Client.Options.StrikeRange)
An enumeration.
`ALL` *= 'ALL'*[¶](#tda.client.Client.Options.StrikeRange.ALL)
`IN_THE_MONEY` *= 'IN_THE_MONEY'*[¶](#tda.client.Client.Options.StrikeRange.IN_THE_MONEY)
`NEAR_THE_MONEY` *= 'NEAR_THE_MONEY'*[¶](#tda.client.Client.Options.StrikeRange.NEAR_THE_MONEY)
`OUT_OF_THE_MONEY` *= 'OUT_OF_THE_MONEY'*[¶](#tda.client.Client.Options.StrikeRange.OUT_OF_THE_MONEY)
`STRIKES_ABOVE_MARKET` *= 'STRIKES_ABOVE_MARKET'*[¶](#tda.client.Client.Options.StrikeRange.STRIKES_ABOVE_MARKET)
`STRIKES_BELOW_MARKET` *= 'STRIKES_BELOW_MARKET'*[¶](#tda.client.Client.Options.StrikeRange.STRIKES_BELOW_MARKET)
`STRIKES_NEAR_MARKET` *= 'STRIKES_NEAR_MARKET'*[¶](#tda.client.Client.Options.StrikeRange.STRIKES_NEAR_MARKET)
*class* `Type`[¶](#tda.client.Client.Options.Type)
An enumeration.
`ALL` *= 'ALL'*[¶](#tda.client.Client.Options.Type.ALL)
`NON_STANDARD` *= 'NS'*[¶](#tda.client.Client.Options.Type.NON_STANDARD)
`STANDARD` *= 'S'*[¶](#tda.client.Client.Options.Type.STANDARD)
### Price History[¶](#price-history)
Fetching price history is somewhat complicated due to the fact that only certain combinations of parameters are valid. To avoid accidentally making it impossible to send valid requests, this method performs no validation on its parameters. If you are receiving empty responses or other unexpected return values, see the official documentation for more details.
`Client.``get_price_history`(*symbol*, ***, *period_type=None*, *period=None*, *frequency_type=None*, *frequency=None*, *start_datetime=None*, *end_datetime=None*, *need_extended_hours_data=None*)[¶](#tda.client.Client.get_price_history)
Get price history for a symbol.
[Official documentation](https://developer.tdameritrade.com/price-history/apis/get/marketdata/%7Bsymbol%7D/pricehistory).
| Parameters: | * **period_type** – The type of period to show.
* **period** – The number of periods to show. Should not be provided if
`start_datetime` and `end_datetime` are provided.
* **frequency_type** – The type of frequency with which a new candle is formed.
* **frequency** – The number of the frequencyType to be included in each candle.
* **start_datetime** – Start date.
* **end_datetime** – End date. Default is previous trading day.
* **need_extended_hours_data** – If true, return extended hours data.
Otherwise return regular market hours only.
|
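For example, here is a rough sketch of requesting five-minute candles over an explicit date range rather than a named period, assuming `c` is a client created as above (dates are placeholders; consult the official documentation for which parameter combinations the API accepts):

```
import datetime
import httpx

r = c.get_price_history(
        'AAPL',
        period_type=Client.PriceHistory.PeriodType.DAY,
        frequency_type=Client.PriceHistory.FrequencyType.MINUTE,
        frequency=Client.PriceHistory.Frequency.EVERY_FIVE_MINUTES,
        start_datetime=datetime.datetime(2021, 6, 1),
        end_datetime=datetime.datetime(2021, 6, 5),
        need_extended_hours_data=False)
assert r.status_code == httpx.codes.OK, r.raise_for_status()
candles = r.json()
```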
*class* `tda.client.Client.``PriceHistory`[¶](#tda.client.Client.PriceHistory)
*class* `Frequency`[¶](#tda.client.Client.PriceHistory.Frequency)
An enumeration.
`DAILY` *= 1*[¶](#tda.client.Client.PriceHistory.Frequency.DAILY)
`EVERY_FIFTEEN_MINUTES` *= 15*[¶](#tda.client.Client.PriceHistory.Frequency.EVERY_FIFTEEN_MINUTES)
`EVERY_FIVE_MINUTES` *= 5*[¶](#tda.client.Client.PriceHistory.Frequency.EVERY_FIVE_MINUTES)
`EVERY_MINUTE` *= 1*[¶](#tda.client.Client.PriceHistory.Frequency.EVERY_MINUTE)
`EVERY_TEN_MINUTES` *= 10*[¶](#tda.client.Client.PriceHistory.Frequency.EVERY_TEN_MINUTES)
`EVERY_THIRTY_MINUTES` *= 30*[¶](#tda.client.Client.PriceHistory.Frequency.EVERY_THIRTY_MINUTES)
`MONTHLY` *= 1*[¶](#tda.client.Client.PriceHistory.Frequency.MONTHLY)
`WEEKLY` *= 1*[¶](#tda.client.Client.PriceHistory.Frequency.WEEKLY)
*class* `FrequencyType`[¶](#tda.client.Client.PriceHistory.FrequencyType)
An enumeration.
`DAILY` *= 'daily'*[¶](#tda.client.Client.PriceHistory.FrequencyType.DAILY)
`MINUTE` *= 'minute'*[¶](#tda.client.Client.PriceHistory.FrequencyType.MINUTE)
`MONTHLY` *= 'monthly'*[¶](#tda.client.Client.PriceHistory.FrequencyType.MONTHLY)
`WEEKLY` *= 'weekly'*[¶](#tda.client.Client.PriceHistory.FrequencyType.WEEKLY)
*class* `Period`[¶](#tda.client.Client.PriceHistory.Period)
An enumeration.
`FIFTEEN_YEARS` *= 15*[¶](#tda.client.Client.PriceHistory.Period.FIFTEEN_YEARS)
`FIVE_DAYS` *= 5*[¶](#tda.client.Client.PriceHistory.Period.FIVE_DAYS)
`FIVE_YEARS` *= 5*[¶](#tda.client.Client.PriceHistory.Period.FIVE_YEARS)
`FOUR_DAYS` *= 4*[¶](#tda.client.Client.PriceHistory.Period.FOUR_DAYS)
`ONE_DAY` *= 1*[¶](#tda.client.Client.PriceHistory.Period.ONE_DAY)
`ONE_MONTH` *= 1*[¶](#tda.client.Client.PriceHistory.Period.ONE_MONTH)
`ONE_YEAR` *= 1*[¶](#tda.client.Client.PriceHistory.Period.ONE_YEAR)
`SIX_MONTHS` *= 6*[¶](#tda.client.Client.PriceHistory.Period.SIX_MONTHS)
`TEN_DAYS` *= 10*[¶](#tda.client.Client.PriceHistory.Period.TEN_DAYS)
`TEN_YEARS` *= 10*[¶](#tda.client.Client.PriceHistory.Period.TEN_YEARS)
`THREE_DAYS` *= 3*[¶](#tda.client.Client.PriceHistory.Period.THREE_DAYS)
`THREE_MONTHS` *= 3*[¶](#tda.client.Client.PriceHistory.Period.THREE_MONTHS)
`THREE_YEARS` *= 3*[¶](#tda.client.Client.PriceHistory.Period.THREE_YEARS)
`TWENTY_YEARS` *= 20*[¶](#tda.client.Client.PriceHistory.Period.TWENTY_YEARS)
`TWO_DAYS` *= 2*[¶](#tda.client.Client.PriceHistory.Period.TWO_DAYS)
`TWO_MONTHS` *= 2*[¶](#tda.client.Client.PriceHistory.Period.TWO_MONTHS)
`TWO_YEARS` *= 2*[¶](#tda.client.Client.PriceHistory.Period.TWO_YEARS)
`YEAR_TO_DATE` *= 1*[¶](#tda.client.Client.PriceHistory.Period.YEAR_TO_DATE)
*class* `PeriodType`[¶](#tda.client.Client.PriceHistory.PeriodType)
An enumeration.
`DAY` *= 'day'*[¶](#tda.client.Client.PriceHistory.PeriodType.DAY)
`MONTH` *= 'month'*[¶](#tda.client.Client.PriceHistory.PeriodType.MONTH)
`YEAR` *= 'year'*[¶](#tda.client.Client.PriceHistory.PeriodType.YEAR)
`YEAR_TO_DATE` *= 'ytd'*[¶](#tda.client.Client.PriceHistory.PeriodType.YEAR_TO_DATE)
### Current Quotes[¶](#current-quotes)
`Client.``get_quote`(*symbol*)[¶](#tda.client.Client.get_quote)
Get quote for a symbol. Note that due to limitations in URL encoding, this method is not recommended for instruments with symbols containing non-alphanumeric characters, such as futures like
`/ES`. To get quotes for those symbols, use [`Client.get_quotes()`](#tda.client.Client.get_quotes).
[Official documentation](https://developer.tdameritrade.com/quotes/apis/get/marketdata/%7Bsymbol%7D/quotes).
`Client.``get_quotes`(*symbols*)[¶](#tda.client.Client.get_quotes)
Get quotes for one or more symbols. This method supports all symbols, including those containing non-alphanumeric characters like `/ES`.
[Official documentation](https://developer.tdameritrade.com/quotes/apis/get/marketdata/quotes).
### Other Endpoints[¶](#other-endpoints)
Note: if your account is limited to delayed quotes, these quotes will also be delayed.
#### Transaction History[¶](#transaction-history)
`Client.``get_transaction`(*account_id*, *transaction_id*)[¶](#tda.client.Client.get_transaction)
Transaction for a specific account.
[Official documentation](https://developer.tdameritrade.com/transaction-history/apis/get/accounts/%7BaccountId%7D/transactions/%7BtransactionId%7D-0).
`Client.``get_transactions`(*account_id*, ***, *transaction_type=None*, *symbol=None*, *start_date=None*, *end_date=None*)[¶](#tda.client.Client.get_transactions)
Transactions for a specific account.
[Official documentation](https://developer.tdameritrade.com/transaction-history/apis/get/accounts/%7BaccountId%7D/transactions-0).
| Parameters: | * **transaction_type** – Only transactions with the specified type will be returned.
* **symbol** – Only transactions with the specified symbol will be returned.
* **start_date** – Only transactions after this date will be returned.
Note the maximum date range is one year.
Accepts `datetime.date` and `datetime.datetime`.
* **end_date** – Only transactions before this date will be returned. Note the maximum date range is one year.
Accepts `datetime.date` and `datetime.datetime`.
|
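For instance, here is a minimal sketch of pulling the past month of trades for an account, assuming `c` is a client created as above and `account_id` holds your account’s ID (dates are placeholders):

```
import datetime
import httpx

r = c.get_transactions(
        account_id,
        transaction_type=Client.Transactions.TransactionType.TRADE,
        start_date=datetime.date.today() - datetime.timedelta(days=30),
        end_date=datetime.date.today())
assert r.status_code == httpx.codes.OK, r.raise_for_status()
transactions = r.json()
```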
*class* `tda.client.Client.``Transactions`[¶](#tda.client.Client.Transactions)
*class* `TransactionType`[¶](#tda.client.Client.Transactions.TransactionType)
An enumeration.
`ADVISORY_FEES` *= 'ADVISORY_FEES'*[¶](#tda.client.Client.Transactions.TransactionType.ADVISORY_FEES)
`ALL` *= 'ALL'*[¶](#tda.client.Client.Transactions.TransactionType.ALL)
`BUY_ONLY` *= 'BUY_ONLY'*[¶](#tda.client.Client.Transactions.TransactionType.BUY_ONLY)
`CASH_IN_OR_CASH_OUT` *= 'CASH_IN_OR_CASH_OUT'*[¶](#tda.client.Client.Transactions.TransactionType.CASH_IN_OR_CASH_OUT)
`CHECKING` *= 'CHECKING'*[¶](#tda.client.Client.Transactions.TransactionType.CHECKING)
`DIVIDEND` *= 'DIVIDEND'*[¶](#tda.client.Client.Transactions.TransactionType.DIVIDEND)
`INTEREST` *= 'INTEREST'*[¶](#tda.client.Client.Transactions.TransactionType.INTEREST)
`OTHER` *= 'OTHER'*[¶](#tda.client.Client.Transactions.TransactionType.OTHER)
`SELL_ONLY` *= 'SELL_ONLY'*[¶](#tda.client.Client.Transactions.TransactionType.SELL_ONLY)
`TRADE` *= 'TRADE'*[¶](#tda.client.Client.Transactions.TransactionType.TRADE)
#### Saved Orders[¶](#saved-orders)
`Client.``create_saved_order`(*account_id*, *order_spec*)[¶](#tda.client.Client.create_saved_order)
Save an order for a specific account.
[Official documentation](https://developer.tdameritrade.com/account-access/apis/post/accounts/%7BaccountId%7D/savedorders-0).
`Client.``delete_saved_order`(*account_id*, *order_id*)[¶](#tda.client.Client.delete_saved_order)
Delete a specific saved order for a specific account.
[Official documentation](https://developer.tdameritrade.com/account-access/apis/delete/accounts/%7BaccountId%7D/savedorders/%7BsavedOrderId%7D-0).
`Client.``get_saved_order`(*account_id*, *order_id*)[¶](#tda.client.Client.get_saved_order)
Specific saved order by its ID, for a specific account.
[Official documentation](https://developer.tdameritrade.com/account-access/apis/get/accounts/%7BaccountId%7D/savedorders/%7BsavedOrderId%7D-0).
`Client.``get_saved_orders_by_path`(*account_id*)[¶](#tda.client.Client.get_saved_orders_by_path)
Saved orders for a specific account.
[Official documentation](https://developer.tdameritrade.com/account-access/apis/get/accounts/%7BaccountId%7D/savedorders-0).
`Client.``replace_saved_order`(*account_id*, *order_id*, *order_spec*)[¶](#tda.client.Client.replace_saved_order)
Replace an existing saved order for an account. The existing saved order will be replaced by the new order.
[Official documentation](https://developer.tdameritrade.com/account-access/apis/put/accounts/%7BaccountId%7D/savedorders/%7BsavedOrderId%7D-0).
#### Market Hours[¶](#market-hours)
`Client.``get_hours_for_multiple_markets`(*markets*, *date*)[¶](#tda.client.Client.get_hours_for_multiple_markets)
Retrieve market hours for specified markets.
[Official documentation](https://developer.tdameritrade.com/market-hours/apis/get/marketdata/hours).
| Parameters: | * **markets** – Market to return hours for. Iterable of
[`Markets`](#tda.client.Client.Markets).
* **date** – The date for which market hours information is requested.
Accepts `datetime.date` and `datetime.datetime`.
|
`Client.``get_hours_for_single_market`(*market*, *date*)[¶](#tda.client.Client.get_hours_for_single_market)
Retrieve market hours for a specified single market.
[Official documentation](https://developer.tdameritrade.com/market-hours/apis/get/marketdata/%7Bmarket%7D/hours).
| Parameters: | * **market** – Market to return hours for. Instance of
[`Markets`](#tda.client.Client.Markets).
* **date** – The date for which market hours information is requested.
Accepts `datetime.date` and `datetime.datetime`.
|
*class* `tda.client.Client.``Markets`[¶](#tda.client.Client.Markets)
Values for [`get_hours_for_multiple_markets()`](#tda.client.Client.get_hours_for_multiple_markets) and
[`get_hours_for_single_market()`](#tda.client.Client.get_hours_for_single_market).
`BOND` *= 'BOND'*[¶](#tda.client.Client.Markets.BOND)
`EQUITY` *= 'EQUITY'*[¶](#tda.client.Client.Markets.EQUITY)
`FOREX` *= 'FOREX'*[¶](#tda.client.Client.Markets.FOREX)
`FUTURE` *= 'FUTURE'*[¶](#tda.client.Client.Markets.FUTURE)
`OPTION` *= 'OPTION'*[¶](#tda.client.Client.Markets.OPTION)
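For example, here is a minimal sketch of checking today’s hours for the equity and option markets, assuming `c` is a client created as above:

```
import datetime
import httpx

r = c.get_hours_for_multiple_markets(
        [Client.Markets.EQUITY, Client.Markets.OPTION],
        datetime.date.today())
assert r.status_code == httpx.codes.OK, r.raise_for_status()
hours = r.json()
```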
#### Movers[¶](#movers)
`Client.``get_movers`(*index*, *direction*, *change*)[¶](#tda.client.Client.get_movers)
Top 10 (up or down) movers by value or percent for a particular market.
[Official documentation](https://developer.tdameritrade.com/movers/apis/get/marketdata/%7Bindex%7D/movers).
| Parameters: | * **direction** – See [`Movers.Direction`](#tda.client.Client.Movers.Direction)
* **change** – See [`Movers.Change`](#tda.client.Client.Movers.Change)
|
*class* `tda.client.Client.``Movers`[¶](#tda.client.Client.Movers)
*class* `Change`[¶](#tda.client.Client.Movers.Change)
Values for [`get_movers()`](#tda.client.Client.get_movers)
`PERCENT` *= 'percent'*[¶](#tda.client.Client.Movers.Change.PERCENT)
`VALUE` *= 'value'*[¶](#tda.client.Client.Movers.Change.VALUE)
*class* `Direction`[¶](#tda.client.Client.Movers.Direction)
Values for [`get_movers()`](#tda.client.Client.get_movers)
`DOWN` *= 'down'*[¶](#tda.client.Client.Movers.Direction.DOWN)
`UP` *= 'up'*[¶](#tda.client.Client.Movers.Direction.UP)
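For instance, here is a minimal sketch of fetching the day’s top percentage gainers on the Dow, assuming `c` is a client created as above. The `'$DJI'` index symbol comes from the official movers documentation; check there for the indices you care about:

```
import httpx

r = c.get_movers('$DJI',
                 Client.Movers.Direction.UP,
                 Client.Movers.Change.PERCENT)
assert r.status_code == httpx.codes.OK, r.raise_for_status()
movers = r.json()
```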
#### User Info and Preferences[¶](#user-info-and-preferences)
`Client.``get_preferences`(*account_id*)[¶](#tda.client.Client.get_preferences)
Preferences for a specific account.
[Official documentation](https://developer.tdameritrade.com/user-principal/apis/get/accounts/%7BaccountId%7D/preferences-0).
`Client.``get_user_principals`(*fields=None*)[¶](#tda.client.Client.get_user_principals)
User Principal details.
[Official documentation](https://developer.tdameritrade.com/user-principal/apis/get/userprincipals-0).
`Client.``update_preferences`(*account_id*, *preferences*)[¶](#tda.client.Client.update_preferences)
Update preferences for a specific account.
Please note that the directOptionsRouting and directEquityRouting values cannot be modified via this operation.
[Official documentation](https://developer.tdameritrade.com/user-principal/apis/put/accounts/%7BaccountId%7D/preferences-0).
*class* `tda.client.Client.``UserPrincipals`[¶](#tda.client.Client.UserPrincipals)
*class* `Fields`[¶](#tda.client.Client.UserPrincipals.Fields)
An enumeration.
`PREFERENCES` *= 'preferences'*[¶](#tda.client.Client.UserPrincipals.Fields.PREFERENCES)
`STREAMER_CONNECTION_INFO` *= 'streamerConnectionInfo'*[¶](#tda.client.Client.UserPrincipals.Fields.STREAMER_CONNECTION_INFO)
`STREAMER_SUBSCRIPTION_KEYS` *= 'streamerSubscriptionKeys'*[¶](#tda.client.Client.UserPrincipals.Fields.STREAMER_SUBSCRIPTION_KEYS)
`SURROGATE_IDS` *= 'surrogateIds'*[¶](#tda.client.Client.UserPrincipals.Fields.SURROGATE_IDS)
#### Watchlists[¶](#watchlists)
**Note**: These methods only support static watchlists, i.e. they cannot access dynamic watchlists.
`Client.``create_watchlist`(*account_id*, *watchlist_spec*)[¶](#tda.client.Client.create_watchlist)
Create watchlist for a specific account. This method does not verify that the symbol or asset type are valid.
[Official documentation](https://developer.tdameritrade.com/watchlist/apis/post/accounts/%7BaccountId%7D/watchlists-0).
`Client.``delete_watchlist`(*account_id*, *watchlist_id*)[¶](#tda.client.Client.delete_watchlist)
Delete watchlist for a specific account.
[Official documentation](https://developer.tdameritrade.com/watchlist/apis/delete/accounts/%7BaccountId%7D/watchlists/%7BwatchlistId%7D-0).
`Client.``get_watchlist`(*account_id*, *watchlist_id*)[¶](#tda.client.Client.get_watchlist)
Specific watchlist for a specific account.
[Official documentation](https://developer.tdameritrade.com/watchlist/apis/get/accounts/%7BaccountId%7D/watchlists/%7BwatchlistId%7D-0).
`Client.``get_watchlists_for_multiple_accounts`()[¶](#tda.client.Client.get_watchlists_for_multiple_accounts)
All watchlists for all of the user’s linked accounts.
[Official documentation](https://developer.tdameritrade.com/watchlist/apis/get/accounts/watchlists-0).
`Client.``get_watchlists_for_single_account`(*account_id*)[¶](#tda.client.Client.get_watchlists_for_single_account)
All watchlists of an account.
[Official documentation](https://developer.tdameritrade.com/watchlist/apis/get/accounts/%7BaccountId%7D/watchlists-0).
`Client.``replace_watchlist`(*account_id*, *watchlist_id*, *watchlist_spec*)[¶](#tda.client.Client.replace_watchlist)
Replace watchlist for a specific account. This method does not verify that the symbol or asset type are valid.
[Official documentation](https://developer.tdameritrade.com/watchlist/apis/put/accounts/%7BaccountId%7D/watchlists/%7BwatchlistId%7D-0).
`Client.``update_watchlist`(*account_id*, *watchlist_id*, *watchlist_spec*)[¶](#tda.client.Client.update_watchlist)
Partially update watchlist for a specific account: change watchlist name, add to the beginning/end of a watchlist, update or delete items in a watchlist. This method does not verify that the symbol or asset type are valid.
[Official documentation](https://developer.tdameritrade.com/watchlist/apis/patch/accounts/%7BaccountId%7D/watchlists/%7BwatchlistId%7D-0).
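As an illustration only, here is a rough sketch of a static watchlist spec following the structure in the official watchlist documentation, assuming `c` is a client created as above and `account_id` holds your account’s ID. `tda-api` does not validate these fields, so double-check the names and asset types against the official schema:

```
watchlist_spec = {
    'name': 'my-watchlist',
    'watchlistItems': [
        {'instrument': {'symbol': 'AAPL', 'assetType': 'EQUITY'}},
        {'instrument': {'symbol': 'GOOG', 'assetType': 'EQUITY'}},
    ]
}

r = c.create_watchlist(account_id, watchlist_spec)
r.raise_for_status()
```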
Streaming Client[¶](#streaming-client)
---
A wrapper around the
[TD Ameritrade Streaming API](https://developer.tdameritrade.com/content/streaming-data). This API is a websockets-based streaming API that provides up-to-the-second data on market activity. Most impressively, it provides realtime data, including Level Two and time of sale data for major equities, options, and futures exchanges.
Here’s an example of how you can receive book snapshots of `GOOG` (note if you run this outside regular trading hours you may not see anything):
```
import asyncio
import json

from tda.auth import easy_client
from tda.client import Client
from tda.streaming import StreamClient

client = easy_client(
        api_key='APIKEY',
        redirect_uri='https://localhost',
        token_path='/tmp/token.pickle')
stream_client = StreamClient(client, account_id=1234567890)

async def read_stream():
    await stream_client.login()
    await stream_client.quality_of_service(StreamClient.QOSLevel.EXPRESS)

    # Always add handlers before subscribing because many streams start sending
    # data immediately after success, and messages with no handlers are dropped.
    stream_client.add_nasdaq_book_handler(
            lambda msg: print(json.dumps(msg, indent=4)))
    await stream_client.nasdaq_book_subs(['GOOG'])

    while True:
        await stream_client.handle_message()

asyncio.run(read_stream())
```
This API uses Python
[coroutines](https://docs.python.org/3/library/asyncio-task.html) to simplify implementation and preserve performance. As a result, it requires Python 3.8 or higher to use. `tda.stream` will not be available on older versions of Python.
### Use Overview[¶](#use-overview)
The example above demonstrates the end-to-end workflow for using `tda.stream`.
There’s more in there than meets the eye, so let’s dive into the details.
#### Logging In[¶](#logging-in)
Before we can perform any stream operations, the client must be logged in to the stream. Unlike the HTTP client, in which every request is authenticated using a token, this client sends unauthenticated requests and instead authenticates the entire stream. As a result, this login process is distinct from the token generation step that’s used in the HTTP client.
Stream login is accomplished simply by calling [`StreamClient.login()`](#tda.streaming.StreamClient.login). Once this happens successfully, all stream operations can be performed. Attempting to perform operations that require login before this function is called raises an exception.
`StreamClient.``login`()[¶](#tda.streaming.StreamClient.login)
[Official Documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640574)
Performs initial stream setup:
* Fetches streaming information from the HTTP client’s
[`get_user_principals()`](index.html#tda.client.Client.get_user_principals) method
* Initializes the socket
* Builds and sends an authentication request
* Waits for response indicating login success
All stream operations are available after this method completes.
#### Setting Quality of Service[¶](#setting-quality-of-service)
By default, the stream’s update frequency is set to 1000ms. The frequency can be increased by calling the `quality_of_service` function and passing an appropriate `QOSLevel` value.
`StreamClient.``quality_of_service`(*qos_level*)[¶](#tda.streaming.StreamClient.quality_of_service)
[Official Documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640578)
Specifies the frequency with which updated data should be sent to the client. If not called, the frequency will default to every second.
| Parameters: | **qos_level** – Quality of service level to request. See
[`QOSLevel`](#tda.streaming.StreamClient.QOSLevel) for options. |
*class* `StreamClient.``QOSLevel`[¶](#tda.streaming.StreamClient.QOSLevel)
Quality of service levels
`EXPRESS` *= '0'*[¶](#tda.streaming.StreamClient.QOSLevel.EXPRESS)
500ms between updates. Fastest available
`REAL_TIME` *= '1'*[¶](#tda.streaming.StreamClient.QOSLevel.REAL_TIME)
750ms between updates
`FAST` *= '2'*[¶](#tda.streaming.StreamClient.QOSLevel.FAST)
1000ms between updates. Default value.
`MODERATE` *= '3'*[¶](#tda.streaming.StreamClient.QOSLevel.MODERATE)
1500ms between updates
`SLOW` *= '4'*[¶](#tda.streaming.StreamClient.QOSLevel.SLOW)
3000ms between updates
`DELAYED` *= '5'*[¶](#tda.streaming.StreamClient.QOSLevel.DELAYED)
5000ms between updates
#### Subscribing to Streams[¶](#subscribing-to-streams)
These functions have names that follow the pattern `SERVICE_NAME_subs`. These functions send a request to enable streaming data for a particular data stream.
They are *not* thread safe, so they should only be called in series.
When subscriptions are called multiple times on the same stream, the results vary. What’s more, these results aren’t documented in the official documentation. As a result, it’s recommended not to call a subscription function more than once for any given stream.
Some services, notably [Equity Charts](#equity-charts) and [Futures Charts](#futures-charts),
offer `SERVICE_NAME_add` functions which can be used to add symbols to the stream after the subscription has been created. For others, calling the subscription methods again seems to clear the old subscription and create a new one. Note this behavior is not officially documented, so this interpretation may be incorrect.
#### Registering Handlers[¶](#registering-handlers)
By themselves, the subscription functions outlined above do nothing except cause messages to be sent to the client. The `add_SERVICE_NAME_handler` functions register functions that will receive these messages when they arrive. When messages arrive, these handlers will be called serially. There is no limit to the number of handlers that can be registered to a service.
#### Handling Messages[¶](#handling-messages)
Once the stream client is properly logged in, subscribed to streams, and has handlers registered, we can start handling messages. This is done simply by awaiting on the `handle_message()` function. This function reads a single message and dispatches it to the appropriate handler or handlers.
If a message is received for which no handler is registered, that message is ignored.
Handlers should take a single argument representing the stream message received:
```
import json
def sample_handler(msg):
    print(json.dumps(msg, indent=4))
```
#### Data Field Relabeling[¶](#data-field-relabeling)
Under the hood, this API returns JSON objects with numerical keys representing labels:
```
{
"service": "CHART_EQUITY",
"timestamp": 1590597641293,
"command": "SUBS",
"content": [
{
"seq": 985,
"key": "MSFT",
"1": 179.445,
"2": 179.57,
"3": 179.4299,
"4": 179.52,
"5": 53742.0,
"6": 339,
"7": 1590597540000,
"8": 18409
},
]
}
```
These labels are tricky to decode, and require a knowledge of the documentation to decode properly. `tda-api` makes your life easier by doing this decoding for you, replacing numerical labels with strings pulled from the documentation.
For instance, the message above would be relabeled as:
```
{
"service": "CHART_EQUITY",
"timestamp": 1590597641293,
"command": "SUBS",
"content": [
{
"seq": 985,
"key": "MSFT",
"OPEN_PRICE": 179.445,
"HIGH_PRICE": 179.57,
"LOW_PRICE": 179.4299,
"CLOSE_PRICE": 179.52,
"VOLUME": 53742.0,
"SEQUENCE": 339,
"CHART_TIME": 1590597540000,
"CHART_DAY": 18409
},
]
}
```
This documentation describes the various fields and their numerical values. You can find them by investigating the various enum classes ending in `***Fields`.
Some streams, such as the ones described in [Level One Quotes](#level-one), allow you to specify a subset of fields to be returned. Subscription functions for these services accept a list of the appropriate field enums in their extra `fields`
parameter. If nothing is passed to this parameter, all supported fields are requested.
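For example, here is a minimal sketch of requesting only a few fields from the level one equity stream described below, assuming `stream_client` is an already-logged-in `StreamClient`:
```
# Inside a coroutine, with `stream_client` already logged in:
fields = stream_client.LevelOneEquityFields
await stream_client.level_one_equity_subs(
    ['GOOG', 'MSFT'],
    fields=[fields.SYMBOL, fields.BID_PRICE, fields.ASK_PRICE])
```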
#### Interpreting Sequence Numbers[¶](#interpreting-sequence-numbers)
Many endpoints include a `seq` parameter in their data contents. The official documentation is unclear on the interpretation of this value: the [time of sale](https://developer.tdameritrade.com/content/streaming-data#_Toc504640628)
documentation states that messages containing already-observed values of `seq`
can be ignored, but other streams contain this field both in their metadata and in their content, and yet their documentation doesn’t mention ignoring any
`seq` values.
This presents a design choice: should `tda-api` ignore duplicate `seq`
values on users’ behalf? Given the ambiguity of the documentation, it was decided to not ignore them and instead pass them to all handlers. Clients are encouraged to use their judgment in handling these values.
#### Unimplemented Streams[¶](#unimplemented-streams)
This document lists the streams supported by `tda-api`. Eagle-eyed readers may notice that some streams are described in the documentation but were not implemented. This is due to complexity or anticipated lack of interest. If you feel you’d like a stream added, please file an issue
[here](https://github.com/alexgolec/tda-api/issues) or see the
[contributing guidelines](https://github.com/alexgolec/tda-api/blob/master/CONTRIBUTING.rst) to learn how to add the functionality yourself.
### Enabling Real-Time Data Access[¶](#enabling-real-time-data-access)
By default, TD Ameritrade delivers delayed quotes. However, as of this writing,
real time streaming is available for all streams, including quotes and level two depth of book data. It is also available for free, which in the author’s opinion is an impressive feature for a retail brokerage. For most users it’s enough to
[sign the relevant exchange agreements](https://invest.ameritrade.com/grid/p/site#r=jPage/cgi-bin/apps/u/AccountSettings) and then [subscribe to the relevant streams](https://invest.ameritrade.com/grid/p/site#r=jPage/cgi-bin/apps/u/Subscriptions), although your mileage may vary.
Please remember that your use of this API is subject to agreeing to TDAmeritrade’s terms of service. Please don’t reach out to us asking for help enabling real-time data. Answers to most questions are a Google search away.
### OHLCV Charts[¶](#ohlcv-charts)
These streams summarize trading activity on a minute-by-minute basis for equities and futures, providing OHLCV (Open/High/Low/Close/Volume) data.
#### Equity Charts[¶](#equity-charts)
Minute-by-minute OHLCV data for equities.
`StreamClient.``chart_equity_subs`(*symbols*)[¶](#tda.streaming.StreamClient.chart_equity_subs)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640587)
Subscribe to equity charts. Behavior is undefined if called multiple times.
| Parameters: | **symbols** – Equity symbols to subscribe to. |
`StreamClient.``chart_equity_add`(*symbols*)[¶](#tda.streaming.StreamClient.chart_equity_add)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640588)
Add a symbol to the equity charts subscription. Behavior is undefined if called before [`chart_equity_subs()`](#tda.streaming.StreamClient.chart_equity_subs).
| Parameters: | **symbols** – Equity symbols to add to the subscription. |
`StreamClient.``add_chart_equity_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_chart_equity_handler)
Adds a handler to the equity chart subscription. See
[Handling Messages](#id1) for details.
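Here is a minimal sketch combining these calls, assuming `stream_client` is an already-logged-in `StreamClient`:
```
# Inside a coroutine, with `stream_client` already logged in:
stream_client.add_chart_equity_handler(lambda msg: print(msg))
await stream_client.chart_equity_subs(['GOOG', 'MSFT'])

# Later, add another symbol without replacing the existing subscription:
await stream_client.chart_equity_add(['AAPL'])
```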
*class* `StreamClient.``ChartEquityFields`[¶](#tda.streaming.StreamClient.ChartEquityFields)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640589)
Data fields for equity OHLCV data. Primarily an implementation detail and not used in client code. Provided here as documentation for the key values returned in the stream messages.
`SYMBOL` *= 0*[¶](#tda.streaming.StreamClient.ChartEquityFields.SYMBOL)
Ticker symbol in upper case. Represented in the stream as the
`key` field.
`OPEN_PRICE` *= 1*[¶](#tda.streaming.StreamClient.ChartEquityFields.OPEN_PRICE)
Opening price for the minute
`HIGH_PRICE` *= 2*[¶](#tda.streaming.StreamClient.ChartEquityFields.HIGH_PRICE)
Highest price for the minute
`LOW_PRICE` *= 3*[¶](#tda.streaming.StreamClient.ChartEquityFields.LOW_PRICE)
Chart’s lowest price for the minute
`CLOSE_PRICE` *= 4*[¶](#tda.streaming.StreamClient.ChartEquityFields.CLOSE_PRICE)
Closing price for the minute
`VOLUME` *= 5*[¶](#tda.streaming.StreamClient.ChartEquityFields.VOLUME)
Total volume for the minute
`SEQUENCE` *= 6*[¶](#tda.streaming.StreamClient.ChartEquityFields.SEQUENCE)
Identifies the candle minute. Explicitly labeled “not useful” in the official documentation.
`CHART_TIME` *= 7*[¶](#tda.streaming.StreamClient.ChartEquityFields.CHART_TIME)
Milliseconds since Epoch
`CHART_DAY` *= 8*[¶](#tda.streaming.StreamClient.ChartEquityFields.CHART_DAY)
Documented as not useful, included for completeness
#### Futures Charts[¶](#futures-charts)
Minute-by-minute OHLCV data for futures.
`StreamClient.``chart_futures_subs`(*symbols*)[¶](#tda.streaming.StreamClient.chart_futures_subs)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640587)
Subscribe to futures charts. Behavior is undefined if called multiple times.
| Parameters: | **symbols** – Futures symbols to subscribe to. |
`StreamClient.``chart_futures_add`(*symbols*)[¶](#tda.streaming.StreamClient.chart_futures_add)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640590)
Add a symbol to the futures chart subscription. Behavior is undefined if called before [`chart_futures_subs()`](#tda.streaming.StreamClient.chart_futures_subs).
| Parameters: | **symbols** – Futures symbols to add to the subscription. |
`StreamClient.``add_chart_futures_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_chart_futures_handler)
Adds a handler to the futures chart subscription. See
[Handling Messages](#id1) for details.
*class* `StreamClient.``ChartFuturesFields`[¶](#tda.streaming.StreamClient.ChartFuturesFields)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640592)
Data fields for futures OHLCV data. Primarily an implementation detail and not used in client code. Provided here as documentation for the key values returned in the stream messages.
`SYMBOL` *= 0*[¶](#tda.streaming.StreamClient.ChartFuturesFields.SYMBOL)
Ticker symbol in upper case. Represented in the stream as the
`key` field.
`CHART_TIME` *= 1*[¶](#tda.streaming.StreamClient.ChartFuturesFields.CHART_TIME)
Milliseconds since Epoch
`OPEN_PRICE` *= 2*[¶](#tda.streaming.StreamClient.ChartFuturesFields.OPEN_PRICE)
Opening price for the minute
`HIGH_PRICE` *= 3*[¶](#tda.streaming.StreamClient.ChartFuturesFields.HIGH_PRICE)
Highest price for the minute
`LOW_PRICE` *= 4*[¶](#tda.streaming.StreamClient.ChartFuturesFields.LOW_PRICE)
Chart’s lowest price for the minute
`CLOSE_PRICE` *= 5*[¶](#tda.streaming.StreamClient.ChartFuturesFields.CLOSE_PRICE)
Closing price for the minute
`VOLUME` *= 6*[¶](#tda.streaming.StreamClient.ChartFuturesFields.VOLUME)
Total volume for the minute
### Level One Quotes[¶](#level-one-quotes)
Level one quotes provide an up-to-date view of bid/ask/volume data. In particular they list the best available bid and ask prices, together with the requested volume of each. They are updated live as market conditions change.
#### Equities Quotes[¶](#equities-quotes)
Level one quotes for equities traded on NYSE, AMEX, and PACIFIC.
`StreamClient.``level_one_equity_subs`(*symbols*, ***, *fields=None*)[¶](#tda.streaming.StreamClient.level_one_equity_subs)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640599)
Subscribe to level one equity quote data.
| Parameters: | * **symbols** – Equity symbols to receive quotes for
* **fields** – Iterable of [`LevelOneEquityFields`](#tda.streaming.StreamClient.LevelOneEquityFields) representing the fields to return in streaming entries. If unset, all fields will be requested.
|
`StreamClient.``add_level_one_equity_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_level_one_equity_handler)
Register a function to handle level one equity quotes as they are sent.
See [Handling Messages](#id1) for details.
*class* `StreamClient.``LevelOneEquityFields`[¶](#tda.streaming.StreamClient.LevelOneEquityFields)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640599)
Fields for equity quotes.
`SYMBOL` *= 0*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.SYMBOL)
Ticker symbol in upper case. Represented in the stream as the
`key` field.
`BID_PRICE` *= 1*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.BID_PRICE)
Current Best Bid Price
`ASK_PRICE` *= 2*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.ASK_PRICE)
Current Best Ask Price
`LAST_PRICE` *= 3*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.LAST_PRICE)
Price at which the last trade was matched
`BID_SIZE` *= 4*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.BID_SIZE)
Number of shares for bid
`ASK_SIZE` *= 5*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.ASK_SIZE)
Number of shares for ask
`ASK_ID` *= 6*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.ASK_ID)
Exchange with the best ask
`BID_ID` *= 7*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.BID_ID)
Exchange with the best bid
`TOTAL_VOLUME` *= 8*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.TOTAL_VOLUME)
Aggregated shares traded throughout the day, including pre/post market hours. Note volume is set to zero at 7:28am ET.
`LAST_SIZE` *= 9*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.LAST_SIZE)
Number of shares traded with last trade, in 100’s
`TRADE_TIME` *= 10*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.TRADE_TIME)
Trade time of the last trade, in seconds since midnight EST
`QUOTE_TIME` *= 11*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.QUOTE_TIME)
Trade time of the last quote, in seconds since midnight EST
`HIGH_PRICE` *= 12*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.HIGH_PRICE)
Day’s high trade price. Notes:
* According to industry standard, only regular session trades set the High and Low.
* If a stock does not trade in the AM session, high and low will be zero.
* High/low reset to 0 at 7:28am ET
`LOW_PRICE` *= 13*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.LOW_PRICE)
Day’s low trade price. Same notes as `HIGH_PRICE`.
`BID_TICK` *= 14*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.BID_TICK)
Indicates Up or Downtick (NASDAQ NMS & Small Cap). Updates whenever bid updates.
`CLOSE_PRICE` *= 15*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.CLOSE_PRICE)
Previous day’s closing price. Notes:
* Closing prices are updated from the DB when Pre-Market tasks are run by TD Ameritrade at 7:29AM ET.
* As long as the symbol is valid, this data is always present.
* This field is updated every time the closing prices are loaded from DB
`EXCHANGE_ID` *= 16*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.EXCHANGE_ID)
Primary “listing” Exchange.
`MARGINABLE` *= 17*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.MARGINABLE)
Stock approved by the Federal Reserve and an investor’s broker as being suitable for providing collateral for margin debt?
`SHORTABLE` *= 18*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.SHORTABLE)
Stock can be sold short?
`ISLAND_BID_DEPRECATED` *= 19*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.ISLAND_BID_DEPRECATED)
Deprecated, documented for completeness.
`ISLAND_ASK_DEPRECATED` *= 20*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.ISLAND_ASK_DEPRECATED)
Deprecated, documented for completeness.
`ISLAND_VOLUME_DEPRECATED` *= 21*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.ISLAND_VOLUME_DEPRECATED)
Deprecated, documented for completeness.
`QUOTE_DAY` *= 22*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.QUOTE_DAY)
Day of the quote
`TRADE_DAY` *= 23*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.TRADE_DAY)
Day of the trade
`VOLATILITY` *= 24*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.VOLATILITY)
Option Risk/Volatility Measurement. Notes:
* Volatility is reset to 0 when Pre-Market tasks are run at 7:28 AM ET
* Once per day descriptions are loaded from the database when Pre-Market tasks are run at 7:29:50 AM ET.
`DESCRIPTION` *= 25*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.DESCRIPTION)
A company, index or fund name
`LAST_ID` *= 26*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.LAST_ID)
Exchange where last trade was executed
`DIGITS` *= 27*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.DIGITS)
Valid decimal points. 4 digits for AMEX, NASDAQ, OTCBB, and PINKS,
2 for others.
`OPEN_PRICE` *= 28*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.OPEN_PRICE)
Day’s Open Price. Notes:
* Open is set to ZERO when Pre-Market tasks are run at 7:28.
* If a stock doesn’t trade the whole day, then the open price is 0.
* In the AM session, Open is blank because the AM session trades do not set the open.
`NET_CHANGE` *= 29*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.NET_CHANGE)
Current Last-Prev Close
`HIGH_52_WEEK` *= 30*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.HIGH_52_WEEK)
Highest price traded in the past 12 months, or 52 weeks
`LOW_52_WEEK` *= 31*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.LOW_52_WEEK)
Lowest price traded in the past 12 months, or 52 weeks
`PE_RATIO` *= 32*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.PE_RATIO)
Price to earnings ratio
`DIVIDEND_AMOUNT` *= 33*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.DIVIDEND_AMOUNT)
Dividend earnings per share
`DIVIDEND_YIELD` *= 34*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.DIVIDEND_YIELD)
Dividend Yield
`ISLAND_BID_SIZE_DEPRECATED` *= 35*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.ISLAND_BID_SIZE_DEPRECATED)
Deprecated, documented for completeness.
`ISLAND_ASK_SIZE_DEPRECATED` *= 36*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.ISLAND_ASK_SIZE_DEPRECATED)
Deprecated, documented for completeness.
`NAV` *= 37*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.NAV)
Mutual Fund Net Asset Value
`FUND_PRICE` *= 38*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.FUND_PRICE)
Mutual fund price
`EXCHANGE_NAME` *= 39*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.EXCHANGE_NAME)
Display name of exchange
`DIVIDEND_DATE` *= 40*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.DIVIDEND_DATE)
Dividend date
`IS_REGULAR_MARKET_QUOTE` *= 41*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.IS_REGULAR_MARKET_QUOTE)
Is last quote a regular quote
`IS_REGULAR_MARKET_TRADE` *= 42*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.IS_REGULAR_MARKET_TRADE)
Is last trade a regular trade
`REGULAR_MARKET_LAST_PRICE` *= 43*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.REGULAR_MARKET_LAST_PRICE)
Last price, only used when `IS_REGULAR_MARKET_TRADE` is `True`
`REGULAR_MARKET_LAST_SIZE` *= 44*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.REGULAR_MARKET_LAST_SIZE)
Last trade size, only used when `IS_REGULAR_MARKET_TRADE` is `True`
`REGULAR_MARKET_TRADE_TIME` *= 45*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.REGULAR_MARKET_TRADE_TIME)
Last trade time, only used when `IS_REGULAR_MARKET_TRADE` is `True`
`REGULAR_MARKET_TRADE_DAY` *= 46*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.REGULAR_MARKET_TRADE_DAY)
Last trade date, only used when `IS_REGULAR_MARKET_TRADE` is `True`
`REGULAR_MARKET_NET_CHANGE` *= 47*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.REGULAR_MARKET_NET_CHANGE)
`REGULAR_MARKET_LAST_PRICE` minus `CLOSE_PRICE`
`SECURITY_STATUS` *= 48*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.SECURITY_STATUS)
Indicates a symbol's current trading status: Normal, Halted, Closed
`MARK` *= 49*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.MARK)
Mark Price
`QUOTE_TIME_IN_LONG` *= 50*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.QUOTE_TIME_IN_LONG)
Last quote time in milliseconds since Epoch
`TRADE_TIME_IN_LONG` *= 51*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.TRADE_TIME_IN_LONG)
Last trade time in milliseconds since Epoch
`REGULAR_MARKET_TRADE_TIME_IN_LONG` *= 52*[¶](#tda.streaming.StreamClient.LevelOneEquityFields.REGULAR_MARKET_TRADE_TIME_IN_LONG)
Regular market trade time in milliseconds since Epoch
#### Options Quotes[¶](#options-quotes)
Level one quotes for options. Note you can use
[`Client.get_option_chain()`](index.html#tda.client.Client.get_option_chain) to fetch available option symbols.
`StreamClient.``level_one_option_subs`(*symbols*, ***, *fields=None*)[¶](#tda.streaming.StreamClient.level_one_option_subs)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640602)
Subscribe to level one option quote data.
| Parameters: | * **symbols** – Option symbols to receive quotes for
* **fields** – Iterable of [`LevelOneOptionFields`](#tda.streaming.StreamClient.LevelOneOptionFields) representing the fields to return in streaming entries. If unset, all fields will be requested.
|
`StreamClient.``add_level_one_option_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_level_one_option_handler)
Register a function to handle level one options quotes as they are sent.
See [Handling Messages](#id1) for details.
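Here is a minimal sketch of subscribing, assuming `stream_client` is an already-logged-in `StreamClient`. The option symbol below is a hypothetical placeholder; use [`Client.get_option_chain()`](index.html#tda.client.Client.get_option_chain) to fetch real ones.
```
# Inside a coroutine, with `stream_client` already logged in. The option
# symbol here is a hypothetical placeholder:
stream_client.add_level_one_option_handler(lambda msg: print(msg))
await stream_client.level_one_option_subs(['GOOG_062622C2300'])
```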
*class* `StreamClient.``LevelOneOptionFields`[¶](#tda.streaming.StreamClient.LevelOneOptionFields)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640601)
`SYMBOL` *= 0*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.SYMBOL)
Ticker symbol in upper case. Represented in the stream as the
`key` field.
`DESCRIPTION` *= 1*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.DESCRIPTION)
A company, index or fund name
`BID_PRICE` *= 2*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.BID_PRICE)
Current Best Bid Price
`ASK_PRICE` *= 3*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.ASK_PRICE)
Current Best Ask Price
`LAST_PRICE` *= 4*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.LAST_PRICE)
Price at which the last trade was matched
`HIGH_PRICE` *= 5*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.HIGH_PRICE)
Day’s high trade price. Notes:
* According to industry standard, only regular session trades set the High and Low.
* If an option does not trade in the AM session, high and low will be zero.
* High/low reset to 0 at 7:28am ET.
`LOW_PRICE` *= 6*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.LOW_PRICE)
Day’s low trade price. Same notes as `HIGH_PRICE`.
`CLOSE_PRICE` *= 7*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.CLOSE_PRICE)
Previous day’s closing price. Closing prices are updated from the DB when Pre-Market tasks are run at 7:29AM ET.
`TOTAL_VOLUME` *= 8*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.TOTAL_VOLUME)
Aggregated shares traded throughout the day, including pre/post market hours. Reset to zero at 7:28am ET.
`OPEN_INTEREST` *= 9*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.OPEN_INTEREST)
Open interest
`VOLATILITY` *= 10*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.VOLATILITY)
Option Risk/Volatility Measurement. Volatility is reset to 0 when Pre-Market tasks are run at 7:28 AM ET.
`QUOTE_TIME` *= 11*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.QUOTE_TIME)
Trade time of the last quote in seconds since midnight EST
`TRADE_TIME` *= 12*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.TRADE_TIME)
Trade time of the last trade in seconds since midnight EST
`MONEY_INTRINSIC_VALUE` *= 13*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.MONEY_INTRINSIC_VALUE)
Money intrinsic value
`QUOTE_DAY` *= 14*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.QUOTE_DAY)
Day of the quote
`TRADE_DAY` *= 15*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.TRADE_DAY)
Day of the trade
`EXPIRATION_YEAR` *= 16*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.EXPIRATION_YEAR)
Option expiration year
`MULTIPLIER` *= 17*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.MULTIPLIER)
Option multiplier
`DIGITS` *= 18*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.DIGITS)
Valid decimal points. 4 digits for AMEX, NASDAQ, OTCBB, and PINKS,
2 for others.
`OPEN_PRICE` *= 19*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.OPEN_PRICE)
Day’s Open Price. Notes:
* Open is set to ZERO when Pre-Market tasks are run at 7:28.
* If a stock doesn’t trade the whole day, then the open price is 0.
* In the AM session, Open is blank because the AM session trades do not set the open.
`BID_SIZE` *= 20*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.BID_SIZE)
Number of shares for bid
`ASK_SIZE` *= 21*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.ASK_SIZE)
Number of shares for ask
`LAST_SIZE` *= 22*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.LAST_SIZE)
Number of shares traded with last trade, in 100’s
`NET_CHANGE` *= 23*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.NET_CHANGE)
Current Last-Prev Close
`STRIKE_PRICE` *= 24*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.STRIKE_PRICE)
`CONTRACT_TYPE` *= 25*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.CONTRACT_TYPE)
`UNDERLYING` *= 26*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.UNDERLYING)
`EXPIRATION_MONTH` *= 27*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.EXPIRATION_MONTH)
`DELIVERABLES` *= 28*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.DELIVERABLES)
`TIME_VALUE` *= 29*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.TIME_VALUE)
`EXPIRATION_DAY` *= 30*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.EXPIRATION_DAY)
`DAYS_TO_EXPIRATION` *= 31*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.DAYS_TO_EXPIRATION)
`DELTA` *= 32*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.DELTA)
`GAMMA` *= 33*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.GAMMA)
`THETA` *= 34*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.THETA)
`VEGA` *= 35*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.VEGA)
`RHO` *= 36*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.RHO)
`SECURITY_STATUS` *= 37*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.SECURITY_STATUS)
Indicates a symbol's current trading status: Normal, Halted, Closed
`THEORETICAL_OPTION_VALUE` *= 38*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.THEORETICAL_OPTION_VALUE)
`UNDERLYING_PRICE` *= 39*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.UNDERLYING_PRICE)
`UV_EXPIRATION_TYPE` *= 40*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.UV_EXPIRATION_TYPE)
`MARK` *= 41*[¶](#tda.streaming.StreamClient.LevelOneOptionFields.MARK)
Mark Price
#### Futures Quotes[¶](#futures-quotes)
Level one quotes for futures.
`StreamClient.``level_one_futures_subs`(*symbols*, ***, *fields=None*)[¶](#tda.streaming.StreamClient.level_one_futures_subs)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640604)
Subscribe to level one futures quote data.
| Parameters: | * **symbols** – Futures symbols to receive quotes for
* **fields** – Iterable of [`LevelOneFuturesFields`](#tda.streaming.StreamClient.LevelOneFuturesFields) representing the fields to return in streaming entries. If unset, all fields will be requested.
|
`StreamClient.``add_level_one_futures_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_level_one_futures_handler)
Register a function to handle level one futures quotes as they are sent.
See [Handling Messages](#id1) for details.
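Here is a minimal sketch of subscribing, assuming `stream_client` is an already-logged-in `StreamClient`. `/ES` is used purely as an illustrative futures symbol.
```
# Inside a coroutine, with `stream_client` already logged in:
stream_client.add_level_one_futures_handler(lambda msg: print(msg))
await stream_client.level_one_futures_subs(['/ES'])
```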
*class* `StreamClient.``LevelOneFuturesFields`[¶](#tda.streaming.StreamClient.LevelOneFuturesFields)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640603)
`SYMBOL` *= 0*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.SYMBOL)
Ticker symbol in upper case. Represented in the stream as the
`key` field.
`BID_PRICE` *= 1*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.BID_PRICE)
Current Best Bid Price
`ASK_PRICE` *= 2*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.ASK_PRICE)
Current Best Ask Price
`LAST_PRICE` *= 3*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.LAST_PRICE)
Price at which the last trade was matched
`BID_SIZE` *= 4*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.BID_SIZE)
Number of shares for bid
`ASK_SIZE` *= 5*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.ASK_SIZE)
Number of shares for ask
`ASK_ID` *= 6*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.ASK_ID)
Exchange with the best ask
`BID_ID` *= 7*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.BID_ID)
Exchange with the best bid
`TOTAL_VOLUME` *= 8*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.TOTAL_VOLUME)
Aggregated shares traded throughout the day, including pre/post market hours
`LAST_SIZE` *= 9*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.LAST_SIZE)
Number of shares traded with last trade
`QUOTE_TIME` *= 10*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.QUOTE_TIME)
Trade time of the last quote in milliseconds since epoch
`TRADE_TIME` *= 11*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.TRADE_TIME)
Trade time of the last trade in milliseconds since epoch
`HIGH_PRICE` *= 12*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.HIGH_PRICE)
Day’s high trade price
`LOW_PRICE` *= 13*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.LOW_PRICE)
Day’s low trade price
`CLOSE_PRICE` *= 14*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.CLOSE_PRICE)
Previous day’s closing price
`EXCHANGE_ID` *= 15*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.EXCHANGE_ID)
Primary “listing” Exchange. Notes:
* I → ICE
* E → CME
* L → LIFFEUS
`DESCRIPTION` *= 16*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.DESCRIPTION)
Description of the product
`LAST_ID` *= 17*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.LAST_ID)
Exchange where last trade was executed
`OPEN_PRICE` *= 18*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.OPEN_PRICE)
Day’s Open Price
`NET_CHANGE` *= 19*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.NET_CHANGE)
Current Last-Prev Close
`FUTURE_PERCENT_CHANGE` *= 20*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.FUTURE_PERCENT_CHANGE)
Current percent change
`EXCHANGE_NAME` *= 21*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.EXCHANGE_NAME)
Name of exchange
`SECURITY_STATUS` *= 22*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.SECURITY_STATUS)
Trading status of the symbol. Indicates a symbol’s current trading status, Normal, Halted, Closed.
`OPEN_INTEREST` *= 23*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.OPEN_INTEREST)
The total number of futures contracts that are not closed or delivered on a particular day
`MARK` *= 24*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.MARK)
Mark-to-Market value is calculated daily using current prices to determine profit/loss
`TICK` *= 25*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.TICK)
Minimum price movement
`TICK_AMOUNT` *= 26*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.TICK_AMOUNT)
Minimum amount that the price of the market can change
`PRODUCT` *= 27*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.PRODUCT)
Futures product
`FUTURE_PRICE_FORMAT` *= 28*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.FUTURE_PRICE_FORMAT)
Display in fraction or decimal format.
`FUTURE_TRADING_HOURS` *= 29*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.FUTURE_TRADING_HOURS)
Trading hours. Notes:
* days: 0 = monday-friday, 1 = sunday.
* 7 = Saturday
* 0 = [-2000,1700] ==> open, close
* 1 = [-1530,-1630,-1700,1515] ==> open, close, open, close
* 0 = [-1800,1700,d,-1700,1900] ==> open, close, DST-flag, open, close
* If the DST-flag is present, the following hours are for DST days:
<http://www.cmegroup.com/trading_hours>
`FUTURE_IS_TRADEABLE` *= 30*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.FUTURE_IS_TRADEABLE)
Flag to indicate if this future contract is tradable
`FUTURE_MULTIPLIER` *= 31*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.FUTURE_MULTIPLIER)
Point value
`FUTURE_IS_ACTIVE` *= 32*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.FUTURE_IS_ACTIVE)
Indicates if this contract is active
`FUTURE_SETTLEMENT_PRICE` *= 33*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.FUTURE_SETTLEMENT_PRICE)
Closing price
`FUTURE_ACTIVE_SYMBOL` *= 34*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.FUTURE_ACTIVE_SYMBOL)
Symbol of the active contract
`FUTURE_EXPIRATION_DATE` *= 35*[¶](#tda.streaming.StreamClient.LevelOneFuturesFields.FUTURE_EXPIRATION_DATE)
Expiration date of this contract in milliseconds since epoch
#### Forex Quotes[¶](#forex-quotes)
Level one quotes for foreign exchange pairs.
`StreamClient.``level_one_forex_subs`(*symbols*, ***, *fields=None*)[¶](#tda.streaming.StreamClient.level_one_forex_subs)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640606)
Subscribe to level one forex quote data.
| Parameters: | * **symbols** – Forex symbols to receive quotes for
* **fields** – Iterable of [`LevelOneForexFields`](#tda.streaming.StreamClient.LevelOneForexFields) representing the fields to return in streaming entries. If unset, all fields will be requested.
|
`StreamClient.``add_level_one_forex_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_level_one_forex_handler)
Register a function to handle level one forex quotes as they are sent.
See [Handling Messages](#id1) for details.
*class* `StreamClient.``LevelOneForexFields`[¶](#tda.streaming.StreamClient.LevelOneForexFields)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640606)
`SYMBOL` *= 0*[¶](#tda.streaming.StreamClient.LevelOneForexFields.SYMBOL)
Ticker symbol in upper case. Represented in the stream as the
`key` field.
`BID_PRICE` *= 1*[¶](#tda.streaming.StreamClient.LevelOneForexFields.BID_PRICE)
Current Best Bid Price
`ASK_PRICE` *= 2*[¶](#tda.streaming.StreamClient.LevelOneForexFields.ASK_PRICE)
Current Best Ask Price
`LAST_PRICE` *= 3*[¶](#tda.streaming.StreamClient.LevelOneForexFields.LAST_PRICE)
Price at which the last trade was matched
`BID_SIZE` *= 4*[¶](#tda.streaming.StreamClient.LevelOneForexFields.BID_SIZE)
Number of shares for bid
`ASK_SIZE` *= 5*[¶](#tda.streaming.StreamClient.LevelOneForexFields.ASK_SIZE)
Number of shares for ask
`TOTAL_VOLUME` *= 6*[¶](#tda.streaming.StreamClient.LevelOneForexFields.TOTAL_VOLUME)
Aggregated shares traded throughout the day, including pre/post market hours
`LAST_SIZE` *= 7*[¶](#tda.streaming.StreamClient.LevelOneForexFields.LAST_SIZE)
Number of shares traded with last trade
`QUOTE_TIME` *= 8*[¶](#tda.streaming.StreamClient.LevelOneForexFields.QUOTE_TIME)
Trade time of the last quote in milliseconds since epoch
`TRADE_TIME` *= 9*[¶](#tda.streaming.StreamClient.LevelOneForexFields.TRADE_TIME)
Trade time of the last trade in milliseconds since epoch
`HIGH_PRICE` *= 10*[¶](#tda.streaming.StreamClient.LevelOneForexFields.HIGH_PRICE)
Day’s high trade price
`LOW_PRICE` *= 11*[¶](#tda.streaming.StreamClient.LevelOneForexFields.LOW_PRICE)
Day’s low trade price
`CLOSE_PRICE` *= 12*[¶](#tda.streaming.StreamClient.LevelOneForexFields.CLOSE_PRICE)
Previous day’s closing price
`EXCHANGE_ID` *= 13*[¶](#tda.streaming.StreamClient.LevelOneForexFields.EXCHANGE_ID)
Primary “listing” Exchange
`DESCRIPTION` *= 14*[¶](#tda.streaming.StreamClient.LevelOneForexFields.DESCRIPTION)
Description of the product
`OPEN_PRICE` *= 15*[¶](#tda.streaming.StreamClient.LevelOneForexFields.OPEN_PRICE)
Day’s Open Price
`NET_CHANGE` *= 16*[¶](#tda.streaming.StreamClient.LevelOneForexFields.NET_CHANGE)
Current Last-Prev Close
`EXCHANGE_NAME` *= 18*[¶](#tda.streaming.StreamClient.LevelOneForexFields.EXCHANGE_NAME)
Name of exchange
`DIGITS` *= 19*[¶](#tda.streaming.StreamClient.LevelOneForexFields.DIGITS)
Valid decimal points
`SECURITY_STATUS` *= 20*[¶](#tda.streaming.StreamClient.LevelOneForexFields.SECURITY_STATUS)
Trading status of the symbol. Indicates a symbol's current trading status: Normal, Halted, Closed.
`TICK` *= 21*[¶](#tda.streaming.StreamClient.LevelOneForexFields.TICK)
Minimum price movement
`TICK_AMOUNT` *= 22*[¶](#tda.streaming.StreamClient.LevelOneForexFields.TICK_AMOUNT)
Minimum amount that the price of the market can change
`PRODUCT` *= 23*[¶](#tda.streaming.StreamClient.LevelOneForexFields.PRODUCT)
Product name
`TRADING_HOURS` *= 24*[¶](#tda.streaming.StreamClient.LevelOneForexFields.TRADING_HOURS)
Trading hours
`IS_TRADABLE` *= 25*[¶](#tda.streaming.StreamClient.LevelOneForexFields.IS_TRADABLE)
Flag to indicate if this forex is tradable
`MARKET_MAKER` *= 26*[¶](#tda.streaming.StreamClient.LevelOneForexFields.MARKET_MAKER)
`HIGH_52_WEEK` *= 27*[¶](#tda.streaming.StreamClient.LevelOneForexFields.HIGH_52_WEEK)
Highest price traded in the past 12 months, or 52 weeks
`LOW_52_WEEK` *= 28*[¶](#tda.streaming.StreamClient.LevelOneForexFields.LOW_52_WEEK)
Lowest price traded in the past 12 months, or 52 weeks
`MARK` *= 29*[¶](#tda.streaming.StreamClient.LevelOneForexFields.MARK)
Mark-to-Market value is calculated daily using current prices to determine profit/loss
#### Futures Options Quotes[¶](#futures-options-quotes)
Level one quotes for futures options.
`StreamClient.``level_one_futures_options_subs`(*symbols*, ***, *fields=None*)[¶](#tda.streaming.StreamClient.level_one_futures_options_subs)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640610)
Subscribe to level one futures options quote data.
| Parameters: | * **symbols** – Futures options symbols to receive quotes for
* **fields** – Iterable of [`LevelOneFuturesOptionsFields`](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields)
representing the fields to return in streaming entries.
If unset, all fields will be requested.
|
`StreamClient.``add_level_one_futures_options_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_level_one_futures_options_handler)
Register a function to handle level one futures options quotes as they are sent. See [Handling Messages](#id1) for details.
*class* `StreamClient.``LevelOneFuturesOptionsFields`[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640609)
`SYMBOL` *= 0*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.SYMBOL)
Ticker symbol in upper case. Represented in the stream as the
`key` field.
`BID_PRICE` *= 1*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.BID_PRICE)
Current Best Bid Price
`ASK_PRICE` *= 2*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.ASK_PRICE)
Current Best Ask Price
`LAST_PRICE` *= 3*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.LAST_PRICE)
Price at which the last trade was matched
`BID_SIZE` *= 4*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.BID_SIZE)
Number of shares for bid
`ASK_SIZE` *= 5*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.ASK_SIZE)
Number of shares for ask
`ASK_ID` *= 6*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.ASK_ID)
Exchange with the best ask
`BID_ID` *= 7*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.BID_ID)
Exchange with the best bid
`TOTAL_VOLUME` *= 8*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.TOTAL_VOLUME)
Aggregated shares traded throughout the day, including pre/post market hours
`LAST_SIZE` *= 9*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.LAST_SIZE)
Number of shares traded with last trade
`QUOTE_TIME` *= 10*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.QUOTE_TIME)
Trade time of the last quote in milliseconds since epoch
`TRADE_TIME` *= 11*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.TRADE_TIME)
Trade time of the last trade in milliseconds since epoch
`HIGH_PRICE` *= 12*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.HIGH_PRICE)
Day’s high trade price
`LOW_PRICE` *= 13*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.LOW_PRICE)
Day’s low trade price
`CLOSE_PRICE` *= 14*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.CLOSE_PRICE)
Previous day’s closing price
`EXCHANGE_ID` *= 15*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.EXCHANGE_ID)
Primary “listing” Exchange. Notes:
* I → ICE
* E → CME
* L → LIFFEUS
`DESCRIPTION` *= 16*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.DESCRIPTION)
Description of the product
`LAST_ID` *= 17*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.LAST_ID)
Exchange where last trade was executed
`OPEN_PRICE` *= 18*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.OPEN_PRICE)
Day’s Open Price
`NET_CHANGE` *= 19*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.NET_CHANGE)
Current Last-Prev Close
`FUTURE_PERCENT_CHANGE` *= 20*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.FUTURE_PERCENT_CHANGE)
Current percent change
`EXCHANGE_NAME` *= 21*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.EXCHANGE_NAME)
Name of exchange
`SECURITY_STATUS` *= 22*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.SECURITY_STATUS)
Trading status of the symbol. Indicates a symbol's current trading status: Normal, Halted, Closed.
`OPEN_INTEREST` *= 23*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.OPEN_INTEREST)
The total number of futures contracts that are not closed or delivered on a particular day
`MARK` *= 24*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.MARK)
Mark-to-Market value is calculated daily using current prices to determine profit/loss
`TICK` *= 25*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.TICK)
Minimum price movement
`TICK_AMOUNT` *= 26*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.TICK_AMOUNT)
Minimum amount that the price of the market can change
`PRODUCT` *= 27*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.PRODUCT)
Futures product
`FUTURE_PRICE_FORMAT` *= 28*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.FUTURE_PRICE_FORMAT)
Display in fraction or decimal format
`FUTURE_TRADING_HOURS` *= 29*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.FUTURE_TRADING_HOURS)
Trading hours
`FUTURE_IS_TRADEABLE` *= 30*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.FUTURE_IS_TRADEABLE)
Flag to indicate if this future contract is tradable
`FUTURE_MULTIPLIER` *= 31*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.FUTURE_MULTIPLIER)
Point value
`FUTURE_IS_ACTIVE` *= 32*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.FUTURE_IS_ACTIVE)
Indicates if this contract is active
`FUTURE_SETTLEMENT_PRICE` *= 33*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.FUTURE_SETTLEMENT_PRICE)
Closing price
`FUTURE_ACTIVE_SYMBOL` *= 34*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.FUTURE_ACTIVE_SYMBOL)
Symbol of the active contract
`FUTURE_EXPIRATION_DATE` *= 35*[¶](#tda.streaming.StreamClient.LevelOneFuturesOptionsFields.FUTURE_EXPIRATION_DATE)
Expiration date of this contract, in milliseconds since epoch
### Level Two Order Book[¶](#level-two-order-book)
Level two streams provide a view on continuous order books of various securities.
The level two order book describes the current bids and asks on the market, and these streams provide snapshots of that state.
Due to the lack of [official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640612), these streams are largely reverse engineered. While the labeled data represents a best effort attempt to interpret stream fields, it’s possible that something is wrong or incorrectly labeled.
The documentation lists more book types than are implemented here. In particular, it also lists `FOREX_BOOK`, `FUTURES_BOOK`, and
`FUTURES_OPTIONS_BOOK` as accessible streams. All experimentation has resulted in these streams refusing to connect, typically returning errors about unavailable services. Due to this behavior and the lack of official documentation for book streams generally, `tda-api` assumes these streams are not actually implemented, and so excludes them. If you have any insight into using them, please
[let us know](https://github.com/alexgolec/tda-api/issues).
#### Equities Order Books: NYSE and NASDAQ[¶](#equities-order-books-nyse-and-nasdaq)
`tda-api` supports level two data for NYSE and NASDAQ, which are the two major exchanges dealing in equities, ETFs, etc. Stocks are typically listed on one or the other, and it is useful to learn about the differences between them:
> * [“The NYSE and NASDAQ: How They Work” on Investopedia](https://www.investopedia.com/articles/basics/03/103103.asp)
> * [“Here’s the difference between the NASDAQ and NYSE” on Business Insider](https://www.businessinsider.com/heres-the-difference-between-the-nasdaq-and-nyse-2017-7)
> * [“Can Stocks Be Traded on More Than One Exchange?” on Investopedia](https://www.investopedia.com/ask/answers/05/stockmultipleexchanges.asp)
You can identify on which exchange a symbol is listed by using
[`Client.search_instruments()`](index.html#tda.client.Client.search_instruments):
```
import httpx

# 'c' is an authenticated tda.client.Client
r = c.search_instruments(['GOOG'], projection=c.Instrument.Projection.FUNDAMENTAL)
assert r.status_code == httpx.codes.OK, r.raise_for_status()
print(r.json()['GOOG']['exchange'])  # Outputs NASDAQ
```
However, many symbols have order books available on these streams even though this API call returns neither NYSE nor NASDAQ. The only sure-fire way to find out whether the order book is available is to attempt to subscribe and see what happens.
Note to preserve equivalence with what little documentation there is, the NYSE book is called “listed.” Testing indicates this stream corresponds to the NYSE book, but if you find any behavior that suggests otherwise please
[let us know](https://github.com/alexgolec/tda-api/issues).
`StreamClient.``listed_book_subs`(*symbols*)[¶](#tda.streaming.StreamClient.listed_book_subs)
Subscribe to the NYSE level two order book. Note this stream has no official documentation.
`StreamClient.``add_listed_book_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_listed_book_handler)
Register a function to handle level two NYSE book data as it is updated. See [Handling Messages](#id1) for details.
`StreamClient.``nasdaq_book_subs`(*symbols*)[¶](#tda.streaming.StreamClient.nasdaq_book_subs)
Subscribe to the NASDAQ level two order book. Note this stream has no official documentation.
`StreamClient.``add_nasdaq_book_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_nasdaq_book_handler)
Register a function to handle level two NASDAQ book data as it is updated. See [Handling Messages](#id1) for details.
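Here is a minimal sketch of reading the NASDAQ book, assuming `stream_client` is an already-logged-in `StreamClient`; the NYSE (“listed”) book works the same way with the `listed_book_*` functions.
```
# Inside a coroutine, with `stream_client` already logged in:
stream_client.add_nasdaq_book_handler(lambda msg: print(msg))
await stream_client.nasdaq_book_subs(['AAPL'])
```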
#### Options Order Book[¶](#options-order-book)
This stream provides the order book for options. It's not entirely clear what exchange it aggregates from, but it's been tested to work and deliver data. The leading hypothesis is that it is the order book for the
[Chicago Board Options Exchange (CBOE)](https://www.cboe.com/us/options), although this is admittedly an uneducated guess.
`StreamClient.``options_book_subs`(*symbols*)[¶](#tda.streaming.StreamClient.options_book_subs)
Subscribe to the level two order book for options. Note this stream has no official documentation, and it’s not entirely clear what exchange it corresponds to. Use at your own risk.
`StreamClient.``add_options_book_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_options_book_handler)
Register a function to handle level two options book data as it is updated. See [Handling Messages](#id1) for details.
### Time of Sale[¶](#time-of-sale)
The data in [Level Two Order Book](#level-two) describes the bids and asks for various instruments, but by itself is insufficient to determine when trades actually take place. The time of sale streams notify on trades as they happen. Together with the level two data, they provide a fairly complete picture of what is happening on an exchange.
All time of sale streams use a common set of fields:
*class* `StreamClient.``TimesaleFields`[¶](#tda.streaming.StreamClient.TimesaleFields)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640626)
`SYMBOL` *= 0*[¶](#tda.streaming.StreamClient.TimesaleFields.SYMBOL)
Ticker symbol in upper case. Represented in the stream as the
`key` field.
`TRADE_TIME` *= 1*[¶](#tda.streaming.StreamClient.TimesaleFields.TRADE_TIME)
Trade time of the last trade in milliseconds since epoch
`LAST_PRICE` *= 2*[¶](#tda.streaming.StreamClient.TimesaleFields.LAST_PRICE)
Price at which the last trade was matched
`LAST_SIZE` *= 3*[¶](#tda.streaming.StreamClient.TimesaleFields.LAST_SIZE)
Number of shares traded with last trade
`LAST_SEQUENCE` *= 4*[¶](#tda.streaming.StreamClient.TimesaleFields.LAST_SEQUENCE)
Number of shares for bid
#### Equity Trades[¶](#equity-trades)
`StreamClient.``timesale_equity_subs`(*symbols*, ***, *fields=None*)[¶](#tda.streaming.StreamClient.timesale_equity_subs)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640628)
Subscribe to time of sale notifications for equities.
| Parameters: | **symbols** – Equity symbols to subscribe to |
`StreamClient.``add_timesale_equity_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_timesale_equity_handler)
Register a function to handle equity trade notifications as they happen. See [Handling Messages](#id1) for details.
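Here is a minimal sketch of subscribing to equity time of sale notifications, assuming `stream_client` is an already-logged-in `StreamClient`:
```
# Inside a coroutine, with `stream_client` already logged in:
stream_client.add_timesale_equity_handler(lambda msg: print(msg))
await stream_client.timesale_equity_subs(['GOOG'])
```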
#### Futures Trades[¶](#futures-trades)
`StreamClient.``timesale_futures_subs`(*symbols*, ***, *fields=None*)[¶](#tda.streaming.StreamClient.timesale_futures_subs)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640628)
Subscribe to time of sale notifications for futures.
| Parameters: | **symbols** – Futures symbols to subscribe to |
`StreamClient.``add_timesale_futures_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_timesale_futures_handler)
Register a function to handle futures trade notifications as they happen. See [Handling Messages](#id1) for details.
#### Options Trades[¶](#options-trades)
`StreamClient.``timesale_options_subs`(*symbols*, ***, *fields=None*)[¶](#tda.streaming.StreamClient.timesale_options_subs)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640628)
Subscribe to time of sale notifications for options.
| Parameters: | **symbols** – Options symbols to subscribe to |
`StreamClient.``add_timesale_options_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_timesale_options_handler)
Register a function to handle options trade notifications as they happen. See [Handling Messages](#id1) for details.
### News Headlines[¶](#news-headlines)
TD Ameritrade supposedly supports streaming news headlines. However, we have yet to receive any reports of successful access to this stream. Attempts to read this stream result in messages like the following, followed by TDA-initiated stream closure:
```
{
"notify": [
{
"service": "NEWS_HEADLINE",
"timestamp": 1591500923797,
"content": {
"code": 17,
"msg": "Not authorized for all quotes."
}
}
]
}
```
The current hypothesis is that this stream requires some permissions or paid access that so far no one has had. If you manage to get this stream working, or even if you manage to get it to fail with a different message than the one above, please [report it](https://github.com/alexgolec/tda-api/issues). In the meantime, `tda-api` provides the following methods for attempting to access this stream.
`StreamClient.``news_headline_subs`(*symbols*)[¶](#tda.streaming.StreamClient.news_headline_subs)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640626)
Subscribe to news headlines related to the given symbols.
`StreamClient.``add_news_headline_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_news_headline_handler)
Register a function to handle news headlines as they are provided. See
[Handling Messages](#id1) for details.
*class* `StreamClient.``NewsHeadlineFields`[¶](#tda.streaming.StreamClient.NewsHeadlineFields)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640626)
`SYMBOL` *= 0*[¶](#tda.streaming.StreamClient.NewsHeadlineFields.SYMBOL)
Ticker symbol in upper case. Represented in the stream as the
`key` field.
`ERROR_CODE` *= 1*[¶](#tda.streaming.StreamClient.NewsHeadlineFields.ERROR_CODE)
Specifies if there is any error
`STORY_DATETIME` *= 2*[¶](#tda.streaming.StreamClient.NewsHeadlineFields.STORY_DATETIME)
Headline’s datetime in milliseconds since epoch
`HEADLINE_ID` *= 3*[¶](#tda.streaming.StreamClient.NewsHeadlineFields.HEADLINE_ID)
Unique ID for the headline
`STATUS` *= 4*[¶](#tda.streaming.StreamClient.NewsHeadlineFields.STATUS)
`HEADLINE` *= 5*[¶](#tda.streaming.StreamClient.NewsHeadlineFields.HEADLINE)
News headline
`STORY_ID` *= 6*[¶](#tda.streaming.StreamClient.NewsHeadlineFields.STORY_ID)
`COUNT_FOR_KEYWORD` *= 7*[¶](#tda.streaming.StreamClient.NewsHeadlineFields.COUNT_FOR_KEYWORD)
`KEYWORD_ARRAY` *= 8*[¶](#tda.streaming.StreamClient.NewsHeadlineFields.KEYWORD_ARRAY)
`IS_HOT` *= 9*[¶](#tda.streaming.StreamClient.NewsHeadlineFields.IS_HOT)
`STORY_SOURCE` *= 10*[¶](#tda.streaming.StreamClient.NewsHeadlineFields.STORY_SOURCE)
### Account Activity[¶](#account-activity)
This stream allows you to monitor your account activity, including order execution/cancellation/expiration/etc. `tda-api` provides utilities for setting up and reading the stream, but leaves the task of parsing the [response XML object](https://developer.tdameritrade.com/content/streaming-data#_Toc504640581)
to the user.
`StreamClient.``account_activity_sub`()[¶](#tda.streaming.StreamClient.account_activity_sub)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640580)
Subscribe to account activity for the account id associated with this streaming client. See [`AccountActivityFields`](#tda.streaming.StreamClient.AccountActivityFields) for more info.
`StreamClient.``add_account_activity_handler`(*handler*)[¶](#tda.streaming.StreamClient.add_account_activity_handler)
Adds a handler to the account activity subscription. See
[Handling Messages](#id1) for details.
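Here is a minimal sketch of subscribing, assuming `stream_client` is an already-logged-in `StreamClient` created with the account ID you want to monitor. Parsing the XML in `MESSAGE_DATA` is left to the user.
```
# Inside a coroutine, with `stream_client` already logged in:
stream_client.add_account_activity_handler(lambda msg: print(msg))
await stream_client.account_activity_sub()
```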
*class* `StreamClient.``AccountActivityFields`[¶](#tda.streaming.StreamClient.AccountActivityFields)
[Official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640580)
Data fields for account activity. Primarily an implementation detail and not used in client code. Provided here as documentation for the key values returned in the stream messages.
`SUBSCRIPTION_KEY` *= 0*[¶](#tda.streaming.StreamClient.AccountActivityFields.SUBSCRIPTION_KEY)
Subscription key. Represented in the stream as the
`key` field.
`ACCOUNT` *= 1*[¶](#tda.streaming.StreamClient.AccountActivityFields.ACCOUNT)
Account # subscribed
`MESSAGE_TYPE` *= 2*[¶](#tda.streaming.StreamClient.AccountActivityFields.MESSAGE_TYPE)
Refer to the [message type table in the official documentation](https://developer.tdameritrade.com/content/streaming-data#_Toc504640581)
`MESSAGE_DATA` *= 3*[¶](#tda.streaming.StreamClient.AccountActivityFields.MESSAGE_DATA)
The core data for the message. Either XML Message data describing the update, `NULL` in some cases, or plain text in case of
`ERROR`.
### Troubleshooting[¶](#troubleshooting)
There are a number of issues you might encounter when using the streaming client. Unfortunately, its use by non-TDAmeritrade apps is poorly documented and apparently completely unsupported. This section attempts to provide a non-authoritative listing of the issues you may encounter, but please note that these are best-effort explanations resulting from reverse engineering and crowdsourced experience. Take them with a grain of salt.
If you have specific questions, please join our [Discord server](https://discord.gg/nfrd9gh) to discuss with the community.
#### `ConnectionClosedOK: code = 1000 (OK), no reason` Immediately on Stream Start[¶](#connectionclosedok-code-1000-ok-no-reason-immediately-on-stream-start)
There are a few known causes for this issue:
##### Streaming Account ID Doesn’t Match Token Account[¶](#streaming-account-id-doesn-t-match-token-account)
TDA allows you to link multiple accounts together, so that logging in to one main account allows you to have access to data from all other linked accounts.
This is not a problem for the HTTP client, but the streaming client is a little more restrictive. In particular, it appears that opening a `StreamClient` with an account ID that is different from the account ID corresponding to the username that was used to create the token is disallowed.
If you’re encountering this issue, make sure you are using the account ID of the account which was used during token login. If you’re unsure which account was used to create the token, delete your token and create a new one, taking note of the account ID.
##### Multiple Concurrent Streams[¶](#multiple-concurrent-streams)
TDA allows only one open stream per account ID. If you open a second one, it will immediately close itself with this error. This is not a limitation of
`tda-api`; it is a TDAmeritrade limitation. If you want to use multiple streams, you need to have multiple accounts, create a separate token under each,
and pass each one’s account ID into its own client.
#### `ConnectionClosedError: code = 1006 (connection closed abnormally [internal])`[¶](#connectionclosederror-code-1006-connection-closed-abnormally-internal)
TDA has the right to kill the connection at any time for any reason, and this error appears to be a catchall for these sorts of failures. If you are encountering this error, it is almost certainly not the fault of the
`tda-api` library, but rather either an internal failure on TDA’s side or a failure in the logic of your own code.
That being said, there have been a number of situations where this error was encountered, and this section attempts to record the resolution of these failures.
##### Your Handler is Too Slow[¶](#your-handler-is-too-slow)
`tda-api` cannot perform websocket message acknowledgement when your handler code is running. As a result, if your handler code takes longer than the stream update frequency, a backlog of unacknowledged messages will develop. TDA has been observed to terminate connections when many messages are unacknowledged.
Fixing this is a task for the application developer: if you are writing to a database or filesystem as part of your handler, consider profiling it to make the write faster. You may also consider deferring your writes so that slow operations don’t happen in the hotpath of the message handler.
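One possible shape for such a deferral, sketched here under the assumption that the rest of your code runs the usual `handle_message()` loop, is to push messages onto an `asyncio.Queue` in the handler and do the slow work in a separate task:
```
import asyncio

queue = asyncio.Queue()

def fast_handler(msg):
    # Handlers are called synchronously, so just enqueue and return
    # immediately.
    queue.put_nowait(msg)

async def slow_consumer():
    while True:
        msg = await queue.get()
        # Perform slow database or filesystem writes here, off the
        # message-handling hotpath.
        ...

# Register the fast handler and start the consumer alongside the
# handle_message() loop (must be called from within a running event loop):
stream_client.add_chart_equity_handler(fast_handler)
asyncio.create_task(slow_consumer())
```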
#### JSONDecodeError[¶](#jsondecodeerror)
This is an error that is most often raised when TDA sends an invalid JSON string. See [Custom JSON Decoding](index.html#custom-json-decoding) for details.
For reasons known only to TDAmeritrade’s development team, the API occasionally emits invalid stream messages for some endpoints. Because this issue does not affect all endpoints, and because `tda-api`’s authors are not in the business of handling quirks of an API they don’t control, the library simply passes these errors up to the user.
However, some applications cannot handle complete failure. What’s more, some users have insight into how to work around these decoder errors. The streaming client supports setting a custom JSON decoder to help with this:
`StreamClient.``set_json_decoder`(*json_decoder*)[¶](#tda.streaming.StreamClient.set_json_decoder)
Sets a custom JSON decoder.
| Parameters: | **json_decoder** – Custom JSON decoder to use to decode all
incoming JSON strings. See
[`StreamJsonDecoder`](#tda.streaming.StreamJsonDecoder) for details. |
Users are free to implement their own JSON decoders by subclassing the following abstract base class:
*class* `tda.streaming.``StreamJsonDecoder`[¶](#tda.streaming.StreamJsonDecoder)
`decode_json_string`(*raw*)[¶](#tda.streaming.StreamJsonDecoder.decode_json_string)
Parse a JSON-formatted string into a proper object. Raises
`JSONDecodeError` on parse failure.
Users looking for an out-of-the-box solution can consider using the community-maintained decoder described in [Custom JSON Decoding](index.html#custom-json-decoding). Note that while this decoder is constantly improving, it is not guaranteed to solve whatever JSON decoding errors you may be encountering.
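As a concrete illustration, here is a hedged sketch of a custom decoder that falls back to a naive, application-specific repair when standard parsing fails. The repair itself is purely illustrative, and `stream_client` is assumed to already exist.
```
import json
from tda.streaming import StreamJsonDecoder
class LenientDecoder(StreamJsonDecoder):
    def decode_json_string(self, raw):
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            # Illustrative repair only: drop trailing garbage and retry
            return json.loads(raw[:raw.rindex('}') + 1])
stream_client.set_json_decoder(LenientDecoder())
```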
Order Templates[¶](#order-templates)
---
`tda-api` strives to be easy to use. This means making it easy to do simple things, while making it possible to do complicated things. Order construction is a major challenge to this mission: both simple and complicated orders use the same format, meaning simple orders require a surprising amount of sophistication to place.
We get around this by providing templates that make it easy to place common orders, while allowing advanced users to modify the orders returned from the templates to create more complex ones. Very advanced users can even create their own orders from scratch. This page describes the simple templates, while the
[OrderBuilder Reference](index.html#order-builder) page documents the order builder in all its complexity.
### Using These Templates[¶](#using-these-templates)
These templates serve two purposes. First, they are designed to choose defaults so you can immediately [place them](index.html#placing-new-orders). These defaults are:
> * All orders execute during the current normal trading session. If placed
> outside of trading hours, they execute during the next normal trading session.
> * Time-in-force is set to `DAY`.
> * All other fields (such as requested destination, etc.) are left unset,
> meaning they receive default treatment from TD Ameritrade. Note this
> treatment depends on TDA’s implementation, and may change without warning.
Secondly, they serve as starting points for building more complex order types.
All templates return a pre-populated `OrderBuilder` object, meaning complex functionality can be specified by modifying the returned object. For example,
here is how you would place an order to buy `GOOG` for no more than $1250 at any time in the next six months:
```
from tda.orders.equities import equity_buy_limit
from tda.orders.common import Duration, Session
client = ... # See "Authentication and Client Creation"
client.place_order(
1000, # account_id
equity_buy_limit('GOOG', 1, 1250.0)
.set_duration(Duration.GOOD_TILL_CANCEL)
.set_session(Session.SEAMLESS)
.build())
```
You can find a full reference for all supported fields in [OrderBuilder Reference](index.html#order-builder).
### Equity Templates[¶](#equity-templates)
#### Buy orders[¶](#buy-orders)
`tda.orders.equities.``equity_buy_market`(*symbol*, *quantity*)[¶](#tda.orders.equities.equity_buy_market)
Returns a pre-filled `OrderBuilder` for an equity buy market order.
`tda.orders.equities.``equity_buy_limit`(*symbol*, *quantity*, *price*)[¶](#tda.orders.equities.equity_buy_limit)
Returns a pre-filled `OrderBuilder` for an equity buy limit order.
#### Sell orders[¶](#sell-orders)
`tda.orders.equities.``equity_sell_market`(*symbol*, *quantity*)[¶](#tda.orders.equities.equity_sell_market)
Returns a pre-filled `OrderBuilder` for an equity sell market order.
`tda.orders.equities.``equity_sell_limit`(*symbol*, *quantity*, *price*)[¶](#tda.orders.equities.equity_sell_limit)
Returns a pre-filled `OrderBuilder` for an equity sell limit order.
#### Sell short orders[¶](#sell-short-orders)
`tda.orders.equities.``equity_sell_short_market`(*symbol*, *quantity*)[¶](#tda.orders.equities.equity_sell_short_market)
Returns a pre-filled `OrderBuilder` for an equity short sell market order.
`tda.orders.equities.``equity_sell_short_limit`(*symbol*, *quantity*, *price*)[¶](#tda.orders.equities.equity_sell_short_limit)
Returns a pre-filled `OrderBuilder` for an equity short sell limit order.
#### Buy to cover orders[¶](#buy-to-cover-orders)
`tda.orders.equities.``equity_buy_to_cover_market`(*symbol*, *quantity*)[¶](#tda.orders.equities.equity_buy_to_cover_market)
Returns a pre-filled `OrderBuilder` for an equity buy-to-cover market order.
`tda.orders.equities.``equity_buy_to_cover_limit`(*symbol*, *quantity*, *price*)[¶](#tda.orders.equities.equity_buy_to_cover_limit)
Returns a pre-filled `OrderBuilder` for an equity buy-to-cover limit order.
### Options Templates[¶](#options-templates)
TD Ameritrade supports over a dozen options strategies, each of which involves a precise structure in the order builder. `tda-api` is slowly gaining support for these strategies, and they are documented here as they become ready for use.
As time goes on, more templates will be added here.
In the meantime, you can construct all supported options orders using the
[OrderBuilder](index.html#order-builder), although you will have to construct them yourself.
Note orders placed using these templates may be rejected, depending on the user’s options trading authorization.
#### Building Options Symbols[¶](#building-options-symbols)
All templates require option symbols, which are somewhat more involved than equity symbols. They encode the underlying, the expiration date, option type
(put or call) and the strike price. They are especially tricky to extract because neither the TD Ameritrade UI nor the thinkorswim UI reveals the symbol in the option chain view.
Real trading symbols can be found by requesting the [Option Chain](index.html#option-chain). They can also be built using the `OptionSymbol` helper, which provides utilities for creating options symbols. Note it only emits syntactically correct symbols and does not validate whether the symbol actually represents a traded option:
```
import datetime

from tda.orders.options import OptionSymbol
symbol = OptionSymbol(
'TSLA', datetime.date(year=2020, month=11, day=20), 'P', '1360').build()
```
*class* `tda.orders.options.``OptionSymbol`(*underlying_symbol*, *expiration_date*, *contract_type*, *strike_price_as_string*)[¶](#tda.orders.options.OptionSymbol)
Construct an option symbol from its constituent parts. Options symbols have the following format: `[Underlying]_[Two digit month][Two digit day][Two digit year]['P' or 'C'][Strike price]`. Examples include:
> * `GOOG_012122P620`: GOOG Jan 21 2022 620 Put
> * `TSLA_112020C1360`: TSLA Nov 20 2020 1360 Call
> * `SPY_121622C335`: SPY Dec 16 2022 335 Call
Note while each of the individual parts is validated by itself, the option symbol itself may not represent a traded option:
> * Some underlyings do not support options.
> * Not all dates have valid option expiration dates.
> * Not all strike prices are valid options strikes.
You can use [`get_option_chain()`](index.html#tda.client.Client.get_option_chain) to obtain real option symbols for an underlying, as well as extensive data on pricing,
bid/ask spread, volume, etc.
| Parameters: | * **underlying_symbol** – Symbol of the underlying. Not validated.
* **expiration_date** – Expiration date. Accepts `datetime.date`,
`datetime.datetime`, or strings with the format `[Two digit month][Two digit day][Two digit year]`.
* **contract_type** – `P` for put or `C` for call.
* **strike_price_as_string** – Strike price, represented by a string as you would see at the end of a real option symbol.
|
#### Single Options[¶](#single-options)
Buy and sell single options.
`tda.orders.options.``option_buy_to_open_market`(*symbol*, *quantity*)[¶](#tda.orders.options.option_buy_to_open_market)
Returns a pre-filled `OrderBuilder` for a buy-to-open market order.
`tda.orders.options.``option_buy_to_open_limit`(*symbol*, *quantity*, *price*)[¶](#tda.orders.options.option_buy_to_open_limit)
Returns a pre-filled `OrderBuilder` for a buy-to-open limit order.
`tda.orders.options.``option_sell_to_open_market`(*symbol*, *quantity*)[¶](#tda.orders.options.option_sell_to_open_market)
Returns a pre-filled `OrderBuilder` for a sell-to-open market order.
`tda.orders.options.``option_sell_to_open_limit`(*symbol*, *quantity*, *price*)[¶](#tda.orders.options.option_sell_to_open_limit)
Returns a pre-filled `OrderBuilder` for a sell-to-open limit order.
`tda.orders.options.``option_buy_to_close_market`(*symbol*, *quantity*)[¶](#tda.orders.options.option_buy_to_close_market)
Returns a pre-filled `OrderBuilder` for a buy-to-close market order.
`tda.orders.options.``option_buy_to_close_limit`(*symbol*, *quantity*, *price*)[¶](#tda.orders.options.option_buy_to_close_limit)
Returns a pre-filled `OrderBuilder` for a buy-to-close limit order.
`tda.orders.options.``option_sell_to_close_market`(*symbol*, *quantity*)[¶](#tda.orders.options.option_sell_to_close_market)
Returns a pre-filled `OrderBuilder` for a sell-to-close market order.
`tda.orders.options.``option_sell_to_close_limit`(*symbol*, *quantity*, *price*)[¶](#tda.orders.options.option_sell_to_close_limit)
Returns a pre-filled `OrderBuilder` for a sell-to-close limit order.
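Putting the pieces together, here is a hedged sketch that builds an option symbol and places a single-option order with one of the templates above. The limit price is illustrative, and `client` and `account_id` are assumed to already exist (see "Authentication and Client Creation").
```
import datetime

from tda.orders.options import OptionSymbol, option_buy_to_open_limit

symbol = OptionSymbol(
    'TSLA', datetime.date(year=2020, month=11, day=20), 'P', '1360').build()

client.place_order(
    account_id,
    option_buy_to_open_limit(symbol, 1, 21.50).build())
```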
#### Vertical Spreads[¶](#vertical-spreads)
Vertical spreads are a complex option strategy that provides both limited upside and limited downside. They are constructed by buying an option at one strike while simultaneously selling another option with the same underlying and expiration date, except with a different strike, and they can be constructed using either puts or calls. You can find more information about this strategy on
[Investopedia](https://www.investopedia.com/articles/active-trading/032614/which-vertical-option-spread-should-you-use.asp).
`tda-api` provides utilities for opening and closing vertical spreads in various ways. It follows the standard `(bull/bear) (put/call)` naming convention, where the name specifies the market attitude and the option type used in construction.
For consistency’s sake, the option with the smaller strike price is always passed first, followed by the higher strike option. You can find the option symbols by consulting the return value of the [Option Chain](index.html#option-chain) client call.
##### Call Verticals[¶](#call-verticals)
`tda.orders.options.``bull_call_vertical_open`(*long_call_symbol*, *short_call_symbol*, *quantity*, *net_debit*)[¶](#tda.orders.options.bull_call_vertical_open)
Returns a pre-filled `OrderBuilder` that opens a bull call vertical position. See [Vertical Spreads](#vertical-spreads) for details.
`tda.orders.options.``bull_call_vertical_close`(*long_call_symbol*, *short_call_symbol*, *quantity*, *net_credit*)[¶](#tda.orders.options.bull_call_vertical_close)
Returns a pre-filled `OrderBuilder` that closes a bull call vertical position. See [Vertical Spreads](#vertical-spreads) for details.
`tda.orders.options.``bear_call_vertical_open`(*short_call_symbol*, *long_call_symbol*, *quantity*, *net_credit*)[¶](#tda.orders.options.bear_call_vertical_open)
Returns a pre-filled `OrderBuilder` that opens a bear call vertical position. See [Vertical Spreads](#vertical-spreads) for details.
`tda.orders.options.``bear_call_vertical_close`(*short_call_symbol*, *long_call_symbol*, *quantity*, *net_debit*)[¶](#tda.orders.options.bear_call_vertical_close)
Returns a pre-filled `OrderBuilder` that closes a bear call vertical position. See [Vertical Spreads](#vertical-spreads) for details.
##### Put Verticals[¶](#put-verticals)
`tda.orders.options.``bull_put_vertical_open`(*long_put_symbol*, *short_put_symbol*, *quantity*, *net_credit*)[¶](#tda.orders.options.bull_put_vertical_open)
Returns a pre-filled `OrderBuilder` that opens a bull put vertical position. See [Vertical Spreads](#vertical-spreads) for details.
`tda.orders.options.``bull_put_vertical_close`(*long_put_symbol*, *short_put_symbol*, *quantity*, *net_debit*)[¶](#tda.orders.options.bull_put_vertical_close)
Returns a pre-filled `OrderBuilder` that closes a bull put vertical position. See [Vertical Spreads](#vertical-spreads) for details.
`tda.orders.options.``bear_put_vertical_open`(*short_put_symbol*, *long_put_symbol*, *quantity*, *net_debit*)[¶](#tda.orders.options.bear_put_vertical_open)
Returns a pre-filled `OrderBuilder` that opens a bear put vertical position. See [Vertical Spreads](#vertical-spreads) for details.
`tda.orders.options.``bear_put_vertical_close`(*short_put_symbol*, *long_put_symbol*, *quantity*, *net_credit*)[¶](#tda.orders.options.bear_put_vertical_close)
Returns a pre-filled `OrderBuilder` that closes a bear put vertical position. See [Vertical Spreads](#vertical-spreads) for details.
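As an illustration, here is a hedged sketch that opens a single bull call vertical, passing the lower-strike (long) call first per the convention above. The symbols and net debit are illustrative, and `client` and `account_id` are assumed to already exist.
```
from tda.orders.options import bull_call_vertical_open

client.place_order(
    account_id,
    bull_call_vertical_open(
        'GOOG_012122C610',  # long call, lower strike
        'GOOG_012122C620',  # short call, higher strike
        1,                  # one spread
        4.70                # illustrative net debit
    ).build())
```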
### Utility Methods[¶](#utility-methods)
These methods return orders that represent complex multi-order strategies,
namely “one cancels other” and “first triggers second” strategies. Note they expect all their parameters to be of type `OrderBuilder`. You can construct these orders using the templates above or by
[creating them from scratch](index.html#order-builder).
Note that you do **not** construct composite orders by placing the constituent orders and then passing the results to the utility methods:
```
order_one = c.place_order(config.account_id,
option_buy_to_open_limit(trade_symbol, contracts, safety_ask)
.set_duration(Duration.GOOD_TILL_CANCEL)
.set_session(Session.NORMAL)
.build())
order_two = c.place_order(config.account_id,
option_sell_to_close_limit(trade_symbol, half, double)
.set_duration(Duration.GOOD_TILL_CANCEL)
.set_session(Session.NORMAL)
.build())
# THIS IS BAD, DO NOT DO THIS
exec_trade = c.place_order(config.account_id, first_triggers_second(order_one, order_two))
```
What’s happening here is that both constituent orders are placed and executed immediately, and the final `place_order` call then fails because it receives the responses from those calls rather than `OrderBuilder` objects. Passing un-placed `OrderBuilder` objects instead defers their execution, subject to your composite order rules.
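Here is a hedged sketch of the correct approach, adapted from the example above: build both constituent orders as `OrderBuilder` objects, combine them with `first_triggers_second()`, and place only the composite. The names `c`, `config`, `trade_symbol`, `contracts`, `half`, `safety_ask`, and `double` are placeholders carried over from that example.
```
from tda.orders.common import Duration, Session, first_triggers_second
from tda.orders.options import (
    option_buy_to_open_limit, option_sell_to_close_limit)

order_one = (option_buy_to_open_limit(trade_symbol, contracts, safety_ask)
             .set_duration(Duration.GOOD_TILL_CANCEL)
             .set_session(Session.NORMAL))

order_two = (option_sell_to_close_limit(trade_symbol, half, double)
             .set_duration(Duration.GOOD_TILL_CANCEL)
             .set_session(Session.NORMAL))

exec_trade = c.place_order(
    config.account_id, first_triggers_second(order_one, order_two))
```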
**Note:** It appears that using these methods requires disabling Advanced Features on your account. It is not entirely clear why this is the case, but we’ve seen numerous reports of issues with OCO and trigger orders being resolved by this method. You can disable advanced features by calling TDAmeritrade support and requesting that they be turned off. If you need more help, we recommend [joining our discord](https://discord.gg/M3vjtHj) to ask the community for help.
`tda.orders.common.``one_cancels_other`(*order1*, *order2*)[¶](#tda.orders.common.one_cancels_other)
If one of the orders is executed, immediately cancel the other.
`tda.orders.common.``first_triggers_second`(*first_order*, *second_order*)[¶](#tda.orders.common.first_triggers_second)
If `first_order` is executed, immediately place `second_order`.
### What happened to `EquityOrderBuilder`?[¶](#what-happened-to-equityorderbuilder)
Long-time users and new users following outdated tutorials may notice that this documentation no longer mentions the `EquityOrderBuilder` class. This class used to be used to create equities orders, and offered a subset of the functionality offered by the [OrderBuilder](index.html#order-builder). This class has been removed in favor of the order builder and the above templates.
`OrderBuilder` Reference[¶](#orderbuilder-reference)
---
The [`Client.place_order()`](index.html#tda.client.Client.place_order) method expects a rather complex JSON object that describes the desired order. TDA provides some
[example order specs](https://developer.tdameritrade.com/content/place-order-samples) to illustrate the process and provides a schema in the [place order documentation](https://developer.tdameritrade.com/account-access/apis/post/accounts/%7BaccountId%7D/orders-0), but beyond that we’re on our own. `tda-api` aims to be useful to everyone, from users who want to easily place common equities and options trades, to advanced users who want to place complex multi-leg,
multi-asset type trades.
For users interested in simple trades, `tda-api` supports pre-built
[Order Templates](index.html#order-templates) that allow fast construction of many common trades.
Advanced users can modify these trades however they like, and can even build trades from scratch.
This page describes the features of the complete order schema in all their complexity. It is aimed at advanced users who want to create complex orders.
Less advanced users can use the [order templates](index.html#order-templates) to create orders. If they find themselves wanting to go beyond those templates,
they can return to this page to learn how.
### Optional: Order Specification Introduction[¶](#optional-order-specification-introduction)
Before we dive into creating order specs, let’s briefly introduce their structure. This section is optional, although users wanting to use more advanced features like stop prices and complex options orders will likely want to read it.
Here is an example of a spec that places a limit order to buy one share of
`MSFT` for no more than $190.90. This is exactly the order that would be returned by [`tda.orders.equities.equity_buy_limit()`](index.html#tda.orders.equities.equity_buy_limit):
```
{
"session": "NORMAL",
"duration": "DAY",
"orderType": "LIMIT",
"price": "190.90",
"orderLegCollection": [
{
"instruction": "BUY",
"instrument": {
"assetType": "EQUITY",
"symbol": "MSFT"
},
"quantity": 1
}
],
"orderStrategyType": "SINGLE"
}
```
Some key points are:
> * The `LIMIT` order type notifies TD that you’d like to place a limit order.
> * The order strategy type is `SINGLE`, meaning this order is not a composite
> order.
> * The order leg collection contains a single leg to purchase the equity.
> * The price is specified *outside* the order leg. This may seem
> counterintuitive, but it’s important when placing composite options orders.
If this seems like a lot of detail to specify a rather simple order, it is. The thing about the order spec object is that it can express *every* order that can be made through the TD Ameritrade API. For an advanced example, here is an order spec for a standing order to enter a long position in `GOOG` at $1310 or less that triggers a one-cancels-other order that exits the position if the price rises to $1400 or falls below $1250:
```
{
"session": "NORMAL",
"duration": "GOOD_TILL_CANCEL",
"orderType": "LIMIT",
"price": "1310.00",
"orderLegCollection": [
{
"instruction": "BUY",
"instrument": {
"assetType": "EQUITY",
"symbol": "GOOG"
},
"quantity": 1
}
],
"orderStrategyType": "TRIGGER",
"childOrderStrategies": [
{
"orderStrategyType": "OCO",
"childOrderStrategies": [
{
"session": "NORMAL",
"duration": "GOOD_TILL_CANCEL",
"orderType": "LIMIT",
"price": "1400.00",
"orderLegCollection": [
{
"instruction": "SELL",
"instrument": {
"assetType": "EQUITY",
"symbol": "GOOG"
},
"quantity": 1
}
]
},
{
"session": "NORMAL",
"duration": "GOOD_TILL_CANCEL",
"orderType": "STOP_LIMIT",
"stopPrice": "1250.00",
"orderLegCollection": [
{
"instruction": "SELL",
"instrument": {
"assetType": "EQUITY",
"symbol": "GOOG"
},
"quantity": 1
}
]
}
]
}
]
}
```
While this looks complex, it can be broken down into the same components as the simpler buy order:
> * This time, the `LIMIT` order type applies to the top-level order.
> * The order strategy type is `TRIGGER`, which tells TD Ameritrade to hold off
> placing the second order until the first one completes.
> * The order leg collection still contains a single leg, and the price is still
> defined outside the order leg. This is typical for equities orders.
There are also a few things that aren’t there in the simple buy order:
> * The `childOrderStrategies` contains the `OCO` order that is triggered
> when the first `LIMIT` order is executed.
> * If you look carefully, you’ll notice that the inner `OCO` is a
> fully-featured suborder in itself.
This order is large and complex, and it takes a lot of reading to understand what’s going on here. Fortunately for you, you don’t have to; `tda-api` cuts down on this complexity by providing templates and helpers to make building orders easy:
```
from tda.orders.common import OrderType, first_triggers_second, one_cancels_other
from tda.orders.equities import equity_buy_limit, equity_sell_limit

first_triggers_second(
    equity_buy_limit('GOOG', 1, 1310),
    one_cancels_other(
        equity_sell_limit('GOOG', 1, 1400),
        equity_sell_limit('GOOG', 1, 1240)
            .set_order_type(OrderType.STOP_LIMIT)
            .clear_price()
            .set_stop_price(1250)
    )
)
```
You can find the full listing of order templates and utility functions
[here](index.html#order-templates).
Now that you have some background on how orders are structured, let’s dive into the order builder itself.
### Constructing `OrderBuilder` Objects from Historical Orders[¶](#constructing-orderbuilder-objects-from-historical-orders)
TDAmeritrade supports a huge array of order specifications, including both equity and option orders, stops, conditionals, etc. However, the exact format of these orders is tricky: if you don’t specify the order *exactly* how TDA expects it, you’ll either have your order rejected for no reason, or you’ll end up placing a different order than you intended.
Meanwhile, thinkorswim and the TDAmeritrade web and app UIs let you easily place these orders, just not in a programmatic way. `tda-api` helps bridge this gap by allowing you to place a complex order through your preferred UI and then producing code that would have generated this order using `tda-api`. This process looks like this:
1. Place an order using your favorite UI.
2. Call the following script to generate code for the most recently-placed order:
```
# Notice we don't prefix this with "python" because this is a script that was
# installed by pip when you installed tda-api
tda-orders-codegen.py --token_file <your token file path> --api_key <your API key>
```
3. Copy-paste the resulting code and adapt it to your needs.
This script is installed by `pip`, and will only be accessible if you’ve added pip’s executable locations to your `$PATH`. If you’re having a hard time, feel free to ask for help on our [Discord server](https://discord.gg/nfrd9gh).
### `OrderBuilder` Reference[¶](#id1)
This section provides a detailed reference of the generic order builder. You can use it to help build your own custom orders, or you can modify the pre-built orders generated by `tda-api`’s order templates.
Unfortunately, this reference is largely reverse-engineered. It was initially generated from the schema provided in the [official API documents](https://developer.tdameritrade.com/account-access/apis/post/accounts/%7BaccountId%7D/orders-0), but many of the finer points, such as which fields should be populated for which order types, etc. are best guesses. If you find something is inaccurate or missing, please [let us know](https://github.com/alexgolec/tda-api/issues).
That being said, experienced traders who understand how various order types and complex strategies work should find this builder easy to use, at least for the order types with which they are familiar. Here are some resources you can use to learn more, courtesy of the Securities and Exchange Commission:
> * [Trading Basics: Understanding the Different Ways to Buy and Sell Stock](https://www.sec.gov/investor/alerts/trading101basics.pdf)
> * [Trade Execution: What Every Investor Should Know](https://www.sec.gov/reportspubs/investor-publications/investorpubstradexechtm.html)
> * [Investor Bulletin: An Introduction to Options](https://www.sec.gov/oiea/investor-alerts-bulletins/ib_introductionoptions.html)
You can also find TD Ameritrade’s official documentation on orders [here](https://www.tdameritrade.com/retail-en_us/resources/pdf/SDPS819.pdf),
although it doesn’t actually cover all functionality that `tda-api` supports.
#### Order Types[¶](#order-types)
Here are the order types that can be used:
*class* `tda.orders.common.``OrderType`[¶](#tda.orders.common.OrderType)
Type of equity or option order to place.
`MARKET` *= 'MARKET'*[¶](#tda.orders.common.OrderType.MARKET)
Execute the order immediately at the best-available price.
[More Info](https://www.investopedia.com/terms/m/marketorder.asp).
`LIMIT` *= 'LIMIT'*[¶](#tda.orders.common.OrderType.LIMIT)
Execute the order at your price or better.
[More info](https://www.investopedia.com/terms/l/limitorder.asp).
`STOP` *= 'STOP'*[¶](#tda.orders.common.OrderType.STOP)
Wait until the price reaches the stop price, and then immediately place a market order.
[More Info](https://www.investopedia.com/terms/l/limitorder.asp).
`STOP_LIMIT` *= 'STOP_LIMIT'*[¶](#tda.orders.common.OrderType.STOP_LIMIT)
Wait until the price reaches the stop price, and then immediately place a limit order at the specified price. [More Info](https://www.investopedia.com/terms/s/stop-limitorder.asp).
`TRAILING_STOP` *= 'TRAILING_STOP'*[¶](#tda.orders.common.OrderType.TRAILING_STOP)
Similar to `STOP`, except if the price moves in your favor, the stop price is adjusted in that direction. Places a market order if the stop condition is met.
[More info](https://www.investopedia.com/terms/t/trailingstop.asp).
`TRAILING_STOP_LIMIT` *= 'TRAILING_STOP_LIMIT'*[¶](#tda.orders.common.OrderType.TRAILING_STOP_LIMIT)
Similar to `STOP_LIMIT`, except if the price moves in your favor, the stop price is adjusted in that direction. Places a limit order at the specified price if the stop condition is met.
[More info](https://www.investopedia.com/terms/t/trailingstop.asp).
`MARKET_ON_CLOSE` *= 'MARKET_ON_CLOSE'*[¶](#tda.orders.common.OrderType.MARKET_ON_CLOSE)
Place the order at the closing price immediately upon market close.
[More info](https://www.investopedia.com/terms/m/marketonclose.asp)
`EXERCISE` *= 'EXERCISE'*[¶](#tda.orders.common.OrderType.EXERCISE)
Exercise an option.
`NET_DEBIT` *= 'NET_DEBIT'*[¶](#tda.orders.common.OrderType.NET_DEBIT)
Place an order for an options spread resulting in a net debit.
[More info](https://www.investopedia.com/ask/answers/042215/whats-difference-between-credit-spread-and-debt-spread.asp)
`NET_CREDIT` *= 'NET_CREDIT'*[¶](#tda.orders.common.OrderType.NET_CREDIT)
Place an order for an options spread resulting in a net credit.
[More info](https://www.investopedia.com/ask/answers/042215/whats-difference-between-credit-spread-and-debt-spread.asp)
`NET_ZERO` *= 'NET_ZERO'*[¶](#tda.orders.common.OrderType.NET_ZERO)
Place an order for an options spread resulting in neither a credit nor a debit.
[More info](https://www.investopedia.com/ask/answers/042215/whats-difference-between-credit-spread-and-debt-spread.asp)
`OrderBuilder.``set_order_type`(*order_type*)[¶](#tda.orders.generic.OrderBuilder.set_order_type)
Set the order type. See [`OrderType`](#tda.orders.common.OrderType) for details.
`OrderBuilder.``clear_order_type`()[¶](#tda.orders.generic.OrderBuilder.clear_order_type)
Clear the order type.
#### Session and Duration[¶](#session-and-duration)
Together, these fields control when the order will be placed and how long it will remain active. Note `tda-api`’s [templates](index.html#order-templates) place orders that are active for the duration of the current normal trading session.
If you want to modify the default session and duration, you can use these methods to do so.
*class* `tda.orders.common.``Session`[¶](#tda.orders.common.Session)
The market session during which the order trade should be executed.
`NORMAL` *= 'NORMAL'*[¶](#tda.orders.common.Session.NORMAL)
Normal market hours, from 9:30am to 4:00pm Eastern.
`AM` *= 'AM'*[¶](#tda.orders.common.Session.AM)
Premarket session, from 8:00am to 9:30am Eastern.
`PM` *= 'PM'*[¶](#tda.orders.common.Session.PM)
After-market session, from 4:00pm to 8:00pm Eastern.
`SEAMLESS` *= 'SEAMLESS'*[¶](#tda.orders.common.Session.SEAMLESS)
Orders are active during all trading sessions except the overnight session. This is the union of `NORMAL`, `AM`, and `PM`.
*class* `tda.orders.common.``Duration`[¶](#tda.orders.common.Duration)
Length of time over which the trade will be active.
`DAY` *= 'DAY'*[¶](#tda.orders.common.Duration.DAY)
Cancel the trade at the end of the trading day. Note if the order cannot be filled all at once, you may see partial executions throughout the day.
`GOOD_TILL_CANCEL` *= 'GOOD_TILL_CANCEL'*[¶](#tda.orders.common.Duration.GOOD_TILL_CANCEL)
Keep the trade open for six months, or until the end of the cancel date,
whichever is shorter. Note if the order cannot be filled all at once, you may see partial executions over the lifetime of the order.
`FILL_OR_KILL` *= 'FILL_OR_KILL'*[¶](#tda.orders.common.Duration.FILL_OR_KILL)
Either execute the order immediately at the specified price, or cancel it immediately.
`OrderBuilder.``set_duration`(*duration*)[¶](#tda.orders.generic.OrderBuilder.set_duration)
Set the order duration. See [`Duration`](#tda.orders.common.Duration) for details.
`OrderBuilder.``clear_duration`()[¶](#tda.orders.generic.OrderBuilder.clear_duration)
Clear the order duration.
`OrderBuilder.``set_session`(*session*)[¶](#tda.orders.generic.OrderBuilder.set_session)
Set the order session. See [`Session`](#tda.orders.common.Session) for details.
`OrderBuilder.``clear_session`()[¶](#tda.orders.generic.OrderBuilder.clear_session)
Clear the order session.
#### Price[¶](#price)
Price is the amount you’d like to pay for each unit of the position you’re taking:
> * For equities and simple options limit orders, this is the price which you’d
> like to pay/receive.
> * For complex options limit orders (net debit/net credit), this is the total
> credit or debit you’d like to receive.
In other words, the price is the sum of the prices of the [Order Legs](#order-legs).
This is particularly powerful for complex multi-leg options orders, which support complex stop and/or limit orders that trigger when the price of a position reaches certain levels. In those cases, the price of an order can drop below the specified price as a result of movements in multiple legs of the trade.
##### Note on Truncation[¶](#note-on-truncation)
**Important Note:** Under the hood, the TDAmeritrade API expects price as a string, whereas `tda-api` allows setting prices as a floating point number for convenience. The passed value is then converted to a string under the hood,
which involves some truncation logic:
> * If the price has absolute value less than one, truncate (not round!) to
> four decimal places. For example, 0.186992 will become 0.1869.
> * For all other values, truncate to two decimal places. The above example would
> become 0.18.
This behavior is meant as a sane heuristic, and there are almost certainly situations where it is not the correct thing to do. You can sidestep this entire process by passing your price as a string, although be forewarned that TDAmeritrade may reject your order or even interpret it in unexpected ways.
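For illustration, here is a small sketch of the behavior described above; the commented results reflect the documented rules, assuming a default-constructed builder.
```
from tda.orders.generic import OrderBuilder

OrderBuilder().set_price(0.186992)   # stored as the string '0.1869'
OrderBuilder().set_price(190.906)    # stored as the string '190.90'
OrderBuilder().set_price('190.906')  # strings bypass truncation entirely
```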
`OrderBuilder.``set_price`(*price*)[¶](#tda.orders.generic.OrderBuilder.set_price)
Set the order price. Note price can be passed as either a float or an str. See [Note on Truncation](#number-truncation).
`OrderBuilder.``copy_price`(*price*)[¶](#tda.orders.generic.OrderBuilder.copy_price)
Directly set the price, avoiding all the validation and truncation logic from [`set_price()`](#tda.orders.generic.OrderBuilder.set_price).
`OrderBuilder.``clear_price`()[¶](#tda.orders.generic.OrderBuilder.clear_price)
Clear the order price
#### Order Legs[¶](#order-legs)
Order legs are where the actual assets being bought or sold are specified. For simple equity or single-options orders, there is just one leg. However, for complex multi-leg options trades, there can be more than one leg.
Note that order legs often do not execute all at once. Order legs can be executed over the specified [`Duration`](#tda.orders.common.Duration) of the order.
What’s more, if order legs request a large number of shares, legs themselves can be partially filled. You can control this setting using the
[`SpecialInstruction`](#tda.orders.common.SpecialInstruction) value `ALL_OR_NONE`.
With all that out of the way, order legs are relatively simple to specify.
`tda-api` currently supports equity and option order legs:
`OrderBuilder.``add_equity_leg`(*instruction*, *symbol*, *quantity*)[¶](#tda.orders.generic.OrderBuilder.add_equity_leg)
Add an equity order leg.
| Parameters: | * **instruction** – Instruction for the leg. See
[`EquityInstruction`](#tda.orders.common.EquityInstruction) for valid options.
* **symbol** – Equity symbol
* **quantity** – Number of shares for the order
|
*class* `tda.orders.common.``EquityInstruction`[¶](#tda.orders.common.EquityInstruction)
Instructions for opening and closing equity positions.
`BUY` *= 'BUY'*[¶](#tda.orders.common.EquityInstruction.BUY)
Open a long equity position
`SELL` *= 'SELL'*[¶](#tda.orders.common.EquityInstruction.SELL)
Close a long equity position
`SELL_SHORT` *= 'SELL_SHORT'*[¶](#tda.orders.common.EquityInstruction.SELL_SHORT)
Open a short equity position
`BUY_TO_COVER` *= 'BUY_TO_COVER'*[¶](#tda.orders.common.EquityInstruction.BUY_TO_COVER)
Close a short equity position
`OrderBuilder.``add_option_leg`(*instruction*, *symbol*, *quantity*)[¶](#tda.orders.generic.OrderBuilder.add_option_leg)
Add an option order leg.
| Parameters: | * **instruction** – Instruction for the leg. See
[`OptionInstruction`](#tda.orders.common.OptionInstruction) for valid options.
* **symbol** – Option symbol
* **quantity** – Number of contracts for the order
|
*class* `tda.orders.common.``OptionInstruction`[¶](#tda.orders.common.OptionInstruction)
Instructions for opening and closing options positions.
`BUY_TO_OPEN` *= 'BUY_TO_OPEN'*[¶](#tda.orders.common.OptionInstruction.BUY_TO_OPEN)
Enter a new long option position
`SELL_TO_CLOSE` *= 'SELL_TO_CLOSE'*[¶](#tda.orders.common.OptionInstruction.SELL_TO_CLOSE)
Exit an existing long option position
`SELL_TO_OPEN` *= 'SELL_TO_OPEN'*[¶](#tda.orders.common.OptionInstruction.SELL_TO_OPEN)
Enter a short position in an option
`BUY_TO_CLOSE` *= 'BUY_TO_CLOSE'*[¶](#tda.orders.common.OptionInstruction.BUY_TO_CLOSE)
Exit an existing short position in an option
`OrderBuilder.``clear_order_legs`()[¶](#tda.orders.generic.OrderBuilder.clear_order_legs)
Clear all order legs.
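Putting the pieces so far together, here is a hedged sketch that rebuilds the simple `MSFT` limit order from the introduction using the generic builder instead of a template. `OrderStrategyType` is covered under Composite Orders below, and `client` and `account_id` are assumed to already exist.
```
from tda.orders.common import (
    Duration, EquityInstruction, OrderStrategyType, OrderType, Session)
from tda.orders.generic import OrderBuilder

order = (OrderBuilder()
         .set_session(Session.NORMAL)
         .set_duration(Duration.DAY)
         .set_order_type(OrderType.LIMIT)
         .set_price('190.90')
         .add_equity_leg(EquityInstruction.BUY, 'MSFT', 1)
         .set_order_strategy_type(OrderStrategyType.SINGLE))

client.place_order(account_id, order.build())
```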
#### Requested Destination[¶](#requested-destination)
By default, TD Ameritrade sends trades to whichever exchange provides the best price. This field allows you to request a destination exchange for your trade,
although whether your order is actually executed there is up to TDA.
*class* `tda.orders.common.``Destination`[¶](#tda.orders.common.Destination)
Destinations for when you want to request a specific destination for your order.
`INET` *= 'INET'*[¶](#tda.orders.common.Destination.INET)
`ECN_ARCA` *= 'ECN_ARCA'*[¶](#tda.orders.common.Destination.ECN_ARCA)
`CBOE` *= 'CBOE'*[¶](#tda.orders.common.Destination.CBOE)
`AMEX` *= 'AMEX'*[¶](#tda.orders.common.Destination.AMEX)
`PHLX` *= 'PHLX'*[¶](#tda.orders.common.Destination.PHLX)
`ISE` *= 'ISE'*[¶](#tda.orders.common.Destination.ISE)
`BOX` *= 'BOX'*[¶](#tda.orders.common.Destination.BOX)
`NYSE` *= 'NYSE'*[¶](#tda.orders.common.Destination.NYSE)
`NASDAQ` *= 'NASDAQ'*[¶](#tda.orders.common.Destination.NASDAQ)
`BATS` *= 'BATS'*[¶](#tda.orders.common.Destination.BATS)
`C2` *= 'C2'*[¶](#tda.orders.common.Destination.C2)
`AUTO` *= 'AUTO'*[¶](#tda.orders.common.Destination.AUTO)
`OrderBuilder.``set_requested_destination`(*requested_destination*)[¶](#tda.orders.generic.OrderBuilder.set_requested_destination)
Set the requested destination. See
[`Destination`](#tda.orders.common.Destination) for details.
`OrderBuilder.``clear_requested_destination`()[¶](#tda.orders.generic.OrderBuilder.clear_requested_destination)
Clear the requested destination.
#### Special Instructions[¶](#special-instructions)
Trades can contain special instructions which handle some edge cases:
*class* `tda.orders.common.``SpecialInstruction`[¶](#tda.orders.common.SpecialInstruction)
Special instruction for trades.
`ALL_OR_NONE` *= 'ALL_OR_NONE'*[¶](#tda.orders.common.SpecialInstruction.ALL_OR_NONE)
Disallow partial order execution.
[More info](https://www.investopedia.com/terms/a/aon.asp).
`DO_NOT_REDUCE` *= 'DO_NOT_REDUCE'*[¶](#tda.orders.common.SpecialInstruction.DO_NOT_REDUCE)
Do not reduce order size in response to cash dividends.
[More info](https://www.investopedia.com/terms/d/dnr.asp).
`ALL_OR_NONE_DO_NOT_REDUCE` *= 'ALL_OR_NONE_DO_NOT_REDUCE'*[¶](#tda.orders.common.SpecialInstruction.ALL_OR_NONE_DO_NOT_REDUCE)
Combination of `ALL_OR_NONE` and `DO_NOT_REDUCE`.
`OrderBuilder.``set_special_instruction`(*special_instruction*)[¶](#tda.orders.generic.OrderBuilder.set_special_instruction)
Set the special instruction. See
[`SpecialInstruction`](#tda.orders.common.SpecialInstruction) for details.
`OrderBuilder.``clear_special_instruction`()[¶](#tda.orders.generic.OrderBuilder.clear_special_instruction)
Clear the special instruction.
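As a small illustration, here is a hedged sketch that requests all-or-none execution on a template order; `client` and `account_id` are assumed to already exist.
```
from tda.orders.common import SpecialInstruction
from tda.orders.equities import equity_buy_limit

order = (equity_buy_limit('MSFT', 100, 190.90)
         .set_special_instruction(SpecialInstruction.ALL_OR_NONE))

client.place_order(account_id, order.build())
```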
#### Complex Options Strategies[¶](#complex-options-strategies)
TD Ameritrade supports a number of complex options strategies. These strategies are complex affairs, with each leg of the trade specified in the order legs. TD performs additional validation on these strategies, so they are somewhat complicated to place. However, the benefit is more flexibility, as trades like trailing stop orders based on net debit/credit can be specified.
Unfortunately, due to the complexity of these orders and the lack of any real documentation, we cannot definitively say how to structure these orders. A few things have been observed, however:
> * The legs of the order can be placed by adding them as option order legs using
> [`add_option_leg()`](#tda.orders.generic.OrderBuilder.add_option_leg).
> * For spreads resulting in a net debit/credit, the price represents the overall
> debit or credit desired.
If you successfully use these strategies, we want to know about it. Please let us know by joining our [Discord server](https://discord.gg/nfrd9gh) to chat about it, or by [creating a feature request](https://github.com/alexgolec/tda-api/issues).
*class* `tda.orders.common.``ComplexOrderStrategyType`[¶](#tda.orders.common.ComplexOrderStrategyType)
Explicit order strategies for executing multi-leg options orders.
`NONE` *= 'NONE'*[¶](#tda.orders.common.ComplexOrderStrategyType.NONE)
No complex order strategy. This is the default.
`COVERED` *= 'COVERED'*[¶](#tda.orders.common.ComplexOrderStrategyType.COVERED)
[Covered call](https://tickertape.tdameritrade.com/trading/selling-covered-call-options-strategy-income-hedging-15135)
`VERTICAL` *= 'VERTICAL'*[¶](#tda.orders.common.ComplexOrderStrategyType.VERTICAL)
[Vertical spread](https://tickertape.tdameritrade.com/trading/vertical-credit-spreads-high-probability-15846)
`BACK_RATIO` *= 'BACK_RATIO'*[¶](#tda.orders.common.ComplexOrderStrategyType.BACK_RATIO)
[Ratio backspread](https://tickertape.tdameritrade.com/trading/pricey-stocks-ratio-spreads-15306)
`CALENDAR` *= 'CALENDAR'*[¶](#tda.orders.common.ComplexOrderStrategyType.CALENDAR)
[Calendar spread](https://tickertape.tdameritrade.com/trading/calendar-spreads-trading-primer-15095)
`DIAGONAL` *= 'DIAGONAL'*[¶](#tda.orders.common.ComplexOrderStrategyType.DIAGONAL)
[Diagonal spread](https://tickertape.tdameritrade.com/trading/love-your-diagonal-spread-15030)
`STRADDLE` *= 'STRADDLE'*[¶](#tda.orders.common.ComplexOrderStrategyType.STRADDLE)
[Straddle spread](https://tickertape.tdameritrade.com/trading/straddle-strangle-option-volatility-16208)
`STRANGLE` *= 'STRANGLE'*[¶](#tda.orders.common.ComplexOrderStrategyType.STRANGLE)
[Strangle spread](https://tickertape.tdameritrade.com/trading/straddle-strangle-option-volatility-16208)
`COLLAR_SYNTHETIC` *= 'COLLAR_SYNTHETIC'*[¶](#tda.orders.common.ComplexOrderStrategyType.COLLAR_SYNTHETIC)
`BUTTERFLY` *= 'BUTTERFLY'*[¶](#tda.orders.common.ComplexOrderStrategyType.BUTTERFLY)
[Butterfly spread](https://tickertape.tdameritrade.com/trading/butterfly-spread-options-15976)
`CONDOR` *= 'CONDOR'*[¶](#tda.orders.common.ComplexOrderStrategyType.CONDOR)
[Condor spread](https://www.investopedia.com/terms/c/condorspread.asp)
`IRON_CONDOR` *= 'IRON_CONDOR'*[¶](#tda.orders.common.ComplexOrderStrategyType.IRON_CONDOR)
[Iron condor spread](https://tickertape.tdameritrade.com/trading/iron-condor-options-spread-your-trading-wings-15948)
`VERTICAL_ROLL` *= 'VERTICAL_ROLL'*[¶](#tda.orders.common.ComplexOrderStrategyType.VERTICAL_ROLL)
[Roll a vertical spread](https://tickertape.tdameritrade.com/trading/exit-winning-losing-trades-16685)
`COLLAR_WITH_STOCK` *= 'COLLAR_WITH_STOCK'*[¶](#tda.orders.common.ComplexOrderStrategyType.COLLAR_WITH_STOCK)
[Collar strategy](https://tickertape.tdameritrade.com/trading/stock-hedge-options-collars-15529)
`DOUBLE_DIAGONAL` *= 'DOUBLE_DIAGONAL'*[¶](#tda.orders.common.ComplexOrderStrategyType.DOUBLE_DIAGONAL)
[Double diagonal spread](https://optionstradingiq.com/the-ultimate-guide-to-double-diagonal-spreads/)
`UNBALANCED_BUTTERFLY` *= 'UNBALANCED_BUTTERFLY'*[¶](#tda.orders.common.ComplexOrderStrategyType.UNBALANCED_BUTTERFLY)
[Unbalanced butterfly spread](https://tickertape.tdameritrade.com/trading/unbalanced-butterfly-strong-directional-bias-15913)
`UNBALANCED_CONDOR` *= 'UNBALANCED_CONDOR'*[¶](#tda.orders.common.ComplexOrderStrategyType.UNBALANCED_CONDOR)
`UNBALANCED_IRON_CONDOR` *= 'UNBALANCED_IRON_CONDOR'*[¶](#tda.orders.common.ComplexOrderStrategyType.UNBALANCED_IRON_CONDOR)
`UNBALANCED_VERTICAL_ROLL` *= 'UNBALANCED_VERTICAL_ROLL'*[¶](#tda.orders.common.ComplexOrderStrategyType.UNBALANCED_VERTICAL_ROLL)
`CUSTOM` *= 'CUSTOM'*[¶](#tda.orders.common.ComplexOrderStrategyType.CUSTOM)
A custom multi-leg order strategy.
`OrderBuilder.``set_complex_order_strategy_type`(*complex_order_strategy_type*)[¶](#tda.orders.generic.OrderBuilder.set_complex_order_strategy_type)
Set the complex order strategy type. See
[`ComplexOrderStrategyType`](#tda.orders.common.ComplexOrderStrategyType) for details.
`OrderBuilder.``clear_complex_order_strategy_type`()[¶](#tda.orders.generic.OrderBuilder.clear_complex_order_strategy_type)
Clear the complex order strategy type.
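Based only on the observations above, here is an unverified sketch of a manually constructed vertical call spread placed as a net debit. TDA may apply additional validation that rejects it, and the symbols and price are purely illustrative.
```
from tda.orders.common import (
    ComplexOrderStrategyType, Duration, OptionInstruction, OrderStrategyType,
    OrderType, Session)
from tda.orders.generic import OrderBuilder

spread = (OrderBuilder()
          .set_session(Session.NORMAL)
          .set_duration(Duration.DAY)
          .set_order_type(OrderType.NET_DEBIT)
          .set_complex_order_strategy_type(ComplexOrderStrategyType.VERTICAL)
          .set_price(4.70)  # overall net debit for the spread
          .add_option_leg(
              OptionInstruction.BUY_TO_OPEN, 'GOOG_012122C610', 1)
          .add_option_leg(
              OptionInstruction.SELL_TO_OPEN, 'GOOG_012122C620', 1)
          .set_order_strategy_type(OrderStrategyType.SINGLE))
```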
#### Composite Orders[¶](#composite-orders)
`tda-api` supports composite order strategies, in which execution of one order has an effect on another:
> * `OCO`, or “one cancels other” orders, consist of a pair of orders where
> execution of one immediately cancels the other.
> * `TRIGGER` orders consist of a pair of orders where execution of one
> immediately results in placement of the other.
`tda-api` provides helpers to specify these easily:
[`one_cancels_other()`](index.html#tda.orders.common.one_cancels_other) and
[`first_triggers_second()`](index.html#tda.orders.common.first_triggers_second). This is almost certainly easier than specifying these orders manually. However, if you still want to create them yourself, you can specify these composite order strategies like so:
*class* `tda.orders.common.``OrderStrategyType`[¶](#tda.orders.common.OrderStrategyType)
Rules for composite orders.
`SINGLE` *= 'SINGLE'*[¶](#tda.orders.common.OrderStrategyType.SINGLE)
No chaining, only a single order is submitted
`OCO` *= 'OCO'*[¶](#tda.orders.common.OrderStrategyType.OCO)
Execution of one order cancels the other
`TRIGGER` *= 'TRIGGER'*[¶](#tda.orders.common.OrderStrategyType.TRIGGER)
Execution of one order triggers placement of the other
`OrderBuilder.``set_order_strategy_type`(*order_strategy_type*)[¶](#tda.orders.generic.OrderBuilder.set_order_strategy_type)
Set the order strategy type. See
[`OrderStrategyType`](#tda.orders.common.OrderStrategyType) for more details.
`OrderBuilder.``clear_order_strategy_type`()[¶](#tda.orders.generic.OrderBuilder.clear_order_strategy_type)
Clear the order strategy type.
#### Undocumented Fields[¶](#undocumented-fields)
Unfortunately, your humble author is not an expert in all things trading. The order spec schema describes some things that are outside my ability to document,
so rather than make stuff up, I’m putting them here in the hopes that someone will come along and shed some light on them. You can make suggestions by filing an issue on our
[GitHub issues page](https://github.com/alexgolec/tda-api/issues),
or by joining our [Discord server](https://discord.gg/M3vjtHj).
##### Quantity[¶](#quantity)
This one seems obvious: doesn’t the quantity mean the number of shares I want to buy? The trouble is that the order legs also have a `quantity` field, which suggests this field means something else. The leading hypothesis is that it specifies the number of copies of the order to place, although we have yet to verify that.
`OrderBuilder.``set_quantity`(*quantity*)[¶](#tda.orders.generic.OrderBuilder.set_quantity)
Exact semantics unknown. See [Quantity](#undocumented-quantity) for a discussion.
`OrderBuilder.``clear_quantity`()[¶](#tda.orders.generic.OrderBuilder.clear_quantity)
Clear the order-level quantity. Note this does not affect order legs.
##### Stop Order Configuration[¶](#stop-order-configuration)
Stop orders and their variants (stop limit, trailing stop, trailing stop limit)
support some rather complex configuration. Both stop prices and the limit prices of the resulting order can be configured to follow the market in a dynamic fashion. The market dimensions that they follow can also be configured differently, and it appears that which dimensions are supported varies by order type.
We have unfortunately not yet done a thorough analysis of what’s supported, nor have we made the effort to make it simple and easy. While we’re *pretty* sure we understand how these fields work, they’ve been temporarily placed into the
“undocumented” section, pending a followup. Users are invited to experiment with these fields at their own risk.
`OrderBuilder.``set_stop_price`(*stop_price*)[¶](#tda.orders.generic.OrderBuilder.set_stop_price)
Set the stop price. Note price can be passed as either a float or an str. See [Note on Truncation](#number-truncation).
`OrderBuilder.``copy_stop_price`(*stop_price*)[¶](#tda.orders.generic.OrderBuilder.copy_stop_price)
Directly set the stop price, avoiding all the validation and truncation logic from [`set_stop_price()`](#tda.orders.generic.OrderBuilder.set_stop_price).
`OrderBuilder.``clear_stop_price`()[¶](#tda.orders.generic.OrderBuilder.clear_stop_price)
Clear the stop price.
*class* `tda.orders.common.``StopPriceLinkBasis`[¶](#tda.orders.common.StopPriceLinkBasis)
An enumeration.
`MANUAL` *= 'MANUAL'*[¶](#tda.orders.common.StopPriceLinkBasis.MANUAL)
`BASE` *= 'BASE'*[¶](#tda.orders.common.StopPriceLinkBasis.BASE)
`TRIGGER` *= 'TRIGGER'*[¶](#tda.orders.common.StopPriceLinkBasis.TRIGGER)
`LAST` *= 'LAST'*[¶](#tda.orders.common.StopPriceLinkBasis.LAST)
`BID` *= 'BID'*[¶](#tda.orders.common.StopPriceLinkBasis.BID)
`ASK` *= 'ASK'*[¶](#tda.orders.common.StopPriceLinkBasis.ASK)
`ASK_BID` *= 'ASK_BID'*[¶](#tda.orders.common.StopPriceLinkBasis.ASK_BID)
`MARK` *= 'MARK'*[¶](#tda.orders.common.StopPriceLinkBasis.MARK)
`AVERAGE` *= 'AVERAGE'*[¶](#tda.orders.common.StopPriceLinkBasis.AVERAGE)
`OrderBuilder.``set_stop_price_link_basis`(*stop_price_link_basis*)[¶](#tda.orders.generic.OrderBuilder.set_stop_price_link_basis)
Set the stop price link basis. See
[`StopPriceLinkBasis`](#tda.orders.common.StopPriceLinkBasis) for details.
`OrderBuilder.``clear_stop_price_link_basis`()[¶](#tda.orders.generic.OrderBuilder.clear_stop_price_link_basis)
Clear the stop price link basis.
*class* `tda.orders.common.``StopPriceLinkType`[¶](#tda.orders.common.StopPriceLinkType)
An enumeration.
`VALUE` *= 'VALUE'*[¶](#tda.orders.common.StopPriceLinkType.VALUE)
`PERCENT` *= 'PERCENT'*[¶](#tda.orders.common.StopPriceLinkType.PERCENT)
`TICK` *= 'TICK'*[¶](#tda.orders.common.StopPriceLinkType.TICK)
`OrderBuilder.``set_stop_price_link_type`(*stop_price_link_type*)[¶](#tda.orders.generic.OrderBuilder.set_stop_price_link_type)
Set the stop price link type. See
[`StopPriceLinkType`](#tda.orders.common.StopPriceLinkType) for details.
`OrderBuilder.``clear_stop_price_link_type`()[¶](#tda.orders.generic.OrderBuilder.clear_stop_price_link_type)
Clear the stop price link type.
`OrderBuilder.``set_stop_price_offset`(*stop_price_offset*)[¶](#tda.orders.generic.OrderBuilder.set_stop_price_offset)
Set the stop price offset.
`OrderBuilder.``clear_stop_price_offset`()[¶](#tda.orders.generic.OrderBuilder.clear_stop_price_offset)
Clear the stop price offset.
*class* `tda.orders.common.``StopType`[¶](#tda.orders.common.StopType)
An enumeration.
`STANDARD` *= 'STANDARD'*[¶](#tda.orders.common.StopType.STANDARD)
`BID` *= 'BID'*[¶](#tda.orders.common.StopType.BID)
`ASK` *= 'ASK'*[¶](#tda.orders.common.StopType.ASK)
`LAST` *= 'LAST'*[¶](#tda.orders.common.StopType.LAST)
`MARK` *= 'MARK'*[¶](#tda.orders.common.StopType.MARK)
`OrderBuilder.``set_stop_type`(*stop_type*)[¶](#tda.orders.generic.OrderBuilder.set_stop_type)
Set the stop type. See
[`StopType`](#tda.orders.common.StopType) for more details.
`OrderBuilder.``clear_stop_type`()[¶](#tda.orders.generic.OrderBuilder.clear_stop_type)
Clear the stop type.
*class* `tda.orders.common.``PriceLinkBasis`[¶](#tda.orders.common.PriceLinkBasis)
An enumeration.
`MANUAL` *= 'MANUAL'*[¶](#tda.orders.common.PriceLinkBasis.MANUAL)
`BASE` *= 'BASE'*[¶](#tda.orders.common.PriceLinkBasis.BASE)
`TRIGGER` *= 'TRIGGER'*[¶](#tda.orders.common.PriceLinkBasis.TRIGGER)
`LAST` *= 'LAST'*[¶](#tda.orders.common.PriceLinkBasis.LAST)
`BID` *= 'BID'*[¶](#tda.orders.common.PriceLinkBasis.BID)
`ASK` *= 'ASK'*[¶](#tda.orders.common.PriceLinkBasis.ASK)
`ASK_BID` *= 'ASK_BID'*[¶](#tda.orders.common.PriceLinkBasis.ASK_BID)
`MARK` *= 'MARK'*[¶](#tda.orders.common.PriceLinkBasis.MARK)
`AVERAGE` *= 'AVERAGE'*[¶](#tda.orders.common.PriceLinkBasis.AVERAGE)
`OrderBuilder.``set_price_link_basis`(*price_link_basis*)[¶](#tda.orders.generic.OrderBuilder.set_price_link_basis)
Set the price link basis. See
[`PriceLinkBasis`](#tda.orders.common.PriceLinkBasis) for details.
`OrderBuilder.``clear_price_link_basis`()[¶](#tda.orders.generic.OrderBuilder.clear_price_link_basis)
Clear the price link basis.
*class* `tda.orders.common.``PriceLinkType`[¶](#tda.orders.common.PriceLinkType)
An enumeration.
`VALUE` *= 'VALUE'*[¶](#tda.orders.common.PriceLinkType.VALUE)
`PERCENT` *= 'PERCENT'*[¶](#tda.orders.common.PriceLinkType.PERCENT)
`TICK` *= 'TICK'*[¶](#tda.orders.common.PriceLinkType.TICK)
`OrderBuilder.``set_price_link_type`(*price_link_type*)[¶](#tda.orders.generic.OrderBuilder.set_price_link_type)
Set the price link type. See
[`PriceLinkType`](#tda.orders.common.PriceLinkType) for more details.
`OrderBuilder.``clear_price_link_type`()[¶](#tda.orders.generic.OrderBuilder.clear_price_link_type)
Clear the price link type.
`OrderBuilder.``set_activation_price`(*activation_price*)[¶](#tda.orders.generic.OrderBuilder.set_activation_price)
Set the activation price.
`OrderBuilder.``clear_activation_price`()[¶](#tda.orders.generic.OrderBuilder.clear_activation_price)
Clear the activation price.
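To close out this reference, here is a heavily hedged, unverified sketch of a trailing stop sell built from the fields above, trailing the stop 5% below the last trade. Which field combinations TDA actually accepts may differ; experiment at your own risk.
```
from tda.orders.common import (
    Duration, EquityInstruction, OrderStrategyType, OrderType, Session,
    StopPriceLinkBasis, StopPriceLinkType)
from tda.orders.generic import OrderBuilder

trailing_stop = (OrderBuilder()
                 .set_session(Session.NORMAL)
                 .set_duration(Duration.GOOD_TILL_CANCEL)
                 .set_order_type(OrderType.TRAILING_STOP)
                 .set_stop_price_link_basis(StopPriceLinkBasis.LAST)
                 .set_stop_price_link_type(StopPriceLinkType.PERCENT)
                 .set_stop_price_offset(5)
                 .add_equity_leg(EquityInstruction.SELL, 'MSFT', 100)
                 .set_order_strategy_type(OrderStrategyType.SINGLE))
```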
Utilities[¶](#utilities)
---
This section describes miscellaneous utility methods provided by `tda-api`.
All utilities are presented under the `Utils` class:
*class* `tda.utils.``Utils`(*client*, *account_id*)[¶](#tda.utils.Utils)
Helper for placing orders on equities. Provides easy-to-use implementations for common tasks such as market and limit orders.
`__init__`(*client*, *account_id*)[¶](#tda.utils.Utils.__init__)
Creates a new `Utils` instance. For convenience, this object assumes the user wants to work with a single account ID at a time.
`set_account_id`(*account_id*)[¶](#tda.utils.Utils.set_account_id)
Set the account ID used by this `Utils` instance.
### Get the Most Recent Order[¶](#get-the-most-recent-order)
For successfully placed orders, [`tda.client.Client.place_order()`](index.html#tda.client.Client.place_order) returns the ID of the newly created order, encoded in the `r.headers['Location']`
header. This method inspects the response and extracts the order ID from the contents, if it’s there. This order ID can then be used to monitor or modify the order as described in the [Client documentation](index.html#orders-section). Example usage:
```
# Assume client and order already exist and are valid
account_id = 123456
r = client.place_order(account_id, order)
assert r.status_code == httpx.codes.OK, r.raise_for_status()
order_id = Utils(client, account_id).extract_order_id(r)
assert order_id is not None
```
`Utils.``extract_order_id`(*place_order_response*)[¶](#tda.utils.Utils.extract_order_id)
Attempts to extract the order ID from a response object returned by
[`Client.place_order()`](index.html#tda.client.Client.place_order). Return
`None` if the order location is not contained in the response.
| Parameters: | **place_order_response** – Order response as returned by
[`Client.place_order()`](index.html#tda.client.Client.place_order). Note this method requires that the order was successful. |
| Raises: | **ValueError** – if the order was not successful or if the order’s account ID is not equal to the account ID set in this
`Utils` object. |
Example Application[¶](#example-application)
---
To illustrate some of the functionality of `tda-api`, here is an example application that finds stocks that pay a dividend during the month of your birthday and purchases one of each.
```
import atexit
import datetime
import dateutil.parser
import httpx
import sys
import tda
API_KEY = '[email protected]'
REDIRECT_URI = 'https://localhost:8080/'
TOKEN_PATH = 'ameritrade-credentials.json'
YOUR_BIRTHDAY = datetime.datetime(year=1969, month=4, day=20)
SP500_URL = "https://tda-api.readthedocs.io/en/latest/_static/sp500.txt"
def make_webdriver():
# Import selenium here because it's slow to import
from selenium import webdriver
driver = webdriver.Chrome()
atexit.register(lambda: driver.quit())
return driver
# Create a new client
client = tda.auth.easy_client(
API_KEY,
REDIRECT_URI,
TOKEN_PATH,
make_webdriver)
# Load S&P 500 composition from documentation
sp500 = httpx.get(
SP500_URL, headers={
"User-Agent": "Mozilla/5.0"}).read().decode().split()
# Fetch fundamentals for all symbols and filter out the ones with ex-dividend
# dates in the future and dividend payment dates on your birth month. Note we
# perform the fetch in two calls because the API places an upper limit on the
# number of symbols you can fetch at once.
today = datetime.datetime.today()
birth_month_dividends = []
for s in (sp500[:250], sp500[250:]):
r = client.search_instruments(
s, tda.client.Client.Instrument.Projection.FUNDAMENTAL)
assert r.status_code == httpx.codes.OK, r.raise_for_status()
for symbol, f in r.json().items():
# Parse ex-dividend date
ex_div_string = f['fundamental']['dividendDate']
if not ex_div_string.strip():
continue
ex_dividend_date = dateutil.parser.parse(ex_div_string)
# Parse payment date
pay_date_string = f['fundamental']['dividendPayDate']
if not pay_date_string.strip():
continue
pay_date = dateutil.parser.parse(pay_date_string)
# Check dates
if (ex_dividend_date > today
and pay_date.month == YOUR_BIRTHDAY.month):
birth_month_dividends.append(symbol)
if not birth_month_dividends:
print('Sorry, no stocks are paying out in your birth month yet. This is ',
'most likely because the dividends haven\'t been announced yet. ',
'Try again closer to your birthday.')
sys.exit(1)
# Purchase one share of each of the stocks that pay in your birthday month.
account_id = int(input(
'Input your TDA account number to place orders (<Ctrl-C> to quit): '))
for symbol in birth_month_dividends:
print('Buying one share of', symbol)
# Build the order spec and place the order
order = tda.orders.equities.equity_buy_market(symbol, 1)
r = client.place_order(account_id, order)
assert r.status_code == httpx.codes.OK, r.raise_for_status()
```
Getting Help[¶](#getting-help)
---
Even the most experienced developer needs help on occasion. This page describes how you can get help and make progress.
### Asking for Help on Discord[¶](#asking-for-help-on-discord)
`tda-api` has a vibrant community that hangs out in our [discord server](https://discord.gg/M3vjtHj). If you’re having any sort of trouble, this server should be your first stop. Just make sure you follow a few rules to ask for help.
#### Provide Adequate Information[¶](#provide-adequate-information)
Nothing makes it easier to help you than information. The more information you provide, the easier it’ll be to help you. If you are asking for advice on how to do something, share whatever code you’ve written or research you’ve performed. If you’re asking for help about an error, make sure you provide **at least** the following information:
> 1. Your OS (Windows? Mac OS? Linux?) and execution environment (VSCode? A raw
> terminal? A docker container in the cloud?)
> 2. Your `tda-api` version. You can see this by executing
> `print(tda.__version__)` in a python shell.
> 3. The full stack trace and error message. Descriptions of errors will be met
> with requests to provide more information.
> 4. Code that reproduces the error. If you’re shy about your code, write a small
> script that reproduces the error when run.
Optionally, you may want to share diagnostic logs generated by `tda-api`. Not only does this provide even more information to the community, reading through the logs might also help you solve the problem yourself. You can read about enabling logging [here](#enable-logging).
#### Format Your Request Properly[¶](#format-your-request-properly)
Take advantage of Discord’s wonderful [support for code blocks](https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline-)
and format your error, stack traces, and code using triple backticks. To do this, put ``` before and after your message. Failing to do this will be met with a request to edit your message to be better formatted.
### Reporting a Bug[¶](#reporting-a-bug)
`tda-api` is not perfect. Features are missing, documentation may be out of date, and it almost certainly contains bugs. If you think of a way in which
`tda-api` can be improved, we’re more than happy to hear it.
This section outlines the process for getting help if you found a bug. If you need general help using `tda-api`, or just want to chat with other people interested in developing trading strategies, you can
[join our discord](https://discord.gg/M3vjtHj).
If you still want to submit an issue, we ask that you follow a few guidelines to make everyone’s lives easier:
#### Enable Logging[¶](#enable-logging)
Behind the scenes, `tda-api` performs diagnostic logging of its activity using Python’s [logging](https://docs.python.org/3/library/logging.html) module.
You can enable this debug information by telling the root logger to print these messages:
```
import logging

logging.getLogger('').addHandler(logging.StreamHandler())
```
Sometimes, this additional logging is enough to help you debug. Before you ask for help, carefully read through your logs to see if there’s anything there that helps you.
#### Gather Logs For Your Bug Report[¶](#gather-logs-for-your-bug-report)
If you still can’t figure out what’s going wrong, `tda-api` has special functionality for gathering and preparing logs for filing issues. It works by capturing `tda-api`’s logs, anonymizing them, and then dumping them to the console when the program exits. You can enable this by calling this method
**before doing anything else in your application**:
```
tda.debug.enable_bug_report_logging()
```
This method will redact the logs to scrub them of common secrets, like account IDs, tokens, access keys, etc. However, this redaction is not guaranteed to be perfect, and it is your responsibility to make sure they are clean before you ask for help.
When filing an issue, please upload the logs along with your description. **If you do not include logs with your issue, your issue may be closed**.
For completeness, here is this method’s documentation:
`debug.``enable_bug_report_logging`()[¶](#tda.debug.enable_bug_report_logging)
Turns on bug report logging. Will collect all logged output, redact out anything that should be kept secret, and emit the result at program exit.
Notes:
* This method does a best effort redaction. Never share its output without verifying that all secret information is properly redacted.
* Because this function records all logged output, it has a performance penalty. It should not be called in production code.
#### Submit Your Ticket[¶](#submit-your-ticket)
You are now ready to write your bug report. Before you do, be warned that your issue may be closed if:
> * It does not include code. The first thing we do when we receive your issue is
> we try to reproduce your failure. We can’t do that if you don’t show us your
> code.
> * It does not include logs. It’s very difficult to debug problems without logs.
> * Logs are not adequately redacted. This is for your own protection.
> * Logs are copy-pasted into the issue message field. Please write them to a
> file and attach them to your issue.
> * You do not follow the issue template. We’re not *super* strict about this
> one, but you should at least include all the information it asks for.
You can file an issue on our [GitHub page](https://github.com/alexgolec/tda-api/issues).
Community-Contributed Functionality[¶](#community-contributed-functionality)
---
When maintaining `tda-api`, the authors have two goals: make common things easy, and make uncommon things possible. This meets the needs of the vast majority of the community, while allowing advanced users or those with very niche requirements to make progress using potentially custom approaches.
However, this philosophy explicitly excludes functionality that is potentially useful to many users but is not directly related to the core functionality of the API wrapper. This is where the `contrib` module comes into play.
This module is a collection of high-quality code that was produced by the community and for the community. It includes utility methods that provide additional functionality beyond the core library, fixes for quirks in API behavior, etc. This page lists the available functionality. If you’d like to discuss this or propose/request new additions, please join our [Discord server](https://discord.gg/Ddha8cm6dx).
### Custom JSON Decoding[¶](#custom-json-decoding)
TDA’s API occasionally emits invalid JSON in the stream. This class implements all known workarounds and hacks to get around these quirks:
*class* `tda.contrib.util.``HeuristicJsonDecoder`[¶](#tda.contrib.util.HeuristicJsonDecoder)
`decode_json_string`(*raw*)[¶](#tda.contrib.util.HeuristicJsonDecoder.decode_json_string)
Attempts the following, in order:
1. Return the JSON decoding of the raw string.
2. Replace all instances of `\\\\` with `\\` and return the
decoding.
Note that alternative (and potentially expensive) transformations are only
performed when `JSONDecodeError` exceptions are raised by earlier
stages.
You can use it as follows:
```
from tda.contrib.util import HeuristicJsonDecoder
stream_client = # ... create your stream
stream_client.set_json_decoder(HeuristicJsonDecoder())
# ... continue as normal
```
If you encounter invalid stream items that are not fixed by using this decoder,
please let us know in our [Discord server](https://discord.gg/Ddha8cm6dx) or follow the guide in [Contributing to tda-api](index.html#contributing) to add new functionality.
Contributing to `tda-api`[¶](#contributing-to-tda-api)
---
Fixing a bug? Adding a feature? Just cleaning up for the sake of cleaning up?
Great! No improvement is too small for me, and I’m always happy to take pull requests. Read this guide to learn how to set up your environment so you can contribute.
### Setting up the Dev Environment[¶](#setting-up-the-dev-environment)
Dependencies are listed in the requirements.txt file. These development requirements are distinct from the requirements listed in setup.py and include some additional packages around testing, documentation generation, etc.
Before you install anything, I highly recommend setting up a virtualenv so you don’t pollute your system installation directories:
```
pip install virtualenv
virtualenv -v virtualenv
source virtualenv/bin/activate
```
Next, install project requirements:
```
pip install -r requirements.txt
```
Finally, verify everything works by running tests:
```
make test
```
At this point you can make your changes.
Note that if you are using a virtual environment and switch to a new terminal, the virtual environment will not be active in the new terminal, and you need to run the activate command again.
If you want to deactivate the virtual environment in the same terminal window, use the command:
```
deactivate
```
### Development Guidelines[¶](#development-guidelines)
#### Test your changes[¶](#test-your-changes)
This project aims for high test coverage. All changes must be properly tested,
and we will accept no PRs that lack appropriate unit testing. We also expect existing tests to pass. You can run your tests using:
```
make test
```
#### Document your code[¶](#document-your-code)
Documentation is how users learn to use your code, and no feature is complete without a full description of how to use it. If your PR changes external-facing interfaces, or if it alters semantics, the changes must be thoroughly described in the docstrings of the affected components. If your change adds a substantial new module, a new section in the documentation may be justified.
Documentation is built using [Sphinx](https://www.sphinx-doc.org/en/master/).
You can build the documentation using the `Makefile.sphinx` makefile. For example, you can build the HTML documentation like so:
```
make -f Makefile.sphinx
```
Indices and tables[¶](#indices-and-tables)
===
* [Index](genindex.html)
* [Module Index](py-modindex.html)
* [Search Page](search.html)
**Disclaimer:** *tda-api is an unofficial API wrapper. It is in no way endorsed by or affiliated with TD Ameritrade or any associated organization.
Make sure to read and understand the terms of service of the underlying API before using this package. The authors accept no responsibility for any damage that might stem from use of this package. See the LICENSE file for more details.* |
melodium-repository | rust | Rust | Crate melodium_repository
===
Mélodium repository crate
---
Mélodium repository utilities.
This crate provides repository logic for the Mélodium environment.
Look at the Mélodium crate or the Mélodium Project for more detailed information.
### Features
* `network` (disabled by default): allows network access to retrieve packages;
* `cargo` (disabled by default): allows extracting package information from `Cargo.toml` files.
### Network security
This crate uses different security implementations depending on the platform it is built for.
When built for `apple` targets, it uses the native system TLS implementation.
When built for Windows systems with the `msvc` target, it uses the MS TLS implementation.
For all other targets, `rustls` is used.
This is only applicable when the `network` feature is enabled.
Re-exports
---
* `pub use error::RepositoryError;`
* `pub use error::RepositoryResult;`
* `pub use repository::Repository;`
* `pub use repository_config::RepositoryConfig;`
Modules
---
* error
* global
* network
* repository
* repository_config
* technical
* utils (`cargo` feature only)
Struct melodium_repository::error::RepositoryError
===
```
pub struct RepositoryError {
pub id: u32,
pub kind: RepositoryErrorKind,
}
```
Fields
---
`id: u32`
`kind: RepositoryErrorKind`
Implementations
---
### impl RepositoryError
#### pub fn already_existing_package(
id: u32,
package: String,
version: Version
) -> Self
#### pub fn unknown_package(id: u32, package: String, version: Version) -> Self
#### pub fn fs_error(id: u32, error: Error) -> Self
#### pub fn json_error(id: u32, error: Error) -> Self
#### pub fn no_network(id: u32) -> Self
#### pub fn network_error(id: u32, error: String) -> Self
#### pub fn platform_dependent(id: u32, package: String, version: Version) -> Self
#### pub fn not_platform_dependent(
id: u32,
package: String,
version: Version
) -> Self
#### pub fn platform_unavailable(
id: u32,
package: String,
version: Version,
platform: Platform,
availability: Availability
) -> Self
#### pub fn package_element_absent(
id: u32,
package: String,
version: Version,
platform: Option<(Platform, Availability)>,
element: Element,
path: PathBuf
) -> Self
Trait Implementations
---
### impl Debug for RepositoryError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for RepositoryError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl !RefUnwindSafe for RepositoryError
### impl Send for RepositoryError
### impl Sync for RepositoryError
### impl Unpin for RepositoryError
### impl !UnwindSafe for RepositoryError
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
T: Any,
#### fn into_any(self: Box<T, Global>) -> Box<dyn Any, GlobalConvert `Box<dyn Trait>` (where `Trait: Downcast`) to `Box<dyn Any>`. `Box<dyn Any>` can then be further `downcast` into `Box<ConcreteType>` where `ConcreteType` implements `Trait`.#### fn into_any_rc(self: Rc<T, Global>) -> Rc<dyn Any, GlobalConvert `Rc<Trait>` (where `Trait: Downcast`) to `Rc<Any>`. `Rc<Any>` can then be further `downcast` into `Rc<ConcreteType>` where `ConcreteType` implements `Trait`.#### fn as_any(&self) -> &(dyn Any + 'static)
Convert `&Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&Any`’s vtable from `&Trait`’s.#### fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert `&mut Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&mut Any`’s vtable from `&mut Trait`’s.### impl<T> DowncastSync for Twhere
T: Any + Send + Sync,
#### fn into_any_arc(self: Arc<T, Global>) -> Arc<dyn Any + Sync + Send, GlobalConvert `Arc<Trait>` (where `Trait: Downcast`) to `Arc<Any>`. `Arc<Any>` can then be further `downcast` into `Arc<ConcreteType>` where `ConcreteType` implements `Trait`.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> ToString for Twhere
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct melodium_repository::repository::Repository
===
```
pub struct Repository { /* private fields */ }
```
Implementations
---
### impl Repository
#### pub fn new(config: RepositoryConfig) -> Self
#### pub fn config(&self) -> &RepositoryConfig
#### pub fn packages(&self) -> &Vec<Package>
#### pub fn add_package(&mut self, package: Package) -> RepositoryResult<()>
#### pub fn load_packages(&mut self) -> RepositoryResult<()>
#### pub fn load_packages_with_network(&mut self) -> RepositoryResult<()>
#### pub fn set_platform_availability(
&mut self,
package: &Package,
platform: &Platform,
availability: &Availability,
element: Element
) -> RepositoryResult<()>
Set the availability of an element of a package for a given platform.
If the package is of a type that has no notion of platform availability, this function returns an error.
#### pub fn get_platform_availability(
&self,
package: &Package,
platform: &Platform,
availability: &Availability
) -> RepositoryResult<Element>
Get the availability of an element of a package for a given platform.
#### pub fn get_platform_availability_with_network(
&mut self,
package: &Package,
platform: &Platform,
availability: &Availability
) -> RepositoryResult<Element>
Get the availability of an element of a package for a given platform, querying the network if it is not available locally.
#### pub fn get_package_element(
&self,
package: &Package,
platform_availability: Option<(&Platform, &Availability)>
) -> RepositoryResult<Element>
Get the element of a package for a given platform.
This function only checks the registered availability, not whether the element is actually present on disk.
See also [reach_platform_element].
#### pub fn get_package_element_path(
&self,
package: &Package,
platform_availability: Option<(&Platform, &Availability)>
) -> RepositoryResult<PathBuf>
Get the full path of an element of a package.
This function only checks the registered availability, not whether the element is actually present on disk.
See also [reach_platform_element].
#### pub fn reach_package_element(
&self,
package: &Package,
platform_availability: Option<(&Platform, &Availability)>
) -> RepositoryResult<PathBuf>
Get the full path of a present element of a package.
This function returns an error if the element is not present on the filesystem.
#### pub fn reach_package_element_with_network(
&mut self,
package: &Package,
platform_availability: Option<(&Platform, &Availability)>
) -> RepositoryResult<PathBuf>
Get the full path of a present element of a package for a given platform.
This function tries to download the element if it is not present on the filesystem.
#### pub fn remove_package(
&mut self,
name: &str,
version: &Version
) -> RepositoryResult<()>
#### pub fn get_package(
&self,
name: &str,
version_req: &VersionReq
) -> RepositoryResult<Option<Package>>
#### pub fn get_package_with_network(
&mut self,
name: &str,
version_req: &VersionReq
) -> RepositoryResult<Option<Package>>
#### pub fn set_package_details(
&self,
details: &PackageDetails
) -> RepositoryResult<()>
#### pub fn get_package_details(
&self,
package: &Package
) -> RepositoryResult<PackageDetails>
#### pub fn get_package_details_with_network(
&self,
package: &Package
) -> RepositoryResult<PackageDetails>
Trait Implementations
---
### impl Debug for Repository
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for Repository
### impl Send for Repository
### impl Sync for Repository
### impl Unpin for Repository
### impl UnwindSafe for Repository
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
T: Any,
#### fn into_any(self: Box<T, Global>) -> Box<dyn Any, GlobalConvert `Box<dyn Trait>` (where `Trait: Downcast`) to `Box<dyn Any>`. `Box<dyn Any>` can then be further `downcast` into `Box<ConcreteType>` where `ConcreteType` implements `Trait`.#### fn into_any_rc(self: Rc<T, Global>) -> Rc<dyn Any, GlobalConvert `Rc<Trait>` (where `Trait: Downcast`) to `Rc<Any>`. `Rc<Any>` can then be further `downcast` into `Rc<ConcreteType>` where `ConcreteType` implements `Trait`.#### fn as_any(&self) -> &(dyn Any + 'static)
Convert `&Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&Any`’s vtable from `&Trait`’s.#### fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert `&mut Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&mut Any`’s vtable from `&mut Trait`’s.### impl<T> DowncastSync for Twhere
T: Any + Send + Sync,
#### fn into_any_arc(self: Arc<T, Global>) -> Arc<dyn Any + Sync + Send, GlobalConvert `Arc<Trait>` (where `Trait: Downcast`) to `Arc<Any>`. `Arc<Any>` can then be further `downcast` into `Arc<ConcreteType>` where `ConcreteType` implements `Trait`.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct melodium_repository::repository_config::RepositoryConfig
===
```
pub struct RepositoryConfig {
pub repository_location: PathBuf,
pub network: Option<NetworkRepositoryConfiguration>,
}
```
Fields
---
`repository_location: PathBuf`
`network: Option<NetworkRepositoryConfiguration>`
Trait Implementations
---
### impl Clone for RepositoryConfig
#### fn clone(&self) -> RepositoryConfig
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for RepositoryConfig
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for RepositoryConfig
### impl Send for RepositoryConfig
### impl Sync for RepositoryConfig
### impl Unpin for RepositoryConfig
### impl UnwindSafe for RepositoryConfig
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
T: Any,
#### fn into_any(self: Box<T, Global>) -> Box<dyn Any, GlobalConvert `Box<dyn Trait>` (where `Trait: Downcast`) to `Box<dyn Any>`. `Box<dyn Any>` can then be further `downcast` into `Box<ConcreteType>` where `ConcreteType` implements `Trait`.#### fn into_any_rc(self: Rc<T, Global>) -> Rc<dyn Any, GlobalConvert `Rc<Trait>` (where `Trait: Downcast`) to `Rc<Any>`. `Rc<Any>` can then be further `downcast` into `Rc<ConcreteType>` where `ConcreteType` implements `Trait`.#### fn as_any(&self) -> &(dyn Any + 'static)
Convert `&Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&Any`’s vtable from `&Trait`’s.#### fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert `&mut Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&mut Any`’s vtable from `&mut Trait`’s.### impl<T> DowncastSync for Twhere
T: Any + Send + Sync,
#### fn into_any_arc(self: Arc<T, Global>) -> Arc<dyn Any + Sync + Send, GlobalConvert `Arc<Trait>` (where `Trait: Downcast`) to `Arc<Any>`. `Arc<Any>` can then be further `downcast` into `Arc<ConcreteType>` where `ConcreteType` implements `Trait`.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V |
simest | cran | R | Package ‘simest’
October 14, 2022
Title Constrained Single Index Model Estimation
Type Package
LazyLoad yes
LazyData yes
Version 0.4
Author <NAME> <<EMAIL>>,
<NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Date 2017-04-08.
Depends nnls, cobs
Description Estimation of function and index vector in single index model with and with-
out shape constraints including different smoothness conditions.
License GPL-2
NeedsCompilation yes
Repository CRAN
Date/Publication 2017-04-25 14:35:06 UTC
R topics documented:
cpe... 2
cvx.lip.re... 3
cvx.lse.con.re... 5
cvx.lse.re... 6
cvx.pen.re... 8
derivcvxpe... 10
fastmerg... 11
pent... 12
predcvxpe... 12
sim.es... 13
simestgc... 16
smooth.pen.re... 18
solve.pentadia... 20
spen_egc... 21
cpen C code for convex penalized least squares regression.
Description
This function is only intended for an internal use.
Usage
cpen(dim, t_input, z_input, w_input, a0_input,
lambda_input, Ky_input, L_input, U_input,
fun_input, res_input, flag, tol_input,
zhat_input, iter, Deriv_input)
Arguments
dim vector of sample size and maximum iteration.
t_input x-vector in cvx.pen.reg.
z_input y-vector in cvx.pen.reg.
w_input w-vector in cvx.pen.reg.
a0_input initial vector for iterative algorithm.
lambda_input lambda-value in cvx.pen.reg.
Ky_input Internal vector used for algorithm.
L_input Internal vector. Set to 0.
U_input Internal vector. Set to 0.
fun_input Internal vector. Set to 0.
res_input Internal vector. Set to 0.
flag Logical for stop criterion.
tol_input tolerance level used in cvx.pen.reg.
zhat_input Internal vector. Set to zero. Stores the final output.
iter Iteration number inside the algorithm.
Deriv_input Internal vector. Set to zero. Stores the derivative vector.
Details
See the source for more details about the algorithm.
Value
Does not return anything. Changes the inputs according to the iterations.
Author(s)
<NAME>, <EMAIL>.
Source
<NAME>., <NAME>. and <NAME>. (2003). Quadratic Convergence of Newton’s Method for Convex
Interpolation and Smoothing. Constructive Approximation, 19(1):123-143.
cvx.lip.reg Convex Least Squares Regression with Lipschitz Constraint
Description
This function provides an estimate of the non-parametric regression function with a shape constraint
of convexity and a smoothness constraint via a Lipschitz bound.
Usage
cvx.lip.reg(t, z, w = NULL, L,...)
## Default S3 method:
cvx.lip.reg(t, z, w = NULL, L, ...)
## S3 method for class 'cvx.lip.reg'
plot(x,...)
## S3 method for class 'cvx.lip.reg'
print(x,...)
## S3 method for class 'cvx.lip.reg'
predict(object, newdata = NULL, deriv = 0, ...)
Arguments
t a numeric vector giving the values of the predictor variable.
z a numeric vector giving the values of the response variable.
w an optional numeric vector of the same length as x; Defaults to all elements 1/n.
L a numeric value providing the Lipschitz bound on the function.
... additional arguments.
x an object of class ‘cvx.lip.reg’.
object An object of class ‘cvx.lip.reg’.
newdata a matrix of new data points in the predict function.
deriv a numeric either 0 or 1 representing which derivative to evaluate.
Details
The function minimizes
$$\sum_{i=1}^{n} w_i (z_i - \theta_i)^2$$
subject to
$$-L \le \frac{\theta_2 - \theta_1}{t_2 - t_1} \le \cdots \le \frac{\theta_n - \theta_{n-1}}{t_n - t_{n-1}} \le L$$
for sorted t values and z reorganized such that zi corresponds to the new sorted ti . This function uses
the nnls function from the nnls package to perform the constrained minimization of least squares.
plot function provides the scatterplot along with fitted curve; it also includes some diagnostic plots
for residuals. Predict function now allows calculating the first derivative also.
Value
An object of class ‘cvx.lip.reg’, basically a list including the elements
x.values sorted ‘t’ values provided as input.
y.values corresponding ‘z’ values in input.
fit.values corresponding fit values of same length as that of ‘x.values’.
deriv corresponding values of the derivative of same length as that of ‘x.values’.
residuals residuals obtained from the fit.
minvalue minimum value of the objective function attained.
iter Always set to 1.
convergence a numeric indicating the convergence of the code.
Author(s)
<NAME>, <EMAIL>.
Source
<NAME> and <NAME>. (1995). Solving Least Squares Problems. SIAM.
References
<NAME>. and <NAME>. (2009). Non-negativity Constraints in Numerical Analysis. Sympo-
sium on the Birth of Numerical Analysis.
See Also
See also the function nnls.
Examples
args(cvx.lip.reg)
x <- runif(50,-1,1)
y <- x^2 + rnorm(50,0,0.3)
tmp <- cvx.lip.reg(x, y, L = 10)
print(tmp)
plot(tmp)
predict(tmp, newdata = rnorm(10,0,0.1))
cvx.lse.con.reg Convex Least Squares Regression.
Description
This function provides an estimate of the non-parametric regression function with a shape constraint
of convexity and no smoothness constraint. Note that convexity by itself provides some implicit
smoothness.
Usage
cvx.lse.con.reg(t, z, w = NULL,...)
## Default S3 method:
cvx.lse.con.reg(t, z, w = NULL, ...)
Arguments
t a numeric vector giving the values of the predictor variable.
z a numeric vector giving the values of the response variable.
w an optional numeric vector of the same length as t; Defaults to all elements 1/n.
... additional arguments.
Details
This function does the same thing as cvx.lse.reg except that here we use conreg function from
cobs package which is faster than cvx.lse.reg. The plot, predict, print functions of cvx.lse.reg
also apply for cvx.lse.con.reg.
Value
An object of class ‘cvx.lse.reg’, basically a list including the elements
x.values sorted ‘t’ values provided as input.
y.values corresponding ‘z’ values in input.
fit.values corresponding fit values of same length as that of ‘x.values’.
deriv corresponding values of the derivative of same length as that of ‘x.values’.
iter number of steps taken to complete the iterations.
residuals residuals obtained from the fit.
minvalue minimum value of the objective function attained.
convergence a numeric indicating the convergence of the code. Always set to 1.
Author(s)
<NAME>, <EMAIL>
Source
<NAME> and <NAME>. (1995). Solving Least Squares Problems. SIAM.
References
<NAME>. and <NAME>. (2009). Non-negativity Constraints in Numerical Analysis. Sympo-
sium on the Birth of Numerical Analysis.
<NAME>. and <NAME>. (2014). coneproj: An R package for the primal or dual cone projections
with routines for constrained regression. Journal of Statistical Software 61(12), 1 – 22.
Examples
args(cvx.lse.con.reg)
x <- runif(50,-1,1)
y <- x^2 + rnorm(50,0,0.3)
tmp <- cvx.lse.con.reg(x, y)
print(tmp)
plot(tmp)
predict(tmp, newdata = rnorm(10,0,0.1))
cvx.lse.reg Convex Least Squares Regression.
Description
This function provides an estimate of the non-parametric regression function with a shape constraint
of convexity and no smoothness constraint. Note that convexity by itself provides some implicit
smoothness.
Usage
cvx.lse.reg(t, z, w = NULL,...)
## Default S3 method:
cvx.lse.reg(t, z, w = NULL, ...)
## S3 method for class 'cvx.lse.reg'
plot(x,...)
## S3 method for class 'cvx.lse.reg'
print(x,...)
## S3 method for class 'cvx.lse.reg'
predict(object, newdata = NULL, deriv = 0, ...)
Arguments
t a numeric vector giving the values of the predictor variable.
z a numeric vector giving the values of the response variable.
w an optional numeric vector of the same length as t; Defaults to all elements 1/n.
... additional arguments.
x An object of class ‘cvx.lse.reg’. This is for plot and print function.
object An object of class ‘cvx.lse.reg’.
newdata a matrix of new data points in the predict function.
deriv a numeric either 0 or 1 representing which derivative to evaluate.
Details
The function minimizes
$$\sum_{i=1}^{n} w_i (z_i - \theta_i)^2$$
subject to
$$\frac{\theta_2 - \theta_1}{t_2 - t_1} \le \cdots \le \frac{\theta_n - \theta_{n-1}}{t_n - t_{n-1}}$$
for sorted t values and z reorganized such that zi corresponds to the new sorted ti . This function
previously used the coneA function from the coneproj package to perform the constrained mini-
mization of least squares. Currently, the code makes use of the nnls function from nnls package
for the same purpose. plot function provides the scatterplot along with fitted curve; it also includes
some diagnostic plots for residuals. Predict function now allows computation of the first derivative.
Value
An object of class ‘cvx.lse.reg’, basically a list including the elements
x.values sorted ‘t’ values provided as input.
y.values corresponding ‘z’ values in input.
fit.values corresponding fit values of same length as that of ‘x.values’.
deriv corresponding values of the derivative of same length as that of ‘x.values’.
iter number of steps taken to complete the iterations.
residuals residuals obtained from the fit.
minvalue minimum value of the objective function attained.
convergence a numeric indicating the convergence of the code.
Author(s)
<NAME>, <EMAIL>
Source
<NAME> and <NAME>. (1995). Solving Least Squares Problems. SIAM.
References
<NAME>. and <NAME>. (2009). Non-negativity Constraints in Numerical Analysis. Sympo-
sium on the Birth of Numerical Analysis.
<NAME>. and <NAME>. (2014). coneproj: An R package for the primal or dual cone projections
with routines for constrained regression. Journal of Statistical Software 61(12), 1 – 22.
Examples
args(cvx.lse.reg)
x <- runif(50,-1,1)
y <- x^2 + rnorm(50,0,0.3)
tmp <- cvx.lse.reg(x, y)
print(tmp)
plot(tmp)
predict(tmp, newdata = rnorm(10,0,0.1))
cvx.pen.reg Penalized Smooth Convex Regression.
Description
This function provides an estimate of the non-parametric regression function with a shape constraint
of convexity and smoothness constraint provided through square integral of second derivative.
Usage
cvx.pen.reg(x, y, lambda, w = NULL, tol = 1e-05, maxit = 1000,...)
## Default S3 method:
cvx.pen.reg(x, y, lambda, w = NULL, tol = 1e-05, maxit = 1000,...)
## S3 method for class 'cvx.pen.reg'
plot(x,...)
## S3 method for class 'cvx.pen.reg'
print(x,...)
## S3 method for class 'cvx.pen.reg'
predict(object, newdata = NULL,...)
Arguments
x a numeric vector giving the values of the predictor variable. For plot and print
functions, x represents an object of class cvx.pen.reg.
y a numeric vector giving the values of the response variable.
lambda a numeric value giving the penalty value.
w an optional numeric vector of the same length as x; Defaults to all 1.
maxit an integer giving the maximum number of steps taken by the algorithm; de-
faults to 1000.
tol a numeric providing the tolerance level for convergence.
... any additional arguments.
object An object of class ‘cvx.pen.reg’. This is for predict function.
newdata a vector of new data points to be used in the predict function.
Details
The function minimizes
$$\sum_{i=1}^{n} w_i (y_i - f(x_i))^2 + \lambda \int \{f''(x)\}^2\,dx$$
subject to convexity constraint on f . plot function provides the scatterplot along with fitted curve;
it also includes some diagnostic plots for residuals. Predict function returns a matrix containing the
inputted newdata along with the function values, derivatives and second derivatives.
Value
An object of class ‘cvx.pen.reg’, basically a list including the elements
x.values sorted ‘x’ values provided as input.
y.values corresponding ‘y’ values in input.
fit.values corresponding fit values of same length as that of ‘x.values’.
deriv corresponding values of the derivative of same length as that of ‘x.values’.
iter number of steps taken to complete the iterations.
residuals residuals obtained from the fit.
minvalue minimum value of the objective function attained.
convergence a numeric indicating the convergence of the code.
alpha a numeric vector of length two less than that of ‘x’. This represents the coefficients of the
B-splines in the second derivative of the estimator.
AlphaMVal a numeric vector needed for predict function.
lower a numeric vector needed for predict function.
upper a numeric vector needed for predict function.
Author(s)
<NAME>, <EMAIL>, <NAME>, <EMAIL>.
Source
<NAME>. and <NAME>. (1988). An Algorithm for Computing Constrained Smoothing Spline
Functions. Numer. Math., 52(5):583–595.
Dontchev, <NAME>., <NAME>. and Qi, L. (2003). Quadratic Convergence of Newton’s Method for Convex
Interpolation and Smoothing. Constructive Approximation, 19(1):123-143.
Examples
args(cvx.pen.reg)
x <- runif(50,-1,1)
y <- x^2 + rnorm(50,0,0.3)
tmp <- cvx.pen.reg(x, y, lambda = 0.01)
print(tmp)
plot(tmp)
predict(tmp, newdata = rnorm(10,0,0.1))
derivcvxpec C code for prediction using cvx.lse.reg, cvx.lip.reg and cvx.lse.con.reg.
Description
This function is only intended for an internal use.
Usage
derivcvxpec(dim, t, zhat, D, kk)
Arguments
dim vector of sample size, size of newdata and which derivative to compute.
t x-vector in cvx.lse.reg and others.
zhat prediction obtained from cvx.lse.reg and others.
D derivative vector obtained from cvx.lse.reg and others.
kk vector storing the final prediction.
Details
The estimate is a linear interpolator and the algorithm implements this.
Value
Does not return anything. Changes the inputs according to the algorithm.
Author(s)
<NAME>, <EMAIL>.
fastmerge Pre-binning of Data Points.
Description
Numerical tolerance problems in non-parametric regression make it necessary to pre-bin the
data points. This procedure is implicitly performed by most of the regression functions in R. This
function implements this procedure with a given tolerance level.
Usage
fastmerge(DataMat, w = NULL, tol = 1e-04)
Arguments
DataMat a numeric matrix/vector with rows as data points.
w an optional numeric vector of the same length as x; Defaults to all elements 1.
tol a numeric value providing the tolerance for identifying duplicates with respect
to the first column.
Details
If two values in the first column of DataMat are separated by a value less than tol then the corre-
sponding rows are merged.
Value
A list including the elements
DataMat a numeric matrix/vector with rows sorted with respect to the first column.
w obtained weights corresponding to the merged points.
Author(s)
<NAME>, <EMAIL>.
See Also
See also the function smooth.spline.
Examples
args(fastmerge)
x <- runif(100,-1,1)
y <- runif(100,-1,1)
DataMat <- cbind(x, y)
tmp <- fastmerge(DataMat)
penta C code for solving pentadiagonal linear equations.
Description
This function is only intended for an internal use.
Usage
penta(dim, E, A, D, C, F, B, X)
Arguments
dim vector containing dimension of linear system.
E Internal vector storing for one of the sub-diagonals.
A Internal vector storing for one of the sub-diagonals.
D Internal vector storing for one of the sub-diagonals.
C Internal vector storing for one of the sub-diagonals.
F Internal vector storing for one of the sub-diagonals.
B Internal vector storing for the right hand side of linear equation.
X Vector to store the solution.
Value
Does not return anything. Changes the inputs according to the algorithm.
Author(s)
<NAME>, <EMAIL>.
predcvxpen C code for prediction using cvx.lse.reg, cvx.lip.reg and cvx.lse.con.reg
for function and its derivatives.
Description
This function is only intended for an internal use.
Usage
predcvxpen(dim, x, t, zhat, deriv, L, U, fun, P, Q, R)
Arguments
dim vector of sample size, size of newdata.
x Newdata.
t x-vector in cvx.pen.reg
zhat prediction obtained from cvx.pen.reg
deriv derivative vector obtained from cvx.pen.reg
L Internal vector obtained from cpen function.
U Internal vector obtained from cpen function.
fun vector containing the function estimate.
P Internal vector set to zero.
Q Internal vector set to zero.
R Internal vector set to zero.
Details
The estimate is characterized by a fixed point equation which gives the algorithm for prediction.
Value
Does not return anything. Changes the inputs according to the algorithm.
Author(s)
<NAME>, <EMAIL>.
sim.est Single Index Model Estimation: Objective Function Approach.
Description
This function provides an estimate of the non-parametric function and the index vector by minimiz-
ing an objective function specified by the method argument.
Usage
sim.est(x, y, w = NULL, beta.init = NULL, nmulti = NULL, L = NULL,
lambda = NULL, maxit = 100, bin.tol = 1e-05, beta.tol = 1e-05,
method = c("cvx.pen","cvx.lip","cvx.lse","smooth.pen"),
progress = TRUE, force = FALSE)
## Default S3 method:
sim.est(x, y, w = NULL, beta.init = NULL, nmulti = NULL, L = NULL,
lambda = NULL, maxit = 100, bin.tol = 1e-05, beta.tol = 1e-05,
method = c("cvx.pen","cvx.lip","cvx.lse","smooth.pen"),
progress = TRUE, force = FALSE)
## S3 method for class 'sim.est'
plot(x,...)
## S3 method for class 'sim.est'
print(x,...)
## S3 method for class 'sim.est'
predict(object, newdata = NULL, deriv = 0, ...)
Arguments
x a numeric matrix giving the values of the predictor variables or covariates. For
functions plot and print, ‘x’ is an object of class ‘sim.est’.
y a numeric vector giving the values of the response variable.
method a string indicating which method to use for regression.
lambda a numeric value giving the penalty value for cvx.pen and cvx.lip.
L a numeric value giving the Lipschitz bound for cvx.lip.
w an optional numeric vector of the same length as x; Defaults to all 1.
beta.init A numeric vector giving the initial value for the index vector.
nmulti An integer giving the number of multiple starts to be used for the iterative algorithm.
If beta.init is provided then nmulti is set to 1.
bin.tol A tolerance level up to which the x values used in regression are recognized as
distinct values.
beta.tol A tolerance level for stopping the iterative algorithm for the index vector.
maxit An integer specifying the maximum number of iterations for each initial β vec-
tor.
progress A logical denoting if progress of the algorithm is to be printed. Defaults to
TRUE.
force A logical indicating the use of cvx.lse.reg or cvx.lse.con.reg. Defaults to
FALSE and uses cvx.lse.con.reg
object An object of class ‘sim.est’.
... Any additional arguments to be passed.
newdata a matrix of new data points in the predict function.
deriv a numeric either 0 or 1 representing which derivative to evaluate.
Details
The function minimizes
$$\sum_{i=1}^{n} w_i (y_i - f(x_i^\top \beta))^2 + \lambda \int \{f''(x)\}^2\,dx$$
with constraints on f dictated by method = ‘cvx.pen’ or ‘smooth.pen’. For method = ‘cvx.lip’ or
‘cvx.lse’, the function minimizes
$$\sum_{i=1}^{n} w_i (y_i - f(x_i^\top \beta))^2$$
with constraints on f dictated by method = ‘cvx.lip’ or ‘cvx.lse’. The penalty parameter λ is
not chosen by any criterion. It has to be specified for using method = ‘cvx.pen’, ‘cvx.lip’ or
‘smooth.pen’, and L denotes the Lipschitz constant for using method = ‘cvx.lip’. The plot
function provides the scatterplot along with the fitted curve; it also includes some diagnostic plots for
residuals and the progression of the algorithm. The predict function now allows calculation of the first
derivative. In applications, it might be advantageous to scale the covariate matrix x before passing
it into the function, which brings more stability to the algorithm.
Value
An object of class ‘sim.est’, basically a list including the elements
beta A numeric vector storing the estimate of the index vector.
nmulti Number of multistarts used.
x.mat the input ‘x’ matrix with possibly aggregated rows.
BetaInit a matrix storing the initial vectors taken or given for the index parameter.
lambda Given input lambda.
L Given input L.
K an integer storing the row index of BetaInit which lead to the estimator beta.
BetaPath a list containing the paths taken by each initial index vector for nmulti times.
ObjValPath a matrix with nmulti rows storing the path of objective function value for multi-
ple starts.
convergence a numeric storing convergence status for the index parameter.
itervec a vector of length nmulti storing the number of iterations taken by each of the
multiple starts.
iter a numeric giving the total number of iterations taken.
method method given as input.
regress An output of the regression function used needed for predict.
x.values sorted ‘x.betahat’ values obtained by the algorithm.
y.values corresponding ‘y’ values in input.
fit.values corresponding fit values of same length as that of xβ.
deriv corresponding values of the derivative of same length as that of xβ.
residuals residuals obtained from the fit.
minvalue minimum value of the objective function attained.
Author(s)
<NAME>, <EMAIL>
Source
Kuchibhotla, <NAME>., <NAME>. and <NAME>. (2015+). On Single Index Models with Convex Link.
Examples
args(sim.est)
x <- matrix(runif(50*3,-1,1),ncol = 3)
b0 <- rep_len(1,3)/sqrt(3)
y <- (x%*%b0)^2 + rnorm(50,0,0.3)
tmp1 <- sim.est(x, y, lambda = 0.01, method = "cvx.pen", nmulti = 5)
tmp3 <- sim.est(x, y, lambda = 0.01, method = "smooth.pen", nmulti = 5)
print(tmp1)
print(tmp3)
plot(tmp1)
plot(tmp3)
predict(tmp1, newdata = c(0,0,0))
predict(tmp3, newdata = c(0,0,0))
simestgcv Single Index Model Estimation: Objective Function Approach.
Description
This function provides an estimate of the non-parametric function and the index vector by minimiz-
ing an objective function specified by the method argument and also by choosing tuning parameter
using GCV.
Usage
simestgcv(x, y, w = NULL, beta.init = NULL, nmulti = NULL,
lambda = NULL, maxit = 100, bin.tol = 1e-06,
beta.tol = 1e-05, agcv.iter = 100, progress = TRUE)
## Default S3 method:
simestgcv(x, y, w = NULL, beta.init = NULL, nmulti = NULL,
lambda = NULL, maxit = 100, bin.tol = 1e-06,
beta.tol = 1e-05, agcv.iter = 100, progress = TRUE)
Arguments
x a numeric matrix giving the values of the predictor variables or covariates. For
functions plot and print, ‘x’ is an object of class ‘sim.est’.
y a numeric vector giving the values of the response variable.
lambda a numeric vector giving lower and upper bounds for penalty used in cvx.pen
and cvx.lip.
w an optional numeric vector of the same length as x; Defaults to all 1.
beta.init A numeric vector giving the initial value for the index vector.
nmulti An integer giving the number of multiple starts to be used for the iterative algorithm.
If beta.init is provided then nmulti is set to 1.
agcv.iter An integer providing the number of random numbers to be used in estimating
GCV. See smooth.pen.reg for more details.
progress A logical denoting if progress of the algorithm is to be printed. Defaults to TRUE.
bin.tol A tolerance level up to which the x values used in regression are recognized as
distinct values.
beta.tol A tolerance level for stopping iterative algorithm for the index vector.
maxit An integer specifying the maximum number of iterations for each initial β vec-
tor.
Details
The function minimizes
$$\sum_{i=1}^{n} w_i (y_i - f(x_i^\top \beta))^2 + \lambda \int \{f''(x)\}^2\,dx$$
with no constraints on f. The penalty parameter λ is chosen by the GCV criterion between the
bounds given by lambda. The plot and predict functions work as in the case of the sim.est function.
Value
An object of class ‘sim.est’, basically a list including the elements
beta A numeric vector storing the estimate of the index vector.
nmulti Number of multistarts used.
x.mat the input ‘x’ matrix with possibly aggregated rows.
BetaInit a matrix storing the initial vectors taken or given for the index parameter.
lambda Given input lambda.
K an integer storing the row index of BetaInit which lead to the estimator beta.
BetaPath a list containing the paths taken by each initial index vector for nmulti times.
ObjValPath a matrix with nmulti rows storing the path of objective function value for multi-
ple starts.
convergence a numeric storing convergence status for the index parameter.
itervec a vector of length nmulti storing the number of iterations taken by each of the
multiple starts.
iter a numeric giving the total number of iterations taken.
method method is always set to "smooth.pen.reg".
regress An output of the regression function used needed for predict.
x.values sorted ‘x.betahat’ values obtained by the algorithm.
y.values corresponding ‘y’ values in input.
fit.values corresponding fit values of same length as that of xβ.
deriv corresponding values of the derivative of same length as that of xβ.
residuals residuals obtained from the fit.
minvalue minimum value of the objective function attained.
Author(s)
<NAME>, <EMAIL>
Source
Kuchibhotla, <NAME>., <NAME>. and Sen, B. (2015+). On Single Index Models with Convex Link.
Examples
args(sim.est)
x <- matrix(runif(20*2,-1,1),ncol = 2)
b0 <- rep_len(1,2)/sqrt(2)
y <- (x%*%b0)^2 + rnorm(20,0,0.3)
tmp2 <- simestgcv(x, y, lambda = c(20^{1/6}, 20^{1/4}), nmulti = 1,
agcv.iter = 10, maxit = 10, beta.tol = 1e-03)
print(tmp2)
plot(tmp2)
predict(tmp2, newdata = c(0,0))
smooth.pen.reg Penalized Smooth/Smoothing Spline Regression.
Description
This function provides an estimate of the non-parametric regression function using smoothing
splines.
Usage
smooth.pen.reg(x, y, lambda, w = NULL, agcv = FALSE, agcv.iter = 100, ...)
## Default S3 method:
smooth.pen.reg(x, y, lambda, w = NULL, agcv = FALSE, agcv.iter = 100, ...)
## S3 method for class 'smooth.pen.reg'
plot(x,...)
## S3 method for class 'smooth.pen.reg'
print(x,...)
## S3 method for class 'smooth.pen.reg'
predict(object, newdata = NULL, deriv = 0, ...)
Arguments
x a numeric vector giving the values of the predictor variable. For functions plot
and print, ‘x’ is an object of class ‘smooth.pen.reg’.
y a numeric vector giving the values of the response variable.
lambda a numeric value giving the penalty value.
w an optional numeric vector of the same length as x; Defaults to all 1.
agcv a logical denoting if an estimate of generalized cross-validation is needed.
agcv.iter a numeric denoting the number of random vectors used to estimate the GCV.
See details.
... additional arguments.
object An object of class ‘smooth.pen.reg’.
newdata a matrix of new data points in the predict function.
deriv a numeric either 0 or 1 representing which derivative to evaluate.
Details
The function minimizes
$$\sum_{i=1}^{n} w_i (y_i - f(x_i))^2 + \lambda \int \{f''(x)\}^2\,dx$$
without any constraint on f . This function implements in R the algorithm noted in Green and
Silverman (1994). The function smooth.spline in R is not suitable for single index model estimation
as it chooses λ using GCV by default. plot function provides the scatterplot along with fitted curve;
it also includes some diagnostic plots for residuals. Predict function now allows computation of the
first derivative. Calculation of generalized cross-validation requires the computation of the diagonal
elements of the hat matrix involved, which is cumbersome, computationally expensive (and
also unstable). smooth.Pspline of the pspline package provides the GCV criterion value, which
matches the usual GCV when all the weights are equal to 1, but it is not clear what it is for unequal
weights. We use an estimate of GCV (the formula for which is given in Green and Silverman (1994))
proposed by Girard, which is very stable and computationally cheap. For more details about this
randomized GCV, see Girard (1989).
Value
An object of class ‘smooth.pen.reg’, basically a list including the elements
x.values sorted ‘x’ values provided as input.
y.values corresponding ‘y’ values in input.
fit.values corresponding fit values of same length as that of ‘x.values’.
deriv corresponding values of the derivative of same length as that of ‘x.values’.
iter Always set to 1.
residuals residuals obtained from the fit.
minvalue minimum value of the objective function attained.
convergence Always set to 0.
agcv.score Asymptotic GCV approximation. Proposed in Silverman (1982) as a computa-
tionally fast approximation to GCV.
splinefun An object of class ‘smooth.spline’ needed for predict.
Author(s)
<NAME>, <EMAIL>.
Source
Green, P. J. and <NAME>. (1994) Non-parametric Regression and Generalized Linear Mod-
els: A Roughness Penalty Approach. Chapman and Hall.
Girard, <NAME>. (1989) A Fast 'Monte-Carlo Cross-Validation' Procedure for Large Least Squares
Problems with Noisy Data. Numerische Mathematik, 56, 1-23.
Examples
args(smooth.pen.reg)
x <- runif(50,-1,1)
y <- x^2 + rnorm(50,0,0.3)
tmp <- smooth.pen.reg(x, y, lambda = 0.01, agcv = TRUE)
print(tmp)
plot(tmp)
predict(tmp, newdata = rnorm(10,0,0.1))
solve.pentadiag Pentadiagonal Linear Solver.
Description
A function to solve pentadiagonal system of linear equations.
Usage
## S3 method for class 'pentadiag'
solve(a, b, ...)
Arguments
a a numeric square matrix with pentadiagonal rows. The function does NOT check
that the matrix is pentadiagonal.
b a numeric vector of the same length as nrows(a). This argument cannot be a
matrix.
... any additional arguments
Details
This function is written mainly for use in this package. It may not be the most efficient code.
Value
A vector containing the solution.
Author(s)
<NAME>, <EMAIL>
Examples
A <- matrix(c(2,1,1,0,0,
1,2,1,1,0,
1,1,2,1,1,
0,1,1,2,1,
0,0,1,1,2),nrow = 5)
b <- rnorm(5)
tmp <- solve.pentadiag(A, b)
spen_egcv C code for smoothing splines with randomized GCV computation.
Description
This function is only intended for an internal use.
Usage
spen_egcv(dim, x, y, w, h, QtyPerm, lambda, m, nforApp,
EGCVflag, agcv)
Arguments
dim vector of sample size.
x x-vector in smooth.pen.reg.
y y-vector in smooth.pen.reg.
w w-vector in smooth.pen.reg.
h difference vector for x for internal use.
QtyPerm Second order difference for x for internal use.
lambda smoothing parameter input for smooth.pen.reg.
m vector to store the prediction vector.
nforApp Number of iterations for approximate GCV.
EGCVflag Logical when GCV is needed.
agcv Internal scalar. Set to 0. Stores the approximate GCV.
Details
This is same as smooth.spline except for small changes.
Value
Does not return anything. Changes the inputs according to the iterations.
Author(s)
<NAME>, <EMAIL>.
See Also
smooth.spline |
go.etcd.io/etcd/raft/v3 | go | Go | README
[¶](#section-readme)
---
### Raft library
Raft is a protocol with which a cluster of nodes can maintain a replicated state machine.
The state machine is kept in sync through the use of a replicated log.
For more details on Raft, see "In Search of an Understandable Consensus Algorithm"
(<https://raft.github.io/raft.pdf>) by <NAME> and <NAME>.
This Raft library is stable and feature complete. As of 2016, it is **the most widely used** Raft library in production, serving tens of thousands of clusters each day. It powers distributed systems such as etcd, Kubernetes, Docker Swarm, Cloud Foundry Diego, CockroachDB, TiDB, Project Calico, Flannel, Hyperledger and more.
Most Raft implementations have a monolithic design, including storage handling, messaging serialization, and network transport. This library instead follows a minimalistic design philosophy by only implementing the core raft algorithm. This minimalism buys flexibility, determinism, and performance.
To keep the codebase small as well as provide flexibility, the library only implements the Raft algorithm; both network and disk IO are left to the user. Library users must implement their own transportation layer for message passing between Raft peers over the wire. Similarly, users must implement their own storage layer to persist the Raft log and state.
In order to easily test the Raft library, its behavior should be deterministic. To achieve this determinism, the library models Raft as a state machine. The state machine takes a `Message` as input. A message can either be a local timer update or a network message sent from a remote peer. The state machine's output is a 3-tuple `{[]Messages, []LogEntries, NextState}` consisting of an array of `Messages`, `log entries`, and `Raft state changes`. For state machines with the same state, the same state machine input should always generate the same state machine output.
A simple example application, *raftexample*, is also available to help illustrate how to use this package in practice: <https://github.com/etcd-io/etcd/tree/main/contrib/raftexample>
### Features
This raft implementation is a full-feature implementation of the Raft protocol. Features include:
* Leader election
* Log replication
* Log compaction
* Membership changes
* Leadership transfer extension
* Efficient linearizable read-only queries served by both the leader and followers
+ leader checks with quorum and bypasses Raft log before processing read-only queries
+ followers ask the leader for a safe read index before processing read-only queries
* More efficient lease-based linearizable read-only queries served by both the leader and followers
+ leader bypasses the Raft log and processes read-only queries locally
+ followers ask the leader for a safe read index before processing read-only queries
+ this approach relies on the clocks of all the machines in the raft group
This raft implementation also includes a few optional enhancements:
* Optimistic pipelining to reduce log replication latency
* Flow control for log replication
* Batching Raft messages to reduce synchronized network I/O calls
* Batching log entries to reduce disk synchronized I/O
* Writing to leader's disk in parallel
* Internal proposal redirection from followers to leader
* Automatic stepping down when the leader loses quorum
* Protection against unbounded log growth when quorum is lost
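The linearizable read path listed above is driven through `Node.ReadIndex`. Below is a minimal sketch, assuming a running node `n`, a `ctx`, the standard `bytes` package, and a Ready loop like the one shown later in this document; the request id value is purely illustrative.
```
// Issue a linearizable read request. rctx is an opaque, unique request id that
// comes back in Ready.ReadStates so the read can be matched to this request.
rctx := []byte("read-request-42") // hypothetical request id
if err := n.ReadIndex(ctx, rctx); err != nil {
// the request can be lost without notice; the application must retry
}
// Later, inside the Ready loop:
for _, rs := range rd.ReadStates {
if bytes.Equal(rs.RequestCtx, rctx) {
// Serve the read only after the state machine has applied entries up to
// rs.Index; before that point the read would not be linearizable.
}
}
```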
#### Notable Users
* [cockroachdb](https://github.com/cockroachdb/cockroach) A Scalable, Survivable, Strongly-Consistent SQL Database
* [dgraph](https://github.com/dgraph-io/dgraph) A Scalable, Distributed, Low Latency, High Throughput Graph Database
* [etcd](https://github.com/etcd-io/etcd) A distributed reliable key-value store
* [tikv](https://github.com/pingcap/tikv) A Distributed transactional key value database powered by Rust and Raft
* [swarmkit](https://github.com/docker/swarmkit) A toolkit for orchestrating distributed systems at any scale.
* [chain core](https://github.com/chain/chain) Software for operating permissioned, multi-asset blockchain networks
#### Usage
The primary object in raft is a Node. Either start a Node from scratch using raft.StartNode or start a Node from some initial state using raft.RestartNode.
To start a three-node cluster
```
storage := raft.NewMemoryStorage()
c := &raft.Config{
ID: 0x01,
ElectionTick: 10,
HeartbeatTick: 1,
Storage: storage,
MaxSizePerMsg: 4096,
MaxInflightMsgs: 256,
}
// Set peer list to the other nodes in the cluster.
// Note that they need to be started separately as well.
n := raft.StartNode(c, []raft.Peer{{ID: 0x02}, {ID: 0x03}})
```
Start a single node cluster, like so:
```
// Create storage and config as shown above.
// Set peer list to itself, so this node can become the leader of this single-node cluster.
peers := []raft.Peer{{ID: 0x01}}
n := raft.StartNode(c, peers)
```
To allow a new node to join this cluster, do not pass in any peers. First, add the node to the existing cluster by calling `ProposeConfChange` on any existing node inside the cluster. Then, start the node with an empty peer list, like so:
```
// Create storage and config as shown above.
n := raft.StartNode(c, nil)
```
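For completeness, here is a sketch of the first step on an existing member; `existing` is a hypothetical already-running raft.Node, `ctx` a context, and 0x04 the hypothetical ID of the new member.
```
// Propose the membership change on any node that is already in the cluster.
cc := raftpb.ConfChange{
Type: raftpb.ConfChangeAddNode,
NodeID: 0x04,
}
if err := existing.ProposeConfChange(ctx, cc); err != nil {
// the proposal may be dropped; the application is responsible for retrying
}
// Once the change is committed and applied cluster-wide, start the new node
// with an empty peer list as shown above; it learns the membership from the leader.
```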
To restart a node from previous state:
```
storage := raft.NewMemoryStorage()
// Recover the in-memory storage from persistent snapshot, state and entries.
storage.ApplySnapshot(snapshot)
storage.SetHardState(state)
storage.Append(entries)
c := &raft.Config{
ID: 0x01,
ElectionTick: 10,
HeartbeatTick: 1,
Storage: storage,
MaxSizePerMsg: 4096,
MaxInflightMsgs: 256,
}
// Restart raft without peer information.
// Peer information is already included in the storage.
n := raft.RestartNode(c)
```
After creating a Node, the user has a few responsibilities:
First, read from the Node.Ready() channel and process the updates it contains. These steps may be performed in parallel, except as noted in step 2.
1. Write Entries, HardState and Snapshot to persistent storage in order, i.e. Entries first, then HardState and Snapshot if they are not empty. If persistent storage supports atomic writes then all of them can be written together. Note that when writing an Entry with Index i, any previously-persisted entries with Index >= i must be discarded.
2. Send all Messages to the nodes named in the To field. It is important that no messages be sent until the latest HardState has been persisted to disk, and all Entries written by any previous Ready batch (Messages may be sent while entries from the same batch are being persisted). To reduce the I/O latency, an optimization can be applied to make leader write to disk in parallel with its followers (as explained at section 10.2.1 in Raft thesis). If any Message has type MsgSnap, call Node.ReportSnapshot() after it has been sent (these messages may be large). Note: Marshalling messages is not thread-safe; it is important to make sure that no new entries are persisted while marshalling. The easiest way to achieve this is to serialise the messages directly inside the main raft loop.
3. Apply Snapshot (if any) and CommittedEntries to the state machine. If any committed Entry has Type EntryConfChange, call Node.ApplyConfChange() to apply it to the node. The configuration change may be cancelled at this point by setting the NodeID field to zero before calling ApplyConfChange (but ApplyConfChange must be called one way or the other, and the decision to cancel must be based solely on the state machine and not external information such as the observed health of the node).
4. Call Node.Advance() to signal readiness for the next batch of updates. This may be done at any time after step 1, although all updates must be processed in the order they were returned by Ready.
Second, all persisted log entries must be made available via an implementation of the Storage interface. The provided MemoryStorage type can be used for this (if repopulating its state upon a restart), or a custom disk-backed implementation can be supplied.
Third, after receiving a message from another node, pass it to Node.Step:
```
func recvRaftRPC(ctx context.Context, m raftpb.Message) {
n.Step(ctx, m)
}
```
Finally, call `Node.Tick()` at regular intervals (probably via a `time.Ticker`). Raft has two important timeouts: heartbeat and the election timeout. However, internally to the raft package time is represented by an abstract "tick".
The total state machine handling loop will look something like this:
```
for {
select {
case <-s.Ticker:
n.Tick()
case rd := <-s.Node.Ready():
saveToStorage(rd.HardState, rd.Entries, rd.Snapshot)
send(rd.Messages)
if !raft.IsEmptySnap(rd.Snapshot) {
processSnapshot(rd.Snapshot)
}
for _, entry := range rd.CommittedEntries {
process(entry)
if entry.Type == raftpb.EntryConfChange {
var cc raftpb.ConfChange
cc.Unmarshal(entry.Data)
s.Node.ApplyConfChange(cc)
}
}
s.Node.Advance()
case <-s.done:
return
}
}
```
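The saveToStorage and send helpers in the loop above are left to the application. Here is a minimal sketch of saveToStorage, assuming the MemoryStorage created earlier is kept in a package-level variable `storage` and that the durable write (WAL, fsync) required by step 1 is done elsewhere.
```
func saveToStorage(st raftpb.HardState, ents []raftpb.Entry, snap raftpb.Snapshot) {
// Mirror the Ready contents into the in-memory storage so raft can read
// them back; error handling and the on-disk write are elided in this sketch.
if !raft.IsEmptySnap(snap) {
storage.ApplySnapshot(snap)
}
storage.Append(ents)
if !raft.IsEmptyHardState(st) {
storage.SetHardState(st)
}
}
```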
To propose changes to the state machine from the node, take application data, serialize it into a byte slice and call:
```
n.Propose(ctx, data)
```
If the proposal is committed, data will appear in committed entries with type raftpb.EntryNormal. There is no guarantee that a proposed command will be committed; the command may have to be reproposed after a timeout.
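One common pattern, sketched here under the assumption that `n`, `data` and the standard `context` and `time` packages are available, is to bound each proposal with a deadline and retry.
```
// Propose with a deadline. A nil error only means the local node accepted the
// proposal, not that it committed; the application still waits for the data to
// appear in CommittedEntries (tracking that is application-specific).
for {
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
err := n.Propose(ctx, data)
cancel()
if err == nil {
break
}
// err may be a context error, ErrStopped or ErrProposalDropped; retry or give up.
}
```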
To add or remove a node in a cluster, build a ConfChange struct 'cc' and call:
```
n.ProposeConfChange(ctx, cc)
```
After the config change is committed, a committed entry with type raftpb.EntryConfChange will be returned. It must be applied to the node through:
```
var cc raftpb.ConfChange
cc.Unmarshal(data)
n.ApplyConfChange(cc)
```
Note: An ID represents a unique node in a cluster for all time. A given ID MUST be used only once even if the old node has been removed.
This means that for example IP addresses make poor node IDs since they may be reused. Node IDs must be non-zero.
#### Implementation notes
This implementation is up to date with the final Raft thesis (<https://github.com/ongardie/dissertation/blob/master/stanford.pdf>), although this implementation of the membership change protocol differs somewhat from that described in chapter 4. The key invariant that membership changes happen one node at a time is preserved, but in our implementation the membership change takes effect when its entry is applied, not when it is added to the log (so the entry is committed under the old membership instead of the new). This is equivalent in terms of safety, since the old and new configurations are guaranteed to overlap.
To ensure there is no attempt to commit two membership changes at once by matching log positions (which would be unsafe since they should have different quorum requirements), any proposed membership change is simply disallowed while any uncommitted change appears in the leader's log.
This approach introduces a problem when removing a member from a two-member cluster: If one of the members dies before the other one receives the commit of the confchange entry, then the member cannot be removed any more since the cluster cannot make progress. For this reason it is highly recommended to use three or more nodes in every cluster.
#### Go docs
More detailed development documentation can be found in go docs: <https://pkg.go.dev/go.etcd.io/etcd/raft/v3>.
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
* [Usage](#hdr-Usage)
* [Implementation notes](#hdr-Implementation_notes)
* [MessageType](#hdr-MessageType)
Package raft sends and receives messages in the Protocol Buffer format defined in the raftpb package.
Raft is a protocol with which a cluster of nodes can maintain a replicated state machine.
The state machine is kept in sync through the use of a replicated log.
For more details on Raft, see "In Search of an Understandable Consensus Algorithm"
(<https://raft.github.io/raft.pdf>) by <NAME> and <NAME>.
A simple example application, _raftexample_, is also available to help illustrate how to use this package in practice:
<https://github.com/etcd-io/etcd/tree/main/contrib/raftexample>
#### Usage [¶](#hdr-Usage)
The primary object in raft is a Node. You either start a Node from scratch using raft.StartNode or start a Node from some initial state using raft.RestartNode.
To start a node from scratch:
```
storage := raft.NewMemoryStorage()
c := &Config{
ID: 0x01,
ElectionTick: 10,
HeartbeatTick: 1,
Storage: storage,
MaxSizePerMsg: 4096,
MaxInflightMsgs: 256,
}
n := raft.StartNode(c, []raft.Peer{{ID: 0x02}, {ID: 0x03}})
```
To restart a node from previous state:
```
storage := raft.NewMemoryStorage()
// recover the in-memory storage from persistent
// snapshot, state and entries.
storage.ApplySnapshot(snapshot)
storage.SetHardState(state)
storage.Append(entries)
c := &Config{
ID: 0x01,
ElectionTick: 10,
HeartbeatTick: 1,
Storage: storage,
MaxSizePerMsg: 4096,
MaxInflightMsgs: 256,
}
// restart raft without peer information.
// peer information is already included in the storage.
n := raft.RestartNode(c)
```
Now that you are holding onto a Node you have a few responsibilities:
First, you must read from the Node.Ready() channel and process the updates it contains. These steps may be performed in parallel, except as noted in step 2.
1. Write HardState, Entries, and Snapshot to persistent storage if they are not empty. Note that when writing an Entry with Index i, any previously-persisted entries with Index >= i must be discarded.
2. Send all Messages to the nodes named in the To field. It is important that no messages be sent until the latest HardState has been persisted to disk,
and all Entries written by any previous Ready batch (Messages may be sent while entries from the same batch are being persisted). To reduce the I/O latency, an optimization can be applied to make leader write to disk in parallel with its followers (as explained at section 10.2.1 in Raft thesis). If any Message has type MsgSnap, call Node.ReportSnapshot() after it has been sent (these messages may be large).
Note: Marshalling messages is not thread-safe; it is important that you make sure that no new entries are persisted while marshalling.
The easiest way to achieve this is to serialize the messages directly inside your main raft loop.
3. Apply Snapshot (if any) and CommittedEntries to the state machine.
If any committed Entry has Type EntryConfChange, call Node.ApplyConfChange()
to apply it to the node. The configuration change may be cancelled at this point by setting the NodeID field to zero before calling ApplyConfChange
(but ApplyConfChange must be called one way or the other, and the decision to cancel must be based solely on the state machine and not external information such as the observed health of the node).
4. Call Node.Advance() to signal readiness for the next batch of updates.
This may be done at any time after step 1, although all updates must be processed in the order they were returned by Ready.
Second, all persisted log entries must be made available via an implementation of the Storage interface. The provided MemoryStorage type can be used for this (if you repopulate its state upon a restart), or you can supply your own disk-backed implementation.
Third, when you receive a message from another node, pass it to Node.Step:
```
func recvRaftRPC(ctx context.Context, m raftpb.Message) {
n.Step(ctx, m)
}
```
Finally, you need to call Node.Tick() at regular intervals (probably via a time.Ticker). Raft has two important timeouts: heartbeat and the election timeout. However, internally to the raft package time is represented by an abstract "tick".
The total state machine handling loop will look something like this:
```
for {
select {
case <-s.Ticker:
n.Tick()
case rd := <-s.Node.Ready():
saveToStorage(rd.HardState, rd.Entries, rd.Snapshot)
send(rd.Messages)
if !raft.IsEmptySnap(rd.Snapshot) {
processSnapshot(rd.Snapshot)
}
for _, entry := range rd.CommittedEntries {
process(entry)
if entry.Type == raftpb.EntryConfChange {
var cc raftpb.ConfChange
cc.Unmarshal(entry.Data)
s.Node.ApplyConfChange(cc)
}
}
s.Node.Advance()
case <-s.done:
return
}
}
```
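The `send` helper in the loop above is also application-provided. Below is a minimal sketch that honors step 2, assuming a running Node `n` and a hypothetical `transportSend` function standing in for the application's own wire transport.
```
func send(msgs []raftpb.Message) {
for _, m := range msgs {
err := transportSend(m) // placeholder for the application's transport
if err != nil {
// tell raft the peer was unreachable so it can adjust its progress
n.ReportUnreachable(m.To)
}
if m.Type == raftpb.MsgSnap {
// snapshot messages must always be reported, whether delivery succeeded or not
status := raft.SnapshotFinish
if err != nil {
status = raft.SnapshotFailure
}
n.ReportSnapshot(m.To, status)
}
}
}
```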
To propose changes to the state machine from your node, take your application data, serialize it into a byte slice and call:
```
n.Propose(ctx, data)
```
If the proposal is committed, data will appear in committed entries with type raftpb.EntryNormal. There is no guarantee that a proposed command will be committed; you may have to re-propose after a timeout.
To add or remove a node in a cluster, build ConfChange struct 'cc' and call:
```
n.ProposeConfChange(ctx, cc)
```
After the config change is committed, a committed entry with type raftpb.EntryConfChange will be returned. You must apply it to the node through:
```
var cc raftpb.ConfChange
cc.Unmarshal(data)
n.ApplyConfChange(cc)
```
Note: An ID represents a unique node in a cluster for all time. A given ID MUST be used only once even if the old node has been removed.
This means that for example IP addresses make poor node IDs since they may be reused. Node IDs must be non-zero.
#### Implementation notes [¶](#hdr-Implementation_notes)
This implementation is up to date with the final Raft thesis
(<https://github.com/ongardie/dissertation/blob/master/stanford.pdf>), although our implementation of the membership change protocol differs somewhat from that described in chapter 4. The key invariant that membership changes happen one node at a time is preserved, but in our implementation the membership change takes effect when its entry is applied, not when it is added to the log (so the entry is committed under the old membership instead of the new). This is equivalent in terms of safety,
since the old and new configurations are guaranteed to overlap.
To ensure that we do not attempt to commit two membership changes at once by matching log positions (which would be unsafe since they should have different quorum requirements), we simply disallow any proposed membership change while any uncommitted change appears in the leader's log.
This approach introduces a problem when you try to remove a member from a two-member cluster: If one of the members dies before the other one receives the commit of the confchange entry, then the member cannot be removed any more since the cluster cannot make progress.
For this reason it is highly recommended to use three or more nodes in every cluster.
#### MessageType [¶](#hdr-MessageType)
Package raft sends and receives message in Protocol Buffer format (defined in raftpb package). Each state (follower, candidate, leader) implements its own 'step' method ('stepFollower', 'stepCandidate', 'stepLeader') when advancing with the given raftpb.Message. Each step is determined by its raftpb.MessageType. Note that every step is checked by one common method
'Step' that safety-checks the terms of node and incoming message to prevent stale log entries:
```
'MsgHup' is used for election. If a node is a follower or candidate, the
'tick' function in 'raft' struct is set as 'tickElection'. If a follower or candidate has not received any heartbeat before the election timeout, it passes 'MsgHup' to its Step method and becomes (or remains) a candidate to start a new election.
'MsgBeat' is an internal type that signals the leader to send a heartbeat of the 'MsgHeartbeat' type. If a node is a leader, the 'tick' function in the 'raft' struct is set as 'tickHeartbeat', and triggers the leader to send periodic 'MsgHeartbeat' messages to its followers.
'MsgProp' proposes to append data to its log entries. This is a special type to redirect proposals to leader. Therefore, send method overwrites raftpb.Message's term with its HardState's term to avoid attaching its local term to 'MsgProp'. When 'MsgProp' is passed to the leader's 'Step'
method, the leader first calls the 'appendEntry' method to append entries to its log, and then calls 'bcastAppend' method to send those entries to its peers. When passed to candidate, 'MsgProp' is dropped. When passed to follower, 'MsgProp' is stored in follower's mailbox(msgs) by the send method. It is stored with sender's ID and later forwarded to leader by rafthttp package.
'MsgApp' contains log entries to replicate. A leader calls bcastAppend,
which calls sendAppend, which sends soon-to-be-replicated logs in 'MsgApp'
type. When 'MsgApp' is passed to candidate's Step method, candidate reverts back to follower, because it indicates that there is a valid leader sending
'MsgApp' messages. Candidate and follower respond to this message in
'MsgAppResp' type.
'MsgAppResp' is response to log replication request('MsgApp'). When
'MsgApp' is passed to candidate or follower's Step method, it responds by calling 'handleAppendEntries' method, which sends 'MsgAppResp' to raft mailbox.
'MsgVote' requests votes for election. When a node is a follower or candidate and 'MsgHup' is passed to its Step method, then the node calls
'campaign' method to campaign itself to become a leader. Once 'campaign'
method is called, the node becomes candidate and sends 'MsgVote' to peers in cluster to request votes. When passed to leader or candidate's Step method and the message's Term is lower than leader's or candidate's,
'MsgVote' will be rejected ('MsgVoteResp' is returned with Reject true).
If leader or candidate receives 'MsgVote' with higher term, it will revert back to follower. When 'MsgVote' is passed to follower, it votes for the sender only when sender's last term is greater than MsgVote's term or sender's last term is equal to MsgVote's term but sender's last committed index is greater than or equal to follower's.
'MsgVoteResp' contains responses from voting request. When 'MsgVoteResp' is passed to candidate, the candidate calculates how many votes it has won. If it's more than majority (quorum), it becomes leader and calls 'bcastAppend'.
If candidate receives majority of votes of denials, it reverts back to follower.
'MsgPreVote' and 'MsgPreVoteResp' are used in an optional two-phase election protocol. When Config.PreVote is true, a pre-election is carried out first
(using the same rules as a regular election), and no node increases its term number unless the pre-election indicates that the campaigning node would win.
This minimizes disruption when a partitioned node rejoins the cluster.
'MsgSnap' requests to install a snapshot message. When a node has just become a leader or the leader receives 'MsgProp' message, it calls
'bcastAppend' method, which then calls 'sendAppend' method to each follower. In 'sendAppend', if a leader fails to get term or entries,
the leader requests snapshot by sending 'MsgSnap' type message.
'MsgSnapStatus' tells the result of a snapshot install message. When a follower rejects 'MsgSnap', it indicates that the snapshot request with
'MsgSnap' failed because of network issues which caused the network layer to fail to send the snapshot to its followers. Then the leader considers the follower's progress as probe. When 'MsgSnap' is not rejected, it indicates that the snapshot succeeded and the leader sets the follower's progress to probe and resumes its log replication.
'MsgHeartbeat' sends heartbeat from leader. When 'MsgHeartbeat' is passed to candidate and message's term is higher than candidate's, the candidate reverts back to follower and updates its committed index from the one in this heartbeat. And it sends the message to its mailbox. When
'MsgHeartbeat' is passed to follower's Step method and message's term is higher than follower's, the follower updates its leaderID with the ID from the message.
'MsgHeartbeatResp' is a response to 'MsgHeartbeat'. When 'MsgHeartbeatResp'
is passed to leader's Step method, the leader knows which follower responded. And only when the leader's last committed index is greater than the follower's Match index, the leader runs the 'sendAppend' method.
'MsgUnreachable' tells that request(message) wasn't delivered. When
'MsgUnreachable' is passed to leader's Step method, the leader discovers that the follower that sent this 'MsgUnreachable' is not reachable, often indicating 'MsgApp' is lost. When follower's progress state is replicate,
the leader sets it back to probe.
```
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [Variables](#pkg-variables)
* [func DescribeConfState(state pb.ConfState) string](#DescribeConfState)
* [func DescribeEntries(ents []pb.Entry, f EntryFormatter) string](#DescribeEntries)
* [func DescribeEntry(e pb.Entry, f EntryFormatter) string](#DescribeEntry)
* [func DescribeHardState(hs pb.HardState) string](#DescribeHardState)
* [func DescribeMessage(m pb.Message, f EntryFormatter) string](#DescribeMessage)
* [func DescribeReady(rd Ready, f EntryFormatter) string](#DescribeReady)
* [func DescribeSnapshot(snap pb.Snapshot) string](#DescribeSnapshot)
* [func DescribeSoftState(ss SoftState) string](#DescribeSoftState)
* [func IsEmptyHardState(st pb.HardState) bool](#IsEmptyHardState)
* [func IsEmptySnap(sp pb.Snapshot) bool](#IsEmptySnap)
* [func IsLocalMsg(msgt pb.MessageType) bool](#IsLocalMsg)
* [func IsResponseMsg(msgt pb.MessageType) bool](#IsResponseMsg)
* [func MustSync(st, prevst pb.HardState, entsnum int) bool](#MustSync)
* [func PayloadSize(e pb.Entry) int](#PayloadSize)
* [func ResetDefaultLogger()](#ResetDefaultLogger)
* [func SetLogger(l Logger)](#SetLogger)
* [type BasicStatus](#BasicStatus)
* [type CampaignType](#CampaignType)
* [type Config](#Config)
* [type DefaultLogger](#DefaultLogger)
* + [func (l *DefaultLogger) Debug(v ...interface{})](#DefaultLogger.Debug)
+ [func (l *DefaultLogger) Debugf(format string, v ...interface{})](#DefaultLogger.Debugf)
+ [func (l *DefaultLogger) EnableDebug()](#DefaultLogger.EnableDebug)
+ [func (l *DefaultLogger) EnableTimestamps()](#DefaultLogger.EnableTimestamps)
+ [func (l *DefaultLogger) Error(v ...interface{})](#DefaultLogger.Error)
+ [func (l *DefaultLogger) Errorf(format string, v ...interface{})](#DefaultLogger.Errorf)
+ [func (l *DefaultLogger) Fatal(v ...interface{})](#DefaultLogger.Fatal)
+ [func (l *DefaultLogger) Fatalf(format string, v ...interface{})](#DefaultLogger.Fatalf)
+ [func (l *DefaultLogger) Info(v ...interface{})](#DefaultLogger.Info)
+ [func (l *DefaultLogger) Infof(format string, v ...interface{})](#DefaultLogger.Infof)
+ [func (l *DefaultLogger) Panic(v ...interface{})](#DefaultLogger.Panic)
+ [func (l *DefaultLogger) Panicf(format string, v ...interface{})](#DefaultLogger.Panicf)
+ [func (l *DefaultLogger) Warning(v ...interface{})](#DefaultLogger.Warning)
+ [func (l *DefaultLogger) Warningf(format string, v ...interface{})](#DefaultLogger.Warningf)
* [type EntryFormatter](#EntryFormatter)
* [type Logger](#Logger)
* [type MemoryStorage](#MemoryStorage)
* + [func NewMemoryStorage() *MemoryStorage](#NewMemoryStorage)
* + [func (ms *MemoryStorage) Append(entries []pb.Entry) error](#MemoryStorage.Append)
+ [func (ms *MemoryStorage) ApplySnapshot(snap pb.Snapshot) error](#MemoryStorage.ApplySnapshot)
+ [func (ms *MemoryStorage) Compact(compactIndex uint64) error](#MemoryStorage.Compact)
+ [func (ms *MemoryStorage) CreateSnapshot(i uint64, cs *pb.ConfState, data []byte) (pb.Snapshot, error)](#MemoryStorage.CreateSnapshot)
+ [func (ms *MemoryStorage) Entries(lo, hi, maxSize uint64) ([]pb.Entry, error)](#MemoryStorage.Entries)
+ [func (ms *MemoryStorage) FirstIndex() (uint64, error)](#MemoryStorage.FirstIndex)
+ [func (ms *MemoryStorage) InitialState() (pb.HardState, pb.ConfState, error)](#MemoryStorage.InitialState)
+ [func (ms *MemoryStorage) LastIndex() (uint64, error)](#MemoryStorage.LastIndex)
+ [func (ms *MemoryStorage) SetHardState(st pb.HardState) error](#MemoryStorage.SetHardState)
+ [func (ms *MemoryStorage) Snapshot() (pb.Snapshot, error)](#MemoryStorage.Snapshot)
+ [func (ms *MemoryStorage) Term(i uint64) (uint64, error)](#MemoryStorage.Term)
* [type Node](#Node)
* + [func RestartNode(c *Config) Node](#RestartNode)
+ [func StartNode(c *Config, peers []Peer) Node](#StartNode)
* [type Peer](#Peer)
* [type ProgressType](#ProgressType)
* [type RawNode](#RawNode)
* + [func NewRawNode(config *Config) (*RawNode, error)](#NewRawNode)
* + [func (rn *RawNode) Advance(rd Ready)](#RawNode.Advance)
+ [func (rn *RawNode) ApplyConfChange(cc pb.ConfChangeI) *pb.ConfState](#RawNode.ApplyConfChange)
+ [func (rn *RawNode) BasicStatus() BasicStatus](#RawNode.BasicStatus)
+ [func (rn *RawNode) Bootstrap(peers []Peer) error](#RawNode.Bootstrap)
+ [func (rn *RawNode) Campaign() error](#RawNode.Campaign)
+ [func (rn *RawNode) HasReady() bool](#RawNode.HasReady)
+ [func (rn *RawNode) Propose(data []byte) error](#RawNode.Propose)
+ [func (rn *RawNode) ProposeConfChange(cc pb.ConfChangeI) error](#RawNode.ProposeConfChange)
+ [func (rn *RawNode) ReadIndex(rctx []byte)](#RawNode.ReadIndex)
+ [func (rn *RawNode) Ready() Ready](#RawNode.Ready)
+ [func (rn *RawNode) ReportSnapshot(id uint64, status SnapshotStatus)](#RawNode.ReportSnapshot)
+ [func (rn *RawNode) ReportUnreachable(id uint64)](#RawNode.ReportUnreachable)
+ [func (rn *RawNode) Status() Status](#RawNode.Status)
+ [func (rn *RawNode) Step(m pb.Message) error](#RawNode.Step)
+ [func (rn *RawNode) Tick()](#RawNode.Tick)
+ [func (rn *RawNode) TickQuiesced()](#RawNode.TickQuiesced)
+ [func (rn *RawNode) TransferLeader(transferee uint64)](#RawNode.TransferLeader)
+ [func (rn *RawNode) WithProgress(visitor func(id uint64, typ ProgressType, pr tracker.Progress))](#RawNode.WithProgress)
* [type ReadOnlyOption](#ReadOnlyOption)
* [type ReadState](#ReadState)
* [type Ready](#Ready)
* [type SnapshotStatus](#SnapshotStatus)
* [type SoftState](#SoftState)
* [type StateType](#StateType)
* + [func (st StateType) MarshalJSON() ([]byte, error)](#StateType.MarshalJSON)
+ [func (st StateType) String() string](#StateType.String)
* [type Status](#Status)
* + [func (s Status) MarshalJSON() ([]byte, error)](#Status.MarshalJSON)
+ [func (s Status) String() string](#Status.String)
* [type Storage](#Storage)
#### Examples [¶](#pkg-examples)
* [Node](#example-Node)
### Constants [¶](#pkg-constants)
```
const None [uint64](/builtin#uint64) = 0
```
None is a placeholder node ID used when there is no leader.
### Variables [¶](#pkg-variables)
```
var ErrCompacted = [errors](/errors).[New](/errors#New)("requested index is unavailable due to compaction")
```
ErrCompacted is returned by Storage.Entries/Compact when a requested index is unavailable because it predates the last snapshot.
```
var ErrProposalDropped = [errors](/errors).[New](/errors#New)("raft proposal dropped")
```
ErrProposalDropped is returned when a proposal is ignored in certain cases, so that the proposer can be notified and fail fast.
```
var ErrSnapOutOfDate = [errors](/errors).[New](/errors#New)("requested index is older than the existing snapshot")
```
ErrSnapOutOfDate is returned by Storage.CreateSnapshot when a requested index is older than the existing snapshot.
```
var ErrSnapshotTemporarilyUnavailable = [errors](/errors).[New](/errors#New)("snapshot is temporarily unavailable")
```
ErrSnapshotTemporarilyUnavailable is returned by the Storage interface when the required snapshot is temporarily unavailable.
```
var ErrStepLocalMsg = [errors](/errors).[New](/errors#New)("raft: cannot step raft local message")
```
ErrStepLocalMsg is returned when trying to step a local raft message.
```
var ErrStepPeerNotFound = [errors](/errors).[New](/errors#New)("raft: cannot step as peer not found")
```
ErrStepPeerNotFound is returned when trying to step a response message but no peer is found in raft.prs for that node.
```
var (
// ErrStopped is returned by methods on Nodes that have been stopped.
ErrStopped = [errors](/errors).[New](/errors#New)("raft: stopped")
)
```
```
var ErrUnavailable = [errors](/errors).[New](/errors#New)("requested entry at index is unavailable")
```
ErrUnavailable is returned by Storage interface when the requested log entries are unavailable.
### Functions [¶](#pkg-functions)
####
func [DescribeConfState](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L78) [¶](#DescribeConfState)
```
func DescribeConfState(state [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[ConfState](/go.etcd.io/etcd/raft/[email protected]/raftpb#ConfState)) [string](/builtin#string)
```
####
func [DescribeEntries](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L204) [¶](#DescribeEntries)
```
func DescribeEntries(ents [][pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Entry](/go.etcd.io/etcd/raft/[email protected]/raftpb#Entry), f [EntryFormatter](#EntryFormatter)) [string](/builtin#string)
```
DescribeEntries calls DescribeEntry for each Entry, adding a newline to each.
####
func [DescribeEntry](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L166) [¶](#DescribeEntry)
```
func DescribeEntry(e [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Entry](/go.etcd.io/etcd/raft/[email protected]/raftpb#Entry), f [EntryFormatter](#EntryFormatter)) [string](/builtin#string)
```
DescribeEntry returns a concise human-readable description of an Entry for debugging.
####
func [DescribeHardState](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L64) [¶](#DescribeHardState)
```
func DescribeHardState(hs [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[HardState](/go.etcd.io/etcd/raft/[email protected]/raftpb#HardState)) [string](/builtin#string)
```
####
func [DescribeMessage](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L133) [¶](#DescribeMessage)
```
func DescribeMessage(m [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Message](/go.etcd.io/etcd/raft/[email protected]/raftpb#Message), f [EntryFormatter](#EntryFormatter)) [string](/builtin#string)
```
DescribeMessage returns a concise human-readable description of a Message for debugging.
####
func [DescribeReady](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L90) [¶](#DescribeReady)
```
func DescribeReady(rd [Ready](#Ready), f [EntryFormatter](#EntryFormatter)) [string](/builtin#string)
```
####
func [DescribeSnapshot](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L85) [¶](#DescribeSnapshot)
```
func DescribeSnapshot(snap [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Snapshot](/go.etcd.io/etcd/raft/[email protected]/raftpb#Snapshot)) [string](/builtin#string)
```
####
func [DescribeSoftState](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L74) [¶](#DescribeSoftState)
```
func DescribeSoftState(ss [SoftState](#SoftState)) [string](/builtin#string)
```
####
func [IsEmptyHardState](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/node.go#L97) [¶](#IsEmptyHardState)
```
func IsEmptyHardState(st [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[HardState](/go.etcd.io/etcd/raft/[email protected]/raftpb#HardState)) [bool](/builtin#bool)
```
IsEmptyHardState returns true if the given HardState is empty.
####
func [IsEmptySnap](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/node.go#L102) [¶](#IsEmptySnap)
```
func IsEmptySnap(sp [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Snapshot](/go.etcd.io/etcd/raft/[email protected]/raftpb#Snapshot)) [bool](/builtin#bool)
```
IsEmptySnap returns true if the given Snapshot is empty.
####
func [IsLocalMsg](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L43) [¶](#IsLocalMsg)
```
func IsLocalMsg(msgt [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[MessageType](/go.etcd.io/etcd/raft/[email protected]/raftpb#MessageType)) [bool](/builtin#bool)
```
####
func [IsResponseMsg](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L48) [¶](#IsResponseMsg)
```
func IsResponseMsg(msgt [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[MessageType](/go.etcd.io/etcd/raft/[email protected]/raftpb#MessageType)) [bool](/builtin#bool)
```
####
func [MustSync](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/node.go#L583) [¶](#MustSync)
```
func MustSync(st, prevst [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[HardState](/go.etcd.io/etcd/raft/[email protected]/raftpb#HardState), entsnum [int](/builtin#int)) [bool](/builtin#bool)
```
MustSync returns true if the hard state and count of Raft entries indicate that a synchronous write to persistent storage is required.
####
func [PayloadSize](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L160) [¶](#PayloadSize)
```
func PayloadSize(e [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Entry](/go.etcd.io/etcd/raft/[email protected]/raftpb#Entry)) [int](/builtin#int)
```
PayloadSize is the size of the payload of this Entry. Notably, it does not depend on its Index or Term.
####
func [ResetDefaultLogger](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L51) [¶](#ResetDefaultLogger)
```
func ResetDefaultLogger()
```
####
func [SetLogger](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L45) [¶](#SetLogger)
```
func SetLogger(l [Logger](#Logger))
```
### Types [¶](#pkg-types)
####
type [BasicStatus](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/status.go#L33) [¶](#BasicStatus)
```
type BasicStatus struct {
ID [uint64](/builtin#uint64)
[pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[HardState](/go.etcd.io/etcd/raft/[email protected]/raftpb#HardState)
[SoftState](#SoftState)
Applied [uint64](/builtin#uint64)
LeadTransferee [uint64](/builtin#uint64)
}
```
BasicStatus contains basic information about the Raft peer. It does not allocate.
####
type [CampaignType](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/raft.go#L99) [¶](#CampaignType)
```
type CampaignType [string](/builtin#string)
```
CampaignType represents the type of campaigning. The reason we use a string type instead of uint64 is that it is simpler to compare and to fill in raft entries.
####
type [Config](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/raft.go#L116) [¶](#Config)
```
type Config struct {
// ID is the identity of the local raft. ID cannot be 0.
ID [uint64](/builtin#uint64)
// ElectionTick is the number of Node.Tick invocations that must pass between
// elections. That is, if a follower does not receive any message from the
// leader of current term before ElectionTick has elapsed, it will become
// candidate and start an election. ElectionTick must be greater than
// HeartbeatTick. We suggest ElectionTick = 10 * HeartbeatTick to avoid
// unnecessary leader switching.
ElectionTick [int](/builtin#int)
// HeartbeatTick is the number of Node.Tick invocations that must pass between
// heartbeats. That is, a leader sends heartbeat messages to maintain its
// leadership every HeartbeatTick ticks.
HeartbeatTick [int](/builtin#int)
// Storage is the storage for raft. raft generates entries and states to be
// stored in storage. raft reads the persisted entries and states out of
// Storage when it needs. raft reads out the previous state and configuration
// out of storage when restarting.
Storage [Storage](#Storage)
// Applied is the last applied index. It should only be set when restarting
// raft. raft will not return entries to the application smaller or equal to
// Applied. If Applied is unset when restarting, raft might return previous
// applied entries. This is a very application dependent configuration.
Applied [uint64](/builtin#uint64)
// MaxSizePerMsg limits the max byte size of each append message. Smaller
// value lowers the raft recovery cost(initial probing and message lost
// during normal operation). On the other side, it might affect the
// throughput during normal replication. Note: math.MaxUint64 for unlimited,
// 0 for at most one entry per message.
MaxSizePerMsg [uint64](/builtin#uint64)
// MaxCommittedSizePerReady limits the size of the committed entries which
// can be applied.
MaxCommittedSizePerReady [uint64](/builtin#uint64)
// MaxUncommittedEntriesSize limits the aggregate byte size of the
// uncommitted entries that may be appended to a leader's log. Once this
// limit is exceeded, proposals will begin to return ErrProposalDropped
// errors. Note: 0 for no limit.
MaxUncommittedEntriesSize [uint64](/builtin#uint64)
// MaxInflightMsgs limits the max number of in-flight append messages during
// optimistic replication phase. The application transportation layer usually
// has its own sending buffer over TCP/UDP. Setting MaxInflightMsgs to avoid
// overflowing that sending buffer. TODO (xiangli): feedback to application to
// limit the proposal rate?
MaxInflightMsgs [int](/builtin#int)
// CheckQuorum specifies if the leader should check quorum activity. Leader
// steps down when quorum is not active for an electionTimeout.
CheckQuorum [bool](/builtin#bool)
// PreVote enables the Pre-Vote algorithm described in raft thesis section
// 9.6. This prevents disruption when a node that has been partitioned away
// rejoins the cluster.
PreVote [bool](/builtin#bool)
// ReadOnlyOption specifies how the read only request is processed.
//
// ReadOnlySafe guarantees the linearizability of the read only request by
// communicating with the quorum. It is the default and suggested option.
//
// ReadOnlyLeaseBased ensures linearizability of the read only request by
// relying on the leader lease. It can be affected by clock drift.
// If the clock drift is unbounded, leader might keep the lease longer than it
// should (clock can move backward/pause without any bound). ReadIndex is not safe
// in that case.
// CheckQuorum MUST be enabled if ReadOnlyOption is ReadOnlyLeaseBased.
ReadOnlyOption [ReadOnlyOption](#ReadOnlyOption)
// Logger is the logger used for raft log. For multinode which can host
// multiple raft group, each raft group can have its own logger
Logger [Logger](#Logger)
// DisableProposalForwarding set to true means that followers will drop
// proposals, rather than forwarding them to the leader. One use case for
// this feature would be in a situation where the Raft leader is used to
// compute the data of a proposal, for example, adding a timestamp from a
// hybrid logical clock to data in a monotonically increasing way. Forwarding
// should be disabled to prevent a follower with an inaccurate hybrid
// logical clock from assigning the timestamp and then forwarding the data
// to the leader.
DisableProposalForwarding [bool](/builtin#bool)
}
```
Config contains the parameters to start a raft.
####
type [DefaultLogger](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L73) [¶](#DefaultLogger)
```
type DefaultLogger struct {
*[log](/log).[Logger](/log#Logger)
// contains filtered or unexported fields
}
```
DefaultLogger is a default implementation of the Logger interface.
####
func (*DefaultLogger) [Debug](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L86) [¶](#DefaultLogger.Debug)
```
func (l *[DefaultLogger](#DefaultLogger)) Debug(v ...interface{})
```
####
func (*DefaultLogger) [Debugf](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L92) [¶](#DefaultLogger.Debugf)
```
func (l *[DefaultLogger](#DefaultLogger)) Debugf(format [string](/builtin#string), v ...interface{})
```
####
func (*DefaultLogger) [EnableDebug](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L82) [¶](#DefaultLogger.EnableDebug)
```
func (l *[DefaultLogger](#DefaultLogger)) EnableDebug()
```
####
func (*DefaultLogger) [EnableTimestamps](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L78) [¶](#DefaultLogger.EnableTimestamps)
```
func (l *[DefaultLogger](#DefaultLogger)) EnableTimestamps()
```
####
func (*DefaultLogger) [Error](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L106) [¶](#DefaultLogger.Error)
```
func (l *[DefaultLogger](#DefaultLogger)) Error(v ...interface{})
```
####
func (*DefaultLogger) [Errorf](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L110) [¶](#DefaultLogger.Errorf)
```
func (l *[DefaultLogger](#DefaultLogger)) Errorf(format [string](/builtin#string), v ...interface{})
```
####
func (*DefaultLogger) [Fatal](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L122) [¶](#DefaultLogger.Fatal)
```
func (l *[DefaultLogger](#DefaultLogger)) Fatal(v ...interface{})
```
####
func (*DefaultLogger) [Fatalf](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L127) [¶](#DefaultLogger.Fatalf)
```
func (l *[DefaultLogger](#DefaultLogger)) Fatalf(format [string](/builtin#string), v ...interface{})
```
####
func (*DefaultLogger) [Info](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L98) [¶](#DefaultLogger.Info)
```
func (l *[DefaultLogger](#DefaultLogger)) Info(v ...interface{})
```
####
func (*DefaultLogger) [Infof](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L102) [¶](#DefaultLogger.Infof)
```
func (l *[DefaultLogger](#DefaultLogger)) Infof(format [string](/builtin#string), v ...interface{})
```
####
func (*DefaultLogger) [Panic](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L132) [¶](#DefaultLogger.Panic)
```
func (l *[DefaultLogger](#DefaultLogger)) Panic(v ...interface{})
```
####
func (*DefaultLogger) [Panicf](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L136) [¶](#DefaultLogger.Panicf)
```
func (l *[DefaultLogger](#DefaultLogger)) Panicf(format [string](/builtin#string), v ...interface{})
```
####
func (*DefaultLogger) [Warning](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L114) [¶](#DefaultLogger.Warning)
```
func (l *[DefaultLogger](#DefaultLogger)) Warning(v ...interface{})
```
####
func (*DefaultLogger) [Warningf](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L118) [¶](#DefaultLogger.Warningf)
```
func (l *[DefaultLogger](#DefaultLogger)) Warningf(format [string](/builtin#string), v ...interface{})
```
####
type [EntryFormatter](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L129) [¶](#EntryFormatter)
```
type EntryFormatter func([][byte](/builtin#byte)) [string](/builtin#string)
```
EntryFormatter can be implemented by the application to provide human-readable formatting of entry data. Nil is a valid EntryFormatter and will use a default format.
####
type [Logger](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/logger.go#L25) [¶](#Logger)
```
type Logger interface {
Debug(v ...interface{})
Debugf(format [string](/builtin#string), v ...interface{})
Error(v ...interface{})
Errorf(format [string](/builtin#string), v ...interface{})
Info(v ...interface{})
Infof(format [string](/builtin#string), v ...interface{})
Warning(v ...interface{})
Warningf(format [string](/builtin#string), v ...interface{})
Fatal(v ...interface{})
Fatalf(format [string](/builtin#string), v ...interface{})
Panic(v ...interface{})
Panicf(format [string](/builtin#string), v ...interface{})
}
```
####
type [MemoryStorage](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L76) [¶](#MemoryStorage)
```
type MemoryStorage struct {
// Protects access to all fields. Most methods of MemoryStorage are
// run on the raft goroutine, but Append() is run on an application
// goroutine.
[sync](/sync).[Mutex](/sync#Mutex)
// contains filtered or unexported fields
}
```
MemoryStorage implements the Storage interface backed by an in-memory array.
####
func [NewMemoryStorage](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L89) [¶](#NewMemoryStorage)
```
func NewMemoryStorage() *[MemoryStorage](#MemoryStorage)
```
NewMemoryStorage creates an empty MemoryStorage.
####
func (*MemoryStorage) [Append](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L241) [¶](#MemoryStorage.Append)
```
func (ms *[MemoryStorage](#MemoryStorage)) Append(entries [][pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Entry](/go.etcd.io/etcd/raft/[email protected]/raftpb#Entry)) [error](/builtin#error)
```
Append the new entries to storage.
TODO (xiangli): ensure the entries are continuous and entries[0].Index > ms.entries[0].Index
####
func (*MemoryStorage) [ApplySnapshot](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L174) [¶](#MemoryStorage.ApplySnapshot)
```
func (ms *[MemoryStorage](#MemoryStorage)) ApplySnapshot(snap [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Snapshot](/go.etcd.io/etcd/raft/[email protected]/raftpb#Snapshot)) [error](/builtin#error)
```
ApplySnapshot overwrites the contents of this Storage object with those of the given snapshot.
####
func (*MemoryStorage) [Compact](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L218) [¶](#MemoryStorage.Compact)
```
func (ms *[MemoryStorage](#MemoryStorage)) Compact(compactIndex [uint64](/builtin#uint64)) [error](/builtin#error)
```
Compact discards all log entries prior to compactIndex.
It is the application's responsibility to not attempt to compact an index greater than raftLog.applied.
####
func (*MemoryStorage) [CreateSnapshot](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L194) [¶](#MemoryStorage.CreateSnapshot)
```
func (ms *[MemoryStorage](#MemoryStorage)) CreateSnapshot(i [uint64](/builtin#uint64), cs *[pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[ConfState](/go.etcd.io/etcd/raft/[email protected]/raftpb#ConfState), data [][byte](/builtin#byte)) ([pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Snapshot](/go.etcd.io/etcd/raft/[email protected]/raftpb#Snapshot), [error](/builtin#error))
```
CreateSnapshot makes a snapshot which can be retrieved with Snapshot() and can be used to reconstruct the state at that point.
If any configuration changes have been made since the last compaction,
the result of the last ApplyConfChange must be passed in.
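CreateSnapshot and Compact are typically used together once the application has applied up to some index and captured its own state. A sketch follows, where `ms`, `appliedIndex`, `confState` and `appData` are assumed to come from the application.
```
// Record a snapshot at the application's applied index, then drop the log
// entries that the snapshot now covers. Error handling is abbreviated.
snap, err := ms.CreateSnapshot(appliedIndex, &confState, appData)
if err != nil {
// e.g. ErrSnapOutOfDate if a newer snapshot already exists
}
_ = snap // typically also persisted to disk by the application
if err := ms.Compact(appliedIndex); err != nil {
// e.g. ErrCompacted if this index has already been compacted
}
```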
####
func (*MemoryStorage) [Entries](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L110) [¶](#MemoryStorage.Entries)
```
func (ms *[MemoryStorage](#MemoryStorage)) Entries(lo, hi, maxSize [uint64](/builtin#uint64)) ([][pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Entry](/go.etcd.io/etcd/raft/[email protected]/raftpb#Entry), [error](/builtin#error))
```
Entries implements the Storage interface.
####
func (*MemoryStorage) [FirstIndex](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L155) [¶](#MemoryStorage.FirstIndex)
```
func (ms *[MemoryStorage](#MemoryStorage)) FirstIndex() ([uint64](/builtin#uint64), [error](/builtin#error))
```
FirstIndex implements the Storage interface.
####
func (*MemoryStorage) [InitialState](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L97) [¶](#MemoryStorage.InitialState)
```
func (ms *[MemoryStorage](#MemoryStorage)) InitialState() ([pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[HardState](/go.etcd.io/etcd/raft/[email protected]/raftpb#HardState), [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[ConfState](/go.etcd.io/etcd/raft/[email protected]/raftpb#ConfState), [error](/builtin#error))
```
InitialState implements the Storage interface.
####
func (*MemoryStorage) [LastIndex](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L144) [¶](#MemoryStorage.LastIndex)
```
func (ms *[MemoryStorage](#MemoryStorage)) LastIndex() ([uint64](/builtin#uint64), [error](/builtin#error))
```
LastIndex implements the Storage interface.
####
func (*MemoryStorage) [SetHardState](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L102) [¶](#MemoryStorage.SetHardState)
```
func (ms *[MemoryStorage](#MemoryStorage)) SetHardState(st [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[HardState](/go.etcd.io/etcd/raft/[email protected]/raftpb#HardState)) [error](/builtin#error)
```
SetHardState saves the current HardState.
####
func (*MemoryStorage) [Snapshot](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L166) [¶](#MemoryStorage.Snapshot)
```
func (ms *[MemoryStorage](#MemoryStorage)) Snapshot() ([pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Snapshot](/go.etcd.io/etcd/raft/[email protected]/raftpb#Snapshot), [error](/builtin#error))
```
Snapshot implements the Storage interface.
####
func (*MemoryStorage) [Term](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L130) [¶](#MemoryStorage.Term)
```
func (ms *[MemoryStorage](#MemoryStorage)) Term(i [uint64](/builtin#uint64)) ([uint64](/builtin#uint64), [error](/builtin#error))
```
Term implements the Storage interface.
####
type [Node](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/node.go#L126) [¶](#Node)
```
type Node interface {
// Tick increments the internal logical clock for the Node by a single tick. Election
// timeouts and heartbeat timeouts are in units of ticks.
Tick()
// Campaign causes the Node to transition to candidate state and start campaigning to become leader.
Campaign(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error)
// Propose proposes that data be appended to the log. Note that proposals can be lost without
// notice, therefore it is user's job to ensure proposal retries.
Propose(ctx [context](/context).[Context](/context#Context), data [][byte](/builtin#byte)) [error](/builtin#error)
// ProposeConfChange proposes a configuration change. Like any proposal, the
// configuration change may be dropped with or without an error being
// returned. In particular, configuration changes are dropped unless the
// leader has certainty that there is no prior unapplied configuration
// change in its log.
//
// The method accepts either a pb.ConfChange (deprecated) or pb.ConfChangeV2
// message. The latter allows arbitrary configuration changes via joint
// consensus, notably including replacing a voter. Passing a ConfChangeV2
// message is only allowed if all Nodes participating in the cluster run a
// version of this library aware of the V2 API. See pb.ConfChangeV2 for
// usage details and semantics.
ProposeConfChange(ctx [context](/context).[Context](/context#Context), cc [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[ConfChangeI](/go.etcd.io/etcd/raft/[email protected]/raftpb#ConfChangeI)) [error](/builtin#error)
// Step advances the state machine using the given message. ctx.Err() will be returned, if any.
Step(ctx [context](/context).[Context](/context#Context), msg [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Message](/go.etcd.io/etcd/raft/[email protected]/raftpb#Message)) [error](/builtin#error)
// Ready returns a channel that returns the current point-in-time state.
// Users of the Node must call Advance after retrieving the state returned by Ready.
//
// NOTE: No committed entries from the next Ready may be applied until all committed entries
// and snapshots from the previous one have finished.
Ready() <-chan [Ready](#Ready)
// Advance notifies the Node that the application has saved progress up to the last Ready.
// It prepares the node to return the next available Ready.
//
// The application should generally call Advance after it applies the entries in last Ready.
//
// However, as an optimization, the application may call Advance while it is applying the
// commands. For example. when the last Ready contains a snapshot, the application might take
// a long time to apply the snapshot data. To continue receiving Ready without blocking raft
// progress, it can call Advance before finishing applying the last ready.
Advance()
// ApplyConfChange applies a config change (previously passed to
// ProposeConfChange) to the node. This must be called whenever a config
// change is observed in Ready.CommittedEntries, except when the app decides
// to reject the configuration change (i.e. treats it as a noop instead), in
// which case it must not be called.
//
// Returns an opaque non-nil ConfState protobuf which must be recorded in
// snapshots.
ApplyConfChange(cc [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[ConfChangeI](/go.etcd.io/etcd/raft/[email protected]/raftpb#ConfChangeI)) *[pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[ConfState](/go.etcd.io/etcd/raft/[email protected]/raftpb#ConfState)
// TransferLeadership attempts to transfer leadership to the given transferee.
TransferLeadership(ctx [context](/context).[Context](/context#Context), lead, transferee [uint64](/builtin#uint64))
// ReadIndex request a read state. The read state will be set in the ready.
// Read state has a read index. Once the application advances further than the read
// index, any linearizable read requests issued before the read request can be
// processed safely. The read state will have the same rctx attached.
// Note that request can be lost without notice, therefore it is user's job
// to ensure read index retries.
ReadIndex(ctx [context](/context).[Context](/context#Context), rctx [][byte](/builtin#byte)) [error](/builtin#error)
// Status returns the current status of the raft state machine.
Status() [Status](#Status)
// ReportUnreachable reports the given node is not reachable for the last send.
ReportUnreachable(id [uint64](/builtin#uint64))
// ReportSnapshot reports the status of the sent snapshot. The id is the raft ID of the follower
// who is meant to receive the snapshot, and the status is SnapshotFinish or SnapshotFailure.
// Calling ReportSnapshot with SnapshotFinish is a no-op. But, any failure in applying a
// snapshot (for e.g., while streaming it from leader to follower), should be reported to the
// leader with SnapshotFailure. When leader sends a snapshot to a follower, it pauses any raft
// log probes until the follower can apply the snapshot and advance its state. If the follower
// can't do that, for e.g., due to a crash, it could end up in a limbo, never getting any
// updates from the leader. Therefore, it is crucial that the application ensures that any
// failure in snapshot sending is caught and reported back to the leader; so it can resume raft
// log probing in the follower.
ReportSnapshot(id [uint64](/builtin#uint64), status [SnapshotStatus](#SnapshotStatus))
// Stop performs any necessary termination of the Node.
Stop()
}
```
Node represents a node in a raft cluster.
Example [¶](#example-Node)
```
package main
import (
pb "go.etcd.io/etcd/raft/v3/raftpb"
)
func applyToStore(ents []pb.Entry) {}
func sendMessages(msgs []pb.Message) {}
func saveStateToDisk(st pb.HardState) {}
func saveToDisk(ents []pb.Entry) {}
func main() {
c := &Config{}
n := StartNode(c, nil)
defer n.Stop()
// stuff to n happens in other goroutines
// the last known state
var prev pb.HardState
for {
// Ready blocks until there is new state ready.
rd := <-n.Ready()
if !isHardStateEqual(prev, rd.HardState) {
saveStateToDisk(rd.HardState)
prev = rd.HardState
}
saveToDisk(rd.Entries)
go applyToStore(rd.CommittedEntries)
sendMessages(rd.Messages)
}
}
```
####
func [RestartNode](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/node.go#L238) [¶](#RestartNode)
```
func RestartNode(c *[Config](#Config)) [Node](#Node)
```
RestartNode is similar to StartNode but does not take a list of peers.
The current membership of the cluster will be restored from the Storage.
If the caller has an existing state machine, pass in the last log index that has been applied to it; otherwise use zero.
####
func [StartNode](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/node.go#L218) [¶](#StartNode)
```
func StartNode(c *[Config](#Config), peers [][Peer](#Peer)) [Node](#Node)
```
StartNode returns a new Node given configuration and a list of raft peers.
It appends a ConfChangeAddNode entry for each given peer to the initial log.
Peers must not be zero length; call RestartNode in that case.
####
type [Peer](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/node.go#L209) [¶](#Peer)
```
type Peer struct {
ID [uint64](/builtin#uint64)
Context [][byte](/builtin#byte)
}
```
####
type [ProgressType](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L195) [¶](#ProgressType)
```
type ProgressType [byte](/builtin#byte)
```
ProgressType indicates the type of replica a Progress corresponds to.
```
const (
// ProgressTypePeer accompanies a Progress for a regular peer replica.
ProgressTypePeer [ProgressType](#ProgressType) = [iota](/builtin#iota)
// ProgressTypeLearner accompanies a Progress for a learner replica.
ProgressTypeLearner
)
```
####
type [RawNode](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L34) [¶](#RawNode)
```
type RawNode struct {
// contains filtered or unexported fields
}
```
RawNode is a thread-unsafe Node.
The methods of this struct correspond to the methods of Node and are described more fully there.
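Since RawNode has no internal goroutine, the application drives it directly. Below is a sketch of the equivalent of the Node loop, assuming a `storage`, a `ticker`, an `incoming` channel of peer messages, and the saveToStorage/send helpers sketched in the overview.
```
rn, err := NewRawNode(&Config{
ID: 0x01,
ElectionTick: 10,
HeartbeatTick: 1,
Storage: storage,
MaxSizePerMsg: 4096,
MaxInflightMsgs: 256,
})
if err != nil {
panic(err)
}
for {
select {
case <-ticker.C:
rn.Tick()
case m := <-incoming: // raftpb.Message received from a peer
rn.Step(m)
}
for rn.HasReady() {
rd := rn.Ready()
saveToStorage(rd.HardState, rd.Entries, rd.Snapshot)
send(rd.Messages)
// apply rd.Snapshot and rd.CommittedEntries to the state machine here
rn.Advance(rd)
}
}
```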
####
func [NewRawNode](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L47) [¶](#NewRawNode)
```
func NewRawNode(config *[Config](#Config)) (*[RawNode](#RawNode), [error](/builtin#error))
```
NewRawNode instantiates a RawNode from the given configuration.
See Bootstrap() for bootstrapping an initial state; this replaces the former
'peers' argument to this method (with identical behavior). However, it is recommended that instead of calling Bootstrap, applications bootstrap their state manually by setting up a Storage that has a first index > 1 and which stores the desired ConfState as its InitialState.
####
func (*RawNode) [Advance](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L174) [¶](#RawNode.Advance)
```
func (rn *[RawNode](#RawNode)) Advance(rd [Ready](#Ready))
```
Advance notifies the RawNode that the application has applied and saved progress in the last Ready results.
####
func (*RawNode) [ApplyConfChange](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L104) [¶](#RawNode.ApplyConfChange)
```
func (rn *[RawNode](#RawNode)) ApplyConfChange(cc [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[ConfChangeI](/go.etcd.io/etcd/raft/[email protected]/raftpb#ConfChangeI)) *[pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[ConfState](/go.etcd.io/etcd/raft/[email protected]/raftpb#ConfState)
```
ApplyConfChange applies a config change to the local node. The app must call this when it applies a configuration change, except when it decides to reject the configuration change, in which case no call must take place.
####
func (*RawNode) [BasicStatus](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L190) [¶](#RawNode.BasicStatus)
```
func (rn *[RawNode](#RawNode)) BasicStatus() [BasicStatus](#BasicStatus)
```
BasicStatus returns a BasicStatus. Notably this does not contain the Progress map; see WithProgress for an allocation-free way to inspect it.
####
func (*RawNode) [Bootstrap](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/bootstrap.go#L30) [¶](#RawNode.Bootstrap)
```
func (rn *[RawNode](#RawNode)) Bootstrap(peers [][Peer](#Peer)) [error](/builtin#error)
```
Bootstrap initializes the RawNode for first use by appending configuration changes for the supplied peers. This method returns an error if the Storage is nonempty.
It is recommended that instead of calling this method, applications bootstrap their state manually by setting up a Storage that has a first index > 1 and which stores the desired ConfState as its InitialState.
####
func (*RawNode) [Campaign](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L75) [¶](#RawNode.Campaign)
```
func (rn *[RawNode](#RawNode)) Campaign() [error](/builtin#error)
```
Campaign causes this RawNode to transition to candidate state.
####
func (*RawNode) [HasReady](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L152) [¶](#RawNode.HasReady)
```
func (rn *[RawNode](#RawNode)) HasReady() [bool](/builtin#bool)
```
HasReady is called when the RawNode user needs to check whether any Ready is pending.
The checking logic in this method should be consistent with Ready.containsUpdates().
####
func (*RawNode) [Propose](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L82) [¶](#RawNode.Propose)
```
func (rn *[RawNode](#RawNode)) Propose(data [][byte](/builtin#byte)) [error](/builtin#error)
```
Propose proposes data be appended to the raft log.
####
func (*RawNode) [ProposeConfChange](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L93) [¶](#RawNode.ProposeConfChange)
```
func (rn *[RawNode](#RawNode)) ProposeConfChange(cc [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[ConfChangeI](/go.etcd.io/etcd/raft/[email protected]/raftpb#ConfChangeI)) [error](/builtin#error)
```
ProposeConfChange proposes a config change. See (Node).ProposeConfChange for details.
####
func (*RawNode) [ReadIndex](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L239) [¶](#RawNode.ReadIndex)
```
func (rn *[RawNode](#RawNode)) ReadIndex(rctx [][byte](/builtin#byte))
```
ReadIndex requests a read state. The read state will be set in ready.
Read State has a read index. Once the application advances further than the read index, any linearizable read requests issued before the read request can be processed safely. The read state will have the same rctx attached.
####
func (*RawNode) [Ready](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L125) [¶](#RawNode.Ready)
```
func (rn *[RawNode](#RawNode)) Ready() [Ready](#Ready)
```
Ready returns the outstanding work that the application needs to handle. This includes appending and applying entries or a snapshot, updating the HardState,
and sending messages. The returned Ready() *must* be handled and subsequently passed back via Advance().
####
func (*RawNode) [ReportSnapshot](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L224) [¶](#RawNode.ReportSnapshot)
```
func (rn *[RawNode](#RawNode)) ReportSnapshot(id [uint64](/builtin#uint64), status [SnapshotStatus](#SnapshotStatus))
```
ReportSnapshot reports the status of the sent snapshot.
####
func (*RawNode) [ReportUnreachable](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L219) [¶](#RawNode.ReportUnreachable)
```
func (rn *[RawNode](#RawNode)) ReportUnreachable(id [uint64](/builtin#uint64))
```
ReportUnreachable reports the given node is not reachable for the last send.
####
func (*RawNode) [Status](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L183) [¶](#RawNode.Status)
```
func (rn *[RawNode](#RawNode)) Status() [Status](#Status)
```
Status returns the current status of the given group. This allocates, see BasicStatus and WithProgress for allocation-friendlier choices.
####
func (*RawNode) [Step](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L110) [¶](#RawNode.Step)
```
func (rn *[RawNode](#RawNode)) Step(m [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Message](/go.etcd.io/etcd/raft/[email protected]/raftpb#Message)) [error](/builtin#error)
```
Step advances the state machine using the given message.
####
func (*RawNode) [Tick](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L58) [¶](#RawNode.Tick)
```
func (rn *[RawNode](#RawNode)) Tick()
```
Tick advances the internal logical clock by a single tick.
####
func (*RawNode) [TickQuiesced](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L70) [¶](#RawNode.TickQuiesced)
```
func (rn *[RawNode](#RawNode)) TickQuiesced()
```
TickQuiesced advances the internal logical clock by a single tick without performing any other state machine processing. It allows the caller to avoid periodic heartbeats and elections when all of the peers in a Raft group are known to be at the same state. Expected usage is to periodically invoke Tick or TickQuiesced depending on whether the group is "active" or "quiesced".
WARNING: Be very careful about using this method as it subverts the Raft state machine. You should probably be using Tick instead.
####
func (*RawNode) [TransferLeader](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L231) [¶](#RawNode.TransferLeader)
```
func (rn *[RawNode](#RawNode)) TransferLeader(transferee [uint64](/builtin#uint64))
```
TransferLeader tries to transfer leadership to the given transferee.
####
func (*RawNode) [WithProgress](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/rawnode.go#L206) [¶](#RawNode.WithProgress)
```
func (rn *[RawNode](#RawNode)) WithProgress(visitor func(id [uint64](/builtin#uint64), typ [ProgressType](#ProgressType), pr [tracker](/go.etcd.io/etcd/raft/[email protected]/tracker).[Progress](/go.etcd.io/etcd/raft/[email protected]/tracker#Progress)))
```
WithProgress is a helper to introspect the Progress for this node and its peers.
####
type [ReadOnlyOption](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/raft.go#L47) [¶](#ReadOnlyOption)
```
type ReadOnlyOption [int](/builtin#int)
```
```
const (
// ReadOnlySafe guarantees the linearizability of the read only request by
// communicating with the quorum. It is the default and suggested option.
ReadOnlySafe [ReadOnlyOption](#ReadOnlyOption) = [iota](/builtin#iota)
// ReadOnlyLeaseBased ensures linearizability of the read only request by
// relying on the leader lease. It can be affected by clock drift.
// If the clock drift is unbounded, leader might keep the lease longer than it
// should (clock can move backward/pause without any bound). ReadIndex is not safe
// in that case.
ReadOnlyLeaseBased
)
```
####
type [ReadState](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/read_only.go#L24) [¶](#ReadState)
```
type ReadState struct {
Index [uint64](/builtin#uint64)
RequestCtx [][byte](/builtin#byte)
}
```
ReadState provides state for read only query.
It is the caller's responsibility to call ReadIndex first before getting this state from Ready; it is also the caller's duty to check whether this state is the one it requested by matching RequestCtx, e.g. by attaching a unique id as RequestCtx.
####
type [Ready](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/node.go#L52) [¶](#Ready)
```
type Ready struct {
// The current volatile state of a Node.
// SoftState will be nil if there is no update.
// It is not required to consume or store SoftState.
*[SoftState](#SoftState)
// The current state of a Node to be saved to stable storage BEFORE
// Messages are sent.
// HardState will be equal to empty state if there is no update.
[pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[HardState](/go.etcd.io/etcd/raft/[email protected]/raftpb#HardState)
// ReadStates can be used for node to serve linearizable read requests locally
// when its applied index is greater than the index in ReadState.
// Note that the readState will be returned when raft receives msgReadIndex.
// The returned is only valid for the request that requested to read.
ReadStates [][ReadState](#ReadState)
// Entries specifies entries to be saved to stable storage BEFORE
// Messages are sent.
Entries [][pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Entry](/go.etcd.io/etcd/raft/[email protected]/raftpb#Entry)
// Snapshot specifies the snapshot to be saved to stable storage.
Snapshot [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Snapshot](/go.etcd.io/etcd/raft/[email protected]/raftpb#Snapshot)
// CommittedEntries specifies entries to be committed to a
// store/state-machine. These have previously been committed to stable
// store.
CommittedEntries [][pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Entry](/go.etcd.io/etcd/raft/[email protected]/raftpb#Entry)
// Messages specifies outbound messages to be sent AFTER Entries are
// committed to stable storage.
// If it contains a MsgSnap message, the application MUST report back to raft
// when the snapshot has been received or has failed by calling ReportSnapshot.
Messages [][pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Message](/go.etcd.io/etcd/raft/[email protected]/raftpb#Message)
// MustSync indicates whether the HardState and Entries must be synchronously
// written to disk or if an asynchronous write is permissible.
MustSync [bool](/builtin#bool)
}
```
Ready encapsulates the entries and messages that are ready to read,
be saved to stable storage, committed or sent to other peers.
All fields in Ready are read-only.
####
type [SnapshotStatus](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/node.go#L24) [¶](#SnapshotStatus)
```
type SnapshotStatus [int](/builtin#int)
```
```
const (
SnapshotFinish [SnapshotStatus](#SnapshotStatus) = 1
SnapshotFailure [SnapshotStatus](#SnapshotStatus) = 2
)
```
####
type [SoftState](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/node.go#L40) [¶](#SoftState)
```
type SoftState struct {
Lead [uint64](/builtin#uint64) // must use atomic operations to access; keep 64-bit aligned.
RaftState [StateType](#StateType)
}
```
SoftState provides state that is useful for logging and debugging.
The state is volatile and does not need to be persisted to the WAL.
####
type [StateType](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/raft.go#L102) [¶](#StateType)
```
type StateType [uint64](/builtin#uint64)
```
StateType represents the role of a node in a cluster.
```
const (
StateFollower [StateType](#StateType) = [iota](/builtin#iota)
StateCandidate
StateLeader
StatePreCandidate
)
```
Possible values for StateType.
####
func (StateType) [MarshalJSON](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/util.go#L25) [¶](#StateType.MarshalJSON)
```
func (st [StateType](#StateType)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (StateType) [String](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/raft.go#L111) [¶](#StateType.String)
```
func (st [StateType](#StateType)) String() [string](/builtin#string)
```
####
type [Status](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/status.go#L26) [¶](#Status)
```
type Status struct {
[BasicStatus](#BasicStatus)
Config [tracker](/go.etcd.io/etcd/raft/[email protected]/tracker).[Config](/go.etcd.io/etcd/raft/[email protected]/tracker#Config)
Progress map[[uint64](/builtin#uint64)][tracker](/go.etcd.io/etcd/raft/[email protected]/tracker).[Progress](/go.etcd.io/etcd/raft/[email protected]/tracker#Progress)
}
```
Status contains information about this Raft peer and its view of the system.
The Progress is only populated on the leader.
####
func (Status) [MarshalJSON](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/status.go#L80) [¶](#Status.MarshalJSON)
```
func (s [Status](#Status)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON translates the raft status into JSON.
TODO: try to simplify this by introducing ID type into raft
####
func (Status) [String](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/status.go#L99) [¶](#Status.String)
```
func (s [Status](#Status)) String() [string](/builtin#string)
```
####
type [Storage](https://github.com/etcd-io/etcd/blob/raft/v3.5.9/raft/storage.go#L46) [¶](#Storage)
```
type Storage interface {
// InitialState returns the saved HardState and ConfState information.
InitialState() ([pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[HardState](/go.etcd.io/etcd/raft/[email protected]/raftpb#HardState), [pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[ConfState](/go.etcd.io/etcd/raft/[email protected]/raftpb#ConfState), [error](/builtin#error))
// Entries returns a slice of log entries in the range [lo,hi).
// MaxSize limits the total size of the log entries returned, but
// Entries returns at least one entry if any.
Entries(lo, hi, maxSize [uint64](/builtin#uint64)) ([][pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Entry](/go.etcd.io/etcd/raft/[email protected]/raftpb#Entry), [error](/builtin#error))
// Term returns the term of entry i, which must be in the range
// [FirstIndex()-1, LastIndex()]. The term of the entry before
// FirstIndex is retained for matching purposes even though the
// rest of that entry may not be available.
Term(i [uint64](/builtin#uint64)) ([uint64](/builtin#uint64), [error](/builtin#error))
// LastIndex returns the index of the last entry in the log.
LastIndex() ([uint64](/builtin#uint64), [error](/builtin#error))
// FirstIndex returns the index of the first log entry that is
// possibly available via Entries (older entries have been incorporated
// into the latest Snapshot; if storage only contains the dummy entry the
// first log entry is not available).
FirstIndex() ([uint64](/builtin#uint64), [error](/builtin#error))
// Snapshot returns the most recent snapshot.
// If snapshot is temporarily unavailable, it should return ErrSnapshotTemporarilyUnavailable,
// so raft state machine could know that Storage needs some time to prepare
// snapshot and call Snapshot later.
Snapshot() ([pb](/go.etcd.io/etcd/raft/[email protected]/raftpb).[Snapshot](/go.etcd.io/etcd/raft/[email protected]/raftpb#Snapshot), [error](/builtin#error))
}
```
Storage is an interface that may be implemented by the application to retrieve log entries from storage.
If any Storage method returns an error, the raft instance will become inoperable and refuse to participate in elections; the application is responsible for cleanup and recovery in this case.
explore | cran | R | Package ‘explore’
October 11, 2023
Type Package
Title Simplifies Exploratory Data Analysis
Version 1.1.0
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Interactive data exploration with one line of code, automated
reporting or use an easy to remember set of tidy functions for low code
exploratory data analysis.
License MIT + file LICENSE
Encoding UTF-8
URL https://rolkra.github.io/explore/,
https://github.com/rolkra/explore
Depends R (>= 3.5.0)
Imports dplyr (>= 1.1.0), DT (>= 0.3.0), forcats (>= 1.0.0), ggplot2
(>= 3.4.0), gridExtra, magrittr, palmerpenguins, rlang (>=
1.1.0), rmarkdown, rpart, rpart.plot, shiny, stats, stringr,
tibble
RoxygenNote 7.2.3
Suggests knitr, MASS, randomForest, testthat (>= 3.0.0)
VignetteBuilder knitr
Config/testthat/edition 3
BugReports https://github.com/rolkra/explore/issues
NeedsCompilation no
Repository CRAN
Date/Publication 2023-10-11 08:00:02 UTC
R topics documented:
abtes... 3
abtest_shin... 4
abtest_targetnu... 5
abtest_targetpc... 6
add_var_i... 7
add_var_random_0... 7
add_var_random_ca... 8
add_var_random_db... 9
add_var_random_in... 10
add_var_random_moo... 11
add_var_random_starsig... 11
balance_targe... 12
clean_va... 13
count_pc... 14
create_data_ap... 14
create_data_bu... 15
create_data_chur... 16
create_data_empt... 17
create_data_newslette... 17
create_data_perso... 18
create_data_rando... 19
create_data_unfai... 20
create_notebook_explor... 21
data_dict_m... 21
decryp... 22
describ... 23
describe_al... 24
describe_ca... 24
describe_nu... 25
describe_tb... 26
encryp... 26
explain_fores... 27
explain_logre... 28
explain_tre... 28
explor... 30
explore_al... 31
explore_ba... 32
explore_co... 33
explore_coun... 34
explore_densit... 35
explore_shin... 36
explore_targetpc... 37
explore_tb... 38
format_num_aut... 38
format_num_kM... 39
format_num_spac... 39
format_targe... 40
format_typ... 40
get_nro... 41
get_typ... 41
get_var_bucket... 42
guess_cat_nu... 42
plot_legend_targetpc... 43
plot_tex... 43
plot_var_inf... 44
predict_targe... 44
replace_na_wit... 45
repor... 46
rescale0... 46
simplify_tex... 47
target_explore_ca... 47
target_explore_nu... 48
total_fig_heigh... 49
use_data_bee... 50
use_data_diamond... 50
use_data_iri... 51
use_data_mp... 51
use_data_mtcar... 52
use_data_penguin... 52
use_data_starwar... 53
use_data_titani... 53
weight_targe... 54
abtest A/B testing
Description
A/B testing
Usage
abtest(data, expr, n, target, sign_level = 0.05)
Arguments
data A dataset. If no data is provided, a shiny app is launched
expr Logical expression that returns FALSE/TRUE
n A Variable for number of observations (count data)
target Target variable
sign_level Significance Level (typical 0.01/0.05/0.10)
Value
Plot that shows if difference is significant
Examples
## Using chi2-test or t-test depending on target type
data <- create_data_buy(obs = 100)
abtest(data, female_ind == 1, target = buy) # chi2 test
abtest(data, city_ind == 1, target = age) # t test
## If small number of observations, Fisher's Exact test
## is used for a binary target (if <= 5 observations in a subgroup)
data <- create_data_buy(obs = 25, seed = 1)
abtest(data, female_ind == 1, target = buy) # Fisher's Exact test
abtest_shiny A/B testing interactive
Description
Launches a shiny app to A/B test
Usage
abtest_shiny(
size_a = 100,
size_b = 100,
success_a = 10,
success_b = 20,
success_unit = "percent",
sign_level = 0.05
)
Arguments
size_a Size of Group A
size_b Size of Group B
success_a Success of Group A
success_b Success of Group B
success_unit "count" | "percent"
sign_level Significance Level (typical 0.01/0.05/0.10)
Examples
# Only run examples in interactive R sessions
if (interactive()) {
abtest_shiny()
}
abtest_targetnum A/B testing comparing two mean
Description
A/B testing comparing two mean
Usage
abtest_targetnum(data, expr, target, sign_level = 0.05)
Arguments
data A dataset
expr Expression that results in FALSE/TRUE
target Target variable (must be numeric)
sign_level Significance Level (typical 0.01/0.05/0.10)
Value
Plot that shows if difference is significant
Examples
data <- create_data_buy(obs = 100)
abtest(data, city_ind == 1, target = age)
abtest_targetpct A/B testing comparing percent per group
Description
A/B testing comparing percent per group
Usage
abtest_targetpct(
data,
expr,
n,
target,
sign_level = 0.05,
group_label,
ab_label = FALSE
)
Arguments
data A dataset
expr Expression that results in FALSE/TRUE
n A Variable for number of observations (count data)
target Target variable (must be 0/1 or FALSE/TRUE)
sign_level Significance Level (typical 0.01/0.05/0.10)
group_label Label of groups (default = expr)
ab_label Label Groups as A and B (default = FALSE)
Value
Plot that shows if difference is significant
Examples
data <- create_data_buy(obs = 100)
abtest(data, female_ind == 1, target = buy)
abtest(data, age >= 40, target = buy)
add_var_id Add a variable id at first column in dataset
Description
Add a variable id at first column in dataset
Usage
add_var_id(data, name = "id", overwrite = FALSE)
Arguments
data A dataset
name Name of new variable (as string)
overwrite Can new id variable overwrite an existing variable in dataset?
Value
Data set containing new id variable
Examples
library(magrittr)
iris %>% add_var_id() %>% head()
iris %>% add_var_id(name = "iris_nr") %>% head()
add_var_random_01 Add a random 0/1 variable to dataset
Description
Add a random 0/1 variable to dataset
Usage
add_var_random_01(
data,
name = "random_01",
prob = c(0.5, 0.5),
overwrite = TRUE,
seed
)
Arguments
data A dataset
name Name of new variable (as string)
prob Vector of probabilities
overwrite Can new random variable overwrite an existing variable in dataset?
seed Seed for random number generation (integer)
Value
Dataset containing new random variable
Examples
library(magrittr)
iris %>% add_var_random_01() %>% head()
iris %>% add_var_random_01(name = "my_var") %>% head()
add_var_random_cat Add a random categorical variable to dataset
Description
Add a random categorical variable to dataset
Usage
add_var_random_cat(
data,
name = "random_cat",
cat = LETTERS[1:6],
prob,
overwrite = TRUE,
seed
)
Arguments
data A dataset
name Name of new variable (as string)
cat Vector of categories
prob Vector of probabilities
overwrite Can new random variable overwrite an existing variable in dataset?
seed Seed for random number generation (integer)
Value
Dataset containing new random variable
Examples
library(magrittr)
iris %>% add_var_random_cat() %>% head()
iris %>% add_var_random_cat(name = "my_cat") %>% head()
iris %>% add_var_random_cat(cat = c("Version A", "Version B")) %>% head()
iris %>% add_var_random_cat(cat = c(1,2,3,4,5)) %>% head()
add_var_random_dbl Add a random double variable to dataset
Description
Add a random double variable to dataset
Usage
add_var_random_dbl(
data,
name = "random_dbl",
min_val = 0,
max_val = 100,
overwrite = TRUE,
seed
)
Arguments
data A dataset
name Name of new variable (as string)
min_val Minimum random value
max_val Maximum random value
overwrite Can new random variable overwrite an existing variable in dataset?
seed Seed for random number generation (integer)
Value
Dataset containing new random variable
Examples
library(magrittr)
iris %>% add_var_random_dbl() %>% head()
iris %>% add_var_random_dbl(name = "random_var") %>% head()
iris %>% add_var_random_dbl(min_val = 1, max_val = 10) %>% head()
add_var_random_int Add a random integer variable to dataset
Description
Add a random integer variable to dataset
Usage
add_var_random_int(
data,
name = "random_int",
min_val = 1,
max_val = 10,
overwrite = TRUE,
seed
)
Arguments
data A dataset
name Name of new variable (as string)
min_val Minimum random integers
max_val Maximum random integers
overwrite Can new random variable overwrite an existing variable in dataset?
seed Seed for random number generation (integer)
Value
Dataset containing new random variable
Examples
library(magrittr)
iris %>% add_var_random_int() %>% head()
iris %>% add_var_random_int(name = "random_var") %>% head()
iris %>% add_var_random_int(min_val = 1, max_val = 10) %>% head()
add_var_random_moon Add a random moon variable to dataset
Description
Add a random moon variable to dataset
Usage
add_var_random_moon(data, name = "random_moon", overwrite = TRUE, seed)
Arguments
data A dataset
name Name of new variable (as string)
overwrite Can new random variable overwrite an existing variable in dataset?
seed Seed for random number generation (integer)
Value
Dataset containing new random variable
Examples
library(magrittr)
iris %>% add_var_random_moon() %>% head()
add_var_random_starsign
Add a random starsign variable to dataset
Description
Add a random starsign variable to dataset
Usage
add_var_random_starsign(
data,
name = "random_starsign",
lang = "en",
overwrite = TRUE,
seed
)
Arguments
data A dataset
name Name of new variable (as string)
lang Language used for starsign (en = English, de = Deutsch, es = Espanol)
overwrite Can new random variable overwrite an existing variable in dataset?
seed Seed for random number generation (integer)
Value
Dataset containing new random variable
Examples
library(magrittr)
iris %>% add_var_random_starsign() %>% head()
iris %>% add_var_random_starsign(lang = "de") %>% head()
balance_target Balance target variable
Description
Balances the target variable in your dataset using downsampling. Target must be 0/1, FALSE/TRUE
or no/yes
Usage
balance_target(data, target, min_prop = 0.1, seed)
Arguments
data A dataset
target Target variable (0/1, TRUE/FALSE, yes/no)
min_prop Minimum proportion of one of the target categories
seed Seed for random number generator
Value
Data
Examples
iris$is_versicolor <- ifelse(iris$Species == "versicolor", 1, 0)
balanced <- balance_target(iris, target = is_versicolor, min_prop = 0.5)
describe(balanced, is_versicolor)
clean_var Clean variable
Description
Clean variable (replace NA values, set min_val and max_val)
Usage
clean_var(
data,
var,
na = NA,
min_val = NA,
max_val = NA,
max_cat = NA,
rescale01 = FALSE,
simplify_text = FALSE,
name = NA
)
Arguments
data A dataset
var Name of variable
na Value that replaces NA
min_val All values < min_val are converted to min_val (var numeric or character)
max_val All values > max_val are converted to max_val (var numeric or character)
max_cat Maximum number of different factor levels for categorical variable (if more,
.OTHER is added)
rescale01 If TRUE, value is rescaled between 0 and 1 (var must be numeric)
simplify_text If TRUE, a character variable is simplified (trim, upper, ...)
name New name of variable (as string)
Value
Dataset
Examples
library(magrittr)
iris %>% clean_var(Sepal.Width, max_val = 3.5, name = "sepal_width") %>% head()
iris %>% clean_var(Sepal.Width, rescale01 = TRUE) %>% head()
count_pct Adds percentage to dplyr::count()
Description
Adds variables total and pct (percentage) to dplyr::count()
Usage
count_pct(data, ...)
Arguments
data A dataset
... Other parameters passed to count()
Value
Dataset
Examples
count_pct(iris, Species)
create_data_app Create data app
Description
Artificial data that can be used for unit-testing or teaching
Usage
create_data_app(obs = 1000, add_id = FALSE, seed = 123)
Arguments
obs Number of observations
add_id Add an id-variable to data?
seed Seed for randomization (integer)
Value
A dataset as tibble
Examples
create_data_app()
create_data_buy Create data buy
Description
Artificial data that can be used for unit-testing or teaching
Usage
create_data_buy(
obs = 1000,
target_name = "buy",
factorise_target = FALSE,
target1_prob = 0.5,
add_extreme = TRUE,
flip_gender = FALSE,
add_id = FALSE,
seed = 123
)
Arguments
obs Number of observations
target_name Variable name of target
factorise_target
Should target variable be factorised? (from 0/1 to factor no/yes)?
target1_prob Probability that target = 1
add_extreme Add an observation with extreme values?
flip_gender Should Male/Female be flipped in data?
add_id Add an id-variable to data?
seed Seed for randomization
Details
Variables in dataset:
• id = Identifier
• period = Year & Month (YYYYMM)
• city_ind = Indicating if customer is residing in a city (1 = yes, 0 = no)
• female_ind = Gender of customer is female (1 = yes, 0 = no)
• fixedvoice_ind = Customer has a fixed voice product (1 = yes, 0 = no)
• fixeddata_ind = Customer has a fixed data product (1 = yes, 0 = no)
• fixedtv_ind = Customer has a fixed TV product (1 = yes, 0 = no)
• mobilevoice_ind = Customer has a mobile voice product (1 = yes, 0 = no)
• mobiledata_prd = Customer has a mobile data product (NO/MOBILE STICK/BUSINESS)
• bbi_speed_ind = Customer has a Broadband Internet (BBI) with extra speed
• bbi_usg_gb = Broadband Internet (BBI) usage in Gigabyte (GB) last month
• hh_single = Expected to be a Single Household (1 = yes, 0 = no)
Target in dataset:
• buy (may be renamed) = Did customer buy a new product in next month? (1 = yes, 0 = no)
Value
A dataset as tibble
Examples
create_data_buy()
create_data_churn Create data churn
Description
Artificial data that can be used for unit-testing or teaching
Usage
create_data_churn(
obs = 1000,
target_name = "churn",
factorise_target = FALSE,
target1_prob = 0.4,
add_id = FALSE,
seed = 123
)
Arguments
obs Number of observations
target_name Variable name of target
factorise_target
Should target variable be factorised?
target1_prob Probability that target = 1
add_id Add an id-variable to data?
seed Seed for randomization (integer)
Value
A dataset as tibble
Examples
create_data_churn()
create_data_empty Create an empty dataset
Description
Create an empty dataset
Usage
create_data_empty(obs = 1000, add_id = FALSE, seed = 123)
Arguments
obs Number of observations
add_id Add an id
seed Seed for randomization (integer)
Value
Dataset as tibble
Examples
create_data_empty(obs = 100)
create_data_empty(obs = 100, add_id = TRUE)
create_data_newsletter
Create data newsletter
Description
Artificial data that can be used for unit-testing or teaching (fairness & AI bias)
Usage
create_data_newsletter(obs = 1000, add_id = FALSE, seed = 123)
Arguments
obs Number of observations
add_id Add an id-variable to data?
seed Seed for randomization (integer)
Value
A dataset as tibble
Examples
create_data_newsletter()
create_data_person Create data person
Description
Artificial data that can be used for unit-testing or teaching
Usage
create_data_person(obs = 1000, add_id = FALSE, seed = 123)
Arguments
obs Number of observations
add_id Add an id
seed Seed for randomization (integer)
Value
A dataset as tibble
Examples
create_data_person()
create_data_random Create data random
Description
Random data that can be used for unit-testing or teaching
Usage
create_data_random(
obs = 1000,
vars = 10,
target_name = "target_ind",
factorise_target = FALSE,
target1_prob = 0.5,
add_id = TRUE,
seed = 123
)
Arguments
obs Number of observations
vars Number of variables
target_name Variable name of target
factorise_target
Should target variable be factorised? (from 0/1 to factor no/yes)?
target1_prob Probability that target = 1
add_id Add an id-variable to data?
seed Seed for randomization
Details
Variables in dataset:
• id = Identifier
• var_X = variable containing values between 0 and 100
Target in dataset:
• target_ind (may be renamed) = random values (1 = yes, 0 = no)
Value
A dataset as tibble
Examples
create_data_random(obs = 100, vars = 5)
create_data_unfair Create data unfair
Description
Artificial data that can be used for unit-testing or teaching (fairness & AI bias)
Usage
create_data_unfair(
obs = 1000,
target_name = "target_ind",
factorise_target = FALSE,
target1_prob = 0.25,
add_id = FALSE,
seed = 123
)
Arguments
obs Number of observations
target_name Variable name of target
factorise_target
Should target variable be factorised?
target1_prob Probability that target = 1
add_id Add an id-variable to data?
seed Seed for randomization (integer)
Value
A dataset as tibble
Examples
create_data_unfair()
create_notebook_explore
Generate a notebook
Description
Generate an RMarkdown Notebook template for a report. You must provide an output directory
(parameter output_dir). The default file name is "notebook-explore.Rmd" (this may overwrite an
existing file with the same name)
Usage
create_notebook_explore(output_file = "notebook-explore.Rmd", output_dir)
Arguments
output_file Filename of the html report
output_dir Directory where to save the html report
Examples
create_notebook_explore(output_file = "explore.Rmd", output_dir = tempdir())
data_dict_md Create a data dictionary Markdown file
Description
Create a data dictionary Markdown file
Usage
data_dict_md(
data,
title = "",
description = NA,
output_file = "data_dict.md",
output_dir
)
Arguments
data A dataframe (data dictionary for all variables)
title Title of the data dictionary
description Detailed description of variables in data (dataframe with columns ’variable’ and
’description’)
output_file Output filename for Markdown file
output_dir Directory where the Markdown file is saved
Value
Create Markdown file
Examples
# Data dictionary of a dataframe
data_dict_md(iris,
title = "iris flower data set",
output_dir = tempdir())
# Data dictionary of a dataframe with additional description of variables
description <- data.frame(
variable = c("Species"),
description = c("Species of Iris flower"))
data_dict_md(iris,
title = "iris flower data set",
description = description,
output_dir = tempdir())
decrypt decrypt text
Description
decrypt text
Usage
decrypt(text, codeletters = c(toupper(letters), letters, 0:9), shift = 18)
Arguments
text A text (character)
codeletters A string of letters that are used for decryption
shift Number of elements shifted
Value
Decrypted text
Examples
decrypt("zw336 E693v")
describe Describe a dataset or variable
Description
Describe a dataset or variable (depending on input parameters)
Usage
describe(data, var, n, target, out = "text", ...)
Arguments
data A dataset
var A variable of the dataset
n Weights variable for count-data
target Target variable (0/1 or FALSE/TRUE)
out Output format ("text"|"list") of variable description
... Further arguments
Value
Description as table, text or list
Examples
# Load package
library(magrittr)
# Describe a dataset
iris %>% describe()
# Describe a variable
iris %>% describe(Species)
iris %>% describe(Sepal.Length)
describe_all Describe all variables of a dataset
Description
Describe all variables of a dataset
Usage
describe_all(data, out = "large")
Arguments
data A dataset
out Output format ("small"|"large")
Value
Dataset (tibble)
Examples
describe_all(iris)
describe_cat Describe categorical variable
Description
Describe categorical variable
Usage
describe_cat(data, var, n, max_cat = 10, out = "text", margin = 0)
Arguments
data A dataset
var Variable or variable name
n Weights variable for count-data
max_cat Maximum number of categories displayed
out Output format ("text"|"list"|"tibble"|"df")
margin Left margin for text output (number of spaces)
Value
Description as text or list
Examples
describe_cat(iris, Species)
describe_num Describe numerical variable
Description
Describe numerical variable
Usage
describe_num(data, var, n, out = "text", margin = 0)
Arguments
data A dataset
var Variable or variable name
n Weights variable for count-data
out Output format ("text"|"list")
margin Left margin for text output (number of spaces)
Value
Description as text or list
Examples
describe_num(iris, Sepal.Length)
describe_tbl Describe table
Description
Describe table (e.g. number of rows and columns of dataset)
Usage
describe_tbl(data, n, target, out = "text")
Arguments
data A dataset
n Weights variable for count-data
target Target variable (binary)
out Output format ("text"|"list")
Value
Description as text or list
Examples
describe_tbl(iris)
iris[1,1] <- NA
describe_tbl(iris)
encrypt encrypt text
Description
encrypt text
Usage
encrypt(text, codeletters = c(toupper(letters), letters, 0:9), shift = 18)
Arguments
text A text (character)
codeletters A string of letters that are used for encryption
shift Number of elements shifted
Value
Encrypted text
Examples
encrypt("hello world")
explain_forest Explain a target using Random Forest.
Description
Explain a target using Random Forest.
Usage
explain_forest(data, target, ntree = 50, out = "plot", ...)
Arguments
data A dataset
target Target variable (binary)
ntree Number of trees used for Random Forest
out Output of the function: "plot" | "model" | "importance" | all"
... Arguments passed on to randomForest::randomForest
Value
Plot of importance (if out = "plot")
Examples
data <- create_data_buy()
explain_forest(data, target = buy)
explain_logreg Explain a binary target using a logistic regression (glm). Model cho-
sen by AIC in a Stepwise Algorithm (MASS::stepAIC()).
Description
Explain a binary target using a logistic regression (glm). Model chosen by AIC in a Stepwise
Algorithm (MASS::stepAIC()).
Usage
explain_logreg(data, target, out = "tibble", ...)
Arguments
data A dataset
target Target variable (binary)
out Output of the function: "tibble" | "model"
... Further arguments
Value
Dataset with results (term, estimate, std.error, z.value, p.value)
Examples
data <- iris
data$is_versicolor <- ifelse(iris$Species == "versicolor", 1, 0)
data$Species <- NULL
explain_logreg(data, target = is_versicolor)
explain_tree Explain a target using a simple decision tree (classification or regres-
sion)
Description
Explain a target using a simple decision tree (classification or regression)
Usage
explain_tree(
data,
target,
n,
max_cat = 10,
max_target_cat = 5,
maxdepth = 3,
minsplit = 20,
cp = 0,
weights = NA,
size = 0.7,
out = "plot",
...
)
Arguments
data A dataset
target Target variable
n weights variable (for count data)
max_cat Drop categorical variables with higher number of levels
max_target_cat Maximum number of categories to be plotted for target (except NA)
maxdepth Set the maximum depth of any node of the final tree, with the root node counted
as depth 0. For values greater than 30, rpart will give nonsense results on
32-bit machines.
minsplit The minimum number of observations that must exist in a node in order for a
split to be attempted.
cp Complexity parameter. Any split that does not decrease the overall lack of fit by
a factor of cp is not attempted. For instance, with anova splitting, this means
that the overall R-squared must increase by cp at each step. The main role of
this parameter is to save computing time by pruning off splits that are obviously
not worthwhile. Essentially, the user informs the program that any split which
does not improve the fit by cp will likely be pruned off by cross-validation, and
hence the program need not pursue it (see the tuning sketch after the Examples below).
weights optional case weights.
size Text size of plot
out Output of function: "plot" | "model"
... Further arguments
Value
Plot or additional the model (if out = "model")
Examples
data <- iris
data$is_versicolor <- ifelse(iris$Species == "versicolor", 1, 0)
data$Species <- NULL
explain_tree(data, target = is_versicolor)
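A further illustrative sketch (not part of the package's official examples) showing how the
rpart-related parameters described above can be tuned; it reuses the iris-based data from the
example above:
data <- iris
data$is_versicolor <- ifelse(iris$Species == "versicolor", 1, 0)
data$Species <- NULL
# shallower, more conservative tree: lower maxdepth, larger minsplit,
# and a non-zero complexity parameter cp to drop weak splits
explain_tree(data, target = is_versicolor, maxdepth = 2, minsplit = 50, cp = 0.01)
# out = "model" returns the underlying rpart model instead of the plot
model <- explain_tree(data, target = is_versicolor, out = "model")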
explore Explore a dataset or variable
Description
Explore a dataset or variable
Usage
explore(
data,
var,
var2,
n,
target,
targetpct,
split,
min_val = NA,
max_val = NA,
auto_scale = TRUE,
na = NA,
...
)
Arguments
data A dataset
var A variable
var2 A variable for checking correlation
n A Variable for number of observations (count data)
target Target variable (0/1 or FALSE/TRUE)
targetpct Plot variable as target% (FALSE/TRUE)
split Alternative to targetpct (split = !targetpct)
min_val All values < min_val are converted to min_val
max_val All values > max_val are converted to max_val
auto_scale Use the 0.02 and 0.98 quantiles for min_val and max_val (if min_val and max_val are
not defined)
na Value to replace NA
... Further arguments (like flip = TRUE/FALSE)
Value
Plot object
Examples
## Launch Shiny app (in interactive R sessions)
if (interactive()) {
explore(iris)
}
## Explore graphically
# Load library
library(magrittr)
# Explore a variable
iris %>% explore(Species)
iris %>% explore(Sepal.Length)
iris %>% explore(Sepal.Length, min_val = 4, max_val = 7)
# Explore a variable with a target
iris$is_virginica <- ifelse(iris$Species == "virginica", 1, 0)
iris %>% explore(Species, target = is_virginica)
iris %>% explore(Sepal.Length, target = is_virginica)
# Explore correlation between two variables
iris %>% explore(Species, Petal.Length)
iris %>% explore(Sepal.Length, Petal.Length)
# Explore correlation between two variables and split by target
iris %>% explore(Sepal.Length, Petal.Length, target = is_virginica)
explore_all Explore all variables
Description
Explore all variables of a dataset (create plots)
Usage
explore_all(data, n, target, ncol = 2, targetpct, split = TRUE)
Arguments
data A dataset
n Weights variable (only for count data)
target Target variable (0/1 or FALSE/TRUE)
ncol Layout of plots (number of columns)
targetpct Plot variable as target% (FALSE/TRUE)
split Split by target (TRUE|FALSE)
Value
Plot
Examples
explore_all(iris)
iris$is_virginica <- ifelse(iris$Species == "virginica", 1, 0)
explore_all(iris, target = is_virginica)
explore_bar Explore categorical variable using bar charts
Description
Create a barplot to explore a categorical variable. If a target is selected, the barplot is created for all
levels of the target.
Usage
explore_bar(
data,
var,
target,
flip = NA,
title = "",
numeric = NA,
max_cat = 30,
max_target_cat = 5,
legend_position = "right",
label,
label_size = 2.7,
...
)
Arguments
data A dataset
var variable
target target (can have more than 2 levels)
flip Should plot be flipped? (change of x and y)
title Title of the plot (if empty var name)
numeric Display variable as numeric (not category)
max_cat Maximum number of categories to be plotted
max_target_cat Maximum number of categories to be plotted for target (except NA)
legend_position
Position of the legend ("bottom"|"top"|"none")
label Show labels? (if empty, automatic)
label_size Size of labels
... Further arguments
Value
Plot object (bar chart)
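The manual provides no Examples section for explore_bar; the following minimal sketch is an
assumption based on the arguments listed above, reusing the is_virginica target used elsewhere
in this manual:
iris$is_virginica <- ifelse(iris$Species == "virginica", 1, 0)
explore_bar(iris, Species)
explore_bar(iris, Species, target = is_virginica, flip = TRUE)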
explore_cor Explore the correlation between two variables
Description
Explore the correlation between two variables
Usage
explore_cor(
data,
x,
y,
target,
bins = 8,
min_val = NA,
max_val = NA,
auto_scale = TRUE,
title = NA,
color = "grey",
...
)
Arguments
data A dataset
x Variable on x axis
y Variable on y axis
target Target variable (categorical)
bins Number of bins
min_val All values < min_val are converted to min_val
max_val All values > max_val are converted to max_val
auto_scale Use the 0.02 and 0.98 quantiles for min_val and max_val (if min_val and max_val are
not defined)
title Title of the plot
color Color of the plot
... Further arguments
Value
Plot
Examples
explore_cor(iris, x = Sepal.Length, y = Sepal.Width)
explore_count Explore count data (categories + frequency)
Description
Create a plot to explore count data (categories + frequency). A variable named 'n' is auto-detected
as the frequency
Usage
explore_count(
data,
cat,
n,
target,
pct = FALSE,
split = TRUE,
title = NA,
numeric = FALSE,
max_cat = 30,
max_target_cat = 5,
flip = NA
)
Arguments
data A dataset (categories + frequency)
cat Categorical variable
n Number of observations (frequency)
target Target variable
pct Show as percent?
split Split by target (FALSE/TRUE)
title Title of the plot
numeric Display variable as numeric (not category)
max_cat Maximum number of categories to be plotted
max_target_cat Maximum number of categories to be plotted for target (except NA)
flip Flip plot? (for categorical variables)
Value
Plot object
Examples
library(dplyr)
iris %>%
count(Species) %>%
explore_count(Species)
explore_density Explore density of variable
Description
Create a density plot to explore numerical variable
Usage
explore_density(
data,
var,
target,
title = "",
min_val = NA,
max_val = NA,
color = "grey",
auto_scale = TRUE,
max_target_cat = 5,
...
)
Arguments
data A dataset
var Variable
target Target variable (0/1 or FALSE/TRUE)
title Title of the plot (if empty var name)
min_val All values < min_val are converted to min_val
max_val All values > max_val are converted to max_val
color Color of plot
auto_scale Use 0.02 and 0.98 percent quantile for min_val and max_val (if min_val and
max_val are not defined)
max_target_cat Maximum number of levels of target shown in the plot (except NA).
... Further arguments
Value
Plot object (density plot)
Examples
explore_density(iris, "Sepal.Length")
iris$is_virginica <- ifelse(iris$Species == "virginica", 1, 0)
explore_density(iris, Sepal.Length, target = is_virginica)
explore_shiny Explore dataset interactive
Description
Launches a shiny app to explore a dataset
Usage
explore_shiny(data, target)
Arguments
data A dataset
target Target variable (0/1 or FALSE/TRUE)
Examples
# Only run examples in interactive R sessions
if (interactive()) {
explore_shiny(iris)
}
explore_targetpct Explore variable + binary target (values 0/1)
Description
Create a plot to explore relation between a variable and a binary target as target percent. The target
variable is chosen automatically if possible (name starts with ’target’)
Usage
explore_targetpct(
data,
var,
target = NULL,
title = NA,
min_val = NA,
max_val = NA,
auto_scale = TRUE,
na = NA,
flip = NA,
...
)
Arguments
data A dataset
var Numerical variable
target Target variable (0/1 or FALSE/TRUE)
title Title of the plot
min_val All values < min_val are converted to min_val
max_val All values > max_val are converted to max_val
auto_scale Use the 0.02 and 0.98 quantiles for min_val and max_val (if min_val and max_val are
not defined)
na Value to replace NA
flip Flip plot? (for categorical variables)
... Further arguments
Value
Plot object
Examples
iris$target01 <- ifelse(iris$Species == "versicolor",1,0)
explore_targetpct(iris)
explore_tbl Explore table
Description
Explore a table. Plots variable types, variables with no variance and variables with NA
Usage
explore_tbl(data, n)
Arguments
data A dataset
n Weight variable for count data
Examples
explore_tbl(iris)
format_num_auto Format number as character string (auto)
Description
Formats a number depending on its value: as a number with spaces, in scientific notation, or as a
big number abbreviated with k (1 000), M (1 000 000) or B (1 000 000 000)
Usage
format_num_auto(number = 0, digits = 1)
Arguments
number A number (integer or real)
digits Number of digits
Value
Formatted number as text
Examples
format_num_auto(5500, digits = 2)
format_num_kMB Format number as character string (kMB)
Description
Formats a big number as k (1 000), M (1 000 000) or B (1 000 000 000)
Usage
format_num_kMB(number = 0, digits = 1)
Arguments
number A number (integer or real)
digits Number of digits
Value
Formatted number as text
Examples
format_num_kMB(5500, digits = 2)
format_num_space Format number as character string (space as big.mark)
Description
Formats a big number using space as big.mark (1000 = 1 000)
Usage
format_num_space(number = 0, digits = 1)
Arguments
number A number (integer or real)
digits Number of digits
Value
Formatted number as text
Examples
format_num_space(5500, digits = 2)
format_target Format target
Description
Formats a target as a 0/1 variable. If target is numeric, 1 = above average.
Usage
format_target(target)
Arguments
target Variable as vector
Value
Formatted target
Examples
iris$is_virginica <- ifelse(iris$Species == "virginica", "yes", "no")
iris$target <- format_target(iris$is_virginica)
table(iris$target)
format_type Format type description
Description
Format type description of variable to 3 letters (int|dbl|lgl|chr|dat)
Usage
format_type(type)
Arguments
type Type description ("integer", "double", "logical", character", "date")
Value
Formatted type description (int|dbl|lgl|chr|dat)
Examples
format_type(typeof(iris$Species))
get_nrow Get number of rows for a grid plot (deprecated, use total_fig_height()
instead)
Description
Get number of rows for a grid plot (deprecated, use total_fig_height() instead)
Usage
get_nrow(varnames, exclude = 0, ncol = 2)
Arguments
varnames List of variables to be plotted
exclude Number of variables that will be excluded from plot
ncol Number of columns (default = 2)
Value
Number of rows
Examples
get_nrow(names(iris), ncol = 2)
get_type Return type of variable
Description
Return value of typeof, except if variable contains hide, then return "other"
Usage
get_type(var)
Arguments
var A vector (dataframe column)
Value
Value of typeof or "other"
Examples
get_type(iris$Species)
get_var_buckets Put variables into "buckets" to create a set of plots instead of one
large plot
Description
Put variables into "buckets" to create a set of plots instead of one large plot
Usage
get_var_buckets(data, bucket_size = 100, var_name_target = NA, var_name_n = NA)
Arguments
data A dataset
bucket_size Maximum number of variables in one bucket
var_name_target
Name of the target variable (if defined)
var_name_n Name of the weight (n) variable (if defined)
Value
Buckets as a list
Examples
get_var_buckets(iris)
get_var_buckets(iris, bucket_size = 2)
get_var_buckets(iris, bucket_size = 2, var_name_target = "Species")
guess_cat_num Return if variable is categorical or numerical
Description
Guess if variable is categorical or numerical based on name, type and values of variable
Usage
guess_cat_num(var, descr)
Arguments
var A vector (dataframe column)
descr A description of the variable (optional)
Value
"cat" (categorical), "num" (numerical) or "oth" (other)
Examples
guess_cat_num(iris$Species)
plot_legend_targetpct Plots a legend that can be used for explore_all with a binary target
Description
Plots a legend that can be used for explore_all with a binary target
Usage
plot_legend_targetpct(border = TRUE)
Arguments
border Draw a border?
Value
Base plot
Examples
plot_legend_targetpct(border = TRUE)
plot_text Plot a text
Description
Plots a text (base plot) and let you choose text-size and color
Usage
plot_text(text = "hello world", size = 1.2, color = "black")
Arguments
text Text as string
size Text-size
color Text-color
Value
Plot
Examples
plot_text("hello", size = 2, color = "red")
plot_var_info Plot a variable info
Description
Creates a ggplot with the variable-name as title and a text
Usage
plot_var_info(data, var, info = "")
Arguments
data A dataset
var Variable
info Text to plot
Value
Plot (ggplot)
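No Examples section is given for plot_var_info; a minimal sketch (assuming the variable is passed
unquoted, as in the other plotting functions of this package):
plot_var_info(iris, Species, info = "no NA values")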
predict_target Predict target using a trained model.
Description
Predict target using a trained model.
Usage
predict_target(data, model, name = "prediction")
Arguments
data A dataset (data.frame or tbl)
model A model created with explain_*() function
name Prefix of variable-name for prediction
Value
data containing predicted probabilities for target values
Examples
data_train <- create_data_buy(seed = 1)
data_test <- create_data_buy(seed = 2)
model <- explain_tree(data_train, target = buy, out = "model")
data <- predict_target(data = data_test, model = model)
describe(data)
replace_na_with Replace NA
Description
Replace NA values of a variable in a dataframe
Usage
replace_na_with(data, var_name, with)
Arguments
data A dataframe
var_name Name of variable where NAs are replaced
with Value instead of NA
Value
Updated dataframe
Examples
data <- data.frame(nr = c(1,2,3,NA,NA))
replace_na_with(data, "nr", 0)
report Generate a report of all variables
Description
Generate a report of all variables If target is defined, the relation to the target is reported
Usage
report(data, n, target, targetpct, split, output_file, output_dir)
Arguments
data A dataset
n Weights variable for count data
target Target variable (0/1 or FALSE/TRUE)
targetpct Plot variable as target% (FALSE/TRUE)
split Alternative to targetpct (split = !targetpct)
output_file Filename of the html report
output_dir Directory where to save the html report
Examples
if (rmarkdown::pandoc_available("1.12.3")) {
report(iris, output_dir = tempdir())
}
rescale01 Rescales a numeric variable into values between 0 and 1
Description
Rescales a numeric variable into values between 0 and 1
Usage
rescale01(x)
Arguments
x numeric vector (to be rescaled)
Value
vector with values between 0 and 1
Examples
rescale01(0:10)
simplify_text Simplifies a text string
Description
A text string is converted into a simplified version by trimming, converting to upper case, replacing
German Umlaute, dropping special characters like comma and semicolon and replacing multiple
spaces with one space.
Usage
simplify_text(text)
Arguments
text text string
Value
text string
Examples
simplify_text(" Hello World !, ")
target_explore_cat Explore categorical variable + target
Description
Create a plot to explore relation between categorical variable and a binary target
Usage
target_explore_cat(
data,
var,
target = "target_ind",
min_val = NA,
max_val = NA,
flip = TRUE,
num2char = TRUE,
title = NA,
auto_scale = TRUE,
na = NA,
max_cat = 30,
legend_position = "bottom"
)
Arguments
data A dataset
var Categorical variable
target Target variable (0/1 or FALSE/TRUE)
min_val All values < min_val are converted to min_val
max_val All values > max_val are converted to max_val
flip Should plot be flipped? (change of x and y)
num2char If TRUE, numeric values in variable are converted into character
title Title of plot
auto_scale Not used, just for compatibility
na Value to replace NA
max_cat Maximum numbers of categories to be plotted
legend_position
Position of legend ("right"|"bottom"|"none")
Value
Plot object
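No Examples section is given; the sketch below is an assumption (the target_explore_* helpers
appear to be internal functions, and whether var is passed as a string, as the default
target = "target_ind" suggests, is also an assumption):
iris$target_ind <- ifelse(iris$Species == "versicolor", 1, 0)
target_explore_cat(iris, "Species")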
target_explore_num Explore numerical variable + target
Description
Create a plot to explore relation between numerical variable and a binary target
Usage
target_explore_num(
data,
var,
target = "target_ind",
min_val = NA,
max_val = NA,
flip = TRUE,
title = NA,
auto_scale = TRUE,
na = NA,
legend_position = "bottom"
)
Arguments
data A dataset
var Numerical variable
target Target variable (0/1 or FALSE/TRUE)
min_val All values < min_val are converted to min_val
max_val All values > max_val are converted to max_val
flip Should plot be flipped? (change of x and y)
title Title of plot
auto_scale Use 0.02 and 0.98 quantile for min_val and max_val (if min_val and max_val
are not defined)
na Value to replace NA
legend_position
Position of legend ("right"|"bottom"|"none")
Value
Plot object
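Analogously, a hedged sketch for the numerical case (same assumptions as for target_explore_cat
above):
iris$target_ind <- ifelse(iris$Species == "versicolor", 1, 0)
target_explore_num(iris, "Sepal.Length")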
total_fig_height Get fig.height for an RMarkdown chunk using explore_all()
Description
Get fig.height for an RMarkdown chunk using explore_all()
Usage
total_fig_height(
data,
var_name_n,
var_name_target,
nvar = NA,
ncol = 2,
size = 3
)
Arguments
data A dataset
var_name_n Weights variable for count data? (TRUE / MISSING)
var_name_target
Target variable (TRUE / MISSING)
nvar Number of variables to plot
ncol Number of columns (default = 2)
size fig.height of 1 plot (default = 3)
Value
Number of rows
Examples
total_fig_height(iris)
total_fig_height(iris, var_name_target = "Species")
total_fig_height(nvar = 5)
use_data_beer Use the beer data set
Description
This data set is an incomplete collection of popular beers in Austria, Germany and Switzerland.
Data are collected from various websites in 2023. Some of the collected data may be incorrect.
Usage
use_data_beer()
Value
Dataset as tibble
Examples
use_data_beer()
use_data_diamonds Use the diamonds data set
Description
This data set comes with the ggplot2 package. It contains the prices and other attributes of almost
54,000 diamonds.
Usage
use_data_diamonds()
Value
Dataset
See Also
ggplot2::diamonds
Examples
use_data_diamonds()
use_data_iris Use the iris flower data set
Description
This data set comes with base R. The data set gives the measurements in centimeters of the variables
sepal length and width and petal length and width, respectively, for 50 flowers from each of 3 species
of iris. The species are Iris setosa, versicolor, and virginica.
Usage
use_data_iris()
Value
Dataset as tibble
Examples
use_data_iris()
use_data_mpg Use the mpg data set
Description
This data set comes with the ggplot2 package. It contains a subset of the fuel economy data that the
EPA makes available on https://fueleconomy.gov/. It contains only models which had a new release
every year between 1999 and 2008 - this was used as a proxy for the popularity of the car.
Usage
use_data_mpg()
Value
Dataset
See Also
ggplot2::mpg
Examples
use_data_mpg()
use_data_mtcars Use the mtcars data set
Description
This data set comes with base R. The data was extracted from the 1974 Motor Trend US maga-
zine, and comprises fuel consumption and 10 aspects of automobile design and performance for 32
automobiles (1973–74 models).
Usage
use_data_mtcars()
Value
Dataset
Examples
use_data_mtcars()
use_data_penguins Use the penguins data set
Description
This data set comes with the palmerpenguins package. It contains measurements for penguin
species, island in Palmer Archipelago, size (flipper length, body mass, bill dimensions), and sex.
Usage
use_data_penguins()
Value
Dataset
See Also
palmerpenguins::penguins
Examples
use_data_penguins()
use_data_starwars Use the starwars data set
Description
This data set comes with the dplyr package. It contains data on 87 Star Wars characters.
Usage
use_data_starwars()
Value
Dataset
See Also
dplyr::starwars
Examples
use_data_starwars()
use_data_titanic Use the titanic data set
Description
This data set comes with base R. Survival of passengers on the Titanic.
Usage
use_data_titanic(count = FALSE)
Arguments
count use count data
Value
Dataset
Examples
use_data_titanic(count = TRUE)
use_data_titanic(count = FALSE)
weight_target Weight target variable
Description
Create weights for the target variable in your dataset so that there are equal weights for target = 0 and
target = 1. Target must be 0/1, FALSE/TRUE or no/yes
Usage
weight_target(data, target)
Arguments
data A dataset
target Target variable (0/1, TRUE/FALSE, yes/no)
Value
Weights for each observation (as a vector)
Examples
iris$is_versicolor <- ifelse(iris$Species == "versicolor", 1, 0)
weights <- weight_target(iris, target = is_versicolor)
versicolor <- iris$is_versicolor
table(versicolor, weights) |
blitzortung | rust | Rust | Crate blitzortung
===
Blitzortung.org client library.
### Live data
For realtime data, the Blitzortung.org websocket servers are used.
**This requires the `live` feature to be enabled.**
Modules
---
* live (feature `live`): Live data from Blitzortung.org via websockets.
* stream (feature `live`): Internal stream utilities.
Module blitzortung::live
===
Available on **crate feature `live`** only. Live data from Blitzortung.org via websockets.
Structs
---
* SingleStream: A single websocket stream.
* Station: A station monitoring lightning strikes.
* StreamFactory: Zero-sized `stream::Factory` marker type for `SingleStream`s.
* Strike: A lightning strike.
Enums
---
* StreamError: An error that can occur when streaming data.
Constants
---
* WS\_SERVERS: Websocket servers used.
Functions
---
* stream: Create a stream of lightning strikes.
Module blitzortung::stream
===
Available on **crate feature `live`** only. Internal stream utilities.
Structs
---
* Infinite: An infinite stream that is guaranteed to never yield `None`.
Traits
---
* Factory: A stream factory. |
HDtweedie | cran | R | Package ‘HDtweedie’
October 12, 2022
Title The Lasso for Tweedie's Compound Poisson Model Using an IRLS-BMD
Algorithm
Version 1.2
Date 2022-05-09
Author
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>, <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Imports methods
Description The Tweedie lasso model implements an iteratively reweighted least squares (IRLS) strat-
egy that incorporates a blockwise majorization descent (BMD) method, for efficiently comput-
ing solution paths of the (grouped) lasso and the (grouped) elastic net methods.
License GPL-2
NeedsCompilation yes
Repository CRAN
Date/Publication 2022-05-10 02:20:02 UTC
R topics documented:
aut... 2
coef.cv.HDtweedi... 3
coef.HDtweedi... 5
cv.HDtweedi... 6
HDtweedi... 8
plot.cv.HDtweedi... 11
plot.HDtweedi... 12
predict.cv.HDtweedi... 14
predict.HDtweedi... 15
print.HDtweedi... 17
auto A motor insurance dataset
Description
The motor insurance dataset is originally retrieved from the SAS Enterprise Miner database. The
included dataset is generated by re-organization and transformation as described in Qian et al.
(2016).
Usage
data(auto)
Details
This data set contains 2812 policy samples with 56 predictors. See Qian et al. (2016) for a detailed
description of the generation of these predictors. The response is the aggregate claim loss (in
thousand dollars). The predictors are expanded from the following original variables:
CAR_TYPE: car type, 6 categories
JOBCLASS: job class, 8 categories
MAX_EDUC: education level, 5 categories
KIDSDRIV: number of children passengers
TRAVTIME: time to travel from home to work
BLUEBOOK: car value
NPOLICY: number of policies
MVR_PTS: motor vehicle record point
AGE: driver age
HOMEKIDS: number of children at home
YOJ: years on job
INCOME: income
HOME_VAL: home value
SAMEHOME: years in current address
CAR_USE: whether the car is for commercial use
RED_CAR: whether the car color is red
REVOLKED: whether the driver’s license was revoked in the past
GENDER: gender
MARRIED: whether married
PARENT1: whether a single parent
AREA: whether the driver lives in urban area
Value
A list with the following elements:
x a [2812 x 56] matrix giving 2812 policy records with 56 predictors
y the aggregate claim loss
References
<NAME>. and <NAME>. (2005), “On Modeling Claim Frequency Data In General Insurance
With Extra Zeros”, Insurance: Mathematics and Economics, 36, 153-163.
<NAME> (2013). “cplm: Compound Poisson Linear Models”. A vignette for R package cplm.
Available from https://CRAN.R-project.org/package=cplm
<NAME>., <NAME>., <NAME>. and <NAME>. (2016), “Tweedie’s Compound Poisson Model With
Grouped Elastic Net,” Journal of Computational and Graphical Statistics, 25, 606-625.
Examples
# load HDtweedie library
library(HDtweedie)
# load data set
data(auto)
# how many samples and how many predictors ?
dim(auto$x)
# response y
auto$y
coef.cv.HDtweedie get coefficients or make coefficient predictions from a "cv.HDtweedie"
object.
Description
This function gets coefficients or makes coefficient predictions from a cross-validated HDtweedie
model, using the "cv.HDtweedie" object, and the optimal value chosen for lambda.
Usage
## S3 method for class 'cv.HDtweedie'
coef(object,s=c("lambda.1se","lambda.min"),...)
Arguments
object fitted cv.HDtweedie object.
s value(s) of the penalty parameter lambda at which predictions are required. De-
fault is the value s="lambda.1se" stored on the CV object, it is the largest
value of lambda such that error is within 1 standard error of the minimum. Al-
ternatively s="lambda.min" can be used, it is the optimal value of lambda that
gives minimum cross validation error cvm. If s is numeric, it is taken as the
value(s) of lambda to be used.
... not used. Other arguments to predict.
Details
This function makes it easier to use the results of cross-validation to get coefficients or make coef-
ficient predictions.
Value
The coefficients at the requested values for lambda.
Author(s)
<NAME>, <NAME> and <NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016), “Tweedie’s Compound Poisson Model With
Grouped Elastic Net,” Journal of Computational and Graphical Statistics, 25, 606-625.
<NAME>., <NAME>., and <NAME>. (2010), "Regularization paths for generalized linear
models via coordinate descent," Journal of Statistical Software, 33, 1.
See Also
cv.HDtweedie, and predict.cv.HDtweedie methods.
Examples
# load HDtweedie library
library(HDtweedie)
# load data set
data(auto)
# 5-fold cross validation using the lasso
cv0 <- cv.HDtweedie(x=auto$x,y=auto$y,p=1.5,nfolds=5)
# the coefficients at lambda = lambda.1se
coef(cv0)
# define group index
group1 <- c(rep(1,5),rep(2,7),rep(3,4),rep(4:14,each=3),15:21)
# 5-fold cross validation using the grouped lasso
cv1 <- cv.HDtweedie(x=auto$x,y=auto$y,group=group1,p=1.5,nfolds=5)
# the coefficients at lambda = lambda.min
coef(cv1, s = cv1$lambda.min)
coef.HDtweedie get coefficients or make coefficient predictions from an "HDtweedie"
object.
Description
Computes the coefficients at the requested values for lambda from a fitted HDtweedie object.
Usage
## S3 method for class 'HDtweedie'
coef(object, s = NULL, ...)
Arguments
object fitted HDtweedie model object.
s value(s) of the penalty parameter lambda at which predictions are required. De-
fault is the entire sequence used to create the model.
... not used. Other arguments to predict.
Details
s is the new vector at which predictions are requested. If s is not in the lambda sequence used
for fitting the model, the coef function will use linear interpolation to make predictions. The new
values are interpolated using a fraction of coefficients from both left and right lambda indices.
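For instance, a brief sketch of requesting coefficients at a value of s that is not on the fitted lambda sequence; the value 0.035 is arbitrary and chosen only to illustrate the interpolation.
# load HDtweedie library
library(HDtweedie)
# load data set
data(auto)
# fit the lasso
m0 <- HDtweedie(x=auto$x,y=auto$y,p=1.5)
# 0.035 need not be an element of m0$lambda; it is linearly interpolated
coef(m0, s = 0.035)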
Value
The coefficients at the requested values for lambda.
Author(s)
<NAME>, <NAME> and <NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016), “Tweedie’s Compound Poisson Model With
Grouped Elastic Net,” Journal of Computational and Graphical Statistics, 25, 606-625.
See Also
predict.HDtweedie method
Examples
# load HDtweedie library
library(HDtweedie)
# load data set
data(auto)
# fit the lasso
m0 <- HDtweedie(x=auto$x,y=auto$y,p=1.5)
# the coefficients at lambda = 0.01
coef(m0,s=0.01)
# define group index
group1 <- c(rep(1,5),rep(2,7),rep(3,4),rep(4:14,each=3),15:21)
# fit grouped lasso
m1 <- HDtweedie(x=auto$x,y=auto$y,group=group1,p=1.5)
# the coefficients at lambda = 0.01 and 0.04
coef(m1,s=c(0.01,0.04))
cv.HDtweedie Cross-validation for HDtweedie
Description
Does k-fold cross-validation for HDtweedie, produces a plot, and returns a value for lambda. This
function is modified based on the cv function from the glmnet package.
Usage
cv.HDtweedie(x, y, group = NULL, p, weights, lambda = NULL,
pred.loss = c("deviance", "mae", "mse"),
nfolds = 5, foldid, ...)
Arguments
x matrix of predictors, of dimension n × p; each row is an observation vector.
y response variable. This argument should be non-negative.
group To apply the grouped lasso, it is a vector of consecutive integers describing the
grouping of the coefficients (see example below). To apply the lasso, the user
can ignore this argument, and the vector is automatically generated by treating
each variable as a group.
p the power used for variance-mean relation of Tweedie model. Default is 1.50.
weights the observation weights. Default is equal weight.
lambda optional user-supplied lambda sequence; default is NULL, and HDtweedie chooses
its own sequence.
pred.loss loss to use for cross-validation error. Valid options are:
• "deviance" Deviance.
• "mae" Mean absolute error.
• "mse" Mean square error.
Default is "deviance".
nfolds number of folds - default is 5. Although nfolds can be as large as the sample
size (leave-one-out CV), it is not recommended for large datasets. Smallest
value allowable is nfolds=3.
foldid an optional vector of values between 1 and nfold identifying what fold each
observation is in. If supplied, nfold can be missing.
... other arguments that can be passed to HDtweedie.
Details
The function runs HDtweedie nfolds+1 times; the first to get the lambda sequence, and then the re-
mainder to compute the fit with each of the folds omitted. The average error and standard deviation
over the folds are computed.
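As a hedged illustration (not part of the package's own examples), supplying foldid fixes the fold assignment so that repeated cross-validation runs use identical splits and are directly comparable.
# load HDtweedie library
library(HDtweedie)
# load data set
data(auto)
# fix the fold assignment
set.seed(1)
fid <- sample(rep(1:5, length.out = nrow(auto$x)))
cv_fixed <- cv.HDtweedie(x=auto$x,y=auto$y,p=1.5,foldid=fid,pred.loss="mse")
cv_fixed$lambda.min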
Value
an object of class cv.HDtweedie is returned, which is a list with the ingredients of the cross-
validation fit.
lambda the values of lambda used in the fits.
cvm the mean cross-validated error - a vector of length length(lambda).
cvsd estimate of standard error of cvm.
cvupper upper curve = cvm+cvsd.
cvlower lower curve = cvm-cvsd.
name a text string indicating type of measure (for plotting purposes).
HDtweedie.fit a fitted HDtweedie object for the full data.
lambda.min The optimal value of lambda that gives minimum cross validation error cvm.
lambda.1se The largest value of lambda such that error is within 1 standard error of the
minimum.
Author(s)
<NAME>, <NAME> and <NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016), “Tweedie’s Compound Poisson Model With
Grouped Elastic Net,” Journal of Computational and Graphical Statistics, 25, 606-625.
See Also
HDtweedie, plot.cv.HDtweedie, predict.cv.HDtweedie, and coef.cv.HDtweedie methods.
Examples
# load HDtweedie library
library(HDtweedie)
# load data set
data(auto)
# 5-fold cross validation using the lasso
cv0 <- cv.HDtweedie(x=auto$x,y=auto$y,p=1.5,nfolds=5)
# define group index
group1 <- c(rep(1,5),rep(2,7),rep(3,4),rep(4:14,each=3),15:21)
# 5-fold cross validation using the grouped lasso
cv1 <- cv.HDtweedie(x=auto$x,y=auto$y,group=group1,p=1.5,nfolds=5)
HDtweedie Fits the regularization paths for lasso-type methods of the Tweedie
model
Description
Fits regularization paths for lasso-type methods of the Tweedie model at a sequence of regulariza-
tion parameters lambda.
Usage
HDtweedie(x, y, group = NULL,
p = 1.50,
weights = rep(1,nobs),
alpha = 1,
nlambda = 100,
lambda.factor = ifelse(nobs < nvars, 0.05, 0.001),
lambda = NULL,
pf = sqrt(bs),
dfmax = as.integer(max(group)) + 1,
pmax = min(dfmax * 1.2, as.integer(max(group))),
standardize = FALSE,
eps = 1e-08, maxit = 3e+08)
Arguments
x matrix of predictors, of dimension n × p; each row is an observation vector.
y response variable. This argument should be non-negative.
group To apply the grouped lasso, it is a vector of consecutive integers describing the
grouping of the coefficients (see example below). To apply the lasso, the user
can ignore this argument, and the vector is automatically generated by treating
each variable as a group.
p the power used for variance-mean relation of Tweedie model. Default is 1.50.
weights the observation weights. Default is equal weight.
alpha The elasticnet mixing parameter, with 0 ≤ α ≤ 1. The penalty is defined as
(1 − α)/2 ||β||_2^2 + α ||β||_1.
alpha=1 is the lasso penalty, and alpha=0 the ridge penalty. Default is 1.
nlambda the number of lambda values - default is 100.
lambda.factor the factor for getting the minimal lambda in lambda sequence, where min(lambda)
= lambda.factor * max(lambda). max(lambda) is the smallest value of lambda
for which all coefficients are zero. The default depends on the relationship be-
tween n (the number of rows in the matrix of predictors) and p (the number of
predictors). If n >= p, the default is 0.001, close to zero. If n < p, the default
is 0.05. A very small value of lambda.factor will lead to a saturated fit. It
takes no effect if there is user-defined lambda sequence.
lambda a user supplied lambda sequence. Typically, by leaving this option unspecified
users can have the program compute its own lambda sequence based on nlambda
and lambda.factor. Supplying a value of lambda overrides this. It is better to
supply a decreasing sequence of lambda values than a single (small) value. If
not, the program will sort user-defined lambda sequence in decreasing order
automatically.
pf penalty factor, a vector in length of bn (bn is the total number of groups). Sep-
arate penalty weights can be applied to each group to allow differential shrink-
age. Can be 0 for some groups, which implies no shrinkage, and results in that
group always being included in the model. Default value for each entry is the
square-root of the corresponding size of each group (for the lasso, it is 1 for each
variable).
dfmax limit the maximum number of groups in the model. Default is bs+1.
pmax limit the maximum number of groups ever to be nonzero. For example once a
group enters the model, no matter how many times it exits or re-enters model
through the path, it will be counted only once. Default is min(dfmax*1.2,bs).
eps convergence termination tolerance. Defaults value is 1e-8.
standardize logical flag for variable standardization, prior to fitting the model sequence. If
TRUE, the x matrix is normalized such that each column is centered and the sum of squares
of each column satisfies Σ_{i=1}^{N} x_ij^2 / N = 1. The coefficients are always returned on the
original scale. Default is FALSE.
maxit maximum number of inner-layer BMD iterations allowed. Default is 3e8.
Details
The sequence of models implied by lambda is fit by the IRLS-BMD algorithm. This gives a
(grouped) lasso or (grouped) elasticnet regularization path for fitting the Tweedie generalized linear
regression paths, by maximizing the corresponding penalized Tweedie log-likelihood. If the group
argument is ignored, the function fits the lasso. Users can tweak the penalty by choosing different
alpha and penalty factor.
For computational speed reasons, if models are not converging or running slow, consider increasing eps,
decreasing nlambda, or increasing lambda.factor before increasing maxit.
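A hedged sketch of such tuning; the argument values below are illustrative, not recommendations.
# load HDtweedie library
library(HDtweedie)
# load data set
data(auto)
# a coarser, faster path: fewer lambda values, shorter path, looser tolerance
m_fast <- HDtweedie(x=auto$x,y=auto$y,p=1.5,
                    nlambda=50,lambda.factor=0.05,eps=1e-6)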
Value
An object with S3 class HDtweedie.
call the call that produced this object
b0 intercept sequence of length length(lambda)
beta a p*length(lambda) matrix of coefficients.
df the number of nonzero groups for each value of lambda.
dim dimension of coefficient matrix (matrices)
lambda the actual sequence of lambda values used
npasses total number of iterations (the most inner loop) summed over all lambda values
jerr error flag, for warnings and errors, 0 if no error.
group a vector of consecutive integers describing the grouping of the coefficients.
Author(s)
<NAME>, <NAME> and <NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016), “Tweedie’s Compound Poisson Model With
Grouped Elastic Net,” Journal of Computational and Graphical Statistics, 25, 606-625.
See Also
plot.HDtweedie
Examples
# load HDtweedie library
library(HDtweedie)
# load auto data set
data(auto)
# fit the lasso
m0 <- HDtweedie(x=auto$x,y=auto$y,p=1.5)
# define group index
group1 <- c(rep(1,5),rep(2,7),rep(3,4),rep(4:14,each=3),15:21)
# fit the grouped lasso
m1 <- HDtweedie(x=auto$x,y=auto$y,group=group1,p=1.5)
# fit the grouped elastic net
m2 <- HDtweedie(x=auto$x,y=auto$y,group=group1,p=1.5,alpha=0.7)
plot.cv.HDtweedie plot the cross-validation curve produced by cv.HDtweedie
Description
Plots the cross-validation curve, and upper and lower standard deviation curves, as a function of
the lambda values used. This function is modified based on the plot.cv function from the glmnet
package.
Usage
## S3 method for class 'cv.HDtweedie'
plot(x, sign.lambda, ...)
Arguments
x fitted cv.HDtweedie object
sign.lambda either plot against log(lambda) (default) or its negative if sign.lambda=-1.
... other graphical parameters to plot
Details
A plot is produced.
Author(s)
<NAME>, <NAME> and <NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016), “Tweedie’s Compound Poisson Model With
Grouped Elastic Net,” Journal of Computational and Graphical Statistics, 25, 606-625.
<NAME>., <NAME>., and <NAME>. (2010), “Regularization paths for generalized linear
models via coordinate descent,” Journal of Statistical Software, 33, 1.
See Also
cv.HDtweedie.
Examples
# load HDtweedie library
library(HDtweedie)
# load data set
data(auto)
# 5-fold cross validation using the lasso
cv0 <- cv.HDtweedie(x=auto$x,y=auto$y,p=1.5,nfolds=5,lambda.factor=.0005)
# make a CV plot
plot(cv0)
# define group index
group1 <- c(rep(1,5),rep(2,7),rep(3,4),rep(4:14,each=3),15:21)
# 5-fold cross validation using the grouped lasso
cv1 <- cv.HDtweedie(x=auto$x,y=auto$y,group=group1,p=1.5,nfolds=5,lambda.factor=.0005)
# make a CV plot
plot(cv1)
plot.HDtweedie Plot solution paths from a "HDtweedie" object
Description
Produces a coefficient profile plot of the coefficient paths for a fitted HDtweedie object.
Usage
## S3 method for class 'HDtweedie'
plot(x, group = FALSE, log.l = TRUE, ...)
Arguments
x fitted HDtweedie model
group what is on the Y-axis. Plot the norm of each group if TRUE. Plot each coefficient
if FALSE.
log.l what is on the X-axis. Plot against the log-lambda sequence if TRUE. Plot against
the lambda sequence if FALSE.
... other graphical parameters to plot
Details
A coefficient profile plot is produced.
Author(s)
<NAME>, <NAME> and <NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016), “Tweedie’s Compound Poisson Model With
Grouped Elastic Net,” Journal of Computational and Graphical Statistics, 25, 606-625.
Examples
# load HDtweedie library
library(HDtweedie)
# load data set
data(auto)
# fit the lasso
m0 <- HDtweedie(x=auto$x,y=auto$y,p=1.5)
# make plot
plot(m0) # plots the coefficients against the log-lambda sequence
# define group index
group1 <- c(rep(1,5),rep(2,7),rep(3,4),rep(4:14,each=3),15:21)
# fit group lasso
m1 <- HDtweedie(x=auto$x,y=auto$y,group=group1,p=1.5)
# make plots
par(mfrow=c(1,3))
plot(m1) # plots the coefficients against the log-lambda sequence
plot(m1,group=TRUE) # plots group norm against the log-lambda sequence
plot(m1,log.l=FALSE) # plots against the lambda sequence
predict.cv.HDtweedie make predictions from a "cv.HDtweedie" object.
Description
This function makes predictions from a cross-validated HDtweedie model, using the stored "cv.HDtweedie"
object, and the optimal value chosen for lambda.
Usage
## S3 method for class 'cv.HDtweedie'
predict(object, newx, s=c("lambda.1se","lambda.min"),...)
Arguments
object fitted cv.HDtweedie object.
newx matrix of new values for x at which predictions are to be made. Must be a matrix.
See documentation for predict.HDtweedie.
s value(s) of the penalty parameter lambda at which predictions are required.
Default is the value s="lambda.1se" stored on the CV object. Alternatively
s="lambda.min" can be used. If s is numeric, it is taken as the value(s) of
lambda to be used.
... not used. Other arguments to predict.
Details
This function makes it easier to use the results of cross-validation to make a prediction.
Value
The returned object depends on the ... argument which is passed on to the predict method for
HDtweedie objects.
Author(s)
<NAME>, <NAME> and <NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016), “Tweedie’s Compound Poisson Model With
Grouped Elastic Net,” Journal of Computational and Graphical Statistics, 25, 606-625.
See Also
cv.HDtweedie, and coef.cv.HDtweedie methods.
Examples
# load HDtweedie library
library(HDtweedie)
# load data set
data(auto)
# 5-fold cross validation using the lasso
cv0 <- cv.HDtweedie(x=auto$x,y=auto$y,p=1.5,nfolds=5)
# predicted mean response at lambda = lambda.1se, newx = x[1,]
pre = predict(cv0, newx = auto$x[1,], type = "response")
# define group index
group1 <- c(rep(1,5),rep(2,7),rep(3,4),rep(4:14,each=3),15:21)
# 5-fold cross validation using the grouped lasso
cv1 <- cv.HDtweedie(x=auto$x,y=auto$y,group=group1,p=1.5,nfolds=5)
# predicted the log mean response at lambda = lambda.min, x[1:5,]
pre = predict(cv1, newx = auto$x[1:5,], s = cv1$lambda.min, type = "link")
predict.HDtweedie make predictions from a "HDtweedie" object.
Description
Similar to other predict methods, this functions predicts fitted values from a HDtweedie object.
Usage
## S3 method for class 'HDtweedie'
predict(object, newx, s = NULL,
type=c("response","link"), ...)
Arguments
object fitted HDtweedie model object.
newx matrix of new values for x at which predictions are to be made. Must be a matrix.
s value(s) of the penalty parameter lambda at which predictions are required. De-
fault is the entire sequence used to create the model.
type type of prediction required:
• Type "response" gives the mean response estimate.
• Type "link" gives the estimate for log mean response.
... Not used. Other arguments to predict.
Details
s is the new vector at which predictions are requested. If s is not in the lambda sequence used for
fitting the model, the predict function will use linear interpolation to make predictions. The new
values are interpolated using a fraction of predicted values from both left and right lambda indices.
Value
The object returned depends on type.
Author(s)
<NAME>, <NAME> and <NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016), “Tweedie’s Compound Poisson Model With
Grouped Elastic Net,” Journal of Computational and Graphical Statistics, 25, 606-625.
See Also
coef method
Examples
# load HDtweedie library
library(HDtweedie)
# load auto data set
data(auto)
# fit the lasso
m0 <- HDtweedie(x=auto$x,y=auto$y,p=1.5)
# predicted mean response at x[10,]
print(predict(m0,type="response",newx=auto$x[10,]))
# define group index
group1 <- c(rep(1,5),rep(2,7),rep(3,4),rep(4:14,each=3),15:21)
# fit the grouped lasso
m1 <- HDtweedie(x=auto$x,y=auto$y,group=group1,p=1.5)
# predicted the log mean response at x[1:5,]
print(predict(m1,type="link",newx=auto$x[1:5,]))
print.HDtweedie print a HDtweedie object
Description
Print the nonzero group counts at each lambda along the HDtweedie path.
Usage
## S3 method for class 'HDtweedie'
print(x, digits = max(3, getOption("digits") - 3), ...)
Arguments
x fitted HDtweedie object
digits significant digits in printout
... additional print arguments
Details
Print the information about the nonzero group counts at each lambda step in the HDtweedie object.
The result is a two-column matrix with columns Df and Lambda. The Df column is the number of
groups that have nonzero within-group coefficients, and the Lambda column is the corresponding
lambda.
Value
a two-column matrix, the first column is the number of nonzero group counts and the second
column is Lambda.
Author(s)
<NAME>, <NAME> and <NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016), “Tweedie’s Compound Poisson Model With
Grouped Elastic Net,” Journal of Computational and Graphical Statistics, 25, 606-625.
Examples
# load HDtweedie library
library(HDtweedie)
# load auto data set
data(auto)
# fit the lasso
m0 <- HDtweedie(x=auto$x,y=auto$y,p=1.5)
# print out results
print(m0)
# define group index
group1 <- c(rep(1,5),rep(2,7),rep(3,4),rep(4:14,each=3),15:21)
# fit the grouped lasso
m1 <- HDtweedie(x=auto$x,y=auto$y,group=group1,p=1.5)
# print out results
print(m1) |
IRTest | cran | R | Package ‘IRTest’
September 19, 2023
Type Package
Title Parameter Estimation of Item Response Theory with Estimation of
Latent Distribution
Version 1.11.0
Description Item response theory (IRT) parameter estimation using marginal maximum likeli-
hood and expectation-maximization algorithm
(Bock & Aitkin, 1981 <doi:10.1007/BF02293801>).
Within parameter estimation algorithm, several methods for latent distribution estima-
tion are available
(Li, 2022 <http://www.riss.kr/link?id=T16374105>).
Reflecting some features of the true latent distribution, these latent distribution estimation meth-
ods can possibly enhance the estimation accuracy and free the normality assumption on the la-
tent distribution.
License GPL (>= 3)
Encoding UTF-8
LazyData true
RoxygenNote 7.2.3
URL https://github.com/SeewooLi/IRTest
BugReports https://github.com/SeewooLi/IRTest/issues
Suggests knitr, rmarkdown, testthat (>= 3.0.0), V8, gridExtra
VignetteBuilder knitr
Imports betafunctions, dcurver, ggplot2
Depends R (>= 2.10)
Config/testthat/edition 3
NeedsCompilation no
Author <NAME> [aut, cre, cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-19 21:20:02 UTC
R topics documented:
cat_clp... 2
DataGeneratio... 3
dist... 6
GH... 8
inform_f_ite... 8
inform_f_tes... 9
IRTest_Dic... 9
IRTest_Mi... 13
IRTest_Pol... 19
item_fi... 23
latent_distributio... 24
original_par_2G... 25
plot.irtes... 27
plot_ite... 28
print.irtes... 29
print.irtest_summar... 30
recategoriz... 31
reliabilit... 31
summary.irtes... 33
cat_clps A recommendation for the category collapsing of items based on item
parameters
Description
In a polytomous item, one or more score categories may not have the highest probability among the
categories in an acceptable θ range. In this case, the category could be regarded as a redundant one
from a psychometric point of view and can be collapsed into another score category. This function
returns a recommendation for a recategorization scheme based on item parameters.
Usage
cat_clps(item.matrix, range = c(-4, 4), increment = 0.005)
Arguments
item.matrix A matrix of item parameters.
range A range of θ to be evaluated.
increment A width of the grid scheme.
Value
A list of recommended recategorization for each item.
Author(s)
<NAME> <<EMAIL>>
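This entry has no Examples section; a hedged usage sketch follows. It assumes that the par_est matrix returned by an estimation function such as IRTest_Poly is an acceptable input for item.matrix.
## Not run:
# 'data' is a matrix of polytomous item responses
fit <- IRTest_Poly(data)
# recommended category collapsing for each item, evaluated on a theta grid
cat_clps(fit$par_est, range = c(-4, 4), increment = 0.005)
## End(Not run)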
DataGeneration Generating artificial item response data
Description
This function generates artificial item response data with user-specified item types, details of item
parameters, and latent distribution.
Usage
DataGeneration(
seed = 1,
N = 2000,
nitem_D = 0,
nitem_P = 0,
model_D = "2PL",
model_P = "GPCM",
latent_dist = NULL,
item_D = NULL,
item_P = NULL,
theta = NULL,
prob = 0.5,
d = 1.7,
sd_ratio = 1,
m = 0,
s = 1,
a_l = 0.8,
a_u = 2.5,
b_m = NULL,
b_sd = NULL,
c_l = 0,
c_u = 0.2,
categ = NULL
)
Arguments
seed A numeric value that is used on random sampling. The seed number can guar-
antee the replicability of the result.
N A numeric value. The number of examinees.
nitem_D A numeric value. The number of dichotomous items.
nitem_P A numeric value. The number of polytomous items.
model_D A vector of length nitem_D. The ith element is the probability model for the ith
dichotomous item.
model_P A character string that represents the probability model for the polytomous
items.
latent_dist A character string that determines the type of latent distribution. Currently avail-
able options are "beta" (four-parameter beta distribution; rBeta.4P), "chi"
(χ2 distribution; rchisq), "normal", "Normal", or "N" (standard normal dis-
tribution; rnorm), and "Mixture" or "2NM" (two-component Gaussian mixture
distribution; see Li (2021) for details.)
item_D Default is NULL. An item parameter matrix can be specified. The number of
columns should be 3: a parameter for the first, b parameter for the second, and
c parameter for the third column.
item_P Default is NULL. An item parameter matrix can be specified. The number of
columns should be 7: a parameter for the first, and b parameters for the rest of
the columns.
theta Default is NULL. An ability parameter vector can be specified.
prob A numeric value required when latent_dist = "Mixture". It is the π = n1/N
parameter of the two-component Gaussian mixture distribution, where n1 is the es-
timated number of examinees who belong to the first Gaussian component and
N is the total number of examinees (Li, 2021).
d A numeric value required when latent_dist = "Mixture". It is the δ = (µ2 − µ1)/σ̄
parameter of the two-component Gaussian mixture distribution, where µ1 is the es-
timated mean of the first Gaussian component, µ2 is the estimated mean of the
second Gaussian component, and σ̄ = 1 is the standard deviation of the latent
distribution (Li, 2021). Without loss of generality, µ2 ≥ µ1, thus δ ≥ 0, is
assumed.
sd_ratio A numeric value required when latent_dist = "Mixture". It is the ζ = σ2/σ1
parameter of the two-component Gaussian mixture distribution, where σ1 is the es-
timated standard deviation of the first Gaussian component, σ2 is the estimated
standard deviation of the second Gaussian component (Li, 2021).
m A numeric value of the overall mean of the latent distribution. The default is 0.
s A numeric value of the overall standard deviation of the latent distribution. The
default is 1.
a_l A numeric value. The lower bound of item discrimination parameters (a).
a_u A numeric value. The upper bound of item discrimination parameters (a).
b_m A numeric value. The mean of item difficulty parameters (b). If unspecified, m
is passed on to the value.
b_sd A numeric value. The standard deviation of item difficulty parameters (b). If
unspecified, s is passed on to the value.
c_l A numeric value. The lower bound of item guessing parameters (c).
c_u A numeric value. The upper bound of item guessing parameters (c).
categ A numeric vector of length nitem_P. The ith element equals the number of
categories of the ith polytomous item.
Value
This function returns a list which contains several objects:
theta A vector of ability parameters (θ).
item_D A matrix of dichotomous item parameters.
initialitem_D A matrix that contains initial item parameter values for dichotomous items.
data_D A matrix of dichotomous item responses where rows indicate examinees and
columns indicate items.
item_P A matrix of polytomous item parameters.
initialitem_P A matrix that contains initial item parameter values for polytomous items.
data_P A matrix of polytomous item responses where rows indicate examinees and
columns indicate items.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>. (2021). Using a two-component normal mixture distribution as a latent distribution in esti-
mating parameters of item response models. Journal of Educational Evaluation, 34(4), 759-789.
Examples
# Dichotomous item responses only
Alldata <- DataGeneration(seed = 1,
model_D = rep(3, 10),
N=500,
nitem_D = 10,
nitem_P = 0,
latent_dist = "2NM",
d = 1.664,
sd_ratio = 2,
prob = 0.3)
data <- Alldata$data_D
item <- Alldata$item_D
initialitem <- Alldata$initialitem_D
theta <- Alldata$theta
# Polytomous item responses only
Alldata <- DataGeneration(seed = 2,
N=1000,
item_D=NULL,
item_P=NULL,
theta = NULL,
nitem_D = 0,
nitem_P = 10,
categ = rep(3:7,each = 2),
latent_dist = "2NM",
d = 1.664,
sd_ratio = 2,
prob = 0.3)
data <- Alldata$data_P
item <- Alldata$item_P
initialitem <- Alldata$initialitem_P
theta <- Alldata$theta
# Mixed-format items
Alldata <- DataGeneration(seed = 2,
model_D = rep(1:2, each=10),# 1PL model is applied to item #1~10
N=1000,
nitem_D = 20,
nitem_P = 10,
categ = rep(3:7,each = 2),# 3 categories for item #21-22,
# ...,
latent_dist = "2NM",
d = 1.664,
sd_ratio = 2,
prob = 0.3)
DataD <- Alldata$data_D
DataP <- Alldata$data_P
itemD <- Alldata$item_D
itemP <- Alldata$item_P
initialitemD <- Alldata$initialitem_D
initialitemP <- Alldata$initialitem_P
theta <- Alldata$theta
dist2 Re-parameterized two-component normal mixture distribution
Description
Probability density for the re-parameterized two-component normal mixture distribution.
Usage
dist2(x, prob = 0.5, d = 0, sd_ratio = 1, overallmean = 0, overallsd = 1)
Arguments
x A numeric vector. The location to evaluate the density function.
prob A numeric value of the π = n1/N parameter of the two-component Gaussian mixture
distribution, where n1 is the estimated number of examinees who belong to the
first Gaussian component and N is the total number of examinees (Li, 2021).
d A numeric value of the δ = (µ2 − µ1)/σ̄ parameter of the two-component Gaussian
mixture distribution, where µ1 is the estimated mean of the first Gaussian component,
µ2 is the estimated mean of the second Gaussian component, and σ̄ is the standard
deviation of the latent distribution (Li, 2021). Without loss of generality, µ2 ≥
µ1, thus δ ≥ 0, is assumed.
sd_ratio A numeric value of the ζ = σ2/σ1 parameter of the two-component Gaussian mixture
distribution, where σ1 is the estimated standard deviation of the first Gaussian
component, σ2 is the estimated standard deviation of the second Gaussian com-
ponent (Li, 2021).
overallmean A numeric value of µ̄ that determines the overall mean of two-component Gaus-
sian mixture distribution.
overallsd A numeric value of σ̄ that determines the overall standard deviation of two-
component Gaussian mixture distribution.
Details
The overall mean and overall standard deviation can be expressed with the original parameters of the
two-component Gaussian mixture distribution as follows;
1) Overall mean (µ̄)
µ̄ = π µ1 + (1 − π) µ2
2) Overall standard deviation (σ̄)
σ̄ = sqrt( π σ1^2 + (1 − π) σ2^2 + π(1 − π)(µ2 − µ1)^2 )
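A minimal base-R sketch checking these two identities by simulation; the component parameter values are arbitrary and used only for illustration.
p <- 0.3; mu1 <- -1; mu2 <- 1; s1 <- 0.8; s2 <- 1.2
overall_mean <- p*mu1 + (1-p)*mu2
overall_sd <- sqrt(p*s1^2 + (1-p)*s2^2 + p*(1-p)*(mu2-mu1)^2)
set.seed(1)
component <- rbinom(1e5, 1, 1-p)  # 0 = first component, 1 = second component
x <- ifelse(component == 0, rnorm(1e5, mu1, s1), rnorm(1e5, mu2, s2))
c(analytic = overall_mean, simulated = mean(x))
c(analytic = overall_sd, simulated = sd(x))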
Value
The evaluated probability density value(s).
Author(s)
<NAME> <<EMAIL>>
References
Li, S. (2021). Using a two-component normal mixture distribution as a latent distribution in esti-
mating parameters of item response models. Journal of Educational Evaluation, 34(4), 759-789.
GHc Gauss-Hermite constants
Description
a vector that contains Gauss-Hermite constants
Usage
GHc
Format
An object of class numeric of length 21.
inform_f_item Item information function
Description
Item information function
Usage
inform_f_item(x, test, item, type = NULL)
Arguments
x A vector of θ value(s).
test An object returned from an estimation function.
item A numeric value indicating an item. If n is provided, item information is calcu-
lated for the nth item.
type A character value for a mixed format test which determines the item type: "d"
stands for a dichotomous item, and "p" stands for a polytomous item.
Value
A vector of item information values of the same length as x.
Author(s)
<NAME> <<EMAIL>>
inform_f_test Test information function
Description
Test information function
Usage
inform_f_test(x, test)
Arguments
x A vector of θ value(s).
test An object returned from an estimation function.
Value
A vector of test information values of the same length as x.
Author(s)
<NAME> <<EMAIL>>
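These two information-function entries have no Examples section; a hedged sketch of plotting item and test information follows. M1 is assumed to be the object returned by IRTest_Dich(), as in the example further below.
## Not run:
theta_grid <- seq(-4, 4, by = 0.1)
# test information across the ability range
plot(theta_grid, inform_f_test(theta_grid, M1), type = "l",
     xlab = "theta", ylab = "information")
# information of the first (dichotomous) item
lines(theta_grid, inform_f_item(theta_grid, M1, item = 1, type = "d"), lty = 2)
## End(Not run)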
IRTest_Dich Item and ability parameters estimation for dichotomous items
Description
This function estimates IRT item and ability parameters when all items are scored dichotomously.
Based on Bock & Aitkin’s (1981) marginal maximum likelihood and EM algorithm (EM-MML),
this function incorporates several latent distribution estimation algorithms which could free the
normality assumption on the latent variable. If the normality assumption is violated, application of
these latent distribution estimation methods could reflect some features of the unknown true latent
distribution, and, thus, could provide more accurate parameter estimates (Li, 2021; Woods & Lin,
2009; Woods & Thissen, 2006).
Usage
IRTest_Dich(
data,
model = "2PL",
range = c(-6, 6),
q = 121,
initialitem = NULL,
ability_method = "EAP",
latent_dist = "Normal",
max_iter = 200,
threshold = 1e-04,
bandwidth = "SJ-ste",
h = NULL
)
Arguments
data A matrix of item responses where responses are coded as 0 or 1. Rows and
columns indicate examinees and items, respectively.
model A vector that represents types of item characteristic functions applied to each
item. Insert 1, "1PL", "Rasch", or "RASCH" for one-parameter logistic model,
2, "2PL" for two-parameter logistic model, and 3, "3PL" for three-parameter
logistic model. The default is "2PL".
range Range of the latent variable to be considered in the quadrature scheme. The
default is from -6 to 6: c(-6, 6).
q A numeric value that represents the number of quadrature points. The default
value is 121.
initialitem A matrix of initial item parameter values for starting the estimation algorithm
ability_method The ability parameter estimation method. The available options are Expected a
posteriori (EAP) and Maximum Likelihood Estimates (MLE). The default is EAP.
latent_dist A character string that determines latent distribution estimation method. Insert
"Normal", "normal", or "N" to assume normal distribution on the latent distri-
bution, "EHM" for empirical histogram method (Mislevy, 1984; Mislevy & Bock,
1985), "Mixture" or "2NM" for the method of two-component Gaussian mixture
distribution (Li, 2021; Mislevy, 1984), "DC" or "Davidian" for Davidian-curve
method (Woods & Lin, 2009), "KDE" for kernel density estimation method (Li,
2022), and "LLS" for log-linear smoothing method (Casabianca, Lewis, 2015).
The default value is set to "Normal" for the conventional normality assumption
on latent distribution.
max_iter A numeric value that determines the maximum number of iterations in the EM-
MML. The default value is 200.
threshold A numeric value that determines the threshold of EM-MML convergence. A
maximum item parameter change is monitored and compared with the threshold.
The default value is 0.0001.
bandwidth A character value is needed when "KDE" is used for the latent distribution esti-
mation. This argument determines which bandwidth estimation method is used
for "KDE". The default value is "SJ-ste". See density for possible options.
h A natural number less than or equal to 10 is needed when "DC" is used for
the latent distribution estimation. This argument determines the complexity of
Davidian curve.
Details
The probabilities for correct response (u = 1) in one-, two-, and three-parameter logistic models can be expressed as follows;
1) One-parameter logistic (1PL) model
P (u = 1|θ, b) = exp(θ − b) / (1 + exp(θ − b))
2) Two-parameter logistic (2PL) model
P (u = 1|θ, a, b) = exp(a(θ − b)) / (1 + exp(a(θ − b)))
3) Three-parameter logistic (3PL) model
P (u = 1|θ, a, b, c) = c + (1 − c) exp(a(θ − b)) / (1 + exp(a(θ − b)))
The estimated latent distribution for each of the latent distribution estimation methods can be expressed as follows;
1) Empirical histogram method
P (θ = Xk) = A(Xk)
where k = 1, 2, ..., q, Xk is the location of the kth quadrature point, and A(Xk) is a value
of the probability mass function evaluated at Xk. The empirical histogram method thus has q − 1
parameters.
2) Two-component Gaussian mixture distribution
P (θ = X) = π φ(X; µ1, σ1) + (1 − π) φ(X; µ2, σ2)
where φ(X; µ, σ) is the value of a Gaussian component with mean µ and standard deviation
σ evaluated at X.
3) Davidian curve method
P (θ = X) = ( Σ_{λ=0}^{h} mλ X^λ )^2 φ(X; 0, 1)
where h corresponds to the argument h and determines the degree of the polynomial.
4) Kernel density estimation method
P (θ = X) = (1 / (N h)) Σ_{j=1}^{N} K( (X − θj) / h )
where N is the number of examinees, θj is the jth examinee’s ability parameter, h is the
bandwidth which corresponds to the argument bandwidth, and K(·) is a kernel function. The
Gaussian kernel is used in this function.
5) Log-linear smoothing method
P (θ = Xq) = exp( β0 + Σ_{m=1}^{h} βm Xq^m )
where h is the hyper parameter which determines the smoothness of the density, and θ can
take a total of Q finite values (X1, ..., Xq, ..., XQ).
Value
This function returns a list which contains several objects:
par_est The item parameter estimates.
se The asymptotic standard errors for item parameter estimates.
fk The estimated frequencies of examinees at each quadrature points.
iter The number of EM-MML iterations required for the convergence.
quad The location of quadrature points.
diff The final value of the monitored maximum item parameter change.
Ak The estimated discrete latent distribution. It is discrete (i.e., probability mass
function) since quadrature scheme of EM-MML is used.
Pk The posterior probabilities for each examinees at each quadrature points.
theta The estimated ability parameter values. If ability_method = "MLE", and if an
examinee answers all or none of the items correctly, the function returns ±Inf.
theta_se The asymptotic standard errors of ability parameter estimates. Available only
when ability_method = "MLE". If an examinee answers all or none of the items
correctly, the function returns NA.
logL The deviance (i.e., -2logL).
density_par The estimated density parameters. If latent_dist = "2NM", prob is the es-
timated π = n1/N parameter of the two-component Gaussian mixture distribution,
where n1 is the estimated number of examinees who belong to the first Gaus-
sian component and N is the total number of examinees; d is the estimated
δ = (µ2 − µ1)/σ̄ parameter of the two-component Gaussian mixture distribution, where
µ1 is the estimated mean of the first Gaussian component, µ2 is the estimated
mean of the second Gaussian component, and σ̄ = 1 is the standard deviation of
the latent distribution; and sd_ratio is the estimated ζ = σ2/σ1 parameter of the two-
component Gaussian mixture distribution, where σ1 is the estimated standard
deviation of the first Gaussian component, σ2 is the estimated standard devia-
tion of the second Gaussian component (Li, 2021). Without loss of generality,
µ2 ≥ µ1, thus δ ≥ 0, is assumed.
Options A replication of input arguments and other information.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., & <NAME>. (1981). Marginal maximum likelihood estimation of item parameters:
Application of an EM algorithm. Psychometrika, 46(4), 443-459.
<NAME>., & <NAME>. (2015). IRT item parameter recovery with marginal maximum
likelihood estimation using loglinear smoothing models. Journal of Educational and Behavioral
Statistics, 40(6), 547-578.
<NAME>. (2021). Using a two-component normal mixture distribution as a latent distribution in esti-
mating parameters of item response models. Journal of Educational Evaluation, 34(4), 759-789.
<NAME>. (2022). The effect of estimating latent distribution using kernel density estimation method
on the accuracy and efficiency of parameter estimation of item response models [Master’s thesis,
Yonsei University, Seoul]. Yonsei University Library.
<NAME>. (1984). Estimating latent distributions. Psychometrika, 49(3), 359-381.
<NAME>., & <NAME>. (1985). Implementation of the EM algorithm in the estimation of
item parameters: The BILOG computer program. In <NAME> (Ed.). Proceedings of the 1982
item response theory and computerized adaptive testing conference (pp. 189-202). University of
Minnesota, Department of Psychology, Computerized Adaptive Testing Conference.
<NAME>., & <NAME>. (2009). Item response theory with estimation of the latent density using
Davidian curves. Applied Psychological Measurement, 33(2), 102-117.
<NAME>., & <NAME>. (2006). Item response theory with estimation of the latent population
distribution using spline-based densities. Psychometrika, 71(2), 281-301.
Examples
## Not run:
# A preparation of dichotomous item response data
data <- DataGeneration(seed = 1,
model_D = rep(1, 10),
N=500,
nitem_D = 10,
nitem_P = 0,
latent_dist = "2NM",
d = 1.664,
sd_ratio = 2,
prob = 0.3)$data_D
# Analysis
M1 <- IRTest_Dich(data)
## End(Not run)
IRTest_Mix Item and ability parameters estimation for a mixed-format item re-
sponse data
Description
This function estimates IRT item and ability parameters when a test consists of mixed-format items
(i.e., a combination of dichotomous and polytomous items). In an educational context, the combination
of these two item formats has advantages: the dichotomous item format expedites scoring and is
conducive to covering a broad domain, while the polytomous item format (e.g., free-response items) encour-
ages students to exert complex cognitive skills (Lee et al., 2020). Based on Bock & Aitkin’s (1981)
marginal maximum likelihood and EM algorithm (EM-MML), this function incorporates several
latent distribution estimation algorithms which could free the normality assumption on the latent
variable. If the normality assumption is violated, application of these latent distribution estimation
methods could reflect some features of the unknown true latent distribution, and, thus, could provide
more accurate parameter estimates (Li, 2021; Woods & Lin, 2009; Woods & Thissen, 2006).
Usage
IRTest_Mix(
data_D,
data_P,
model_D = "2PL",
model_P = "GPCM",
range = c(-6, 6),
q = 121,
initialitem_D = NULL,
initialitem_P = NULL,
ability_method = "EAP",
latent_dist = "Normal",
max_iter = 200,
threshold = 1e-04,
bandwidth = "SJ-ste",
h = NULL
)
Arguments
data_D A matrix of item responses where responses are coded as 0 or 1. Rows and
columns indicate examinees and items, respectively.
data_P A matrix of polytomous item responses where responses are coded as 0, 1, ...,
or m for an m+1 category item. Rows and columns indicate examinees and items,
respectively.
model_D A vector of dichotomous item response models. Insert 1, "1PL", "Rasch", or
"RASCH" for one-parameter logistic model, 2, "2PL" for two-parameter logistic
model, and 3, "3PL" for three-parameter logistic model. The default is "2PL".
model_P A character value of an item response model. Currently, PCM, GPCM, and GRM are
available. The default is "GPCM".
range Range of the latent variable to be considered in the quadrature scheme. The
default is from -6 to 6: c(-6, 6).
q A numeric value that represents the number of quadrature points. The default
value is 121.
initialitem_D A matrix of initial dichotomous item parameter values for starting the estimation
algorithm.
initialitem_P A matrix of initial polytomous item parameter values for starting the estimation
algorithm.
ability_method The ability parameter estimation method. The available options are Expected a
posteriori (EAP) and Maximum Likelihood Estimates (MLE). The default is EAP.
latent_dist A character string that determines latent distribution estimation method. Insert
"Normal", "normal", or "N" to assume normal distribution on the latent distri-
bution, "EHM" for empirical histogram method (Mislevy, 1984; Mislevy & Bock,
1985), "Mixture" or "2NM" for the method of two-component Gaussian mixture
distribution (Li, 2021; Mislevy, 1984), "DC" or "Davidian" for Davidian-curve
method (Woods & Lin, 2009), "KDE" for kernel density estimation method (Li,
2022), and "LLS" for log-linear smoothing method (Casabianca, Lewis, 2015).
The default value is set to "Normal" for the conventional normality assumption
on latent distribution.
max_iter A numeric value that determines the maximum number of iterations in the EM-
MML. The default value is 200.
threshold A numeric value that determines the threshold of EM-MML convergence. A
maximum item parameter change is monitored and compared with the threshold.
The default value is 0.0001.
bandwidth A character value is needed when "KDE" is used for the latent distribution esti-
mation. This argument determines which bandwidth estimation method is used
for "KDE". The default value is "SJ-ste". See density for possible options.
h A natural number less than or equal to 10 is needed when "DC" is used for
the latent distribution estimation. This argument determines the complexity of
Davidian curve.
Details
Dichotomous: The probabilities for correct response (u = 1) in one-, two-, and three-parameter logistic models can be expressed as follows;
1) One-parameter logistic (1PL) model
P (u = 1|θ, b) = exp(θ − b) / (1 + exp(θ − b))
2) Two-parameter logistic (2PL) model
P (u = 1|θ, a, b) = exp(a(θ − b)) / (1 + exp(a(θ − b)))
3) Three-parameter logistic (3PL) model
P (u = 1|θ, a, b, c) = c + (1 − c) exp(a(θ − b)) / (1 + exp(a(θ − b)))
Polytomous: The probability of scoring k (i.e., u = k; k = 0, 1, ..., m; m ≥ 2) can be expressed as follows;
1) partial credit model (PCM)
P (u = k|θ, b1, ..., bm) = exp( Σ_{v=1}^{k} (θ − bv) ) / Σ_{c=0}^{m} exp( Σ_{v=1}^{c} (θ − bv) ), for k = 0, 1, ..., m,
where the empty sum for k = 0 (or c = 0) is taken to be 0.
2) generalized partial credit model (GPCM)
P (u = k|θ, a, b1, ..., bm) = exp( Σ_{v=1}^{k} a(θ − bv) ) / Σ_{c=0}^{m} exp( Σ_{v=1}^{c} a(θ − bv) ), for k = 0, 1, ..., m,
where the empty sum for k = 0 (or c = 0) is taken to be 0.
The estimated latent distribution for each of the latent distribution estimation methods can be expressed as follows;
1) Empirical histogram method
P (θ = Xk) = A(Xk)
where k = 1, 2, ..., q, Xk is the location of the kth quadrature point, and A(Xk) is a value
of the probability mass function evaluated at Xk. The empirical histogram method thus has q − 1
parameters.
2) Two-component Gaussian mixture distribution
P (θ = X) = π φ(X; µ1, σ1) + (1 − π) φ(X; µ2, σ2)
where φ(X; µ, σ) is the value of a Gaussian component with mean µ and standard deviation
σ evaluated at X.
3) Davidian curve method
P (θ = X) = ( Σ_{λ=0}^{h} mλ X^λ )^2 φ(X; 0, 1)
where h corresponds to the argument h and determines the degree of the polynomial.
4) Kernel density estimation method
P (θ = X) = (1 / (N h)) Σ_{j=1}^{N} K( (X − θj) / h )
where N is the number of examinees, θj is the jth examinee’s ability parameter, h is the
bandwidth which corresponds to the argument bandwidth, and K(·) is a kernel function. The
Gaussian kernel is used in this function.
5) Log-linear smoothing method
P (θ = Xq) = exp( β0 + Σ_{m=1}^{h} βm Xq^m )
where h is the hyper parameter which determines the smoothness of the density, and θ can
take a total of Q finite values (X1, ..., Xq, ..., XQ).
Value
This function returns a list which contains several objects:
par_est The list of item parameter estimates. The first object of par_est is the matrix of
item parameter estimates for dichotomous items, and the second object is the
matrix of item parameter estimates for polytomous items.
se The standard errors for item parameter estimates. The first object of se is the
matrix of standard errors for dichotomous items, and the second object is the
matrix of standard errors for polytomous items.
fk The estimated frequencies of examinees at each quadrature points.
iter The number of EM-MML iterations required for the convergence.
quad The location of quadrature points.
diff The final value of the monitored maximum item parameter change.
Ak The estimated discrete latent distribution. It is discrete (i.e., probability mass
function) since quadrature scheme of EM-MML is used.
Pk The posterior probabilities for each examinees at each quadrature points.
theta The estimated ability parameter values.
theta_se The asymptotic standard errors of ability parameter estimates. Available only
when ability_method = "MLE". If an examinee answers all or none of the items
correctly, the function returns NA.
logL The deviance (i.e., -2logL).
density_par The estimated density parameters. If latent_dist = "2NM", prob is the es-
timated π = n1/N parameter of the two-component Gaussian mixture distribution,
where n1 is the estimated number of examinees who belong to the first Gaus-
sian component and N is the total number of examinees; d is the estimated
δ = (µ2 − µ1)/σ̄ parameter of the two-component Gaussian mixture distribution, where
µ1 is the estimated mean of the first Gaussian component, µ2 is the estimated
mean of the second Gaussian component, and σ̄ = 1 is the standard deviation of
the latent distribution; and sd_ratio is the estimated ζ = σ2/σ1 parameter of the two-
component Gaussian mixture distribution, where σ1 is the estimated standard
deviation of the first Gaussian component, σ2 is the estimated standard devia-
tion of the second Gaussian component (Li, 2021). Without loss of generality,
µ2 ≥ µ1, thus δ ≥ 0, is assumed.
Options A replication of input arguments and other information.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., & <NAME>. (1981). Marginal maximum likelihood estimation of item parameters:
Application of an EM algorithm. Psychometrika, 46(4), 443-459.
<NAME>., & <NAME>. (2015). IRT item parameter recovery with marginal maximum
likelihood estimation using loglinear smoothing models. Journal of Educational and Behavioral
Statistics, 40(6), 547-578.
<NAME>., <NAME>., <NAME>., & <NAME>. (2020). IRT Approaches to Modeling Scores on
Mixed-Format Tests. Journal of Educational Measurement, 57(2), 230-254.
<NAME>. (2021). Using a two-component normal mixture distribution as a latent distribution in esti-
mating parameters of item response models. Journal of Educational Evaluation, 34(4), 759-789.
<NAME>. (2022). The effect of estimating latent distribution using kernel density estimation method
on the accuracy and efficiency of parameter estimation of item response models [Master’s thesis,
Yonsei University, Seoul]. Yonsei University Library.
<NAME>. (1984). Estimating latent distributions. Psychometrika, 49(3), 359-381.
<NAME>., & <NAME>. (1985). Implementation of the EM algorithm in the estimation of
item parameters: The BILOG computer program. In <NAME> (Ed.). Proceedings of the 1982
item response theory and computerized adaptive testing conference (pp. 189-202). University of
Minnesota, Department of Psychology, Computerized Adaptive Testing Conference.
<NAME>., & <NAME>. (2009). Item response theory with estimation of the latent density using
Davidian curves. Applied Psychological Measurement, 33(2), 102-117.
<NAME>., & <NAME>. (2006). Item response theory with estimation of the latent population
distribution using spline-based densities. Psychometrika, 71(2), 281-301.
Examples
## Not run:
# A preparation of mixed-format item response data
Alldata <- DataGeneration(seed = 2,
model_D = rep(1:2, each=3),# 1PL model is applied to item #1~10
N=1000,
nitem_D = 6,
nitem_P = 5,
categ = rep(3:7,each = 1),# 3 categories for item #21-22,
# ...,
latent_dist = "2NM",
d = 1.664,
sd_ratio = 2,
prob = 0.3)
DataD <- Alldata$data_D # item response data for the dichotomous items
DataP <- Alldata$data_P # item response data for the polytomous items
# Analysis
M1 <- IRTest_Mix(DataD, DataP)
## End(Not run)
IRTest_Poly Item and ability parameters estimation for polytomous items
Description
This function estimates IRT item and ability parameters when all items are scored polytomously.
Based on Bock & Aitkin’s (1981) marginal maximum likelihood and EM algorithm (EM-MML),
this function incorporates several latent distribution estimation algorithms which could free the
normality assumption on the latent variable. If the normality assumption is violated, application
of these latent distribution estimation methods could reflect some features of the unknown true
latent distribution, and, thus, could provide more accurate parameter estimates (Li, 2021; Woods &
Lin, 2009; <NAME>, 2006). Only generalized partial credit model (GPCM) is currently
available.
Usage
IRTest_Poly(
data,
model = "GPCM",
range = c(-6, 6),
q = 121,
initialitem = NULL,
ability_method = "EAP",
latent_dist = "Normal",
max_iter = 200,
threshold = 1e-04,
bandwidth = "SJ-ste",
h = NULL
)
Arguments
data A matrix of item responses where responses are coded as 0, 1, ..., m for an
m+1 category item. Rows and columns indicate examinees and items, respec-
tively.
model A character value that represents the type of item characteristic function ap-
plied to the items. Currently, PCM, GPCM, and GRM are available. The default is
"GPCM".
range Range of the latent variable to be considered in the quadrature scheme. The
default is from -6 to 6: c(-6, 6).
q A numeric value that represents the number of quadrature points. The default
value is 121.
initialitem A matrix of initial item parameter values for starting the estimation algorithm.
This matrix determines the number of categories for each item.
ability_method The ability parameter estimation method. The available options are Expected a
posteriori (EAP) and Maximum Likelihood Estimates (MLE). The default is EAP.
latent_dist A character string that determines latent distribution estimation method. Insert
"Normal", "normal", or "N" to assume normal distribution on the latent distri-
bution, "EHM" for empirical histogram method (Mislevy, 1984; Mislevy & Bock,
1985), "Mixture" or "2NM" for the method of two-component Gaussian mixture
distribution (Li, 2021; Mislevy, 1984), "DC" or "Davidian" for Davidian-curve
method (Woods & Lin, 2009), "KDE" for kernel density estimation method (Li,
2022), and "LLS" for log-linear smoothing method (Casabianca, Lewis, 2015).
The default value is set to "Normal" for the conventional normality assumption
on latent distribution.
max_iter A numeric value that determines the maximum number of iterations in the EM-
MML. The default value is 200.
threshold A numeric value that determines the threshold of EM-MML convergence. A
maximum item parameter change is monitored and compared with the threshold.
The default value is 0.0001.
bandwidth A character value is needed when "KDE" is used for the latent distribution esti-
mation. This argument determines which bandwidth estimation method is used
for "KDE". The default value is "SJ-ste". See density for possible options.
h A natural number less than or equal to 10 is needed when "DC" is used for
the latent distribution estimation. This argument determines the complexity of
Davidian curve.
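A minimal illustrative sketch of how these two arguments pair with the estimation methods (the argument values are arbitrary, and data is a polytomous response matrix as in the Examples section below):
M_kde <- IRTest_Poly(data, latent_dist = "KDE", bandwidth = "SJ-ste") # kernel density estimation
M_dc <- IRTest_Poly(data, latent_dist = "DC", h = 4) # Davidian curve of degree 4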
Details
The probability for scoring k (i.e., u = k; k = 0, 1, ..., m; m ≥ 2) can be expressed as follows;
1) partial credit model (PCM)
P(u = 0 | θ, b1, ..., bm) = 1 / [1 + Σ_{c=1}^{m} exp(Σ_{v=1}^{c} (θ − bv))]
P(u = 1 | θ, b1, ..., bm) = exp(θ − b1) / [1 + Σ_{c=1}^{m} exp(Σ_{v=1}^{c} (θ − bv))]
...
P(u = m | θ, b1, ..., bm) = exp(Σ_{v=1}^{m} (θ − bv)) / [1 + Σ_{c=1}^{m} exp(Σ_{v=1}^{c} (θ − bv))]
2) generalized partial credit model (GPCM)
P(u = 0 | θ, a, b1, ..., bm) = 1 / [1 + Σ_{c=1}^{m} exp(Σ_{v=1}^{c} a(θ − bv))]
P(u = 1 | θ, a, b1, ..., bm) = exp(a(θ − b1)) / [1 + Σ_{c=1}^{m} exp(Σ_{v=1}^{c} a(θ − bv))]
...
P(u = m | θ, a, b1, ..., bm) = exp(Σ_{v=1}^{m} a(θ − bv)) / [1 + Σ_{c=1}^{m} exp(Σ_{v=1}^{c} a(θ − bv))]
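As a numerical check of the formulas above, the following short sketch (not part of the package) evaluates the category probabilities of a single GPCM item; setting a = 1 reduces it to the PCM:
gpcm_prob <- function(theta, a, b) {
  # b = (b1, ..., bm); the numerator exponents are 0 for u = 0 and cumsum(a * (theta - b)) for u = 1, ..., m
  exponents <- c(0, cumsum(a * (theta - b)))
  exp(exponents) / sum(exp(exponents))
}
gpcm_prob(theta = 0.5, a = 1.2, b = c(-1, 0, 1)) # probabilities for u = 0, 1, 2, 3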
The estimated latent distribution for each of the latent distribution estimation methods can be expressed as follows;
1) Empirical histogram method
P (θ = Xk ) = A(Xk )
where k = 1, 2, ..., q, Xk is the location of the kth quadrature point, and A(Xk ) is a value
of probability mass function evaluated at Xk . Empirical histogram method thus has q − 1
parameters.
2) Two-component Gaussian mixture distribution
P (θ = X) = πφ(X; µ1 , σ1 ) + (1 − π)φ(X; µ2 , σ2 )
where φ(X; µ, σ) is the value of a Gaussian component with mean µ and standard deviation
σ evaluated at X.
3) Davidian curve method
P(θ = X) = {Σ_{λ=0}^{h} mλ X^λ}² φ(X; 0, 1)
where h corresponds to the argument h and determines the degree of the polynomial.
4) Kernel density estimation method
P(θ = X) = (1 / (N h)) Σ_{j=1}^{N} K((X − θj) / h)
where N is the number of examinees, θj is the jth examinee’s ability parameter, h is the
bandwidth which corresponds to the argument bandwidth, and K(•) is a kernel function. The
Gaussian kernel is used in this function.
5) Log-linear smoothing method
P(θ = Xq) = exp(β0 + Σ_{m=1}^{h} βm Xq^m)
where h is the hyper-parameter which determines the smoothness of the density, and θ can
take a total of Q finite values (X1, ..., Xq, ..., XQ).
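As a purely illustrative check of the kernel density formula in 4) (this is not the package’s internal implementation), the density at a point X can be evaluated with a Gaussian kernel as:
kde_density <- function(X, theta_hat, h) {
  # theta_hat: ability estimates of the N examinees; h: bandwidth
  sum(dnorm((X - theta_hat) / h)) / (length(theta_hat) * h)
}
kde_density(X = 0, theta_hat = rnorm(1000), h = 0.3) # hypothetical ability estimates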
Value
This function returns a list which contains several objects:
par_est The item parameter estimates.
se The standard errors for item parameter estimates.
fk The estimated frequencies of examinees at each quadrature point.
iter The number of EM-MML iterations required for the convergence.
quad The location of quadrature points.
diff The final value of the monitored maximum item parameter change.
Ak The estimated discrete latent distribution. It is discrete (i.e., probability mass
function) since the quadrature scheme of EM-MML is used.
Pk The posterior probabilities for each examinee at each quadrature point.
theta The estimated ability parameter values.
theta_se The asymptotic standard errors of ability parameter estimates. Available only
when ability_method = "MLE". If an examinee answers all or none of the items
correctly, the function returns NA.
logL The deviance (i.e., -2logL).
density_par The estimated density parameters. If latent_dist = "2NM", prob is the estimated
π = n1/N parameter of the two-component Gaussian mixture distribution, where n1 is
the estimated number of examinees who belong to the first Gaussian component and N is
the total number of examinees; d is the estimated δ = (µ2 − µ1)/σ̄ parameter, where µ1 is
the estimated mean of the first Gaussian component, µ2 is the estimated mean of the second
Gaussian component, and σ̄ = 1 is the standard deviation of the latent distribution; and
sd_ratio is the estimated ζ = σ2/σ1 parameter, where σ1 is the estimated standard deviation
of the first Gaussian component and σ2 is the estimated standard deviation of the second
Gaussian component (Li, 2021). Without loss of generality, µ2 ≥ µ1, thus δ ≥ 0, is assumed.
Options A replication of input arguments and other information.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., & <NAME>. (1981). Marginal maximum likelihood estimation of item parameters:
Application of an EM algorithm. Psychometrika, 46(4), 443-459.
<NAME>., & <NAME>. (2015). IRT item parameter recovery with marginal maximum
likelihood estimation using loglinear smoothing models. Journal of Educational and Behavioral
Statistics, 40(6), 547-578.
<NAME>. (2021). Using a two-component normal mixture distribution as a latent distribution in esti-
mating parameters of item response models. Journal of Educational Evaluation, 34(4), 759-789.
<NAME>. (2022). The effect of estimating latent distribution using kernel density estimation method
on the accuracy and efficiency of parameter estimation of item response models [Master’s thesis,
Yonsei University, Seoul]. Yonsei University Library.
<NAME>. (1984). Estimating latent distributions. Psychometrika, 49(3), 359-381.
<NAME>., & <NAME>. (1985). Implementation of the EM algorithm in the estimation of
item parameters: The BILOG computer program. In D. J. Weiss (Ed.). Proceedings of the 1982
item response theory and computerized adaptive testing conference (pp. 189-202). University of
Minnesota, Department of Psychology, Computerized Adaptive Testing Conference.
<NAME>., & <NAME>. (2009). Item response theory with estimation of the latent density using
Davidian curves. Applied Psychological Measurement, 33(2), 102-117.
<NAME>., & <NAME>. (2006). Item response theory with estimation of the latent population
distribution using spline-based densities. Psychometrika, 71(2), 281-301.
Examples
## Not run:
# A preparation of polytomous item response data
data <- DataGeneration(seed = 1,
                       model_P = "GPCM",
                       categ = rep(3:4, each = 4),
                       N = 1000,
                       nitem_D = 0,
                       nitem_P = 8,
                       latent_dist = "2NM",
                       d = 1.414,
                       sd_ratio = 2,
                       prob = 0.5)$data_P
# Analysis
M1 <- IRTest_Poly(data)
## End(Not run)
item_fit Item fit diagnostics
Description
This function analyses and reports item-fit test results.
Usage
item_fit(x, bins = 10, bin.center = "mean")
Arguments
x A model fit object from either IRTest_Dich, IRTest_Poly, or IRTest_Mix.
bins The number of bins to be used for calculating the statistics. Following Yen’s Q1
(1981), the default is 10.
bin.center A method for calculating the center of each bin. Following Yen’s Q1 (1981),
the default is "mean". Use "median" for Bock’s χ2 (1960).
Details
Bock’s χ2 (1960) or Yen’s Q1 (1981) is currently available.
Value
This function returns a matrix of item-fit test results.
Author(s)
<NAME> <<EMAIL>>
References
Bock, R.D. (1960), Methods and applications of optimal scaling. Chapel Hill, NC: L.L. Thurstone
Psychometric Laboratory.
<NAME>. (1981). Using simulation results to choose a latent trait model. Applied Psychological
Measurement, 5(2), 245–262.
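item_fit has no Examples section in this manual; a hedged usage sketch, assuming M1 is a model fit object as in the other examples, could look like:
fit_Q1 <- item_fit(M1) # Yen's Q1 with the default 10 mean-centered bins
fit_chi2 <- item_fit(M1, bins = 10, bin.center = "median") # Bock's chi-square style binning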
latent_distribution Latent density function
Description
Density function of the estimated latent distribution with mean and standard deviation equal to 0
and 1, respectively.
Usage
latent_distribution(x, model.fit)
Arguments
x A numeric vector. Value(s) in the theta scale to evaluate the PDF.
model.fit An object returned from an estimation function.
Value
The evaluated values of the PDF, the length of which equals that of x.
Examples
## Not run:
# Data generation and model fitting
data <- DataGeneration(seed = 1,
model_P = "GPCM",
N=1000,
nitem_D = 0,
nitem_P = 10,
categ = rep(5,10),
latent_dist = "2NM",
d = 1.664,
sd_ratio = 2,
prob = 0.3)$data_P
M1 <- IRTest_Poly(data = data, latent_dist = "KDE")
# Plotting the latent distribution
ggplot()+
stat_function(fun=latent_distribution, args=list(M1))+
lims(x=c(-6,6))
## End(Not run)
original_par_2GM Recovering original parameters of two-component Gaussian mixture
distribution from re-parameterized parameters
Description
Recovering original parameters of two-component Gaussian mixture distribution from re-parameterized
parameters
Usage
original_par_2GM(
prob = 0.5,
d = 0,
sd_ratio = 1,
overallmean = 0,
overallsd = 1
)
Arguments
prob A numeric value of the π = n1/N parameter of the two-component Gaussian mixture
distribution, where n1 is the estimated number of examinees who belong to the
first Gaussian component and N is the total number of examinees (Li, 2021).
d A numeric value of the δ = (µ2 − µ1)/σ̄ parameter of the two-component Gaussian mixture
distribution, where µ1 is the estimated mean of the first Gaussian component, µ2
is the estimated mean of the second Gaussian component, and σ̄ is the standard
deviation of the latent distribution (Li, 2021). Without loss of generality, µ2 ≥
µ1, thus δ ≥ 0, is assumed.
sd_ratio A numeric value of the ζ = σ2/σ1 parameter of the two-component Gaussian mixture
distribution, where σ1 is the estimated standard deviation of the first Gaussian
component and σ2 is the estimated standard deviation of the second Gaussian
component (Li, 2021).
overallmean A numeric value of µ̄ that determines the overall mean of two-component Gaus-
sian mixture distribution.
overallsd A numeric value of σ̄ that determines the overall standard deviation of two-
component Gaussian mixture distribution.
Details
The original two-component Gaussian mixture distribution
f (x) = π × φ(x|µ1 , σ1 ) + (1 − π) × φ(x|µ2 , σ2 )
, where φ is a Gaussian component.
The re-parameterized two-component Gaussian mixture distribution
f (x) = 2GM (x|π, δ, ζ, µ̄, σ̄)
, where µ̄ is overall mean and σ̄ is overall standard deviation of the distribution.
The original parameters of two-component Gaussian mixture distribution can be retrieved as follows;
1) Mean of the first Gaussian component (m1).
µ1 = −(1 − π)δσ̄ + µ̄
2) Mean of the second Gaussian component (m2).
µ2 = πδσ̄ + µ̄
3) Standard deviation of the first Gaussian component (s1).
σ1² = σ̄² (1 − π(1 − π)δ²) / (π + (1 − π)ζ²)
4) Standard deviation of the second Gaussian component (s2).
σ2² = ζ² σ1²
Value
This function returns a vector of length 4: c(m1,m2,s1,s2).
m1 The location parameter (mean) of the first Gaussian component.
m2 The location parameter (mean) of the second Gaussian component.
s1 The scale parameter (standard deviation) of the first Gaussian component.
s2 The scale parameter (standard deviation) of the second Gaussian component.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>. (2021). Using a two-component normal mixture distribution as a latent distribution in esti-
mating parameters of item response models. Journal of Educational Evaluation, 34(4), 759-789.
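This topic has no Examples section in this manual; a minimal usage sketch with the re-parameterized values used elsewhere in this document (the returned vector is c(m1, m2, s1, s2)):
pars <- original_par_2GM(prob = 0.3, d = 1.664, sd_ratio = 2,
                         overallmean = 0, overallsd = 1)
# With overallmean = 0 and overallsd = 1, the mean formulas above reduce to
# m1 = -(1 - prob) * d and m2 = prob * d.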
plot.irtest Plotting the estimated latent distribution
Description
This function draws a plot of the estimated latent distribution (the population distribution of the
latent variable).
Usage
## S3 method for class 'irtest'
plot(x, ...)
Arguments
x A class == "irtest" object obtained from either IRTest_Dich, IRTest_Poly,
or IRTest_Mix.
... Other argument(s) passed on to draw a plot of an estimated latent distribu-
tion. Arguments are passed on to stat_function, if the distribution estima-
tion method is the one using two-component normal mixture distribution (i.e.,
latent_dist == "Mixture" or "2NM") or the normal distribution (i.e., latent_dist
== "N", "normal", or "Normal"). Otherwise, they are passed on to geom_line.
Value
A plot of estimated latent distribution.
Author(s)
<NAME> <<EMAIL>>
Examples
## Not run:
# Data generation and model fitting
data <- DataGeneration(seed = 1,
#model_D = rep(1, 10),
N=1000,
nitem_D = 0,
nitem_P = 8,
categ = rep(3:4,each = 4),
latent_dist = "2NM",
d = 1.664,
sd_ratio = 2,
prob = 0.3)$data_P
M1 <- IRTest_Poly(data = data, latent_dist = "KDE")
# Plotting the latent distribution
plot(x=M1, linewidth = 1, color = 'red') +
ggplot2::lims(x = c(-6, 6), y = c(0, .5))
## End(Not run)
plot_item Plot of item response function
Description
This function draws an item response function of an item of the fitted model.
Usage
plot_item(x, item.number, type = NULL)
Arguments
x A model fit object from either IRTest_Dich, IRTest_Poly, or IRTest_Mix.
item.number A numeric value indicating the item number.
type Type of an item being either "d" (dichotomous item) or "p" (polytomous item);
item.number=1, type="d" indicates the first dichotomous item. This value
will be used only when a model fit of mixed-format data is passed onto the
function.
Value
This function returns a plot of item response function.
Author(s)
<NAME> <<EMAIL>>
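plot_item has no example in this manual; a hedged usage sketch, assuming M1 is a single-format fit as in the earlier examples and M_mix is a hypothetical mixed-format fit from IRTest_Mix, could look like:
plot_item(M1, item.number = 1) # type is not needed for a single-format fit
plot_item(M_mix, item.number = 1, type = "d") # first dichotomous item of a mixed-format fit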
print.irtest Print of the result
Description
This function prints the summarized information.
Usage
## S3 method for class 'irtest'
print(x, ...)
Arguments
x A model fit object (class == "irtest") from either IRTest_Dich, IRTest_Poly, or IRTest_Mix.
... Additional arguments (currently nonfunctioning).
Value
Printed texts on the console recommending the usage of summary function and the direct access to
the details using "$" sign.
Author(s)
<NAME> <<EMAIL>>
Examples
## Not run:
data <- DataGeneration(seed = 1,
#model_D = rep(1, 10),
N=1000,
nitem_D = 0,
nitem_P = 8,
categ = rep(3:4,each = 4),
latent_dist = "2NM",
d = 1.664,
sd_ratio = 2,
prob = 0.3)$data_P
M1 <- IRTest_Poly(data = data, latent_dist = "KDE")
M1
## End(Not run)
print.irtest_summary Print of the summary
Description
This function prints the summarized information.
Usage
## S3 method for class 'irtest_summary'
print(x, ...)
Arguments
x An object returned from summary.irtest.
... Additional arguments (currently nonfunctioning).
Value
Printed summarized texts on the console.
Author(s)
<NAME> <<EMAIL>>
Examples
## Not run:
Alldata <- DataGeneration(seed = 1,
#model_D = rep(1, 10),
N=1000,
nitem_D = 0,
nitem_P = 8,
categ = rep(3:4,each = 4),
latent_dist = "2NM",
d = 1.664,
sd_ratio = 2,
prob = 0.3)
data <- Alldata$data_P
item <- Alldata$item_P
initialitem <- Alldata$initialitem_P
theta <- Alldata$theta
M1 <- IRTest_Poly(initialitem = initialitem,
                  data = data,
                  model = "GPCM",
                  latent_dist = "Mixture",
                  max_iter = 200,
                  threshold = .001)
summary(M1)
## End(Not run)
recategorize Recategorization of data using a new categorization scheme
Description
With a recategorization scheme as an input, this function implements recategorization for the input
data.
Usage
recategorize(data, new_cat)
Arguments
data An item response matrix
new_cat A list of a new categorization scheme
Value
Recategorized data
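A purely illustrative sketch follows; the manual only states that new_cat is "a list of a new categorization scheme", so the structure assumed below (one numeric vector per item giving the new category label for each old category, in old-category order) is an assumption rather than the documented format:
new_cat <- list(c(0, 1, 1, 2),  # item 1: merge old categories 1 and 2 (assumed format)
                c(0, 0, 1, 2))  # item 2: merge old categories 0 and 1 (assumed format)
recat_data <- recategorize(data, new_cat)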
reliability Marginal reliability coefficient of IRT
Description
Marginal reliability coefficient of IRT
Usage
reliability(x)
Arguments
x A model fit object from either IRTest_Dich, IRTest_Poly, or IRTest_Mix.
Details
Reliability coefficient on summed-score scale In accordance with the concept of reliability in
classical test theory (CTT), this function calculates the IRT reliability coefficient.
The basic concept and formula of the reliability coefficient can be expressed as follows (Kim,
Feldt, 2010):
An observed score of Item i, Xi , is decomposed as the sum of a true score Ti and an error ei .
Then, with the assumption of σTi ej = σei ej = 0, the reliability coefficient of a test is defined
as;
ρ²TX = ρXX′ = σ²T / σ²X = σ²T / (σ²T + σ²e) = 1 − σ²e / σ²X
Reliability coefficient on θ scale For the coefficient on the θ scale, this function calculates the
parallel-forms reliability (Green et al., 1984; Kim, 2012):
ρθ̂θ̂′ = σ²E(θ̂|θ) / (σ²E(θ̂|θ) + E[σ²θ̂|θ]) = 1 / (1 + E[I(θ̂)⁻¹])
This assumes that σ²E(θ̂|θ) = σ²θ = 1. Although the formula is often employed in several IRT
studies and applications, the underlying assumption may not be true.
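As a small worked illustration of the right-hand side above (the value is hypothetical): if the expected inverse test information E[I(θ̂)⁻¹] equals 1/9, then
1 / (1 + 1/9) # parallel-forms reliability on the theta scale = 0.9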
Value
Estimated marginal reliability coefficient.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (1984). Technical
guidelines for assessing computerized adaptive tests. Journal of Educational Measurement, 21(4),
347-360.
<NAME>. (2012). A note on the reliability coefficients for item response model-based ability
estimates. Psychometrika, 77(1), 153-162.
<NAME>., & <NAME>. (2010). The estimation of the IRT reliability coefficient and its lower
and upper bounds, with comparisons to CTT reliability statistics. Asia Pacific Education Review,
11, 179-188.
Examples
## Not run:
Alldata <- DataGeneration(seed = 1,
model_D = rep(1, 10),
N=500,
nitem_D = 10,
latent_dist = "2NM",
d = 1.664,
sd_ratio = 2,
prob = 0.3)
data <- Alldata$data_D
# Analysis
M1 <- IRTest_Dich(data)
# Reliability coefficients
rel <- reliability(M1)
## On the summed-score scale
rel$summed.score.scale$test
rel$summed.score.scale$item
## On the theta scale
rel$theta.scale
## End(Not run)
summary.irtest Summary of the results
Description
These functions summarize the outputs (e.g., convergence of the estimation algorithm, parameter
estimates, AIC, etc.).
Usage
## S3 method for class 'irtest'
summary(object, ...)
Arguments
object A class == "irtest" object obtained from either IRTest_Dich, IRTest_Poly,
or IRTest_Mix.
... Other argument(s) passed on to summarize the results.
Value
An object containing the summarized outputs (e.g., convergence of the estimation algorithm, parameter estimates, AIC, etc.), which can be printed with print.irtest_summary.
Behat API Extension 5.0.0 documentation
---
Behat API Extension[¶](#behat-api-extension)
===
An open source ([MIT licensed](http://opensource.org/licenses/MIT)) Behat extension that provides an easy way to test JSON-based APIs in Behat 3.
Installation guide[¶](#installation-guide)
---
### Requirements[¶](#requirements)
Refer to `composer.json` for more details.
### Installation[¶](#installation)
Install the extension using [Composer](https://getcomposer.org/):
```
composer require --dev imbo/behat-api-extension
```
### Configuration[¶](#configuration)
After you have installed the extension you need to activate it in your Behat configuration file (for instance `behat.yml`):
```
default:
suites:
default:
# ...
extensions:
Imbo\BehatApiExtension: ~
```
The following configuration options are required for the extension to work as expected:
| Key | Type | Default value | Description |
| --- | --- | --- | --- |
| `apiClient.base_uri` | string | <http://localhost:8080> | Base URI of the application under test. |
It should be noted that everything in the `apiClient` configuration array is passed directly to the Guzzle Client instance used internally by the extension.
Example of a configuration file with several configuration entries:
```
default:
suites:
default:
# ...
extensions:
Imbo\BehatApiExtension:
apiClient:
base_uri: http://localhost:8080
timeout: 5.0
verify: false
```
Refer to the [Guzzle documentation](http://docs.guzzlephp.org/en/stable/) for available configuration options for the Guzzle client.
### Upgrading[¶](#upgrading)
This section will cover breaking changes between major versions and other related information to ease upgrading to the latest version.
#### Migration from v4.x to v5.x[¶](#migration-from-v4-x-to-v5-x)
Changes
* [Internal HTTP client configuration](#internal-http-client-configuration)
##### [Internal HTTP client configuration](#id4)[¶](#internal-http-client-configuration)
Previous versions of the extension suggested using the `GuzzleHttp\Client::getConfig()` method to customize the internal HTTP client. This method has been deprecated, and the initialization of the internal HTTP client in the extension had to be changed as a consequence. Refer to the [Configure the API client](index.html#configure-the-api-client) section for more information.
#### Migration from v3.x to v4.x[¶](#migration-from-v3-x-to-v4-x)
Changes
* [PHP version requirement](#php-version-requirement)
* [Type hints](#type-hints)
##### [PHP version requirement](#id5)[¶](#php-version-requirement)
`v4.x` requires `PHP >= 8.1`.
##### [Type hints](#id6)[¶](#type-hints)
Type hints have been added to large parts of the code base, so child classes will most likely break as a consequence. You will have to add missing type hints if you have extended any classes that have type hints added to them.
#### Migration from v2.x to v3.x[¶](#migration-from-v2-x-to-v3-x)
The usage of Behat API Extension itself has not changed between these versions, but `>=3.0` requires `PHP >= 7.4`.
#### Migrating from v1.x to v2.x[¶](#migrating-from-v1-x-to-v2-x)
Changes
* [Configuration change](#configuration-change)
* [Renamed public methods](#renamed-public-methods)
* [Updated steps](#updated-steps)
* [Functions names for the JSON matcher](#functions-names-for-the-json-matcher)
* [Exceptions](#exceptions)
##### [Configuration change](#id7)[¶](#configuration-change)
In `v1` the extension only had a single configuration option, which was `base_uri`. This is still an option in `v2`, but it has been added to an `apiClient` key.
**v1 behat.yml**
```
default:
suites:
default:
# ...
extensions:
Imbo\BehatApiExtension:
base_uri: http://localhost:8080
```
**v2 behat.yml**
```
default:
suites:
default:
# ...
extensions:
Imbo\BehatApiExtension:
apiClient:
base_uri: http://localhost:8080
```
##### [Renamed public methods](#id8)[¶](#renamed-public-methods)
The following public methods in the `Imbo\BehatApiExtension\Context\ApiContext` class have been renamed:
| `v1` method name | `v2` method name |
| --- | --- |
| `givenIAttachAFileToTheRequest` | `addMultipartFileToRequest` |
| `givenIAuthenticateAs` | `setBasicAuth` |
| `givenTheRequestHeaderIs` | `addRequestHeader` |
| `giventhefollowingformparametersareset` | `setRequestFormParams` |
| `givenTheRequestBodyIs` | `setRequestBody` |
| `givenTheRequestBodyContains` | `setRequestBodyToFileResource` |
| `whenIRequestPath` | `requestPath` |
| `thenTheResponseCodeIs` | `assertResponseCodeIs` |
| `thenTheResponseCodeIsNot` | `assertResponseCodeIsNot` |
| `thenTheResponseReasonPhraseIs` | `assertResponseReasonPhraseIs` |
| `thenTheResponseStatusLineIs` | `assertResponseStatusLineIs` |
| `thenTheResponseIs` | `assertResponseIs` |
| `thenTheResponseIsNot` | `assertResponseIsNot` |
| `thenTheResponseHeaderExists` | `assertResponseHeaderExists` |
| `thenTheResponseHeaderDoesNotExist` | `assertResponseHeaderDoesNotExists` |
| `thenTheResponseHeaderIs` | `assertResponseHeaderIs` |
| `thenTheResponseHeaderMatches` | `assertResponseHeaderMatches` |
| `thenTheResponseBodyIsAnEmptyObject` | `assertResponseBodyIsAnEmptyJsonObject` |
| `thenTheResponseBodyIsAnEmptyArray` | `assertResponseBodyIsAnEmptyJsonArray` |
| `thenTheResponseBodyIsAnArrayOfLength` | `assertResponseBodyJsonArrayLength` |
| `thenTheResponseBodyIsAnArrayWithALengthOfAtLeast` | `assertResponseBodyJsonArrayMinLength` |
| `thenTheResponseBodyIsAnArrayWithALengthOfAtMost` | `assertResponseBodyJsonArrayMaxLength` |
| `thenTheResponseBodyIs` | `assertResponseBodyIs` |
| `thenTheResponseBodyMatches` | `assertResponseBodyMatches` |
| `thenTheResponseBodyContains` | `assertResponseBodyContainsJson` |
Some methods have also been removed (as the result of removed steps):
* `whenIRequestPathWithBody`
* `whenIRequestPathWithJsonBody`
* `whenISendFile`
##### [Updated steps](#id9)[¶](#updated-steps)
`v1` contained several `When` steps that could both configure the request and send it in the same step. These steps have been removed in `v2.0.0`, and the extension now requires you to configure all aspects of the request using the `Given` steps prior to issuing one of the few `When` steps.
Removed / updated steps
* [Given the request body is `:string`](#given-the-request-body-is-string)
* [When I request `:path` using HTTP `:method` with body: `<PyStringNode>`](#when-i-request-path-using-http-method-with-body-pystringnode)
* [When I request `:path` using HTTP `:method` with JSON body: `<PyStringNode>`](#when-i-request-path-using-http-method-with-json-body-pystringnode)
* [When I send `:filePath` (as `:mimeType`) to `:path` using HTTP `:method`](#when-i-send-filepath-as-mimetype-to-path-using-http-method)
* [Then the response body is an empty object](#then-the-response-body-is-an-empty-object)
* [Then the response body is an empty array](#then-the-response-body-is-an-empty-array)
* [Then the response body is an array of length `:length`](#then-the-response-body-is-an-array-of-length-length)
* [Then the response body is an array with a length of at least `:length`](#then-the-response-body-is-an-array-with-a-length-of-at-least-length)
* [Then the response body is an array with a length of at most `:length`](#then-the-response-body-is-an-array-with-a-length-of-at-most-length)
* [Then the response body contains: `<PyStringNode>`](#then-the-response-body-contains-pystringnode)
###### [Given the request body is `:string`](#id12)[¶](#given-the-request-body-is-string)
This step now uses a `<PyStringNode>` instead of a regular string:
**v1**
```
Given the request body is "some data"
```
**v2**
```
Given the request body is:
"""
some data
"""
```
###### [When I request `:path` using HTTP `:method` with body: `<PyStringNode>`](#id13)[¶](#when-i-request-path-using-http-method-with-body-pystringnode)
The body needs to be set using a `Given` step and not in the `When` step:
**v1**
```
When I request "/some/path" using HTTP POST with body:
"""
{"some":"data"}
"""
```
**v2**
```
Given the request body is:
"""
{"some":"data"}
"""
When I request "/some/path" using HTTP POST
```
###### [When I request `:path` using HTTP `:method` with JSON body: `<PyStringNode>`](#id14)[¶](#when-i-request-path-using-http-method-with-json-body-pystringnode)
The `Content-Type` header and body needs to be set using `Given` steps:
**v1**
```
When I request "/some/path" using HTTP POST with JSON body:
"""
{"some":"data"}
"""
```
**v2**
```
Given the request body is:
"""
{"some":"data"}
"""
And the "Content-Type" request header is "application/json"
When I request "/some/path" using HTTP POST
```
###### [When I send `:filePath` (as `:mimeType`) to `:path` using HTTP `:method`](#id15)[¶](#when-i-send-filepath-as-mimetype-to-path-using-http-method)
These steps must be replaced with the following:
**v1**
```
When I send "/some/file.jpg" to "/some/endpoint" using HTTP POST
```
```
When I send "/some/file" as "application/json" to "/some/endpoint" using HTTP POST
```
**v2**
```
Given the request body contains "/some/file.jpg"
When I request "/some/endpoint" using HTTP POST
```
```
Given the request body contains "/some/file"
And the "Content-Type" request header is "application/json"
When I request "/some/endpoint" using HTTP POST
```
The first form in the old and new versions will guess the mime type of the file and set the `Content-Type` request header accordingly.
###### [Then the response body is an empty object](#id16)[¶](#then-the-response-body-is-an-empty-object)
Slight change that adds “JSON” in the step text for clarification:
**v1**
```
Then the response body is an empty object
```
**v2**
```
Then the response body is an empty JSON object
```
###### [Then the response body is an empty array](#id17)[¶](#then-the-response-body-is-an-empty-array)
Slight change that adds “JSON” in the step text for clarification:
**v1**
```
Then the response body is an empty array
```
**v2**
```
Then the response body is an empty JSON array
```
###### [Then the response body is an array of length `:length`](#id18)[¶](#then-the-response-body-is-an-array-of-length-length)
Slight change that adds “JSON” in the step text for clarification:
**v1**
```
Then the response body is an array of length 5
```
**v2**
```
Then the response body is a JSON array of length 5
```
###### [Then the response body is an array with a length of at least `:length`](#id19)[¶](#then-the-response-body-is-an-array-with-a-length-of-at-least-length)
Slight change that adds “JSON” in the step text for clarification:
**v1**
```
Then the response body is an array with a length of at least 5
```
**v2**
```
Then the response body is a JSON array with a length of at least 5
```
###### [Then the response body is an array with a length of at most `:length`](#id20)[¶](#then-the-response-body-is-an-array-with-a-length-of-at-most-length)
Slight change that adds “JSON” in the step text for clarification:
**v1**
```
Then the response body is an array with a length of at most 5
```
**v2**
```
Then the response body is a JSON array with a length of at most 5
```
###### [Then the response body contains: `<PyStringNode>`](#id21)[¶](#then-the-response-body-contains-pystringnode)
Slight change that adds “JSON” in the step text for clarification:
**v1**
```
Then the response body contains:
"""
{"some": "value"}
"""
```
**v2**
```
Then the response body contains JSON:
"""
{"some": "value"}
"""
```
##### [Functions names for the JSON matcher](#id10)[¶](#functions-names-for-the-json-matcher)
When recursively checking a JSON response body, some custom functions exist that are represented as the value in a key / value pair. Below is a table of all available functions in `v1` along with the updated names used in `v2`:
| `v1` function | `v2` function |
| --- | --- |
| `@length(num)` | `@arrayLength(num)` |
| `@atLeast(num)` | `@arrayMinLength(num)` |
| `@atMost(num)` | `@arrayMaxLength(num)` |
| `<re>/pattern/</re>` | `@regExp(/pattern/)` |
`v2` has also added more such functions; refer to the [Custom matcher functions and targeting](index.html#custom-matcher-functions-and-targeting) section for a complete list.
##### [Exceptions](#id11)[¶](#exceptions)
From `v2` onwards the extension throws native PHP exceptions or namespaced exceptions (like for instance `Imbo\BehatApiExtension\Exception\AssertionException`). In `v1` exceptions could come directly from `beberlei/assert`, which is the assertion library used in the extension. The fact that the extension uses this library is an implementation detail, and it should be possible to switch out this library without making any changes to the public API of the extension.
If versions after `v2` throw other exceptions it should be classified as a bug and fixed accordingly.
End user guide[¶](#end-user-guide)
---
### Set up the request[¶](#set-up-the-request)
The following steps can be used prior to sending a request.
Available steps
* [Given I attach `:path` to the request as `:partName`](#given-i-attach-path-to-the-request-as-partname)
* [Given the following multipart form parameters are set: `<TableNode>`](#given-the-following-multipart-form-parameters-are-set-tablenode)
* [Given I am authenticating as `:username` with password `:password`](#given-i-am-authenticating-as-username-with-password-password)
* [Given I get an OAuth token using password grant from `:path` with `:username` and `:password` in scope `:scope` using client ID `:clientId` (and client secret `:clientSecret`)](#given-i-get-an-oauth-token-using-password-grant-from-path-with-username-and-password-in-scope-scope-using-client-id-clientid-and-client-secret-clientsecret)
* [Given the `:header` request header is `:value`](#given-the-header-request-header-is-value)
* [Given the `:header` request header contains `:value`](#given-the-header-request-header-contains-value)
* [Given the following form parameters are set: `<TableNode>`](#given-the-following-form-parameters-are-set-tablenode)
* [Given the request body is: `<PyStringNode>`](#given-the-request-body-is-pystringnode)
* [Given the request body contains `:path`](#given-the-request-body-contains-path)
* [Given the response body contains a JWT identified by `:name`, signed with `:secret`: `<PyStringNode>`](#given-the-response-body-contains-a-jwt-identified-by-name-signed-with-secret-pystringnode)
* [Given the query parameter `:name` is `:value`](#given-the-query-parameter-name-is-value)
* [Given the query parameter `:name` is: `<TableNode>`](#given-the-query-parameter-name-is-tablenode)
* [Given the following query parameters are set: `<TableNode>`](#given-the-following-query-parameters-are-set-tablenode)
#### [Given I attach `:path` to the request as `:partName`](#id3)[¶](#given-i-attach-path-to-the-request-as-partname)
Attach a file to the request (causing a `multipart/form-data` request, populating the `$_FILES` array on the server). Can be repeated to attach several files. If a specified file does not exist an `InvalidArgumentException` exception will be thrown. `:path` is relative to the working directory unless it’s absolute.
**Examples:**
| Step | `:path` | Entry in `$_FILES` on the server (`:partName`) |
| --- | --- | --- |
| Given I attach “`/path/to/file.jpg`” to the request as “`file1`” | `/path/to/file.jpg` | $_FILES[’`file1`’] |
| Given I attach “`c:\some\file.jpg`” to the request as “`file2`” | `c:\some\file.jpg` | $_FILES[’`file2`’] |
| Given I attach “`features/some.feature`” to the request as “`feature`” | `features/some.feature` | $_FILES[’`feature`’] |
This step can not be used when sending requests with a request body. Doing so results in an `InvalidArgumentException` exception.
#### [Given the following multipart form parameters are set: `<TableNode>`](#id4)[¶](#given-the-following-multipart-form-parameters-are-set-tablenode)
This step can be used to set form parameters (as if the request is a `<form>` being submitted). A table node must be used to specify which fields / values to send:
```
Given the following multipart form parameters are set:
| name | value |
| foo | bar |
| bar | foo |
| bar | bar |
```
The first row in the table must contain two values: `name` and `value`. The rows that follows are the fields / values you want to send. This step sets the HTTP method to `POST` by default and the `Content-Type` request header to `multipart/form-data`.
This step can not be used when sending requests with a request body. Doing so results in an `InvalidArgumentException` exception.
To use a different HTTP method, simply specify the wanted method in the [When I request :path using HTTP :method](index.html#when-i-request-path-using-http-method) step.
#### [Given I am authenticating as `:username` with password `:password`](#id5)[¶](#given-i-am-authenticating-as-username-with-password-password)
Use this step to set up basic authentication to the next request.
**Examples:**
| Step | `:username` | `:password` |
| --- | --- | --- |
| Given I am authenticating as “`foo`” with password “`bar`” | `foo` | `bar` |
#### [Given I get an OAuth token using password grant from `:path` with `:username` and `:password` in scope `:scope` using client ID `:clientId` (and client secret `:clientSecret`)](#id6)[¶](#given-i-get-an-oauth-token-using-password-grant-from-path-with-username-and-password-in-scope-scope-using-client-id-clientid-and-client-secret-clientsecret)
Send a request using password grant to the given `:path` for an access token that will be added as an `Authorization` header for the next request. The endpoint is required to respond with a JSON object that contains the `access_token` key, for instance:
```
{
"access_token": "some-token"
}
```
Given the above response body, the next request will have the following header set: `Authorization: Bearer some-token`.
**Examples:**
```
Given I get an OAuth token using password grant from "/token" with "user" and "password" in scope "scope" using client ID "id" and client secret "secret"
When I request "/path/that/requires/token/in/header"
```
The second step in the above example will include the required `Authorization` header given the response from `/token` as seen in the first step.
#### [Given the `:header` request header is `:value`](#id7)[¶](#given-the-header-request-header-is-value)
Set the `:header` request header to `:value`. Can be repeated to set multiple headers. When repeated with the same `:header` the last value will be used.
Trying to force specific headers to have certain values combined with other steps that ends up modifying request headers (for instance attaching files) can lead to undefined behavior.
**Examples:**
| Step | `:header` | `:value` |
| --- | --- | --- |
| Given the “`User-Agent`” request header is “`test/1.0`” | `User-Agent` | `test/1.0` |
| Given the “`Accept`” request header is “`application/json`” | `Accept` | `application/json` |
#### [Given the `:header` request header contains `:value`](#id8)[¶](#given-the-header-request-header-contains-value)
Add `:value` to the `:header` request header. Can be repeated to set multiple headers. When repeated with the same `:header` the header will be converted to an array.
**Examples:**
| Step | `:header` | `:value` |
| --- | --- | --- |
| Given the “`X-Foo`” request header contains “`Bar`” | `X-Foo` | `Bar` |
#### [Given the following form parameters are set: `<TableNode>`](#id9)[¶](#given-the-following-form-parameters-are-set-tablenode)
This step can be used to set form parameters (as if the request is a `<form>` being submitted). A table node must be used to specify which fields / values to send:
```
Given the following form parameters are set:
| name | value |
| foo | bar |
| bar | foo |
| bar | bar |
```
The first row in the table must contain two values: `name` and `value`. The rows that follows are the fields / values you want to send. This step sets the HTTP method to `POST` by default and the `Content-Type` request header to `application/x-www-form-urlencoded`, unless the step is combined with [Given I attach :path to the request as :partName](#given-i-attach-path-to-the-request-as-partname), in which case the `Content-Type` request header will be set to `multipart/form-data` and all the specified fields will be sent as parts in the multipart request.
This step can not be used when sending requests with a request body. Doing so results in an `InvalidArgumentException` exception.
To use a different HTTP method, simply specify the wanted method in the [When I request :path using HTTP :method](index.html#when-i-request-path-using-http-method) step.
#### [Given the request body is: `<PyStringNode>`](#id10)[¶](#given-the-request-body-is-pystringnode)
Set the request body to a string represented by the contents of the `<PyStringNode>`.
**Examples:**
```
Given the request body is:
"""
{
"some": "data"
}
"""
```
#### [Given the request body contains `:path`](#id11)[¶](#given-the-request-body-contains-path)
This step can be used to set the contents of the file at `:path` in the request body. If the file does not exist or is not readable the step will fail.
**Examples:**
| Step | `:path` |
| --- | --- |
| Given the request body contains “`/path/to/file`” | `/path/to/file` |
The step will figure out the mime type of the file (using [mime_content_type](http://php.net/mime_content_type)) and set the `Content-Type` request header as well. If you wish to override the mime type you can use the [Given the :header request header is :value](#given-the-header-request-header-is-value) step **after** setting the request body.
#### [Given the response body contains a JWT identified by `:name`, signed with `:secret`: `<PyStringNode>`](#id12)[¶](#given-the-response-body-contains-a-jwt-identified-by-name-signed-with-secret-pystringnode)
This step can be used to prepare the [JWT](https://jwt.io/) custom matcher function with data that it is going to match on. If the response contains JWTs these can be registered with this step, then matched with the [Then the response body contains JSON: <PyStringNode>](index.html#then-the-response-body-contains-json) step after the response has been received. The `<PyStringNode>` represents the payload of the JWT:
**Examples:**
```
Given the response body contains a JWT identified by "my JWT", signed with "some secret":
"""
{
"some": "data",
"value": "@regExp(/(some|expression)/i)"
}
"""
```
The above step would register a JWT which can be matched with `@jwt(my JWT)` using the [@jwt()](index.html#jwt-custom-matcher) custom matcher function. The way the payload is matched is similar to matching a JSON response body, as explained in the [Then the response body contains JSON: <PyStringNode>](index.html#then-the-response-body-contains-json) section, which means [custom matcher functions](index.html#custom-matcher-functions-and-targeting) can be used, as seen in the example above.
#### [Given the query parameter `:name` is `:value`](#id13)[¶](#given-the-query-parameter-name-is-value)
This step can be used to set a single query parameter to a specific value for the upcoming request.
**Examples:**
```
Given the query parameter "foo" is "bar"
And the query parameter "bar" is "foo"
When I request "/path"
```
The above steps would end up with a request to `/path?foo=bar&bar=foo`.
Note
When this step is used all query parameters specified in the path portion of `When I request "/path"` are ignored.
#### [Given the query parameter `:name` is: `<TableNode>`](#id14)[¶](#given-the-query-parameter-name-is-tablenode)
This step can be used to set multiple values to a single query parameter for the upcoming request.
**Examples:**
```
Given the query parameter "foo" is:
| value |
| foo |
| bar |
When I request "/path"
```
The above steps would end up with a request to `/path?foo[0]=foo&foo[1]=bar`.
Note
When this step is used all query parameters specified in the path portion of `When I request "/path"` are ignored.
#### [Given the following query parameters are set: `<TableNode>`](#id15)[¶](#given-the-following-query-parameters-are-set-tablenode)
This step can be used to set multiple query parameters at once for the upcoming request.
**Examples:**
```
Given the following query parameters are set:
| name | value |
| foo | bar |
| bar | foo |
When I request "/path"
```
The above steps would end up with a request to `/path?foo=bar&bar=foo`.
Note
When this step is used all query parameters specified in the path portion of `When I request "/path"` are ignored.
### Send the request[¶](#send-the-request)
After setting up the request it can be sent to the server in a few different ways. Keep in mind that all configuration regarding the request must be done prior to any of the following steps, as they will actually send the request.
Available steps
* [When I request `:path`](#when-i-request-path)
* [When I request `:path` using HTTP `:method`](#when-i-request-path-using-http-method)
#### [When I request `:path`](#id2)[¶](#when-i-request-path)
Request `:path` using HTTP GET. Shorthand for [When I request :path using HTTP GET](#when-i-request-path-using-http-method).
#### [When I request `:path` using HTTP `:method`](#id3)[¶](#when-i-request-path-using-http-method)
`:path` is relative to the `base_uri` configuration option, and `:method` is any HTTP method, for instance `POST` or `DELETE`. If `:path` starts with a slash, it will be relative to the root of `base_uri`.
**Examples:**
*Assume that the `base_uri` configuration option has been set to `http://example.com/dir` in the following examples.*
| Step | `:path` | `:method` | Resulting URI |
| --- | --- | --- | --- |
| When I request “`/?foo=bar&bar=foo`” | `/?foo=bar&bar=foo` | `GET` | `http://example.com/?foo=bar&bar=foo` |
| When I request “`/some/path`” using HTTP `DELETE` | `/some/path` | `DELETE` | `http://example.com/some/path` |
| When I request “`foobar`” using HTTP `POST` | `foobar` | `POST` | `http://example.com/dir/foobar` |
### Verify server response[¶](#verify-server-response)
After a request has been sent, some steps exist that can be used to verify the response from the server.
Available steps
* [Then the response code is `:code`](#then-the-response-code-is-code)
* [Then the response code is not `:code`](#then-the-response-code-is-not-code)
* [Then the response reason phrase is `:phrase`](#then-the-response-reason-phrase-is-phrase)
* [Then the response reason phrase is not `:phrase`](#then-the-response-reason-phrase-is-not-phrase)
* [Then the response reason phrase matches `:pattern`](#then-the-response-reason-phrase-matches-pattern)
* [Then the response status line is `:line`](#then-the-response-status-line-is-line)
* [Then the response status line is not `:line`](#then-the-response-status-line-is-not-line)
* [Then the response status line matches `:pattern`](#then-the-response-status-line-matches-pattern)
* [Then the response is `:group`](#then-the-response-is-group)
* [Then the response is not `:group`](#then-the-response-is-not-group)
* [Then the `:header` response header exists](#then-the-header-response-header-exists)
* [Then the `:header` response header does not exist](#then-the-header-response-header-does-not-exist)
* [Then the `:header` response header is `:value`](#then-the-header-response-header-is-value)
* [Then the `:header` response header is not `:value`](#then-the-header-response-header-is-not-value)
* [Then the `:header` response header matches `:pattern`](#then-the-header-response-header-matches-pattern)
* [Then the response body is empty](#then-the-response-body-is-empty)
* [Then the response body is an empty JSON object](#then-the-response-body-is-an-empty-json-object)
* [Then the response body is an empty JSON array](#then-the-response-body-is-an-empty-json-array)
* [Then the response body is a JSON array of length `:length`](#then-the-response-body-is-a-json-array-of-length-length)
* [Then the response body is a JSON array with a length of at least `:length`](#then-the-response-body-is-a-json-array-with-a-length-of-at-least-length)
* [Then the response body is a JSON array with a length of at most `:length`](#then-the-response-body-is-a-json-array-with-a-length-of-at-most-length)
* [Then the response body is: `<PyStringNode>`](#then-the-response-body-is-pystringnode)
* [Then the response body is not: `<PyStringNode>`](#then-the-response-body-is-not-pystringnode)
* [Then the response body matches: `<PyStringNode>`](#then-the-response-body-matches-pystringnode)
* [Then the response body contains JSON: `<PyStringNode>`](#then-the-response-body-contains-json-pystringnode)
+ [Regular value matching](#regular-value-matching)
+ [Custom matcher functions and targeting](#custom-matcher-functions-and-targeting)
#### [Then the response code is `:code`](#id4)[¶](#then-the-response-code-is-code)
Asserts that the response code equals `:code`.
**Examples:**
* Then the response code is `200`
* Then the response code is `404`
#### [Then the response code is not `:code`](#id5)[¶](#then-the-response-code-is-not-code)
Asserts that the response code **does not** equal `:code`.
**Examples:**
* Then the response code is not `200`
* Then the response code is not `404`
#### [Then the response reason phrase is `:phrase`](#id6)[¶](#then-the-response-reason-phrase-is-phrase)
Assert that the response reason phrase equals `:phrase`. The comparison is case sensitive.
**Examples:**
* Then the response reason phrase is “`OK`”
* Then the response reason phrase is “`Bad Request`”
#### [Then the response reason phrase is not `:phrase`](#id7)[¶](#then-the-response-reason-phrase-is-not-phrase)
Assert that the response reason phrase does not equal `:phrase`. The comparison is case sensitive.
**Examples:**
* Then the response reason phrase is not “`OK`”
* Then the response reason phrase is not “`Bad Request`”
#### [Then the response reason phrase matches `:pattern`](#id8)[¶](#then-the-response-reason-phrase-matches-pattern)
Assert that the response reason phrase matches the regular expression `:pattern`. The pattern must be a valid regular expression, including delimiters, and can also include optional modifiers.
**Examples:**
* Then the response reason phrase matches “`/ok/i`”
* Then the response reason phrase matches “`/OK/`”
For more information regarding regular expressions and the usage of modifiers, [refer to the PHP manual](http://php.net/pcre).
#### [Then the response status line is `:line`](#id9)[¶](#then-the-response-status-line-is-line)
Assert that the response status line equals `:line`. The comparison is case sensitive.
**Examples:**
* Then the response status line is “`200 OK`”
* Then the response status line is “`304 Not Modified`”
#### [Then the response status line is not `:line`](#id10)[¶](#then-the-response-status-line-is-not-line)
Assert that the response status line does not equal `:line`. The comparison is case sensitive.
**Examples:**
* Then the response status line is not “`200 OK`”
* Then the response status line is not “`304 Not Modified`”
#### [Then the response status line matches `:pattern`](#id11)[¶](#then-the-response-status-line-matches-pattern)
Assert that the response status line matches the regular expression `:pattern`. The pattern must be a valid regular expression, including delimiters, and can also include optional modifiers.
**Examples:**
* Then the response status line matches “`/200 ok/i`”
* Then the response status line matches “`/200 OK/`”
For more information regarding regular expressions and the usage of modifiers, [refer to the PHP manual](http://php.net/pcre).
#### [Then the response is `:group`](#id12)[¶](#then-the-response-is-group)
Asserts that the response is in `:group`.
Allowed groups and their response code ranges are:
| Group | Response code range |
| --- | --- |
| `informational` | 100 to 199 |
| `success` | 200 to 299 |
| `redirection` | 300 to 399 |
| `client error` | 400 to 499 |
| `server error` | 500 to 599 |
**Examples:**
* Then the response is “`informational`”
* Then the response is “`client error`”
#### [Then the response is not `:group`](#id13)[¶](#then-the-response-is-not-group)
Assert that the response is not in `:group`.
Allowed groups and their ranges are:
| Group | Response code range |
| --- | --- |
| `informational` | 100 to 199 |
| `success` | 200 to 299 |
| `redirection` | 300 to 399 |
| `client error` | 400 to 499 |
| `server error` | 500 to 599 |
**Examples:**
* Then the response is not “`informational`”
* Then the response is not “`client error`”
#### [Then the `:header` response header exists](#id14)[¶](#then-the-header-response-header-exists)
Assert that the `:header` response header exists. The value of `:header` is case-insensitive.
**Examples:**
* Then the “`Vary`” response header exists
* Then the “`content-length`” response header exists
#### [Then the `:header` response header does not exist](#id15)[¶](#then-the-header-response-header-does-not-exist)
Assert that the `:header` response header does not exist. The value of `:header` is case-insensitive.
**Examples:**
* Then the “`Vary`” response header does not exist
* Then the “`content-length`” response header does not exist
#### [Then the `:header` response header is `:value`](#id16)[¶](#then-the-header-response-header-is-value)
Assert that the value of the `:header` response header equals `:value`. The value of `:header` is case-insensitive, but the value of `:value` is not.
**Examples:**
* Then the “`Content-Length`” response header is “`15000`”
* Then the “`X-foo`” response header is “`foo, bar`”
#### [Then the `:header` response header is not `:value`](#id17)[¶](#then-the-header-response-header-is-not-value)
Assert that the value of the `:header` response header **does not** equal `:value`. The value of `:header` is case-insensitive, but the value of `:value` is not.
**Examples:**
* Then the “`Content-Length`” response header is not “`15000`”
* Then the “`X-foo`” response header is not “`foo, bar`”
#### [Then the `:header` response header matches `:pattern`](#id18)[¶](#then-the-header-response-header-matches-pattern)
Assert that the value of the `:header` response header matches the regular expression `:pattern`. The pattern must be a valid regular expression, including delimiters, and can also include optional modifiers. The value of `:header` is case-insensitive.
**Examples:**
* Then the “`content-length`” response header matches “`/[0-9]+/`”
* Then the “`x-foo`” response header matches “`/(FOO|BAR)/i`”
* Then the “`X-FOO`” response header matches “`/^(foo|bar)$/`”
For more information regarding regular expressions and the usage of modifiers, [refer to the PHP manual](http://php.net/pcre).
#### [Then the response body is empty](#id19)[¶](#then-the-response-body-is-empty)
Assert that the response body is empty.
#### [Then the response body is an empty JSON object](#id20)[¶](#then-the-response-body-is-an-empty-json-object)
Assert that the response body is an empty JSON object (`{}`).
#### [Then the response body is an empty JSON array](#id21)[¶](#then-the-response-body-is-an-empty-json-array)
Assert that the response body is an empty JSON array (`[]`).
#### [Then the response body is a JSON array of length `:length`](#id22)[¶](#then-the-response-body-is-a-json-array-of-length-length)
Assert that the length of the JSON array in the response body equals `:length`.
**Examples:**
* Then the response body is a JSON array of length `1`
* Then the response body is a JSON array of length `3`
If the response body does not contain a JSON array, the test will fail.
#### [Then the response body is a JSON array with a length of at least `:length`](#id23)[¶](#then-the-response-body-is-a-json-array-with-a-length-of-at-least-length)
Assert that the length of the JSON array in the response body has a length of at least `:length`.
**Examples:**
* Then the response body is a JSON array with a length of at least `4`
* Then the response body is a JSON array with a length of at least `5`
If the response body does not contain a JSON array, the test will fail.
#### [Then the response body is a JSON array with a length of at most `:length`](#id24)[¶](#then-the-response-body-is-a-json-array-with-a-length-of-at-most-length)
Assert that the length of the JSON array in the response body has a length of at most `:length`.
**Examples:**
* Then the response body is a JSON array with a length of at most `4`
* Then the response body is a JSON array with a length of at most `5`
If the response body does not contain a JSON array, the test will fail.
#### [Then the response body is: `<PyStringNode>`](#id25)[¶](#then-the-response-body-is-pystringnode)
Assert that the response body equals the text found in the `<PyStringNode>`. The comparison is case-sensitive.
**Examples:**
```
Then the response body is:
"""
{"foo":"bar"}
"""
```
```
Then the response body is:
"""
foo
"""
```
#### [Then the response body is not: `<PyStringNode>`](#id26)[¶](#then-the-response-body-is-not-pystringnode)
Assert that the response body **does not** equal the value found in `<PyStringNode>`. The comparison is case sensitive.
**Examples:**
```
Then the response body is not:
"""
some value
"""
```
#### [Then the response body matches: `<PyStringNode>`](#id27)[¶](#then-the-response-body-matches-pystringnode)
Assert that the response body matches the regular expression pattern found in `<PyStringNode>`. The expression must be a valid regular expression, including delimiters and optional modifiers.
**Examples:**
```
Then the response body matches:
"""
/^{"FOO": ?"BAR"}$/i
"""
```
```
Then the response body matches:
"""
/foo/
"""
```
#### [Then the response body contains JSON: `<PyStringNode>`](#id28)[¶](#then-the-response-body-contains-json-pystringnode)
Used to recursively match the response body (or a subset of the response body) against a JSON blob.
In addition to regular value matching some custom matching-functions also exist, for asserting value types, array lengths and so forth. There is also a regular expression type matcher that can be used to match string values.
##### [Regular value matching](#id29)[¶](#regular-value-matching)
Assume the following JSON response for the examples in this section:
```
{
"string": "string value",
"integer": 123,
"double": 1.23,
"bool": true,
"null": null,
"object":
{
"string": "string value",
"integer": 123,
"double": 1.23,
"bool": true,
"null": null,
"object":
{
"string": "string value",
"integer": 123,
"double": 1.23,
"bool": true,
"null": null
}
},
"array":
[
"string value",
123,
1.23,
true,
null,
{
"string": "string value",
"integer": 123,
"double": 1.23,
"bool": true,
"null": null
}
]
}
```
**Example: Regular value matching of a subset of the response**
```
Then the response body contains JSON:
"""
{
"string": "string value",
"bool": true
}
"""
```
**Example: Check values in objects**
```
Then the response body contains JSON:
"""
{
"object":
{
"string": "string value",
"object":
{
"null": null,
"integer": 123
}
}
}
"""
```
**Example: Check numerically indexed array contents**
```
Then the response body contains JSON:
"""
{
"array":
[
true,
"string value",
{
"integer": 123
}
]
}
"""
```
Notice that the order of the values in the arrays does not matter. To be able to target specific indexes in an array a special syntax needs to be used. Please refer to [Custom matcher functions and targeting](#custom-matcher-functions-and-targeting) for more information and examples.
##### [Custom matcher functions and targeting](#id30)[¶](#custom-matcher-functions-and-targeting)
In some cases the need for more advanced matching arises. All custom functions are used in place of the string value they are validating, and because of the way JSON works, they need to be specified as strings to keep the JSON valid.
* [Array length - `@arrayLength` / `@arrayMaxLength` / `@arrayMinLength`](#array-length-arraylength-arraymaxlength-arrayminlength)
* [Variable type - `@variableType`](#variable-type-variabletype)
* [Regular expression matching - `@regExp`](#regular-expression-matching-regexp)
* [Match specific keys in a numerically indexed array - `<key>[<index>]`](#match-specific-keys-in-a-numerically-indexed-array-key-index)
* [Numeric comparison - `@gt` / `@lt`](#numeric-comparison-gt-lt)
* [JWT token matching - `@jwt`](#jwt-token-matching-jwt)
###### [Array length - `@arrayLength` / `@arrayMaxLength` / `@arrayMinLength`](#id31)[¶](#array-length-arraylength-arraymaxlength-arrayminlength)
Three functions exist for asserting the length of regular numerically indexed JSON arrays, `@arrayLength`, `@arrayMaxLength` and `@arrayMinLength`. Given the following response body:
```
{
"items":
[
"foo",
"bar",
"foobar",
"barfoo",
123
]
}
```
one can assert the exact length using `@arrayLength`:
```
Then the response body contains JSON:
"""
{"items": "@arrayLength(5)"}
"""
```
or use the relative length matchers:
```
Then the response body contains JSON:
"""
{"items": "@arrayMaxLength(10)"}
"""
And the response body contains JSON:
"""
{"items": "@arrayMinLength(3)"}
"""
```
###### [Variable type - `@variableType`](#id32)[¶](#variable-type-variabletype)
To be able to assert the variable type of specific values, the `@variableType` function can be used. The following types can be asserted:
* `bool` / `boolean`
* `int` / `integer`
* `double` / `float`
* `string`
* `array`
* `object`
* `null`
* `scalar`
* `any`
Given the following response:
```
{
"bool value": true,
"int value": 123,
"double value": 1.23,
"string value": "some string",
"array value": [1, 2, 3],
"object value": {"foo": "bar"},
"null value": null,
"scalar value": 3.1416
}
```
the type of the values can be asserted like this:
```
Then the response body contains JSON:
"""
{
"bool value": "@variableType(bool)",
"int value": "@variableType(int)",
"double value": "@variableType(double)",
"string value": "@variableType(string)",
"array value": "@variableType(array)",
"object value": "@variableType(object)",
"null value": "@variableType(null)",
"scalar value": "@variableType(scalar)"
}
"""
```
The `bool`, `int` and `double` types can also be expressed using `boolean`, `integer` and `float` respectively. There is no difference in the actual validation being executed.
For the `@variableType(scalar)` assertion refer to the [is_scalar function](http://php.net/is_scalar) in the PHP manual as to what is considered to be a scalar.
When using `any` as a type, the validation will allow any type, including `null`. One can also match against multiple types using `|` (for instance `@variableType(int|double|string)`). When using multiple types, the validation will succeed (and stop) as soon as the value being tested matches one of the supplied types. Validation is done in the order specified.
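For instance, building on the response shown above, a value could be allowed to be either an integer or a double, or to be anything at all. The following snippet is an illustrative sketch and not part of the original examples:

```
Then the response body contains JSON:
"""
{
    "int value": "@variableType(int|double)",
    "null value": "@variableType(any)"
}
"""
```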
###### [Regular expression matching - `@regExp`](#id33)[¶](#regular-expression-matching-regexp)
To use regular expressions to match values, the `@regExp` function exists. It takes a regular expression as an argument, complete with delimiters and optional modifiers. Example:
```
Then the response body contains JSON:
"""
{
"foo": "@regExp(/(some|expression)/i)",
"bar":
{
"baz": "@regExp(/[0-9]+/)"
}
}
"""
```
This can be used to match variables of type `string`, `integer` and `float`/`double` only, and the value that is matched will be cast to a string before doing the match. Refer to the [PHP manual](http://php.net/pcre) regarding how regular expressions work in PHP.
###### [Match specific keys in a numerically indexed array - `<key>[<index>]`](#id34)[¶](#match-specific-keys-in-a-numerically-indexed-array-key-index)
If you need to verify an element at a specific index within a numerically indexed array, use the `<key>[<index>]` notation as the key instead of the regular field name. Consider the following response body:
```
{
"items":
[
"foo",
"bar",
{
"some":
{
"nested": "object",
"foo": "bar"
}
},
[1, 2, 3]
]
}
```
If you need to verify the values, use something like the following step:
```
Then the response body contains JSON:
"""
{
"items[0]": "foo",
"items[1]": "@regExp(/(foo|bar|baz)/)",
"items[2]":
{
"some":
{
"foo": "@regExp(/ba(r|z)/)"
}
},
"items[3]": "@arrayLength(3)"
}
"""
```
If the response body contains a numerical array as the root node, you will need to use a special syntax for validation. Consider the following response body:
```
[
"foo",
123,
{
"foo": "bar"
},
"bar",
[1, 2, 3]
]
```
To validate this, use the following step:
```
Then the response body contains JSON:
"""
{
"[0]": "foo",
"[1]": 123,
"[2]":
{
"foo": "bar"
},
"[3]": "@regExp(/bar/)",
"[4]": "@arrayLength(3)"
}
"""
```
###### [Numeric comparison - `@gt` / `@lt`](#id35)[¶](#numeric-comparison-gt-lt)
To verify that a numeric value is greater than or less than a value, the `@gt` and `@lt` functions can be used respectively. Given the following response body:
```
{
"some-int": 123,
"some-double": 1.23,
"some-string": "123"
}
```
one can compare the numeric values using:
```
Then the response body contains JSON:
"""
{
"some-int": "@gt(120)",
"some-double": "@gt(1.20)",
"some-string": "@gt(120)"
}
"""
And the response body contains JSON:
"""
{
"some-int": "@lt(125)",
"some-double": "@lt(1.25)",
"some-string": "@lt(125)"
}
"""
```
###### [JWT token matching - `@jwt`](#id36)[¶](#jwt-token-matching-jwt)
To verify a JWT in the response body the `@jwt()` custom matcher function can be used. The argument it takes is the name of a JWT token registered with the [Given the response body contains a JWT identified by :name, signed with :secret: <PyStringNode>](index.html#given-the-response-body-contains-a-jwt) step earlier in the scenario.
Given the following response body:
```
{
"value": "<KEY>"
}
```
one can validate the JWT using a combination of two steps:
```
# Register the JWT
Given the response body contains a JWT identified by "my JWT", signed with "secret":
"""
{
"user": "Some user"
}
"""
# Other steps ...
# After the request has been made, one can match the JWT in the response
And the response body contains JSON:
"""
{
"value": "@jwt(my JWT)"
}
"""
```
### Extending the extension[¶](#extending-the-extension)
If you want to implement your own assertions, or for instance add custom authentication for all requests made against your APIs, you can extend the context class provided by the extension to access the client, request, request options, response and array-contains comparator. These are available via the protected `$this->client`, `$this->request`, `$this->requestOptions`, `$this->response` and `$this->arrayContainsComparator` properties respectively. Keep in mind that `$this->response` is not populated until the client has made a request, i.e. after any of the aforementioned `@When` steps have finished.
#### Add `@Given`’s, `@When`’s and/or `@Then`’s[¶](#add-given-s-when-s-and-or-then-s)
If you want to add a `@Given`, `@When` and/or `@Then` step, simply add a method in your `FeatureContext` class along with the step using annotations in the `phpdoc` block:
```
<?php
use Imbo\BehatApiExtension\Context\ApiContext;
use Imbo\BehatApiExtension\Exception\AssertionFailedException;
class FeatureContext extends ApiContext
{
/**
* @Then I want to check something
*/
public function assertSomething()
{
// do some assertions on $this->response, and throw an AssertionFailedException
// exception if the assertion fails.
}
}
```
With the above example, the step `Then I want to check something` can now be used in your feature files along with the steps defined by the extension.
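As an illustration, a hypothetical feature file could combine the new step with the request steps provided by the extension (the endpoint below is a placeholder, and `When I request` is assumed to be one of the extension's request steps):

```
Feature: Custom assertions
    Scenario: Use the custom step
        When I request "/some/endpoint"
        Then I want to check something
```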
#### Configure the API client[¶](#configure-the-api-client)
If you wish to configure the internal API client (`GuzzleHttp\Client`), this can be done in the initialization phase:
```
<?php
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Middleware;
use Imbo\BehatApiExtension\Context\ApiContext;
class FeatureContext extends ApiContext
{
public function initializeClient(array $config): static
{
$stack = $config['handler'] ?? HandlerStack::create();
$stack->push(Middleware::mapRequest(
fn ($req) => $req->withAddedHeader('Some-Custom-Header', 'some value')
));
$config['handler'] = $stack;
return parent::initializeClient($config);
}
}
```
#### Register custom matcher functions[¶](#register-custom-matcher-functions)
The extension comes with some built in matcher functions used to verify JSON-content (see [Then the response body contains JSON: <PyStringNode>](index.html#then-the-response-body-contains-json)), like for instance `@arrayLength` and `@regExp`. These functions are basically callbacks to PHP methods / functions, so you can easily define your own and use them in your tests:
```
<?php
use Imbo\BehatApiExtension\Context\ApiContext;
use Imbo\BehatApiExtension\ArrayContainsComparator;
class FeatureContext extends ApiContext
{
/**
* Add a custom function called @gt to the comparator
*/
public function setArrayContainsComparator(ArrayContainsComparator $comparator): self
{
$comparator->addFunction('gt', function ($num, $gt) {
$num = (int) $num;
$gt = (int) $gt;
if ($num <= $gt) {
throw new InvalidArgumentException(sprintf(
'Expected number to be greater than %d, got: %d.',
$gt,
$num
));
}
});
return parent::setArrayContainsComparator($comparator);
}
}
```
The above snippet adds a custom matcher function called `@gt` that can be used to check if a number is greater than another number. Given the following response body:
```
{
"number": 42
}
```
the number in the `number` key could be verified with:
```
Then the response body contains JSON:
"""
{
"number": "@gt(40)"
}
"""
```
Package ‘provenance’
August 28, 2023
Title Statistical Toolbox for Sedimentary Provenance Analysis
Version 4.2
Date 2023-08-28
Description Bundles a number of established statistical methods to facilitate the visual interpretation
of large datasets in sedimentary geology. Includes functionality for adaptive kernel density
estimation, principal component analysis, correspondence analysis, multidimensional scaling,
generalised procrustes analysis and individual differences scaling using a variety of dissimilarity
measures. Univariate provenance proxies, such as single-grain ages or (isotopic) compositions
are compared with the Kolmogorov-Smirnov, Kuiper, Wasserstein-2 or Sircombe-Hazelton L2
distances. Categorical provenance proxies such as chemical compositions are compared with the
Aitchison and Bray-Curtis distances, and count data with the chi-square distance. Varietal data
can either be converted to one or more distributional datasets, or directly compared using the
multivariate Wasserstein distance. Also included are tools to plot compositional and count data
on ternary diagrams and point-counting data on radial plots, to calculate the sample size required
for specified levels of statistical precision, and to assess the effects of hydraulic sorting on
detrital compositions. Includes an intuitive query-based user interface for users who are not
proficient in R.
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Depends R (>= 3.0.0), IsoplotR (>= 5.2)
Imports MASS, methods
Suggests transport, T4transport
URL https://www.ucl.ac.uk/~ucfbpve/provenance/
License GPL-2
LazyData true
RoxygenNote 7.2.1
Encoding UTF-8
NeedsCompilation no
Repository CRAN
Date/Publication 2023-08-28 13:20:02 UTC
R topics documented:
ALR
amalgamate
as.acomp
as.compositional
as.counts
as.data.frame
as.varietal
botev
bray.diss
CA
central.counts
CLR
combine
densities
diss.distributional
endmembers
get.f
get.n
get.p
GPA
indscal
KDE
KDEs
KS.diss
Kuiper.diss
lines.ternary
MDS
minsorting
Namib
PCA
plot.CA
plot.compositional
plot.distributional
plot.GPA
plot.INDSCAL
plot.KDE
plot.KDEs
plot.MDS
plot.minsorting
plot.PCA
plot.ternary
points.ternary
procrustes
provenance
radialplot.counts
read.compositional
read.counts
read.densities
read.distributional
read.varietal
restore
SH.diss
SNSM
subset
summaryplot
ternary
ternary.ellipse
text.ternary
varietal2distributional
Wasserstein.diss
ALR Additive logratio transformation
Description
Calculates Aitchison’s additive logratio transformation for a dataset of class compositional or a
compositional data matrix.
Usage
ALR(x, ...)
## Default S3 method:
ALR(x, inverse = FALSE, ...)
## S3 method for class 'compositional'
ALR(x, ...)
Arguments
x an object of class compositional OR a matrix of numerical values
... optional arguments
inverse perform the inverse logratio transformation?
Value
a matrix of ALR coordinates OR an object of class compositional (if inverse=TRUE).
Examples
# logratio plot of trace element concentrations:
data(Namib)
alr <- ALR(Namib$Trace)
pairs(alr[,1:5])
title('log(X/Pb)')
amalgamate Group components of a composition
Description
Adds several components of a composition together into a single component
Usage
amalgamate(x, ...)
## Default S3 method:
amalgamate(x, ...)
## S3 method for class 'compositional'
amalgamate(x, ...)
## S3 method for class 'counts'
amalgamate(x, ...)
## S3 method for class 'SRDcorrected'
amalgamate(x, ...)
## S3 method for class 'varietal'
amalgamate(x, ...)
Arguments
x a compositional dataset
... a series of new labels assigned to strings or vectors of strings denoting the
components that need amalgamating
Value
an object of the same class as x with fewer components
Examples
data(Namib)
HMcomponents <- c("zr","tm","rt","TiOx","sph","ap","ep",
"gt","st","amp","cpx","opx")
am <- amalgamate(Namib$PTHM,feldspars=c("KF","P"),
lithics=c("Lm","Lv","Ls"),heavies=HMcomponents)
plot(ternary(am))
as.acomp create an acomp object
Description
Convert an object of class compositional to an object of class acomp for use in the compositions
package
Usage
as.acomp(x)
Arguments
x an object of class compositional
Value
a data.frame
Examples
data(Namib)
qfl <- ternary(Namib$PT,c('Q'),c('KF','P'),c('Lm','Lv','Ls'))
plot(qfl,type="QFL.dickinson")
qfl.acomp <- as.acomp(qfl)
## uncomment the next two lines to plot an error
## ellipse using the 'compositions' package:
# library(compositions)
# ellipses(mean(qfl.acomp),var(qfl.acomp),r=2)
as.compositional create a compositional object
Description
Convert an object of class matrix, data.frame or acomp to an object of class compositional
Usage
as.compositional(x, method = NULL, colmap = "rainbow")
Arguments
x an object of class matrix, data.frame or acomp
method dissimilarity measure, either "aitchison" for Aitchison’s CLR-distance or "bray"
for the Bray-Curtis distance.
colmap the colour map to be used in pie charts.
Value
an object of class compositional
Examples
data(Namib)
PT.acomp <- as.acomp(Namib$PT)
PT.compositional <- as.compositional(PT.acomp)
print(Namib$PT$x - PT.compositional$x)
## uncomment the following lines for an illustration of using this
## function to integrate 'provenance' with 'compositions'
# library(compositions)
# data(Glacial)
# a.glac <- acomp(Glacial)
# c.glac <- as.compositional(a.glac)
# summaryplot(c.glac,ncol=8)
as.counts create a counts object
Description
Convert an object of class matrix or data.frame to an object of class counts
Usage
as.counts(x, method = "chisq", colmap = "rainbow")
Arguments
x an object of class matrix or data.frame
method either "chisq" (for the chi-square distance) or "bray" (for the Bray-Curtis dis-
tance)
colmap the colour map to be used in pie charts.
Value
an object of class counts
Examples
X <- matrix(c(0,100,0,30,11,2,94,36,0),nrow=3,ncol=3)
rownames(X) <- 1:3
colnames(X) <- c('a','b','c')
comp <- as.counts(X)
d <- diss(comp)
as.data.frame create a data.frame object
Description
Convert an object of class compositional to a data.frame for use in the robCompositions
package
Usage
## S3 method for class 'compositional'
as.data.frame(x, ...)
## S3 method for class 'counts'
as.data.frame(x, ...)
Arguments
x an object of class compositional
... optional arguments to be passed on to the generic function
Value
a data.frame
Examples
data(Namib)
Major.frame <- as.data.frame(Namib$Major)
## uncomment the next two lines to plot an error
## ellipse using the robCompositions package:
# library(robCompositions)
# plot(pcaCoDa(Major.frame))
as.varietal create a varietal object
Description
Convert an object of class matrix or data.frame to an object of class varietal
Usage
as.varietal(x, snames = NULL, method = "KS")
Arguments
x an object of class matrix or data.frame
snames either a vector of sample names, an integer marking the length of the sample
name prefix, or NULL. read.varietal assumes that the row names of the .csv
file consist of character strings marking the sample names, followed by a
number.
method either 'KS' (for the Kolmogorov-Smirnov statistic) or 'W2' (for the
Wasserstein-2 distance).
Value
an object of class varietal
Examples
fn <- system.file("SNSM/Ttn_chem.csv",package="provenance")
ap1 <- read.csv(fn)
ap2 <- as.varietal(x=ap1,snames=3)
botev Compute the optimal kernel bandwidth
Description
Uses the diffusion algorithm of Botev (2010) to calculate the bandwidth for kernel density
estimation
Usage
botev(x)
Arguments
x a vector of ordinal data
Value
a scalar value with the optimal bandwidth
Author(s)
<NAME>
References
Botev, <NAME>., <NAME>, and <NAME>. "Kernel density estimation via diffusion." The
Annals of Statistics 38.5 (2010): 2916-2957.
Examples
fname <- system.file("Namib/DZ.csv",package="provenance")
bw <- botev(read.distributional(fname)$x$N1)
print(bw)
bray.diss Bray-Curtis dissimilarity
Description
Calculates the Bray-Curtis dissimilarity between two samples
Usage
bray.diss(x, ...)
## Default S3 method:
bray.diss(x, y, ...)
## S3 method for class 'compositional'
bray.diss(x, ...)
Arguments
x a vector containing the first compositional sample
... optional arguments
y a vector of length(x) containing the second compositional sample
Value
a scalar value
Examples
data(Namib)
print(bray.diss(Namib$HM$x["N1",],Namib$HM$x["N2",]))
CA Correspondence Analysis
Description
Performs Correspondence Analysis of point-counting data
Usage
CA(x, nf = 2, ...)
Arguments
x an object of class counts
nf number of correspondence factors (dimensions)
... optional arguments to the corresp function of the MASS package
Value
an object of class CA, which is synonymous with the MASS package’s correspondence class.
Examples
data(Namib)
plot(CA(Namib$PT))
central.counts Calculate central compositions
Description
Computes the logratio mean composition of a continuous mixture of point-counting data.
Usage
## S3 method for class 'counts'
central(x, ...)
Arguments
x an object of class counts
... optional arguments
Details
The central composition assumes that the observed point-counting distribution is the combination
of two sources of scatter: counting uncertainty and true geological dispersion.
Value
a [5 x n] matrix with n being the number of categories and the rows containing:
theta the ‘central’ composition.
err the standard error for the central composition.
sigma the overdispersion parameter, i.e. the coefficient of variation of the underlying logistic
normal distribution. central computes a continuous mixture model for each component
(column) separately. Covariance terms are not reported.
LL the lower limit of a ‘1 sigma’ region for theta.
UL the upper limit of a ‘1 sigma’ region for theta.
mswd the mean square of the weighted deviates, a.k.a. reduced chi-square statistic.
p.value the p-value for age homogeneity
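This entry has no example; the following minimal sketch (not part of the original manual) builds a small, hypothetical counts object with as.counts and passes it to the central method:
X <- matrix(c(30,50,20,40,45,15,25,60,15),nrow=3,byrow=TRUE)
rownames(X) <- c('S1','S2','S3')
colnames(X) <- c('Q','F','L')
cnts <- as.counts(X)
central(cnts)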
CLR Centred logratio transformation
Description
Calculates Aitchison’s centered logratio transformation for a dataset of class compositional or a
compositional data matrix.
Usage
CLR(x, ...)
## Default S3 method:
CLR(x, inverse = FALSE, ...)
## S3 method for class 'compositional'
CLR(x, ...)
Arguments
x an object of class compositional OR a matrix of numerical values
... optional arguments
inverse perform the inverse logratio transformation?
Value
a matrix of CLR coordinates OR an object of class compositional (if inverse=TRUE)
Examples
# The following code shows that applying provenance's PCA function
# to compositional data is equivalent to applying R's built-in
# princomp function to the CLR transformed data.
data(Namib)
plot(PCA(Namib$Major))
dev.new()
clrdat <- CLR(Namib$Major)
biplot(princomp(clrdat))
combine Combine samples of distributional data
Description
Lumps all single grain analyses of several samples together under a new name
Usage
combine(x, ...)
Arguments
x a distributional dataset
... a series of new labels assigned to strings or vectors of strings denoting the
samples that need amalgamating
Value
a distributional data object with fewer samples than x
Examples
data(Namib)
combined <- combine(Namib$DZ,
east=c('N3','N4','N5','N6','N7','N8','N9','N10'),
west=c('N1','N2','N11','N12','T8','T13'))
summaryplot(KDEs(combined))
densities A list of rock and mineral densities
Description
List of rock and mineral densities using the following abbreviations: Q (quartz), KF (K-feldspar),
P (plagioclase), F (feldspar), Lvf (felsic/porfiritic volcanic rock fragments), Lvm (microlithic /
porfiritic / trachitic volcanic rock fragments), Lcc (calcite), Lcd (dolomite), Lp (marl), Lch (chert),
Lms (argillaceous / micaceous rock fragments), Lmv (metavolcanics), Lmf (metasediments), Lmb
(metabasites), Lv (volcanic rock fragments), Lc (carbonates), Ls (sedimentary rock fragments),
Lm (metamorphic rock fragments), Lu (serpentinite), mica, opaques, FeOx (Fe-oxides), turbids, zr
(zircon), tm (tourmaline), rt (rutile), TiOx (Ti-oxides), sph (titanite), ap (apatite), mon (monazite),
oth (other minerals), ep (epidote), othLgM (prehnite + pumpellyite + lawsonite + carpholite), gt
(garnet), ctd (chloritoid), st (staurolite), and (andalusite), ky (kyanite), sil (sillimanite),
amp (amphibole), px (pyroxene), cpx (clinopyroxene), opx (orthopyroxene), ol (olivine), spinel and
othHM (other heavy minerals).
Author(s)
<NAME> and <NAME>
References
Resentini, A, <NAME> and <NAME>. "MinSORTING: An Excel worksheet for modelling mineral
grain-size distribution in sediments, with application to detrital geochronology and provenance
studies." Computers & Geosciences 59 (2013): 90-97.
Garzanti, E, <NAME> and <NAME>. "Settling equivalence of detrital minerals and grain-size
dependence of sediment composition." Earth and Planetary Science Letters 273.1 (2008): 138-151.
See Also
restore, minsorting
Examples
N8 <- subset(Namib$HM,select="N8")
distribution <- minsorting(N8,densities,phi=2,sigmaphi=1,medium="air",by=0.05)
plot(distribution)
diss.distributional Calculate the dissimilarity matrix between two datasets of class
distributional, compositional, counts or varietal
Description
Calculate the dissimilarity matrix between two datasets of class distributional or compositional
using the Kolmogorov-Smirnov, Sircombe-Hazelton, Aitchison or Bray-Curtis distance
Usage
## S3 method for class 'distributional'
diss(x, method = NULL, log = FALSE, verbose = FALSE, ...)
## S3 method for class 'compositional'
diss(x, method = NULL, ...)
## S3 method for class 'counts'
diss(x, method = NULL, ...)
## S3 method for class 'varietal'
diss(x, method = NULL, ...)
Arguments
x an object of class distributional, compositional or counts
method if x has class distributional: either "KS", "Wasserstein", "Kuiper" or
"SH";
if x has class compositional: either "aitchison" or "bray";
if x has class counts: either "chisq" or "bray";
if x has class varietal: either "KS", "W2_1D" or "W2".
log logical. If TRUE, subjects the distributional data to a logarithmic transformation
before calculating the Wasserstein distance.
verbose logical. If TRUE, gives progress updates during the construction of the dissimi-
larity matrix.
... optional arguments
Details
"KS" stands for the Kolmogorov-Smirnov statistic, "W2_1D" for the 1-dimensional Wasserstein-2
distance, "Kuiper" for the Kuiper statistic, "SH" for the Sircombe-Hazelton distance, "aitchison"
for the Aitchison logratio distance, "bray" for the Bray-Curtis distance, "chisq" for the Chi-square
distance, and "W2" for the 2-dimensional Wasserstein-2 distance.
Value
an object of class diss
See Also
KS.diss bray.diss SH.diss Wasserstein.diss Kuiper.diss
Examples
data(Namib)
print(round(100*diss(Namib$DZ)))
endmembers Petrographic end-member compositions
Description
A compositional dataset comprising the mineralogical compositions of the following end-members:
undissected_magmatic_arc, dissected_magmatic_arc, ophiolite, recycled_clastic,
undissected_continental_block, transitional_continental_block, dissected_continental_block,
subcreted_axial_belt and subducted_axial_belt
Author(s)
<NAME> and <NAME>
References
<NAME>, <NAME> and <NAME>. "MinSORTING: An Excel worksheet for modelling mineral
grain-size distribution in sediments, with application to detrital geochronology and provenance
studies." Computers & Geosciences 59 (2013): 90-97.
<NAME>, <NAME> and <NAME>. "Settling equivalence of detrital minerals and grain-size
dependence of sediment composition." Earth and Planetary Science Letters 273.1 (2008): 138-151.
See Also
minsorting
Examples
ophiolite <- subset(endmembers,select="ophiolite")
plot(minsorting(ophiolite,densities,by=0.05))
get.f Calculate the largest fraction that is likely to be missed
Description
For a given sample size, returns the largest fraction which has been sampled with (1-p) x 100 %
likelihood.
Usage
get.f(n, p = 0.05)
Arguments
n the number of grains in the detrital sample
p the required level of confidence
Value
the largest fraction that is sampled with at least (1-p) x 100% certainty
References
Vermeesch, Pieter. "How many grains are needed for a provenance study?" Earth and Planetary
Science Letters 224.3 (2004): 441-451.
Examples
print(get.f(60))
print(get.f(117))
get.n Calculate the number of grains required to achieve a desired level of
sampling resolution
Description
Returns the number of grains that need to be analysed to decrease the likelihood of missing any
fraction greater than a given size below a given level.
Usage
get.n(p = 0.05, f = 0.05)
Arguments
p the probability that all n grains in the sample have missed at least one fraction
of size f
f the size of the smallest resolvable fraction (0<f<1)
n, the number of grains in the sample
Value
the number of grains needed to reduce the chance of missing at least one fraction f of the total
population to less than p
References
Vermeesch, Pieter. "How many grains are needed for a provenance study?." Earth and Planetary
Science Letters 224.3 (2004): 441-451.
Examples
# number of grains required to be 99% sure that no fraction greater than 5% was missed:
print(get.n(0.01))
# number of grains required to be 90% sure that no fraction greater than 10% was missed:
print(get.n(p=0.1,f=0.1))
get.p Calculate the probability of missing a given population fraction
Description
For a given sample size, returns the likelihood of missing any fraction greater than a given size
Usage
get.p(n, f = 0.05)
Arguments
n the number of grains in the detrital sample
f the size of the smallest resolvable fraction (0<f<1)
Value
the probability that all n grains in the sample have missed at least one fraction of size f
References
Vermeesch, Pieter. "How many grains are needed for a provenance study?." Earth and Planetary
Science Letters 224.3 (2004): 441-451.
Examples
print(get.p(60))
print(get.p(117))
GPA Generalised Procrustes Analysis of configurations
Description
Given a number of (2D) configurations, this function uses a combination of transformations
(reflections, rotations, translations and scaling) to find a ‘consensus’ configuration which best
matches all the component configurations in a least-squares sense.
Usage
GPA(X, scale = TRUE)
Arguments
X a list of dissimilarity matrices
scale boolean flag indicating if the transformation should include the scaling operation
Value
a two column vector with the coordinates of the group configuration
See Also
procrustes
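This entry has no Examples section. In practice the group configuration is usually obtained through the procrustes() wrapper documented further below; a minimal sketch using the package's Namib dataset (assuming the same workflow as the procrustes entry):
data(Namib)
gpa <- procrustes(Namib$DZ,Namib$HM)
plot(gpa)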
indscal Individual Differences Scaling of provenance data
Description
Performs 3-way Multidimensional Scaling analysis using Carroll and Chang (1970)’s INdividual
Differences SCALing method as implemented using De Leeuw and Mair (2011)’s stress
majorization algorithm.
Usage
indscal(..., type = "ordinal", itmax = 1000)
Arguments
... a sequence of datasets of class distributional, compositional, counts or
varietal, OR a single object of class varietal.
type is either "ratio" or "ordinal"
itmax Maximum number of iterations
Value
an object of class INDSCAL, i.e. a list containing the following items:
delta: Observed dissimilarities
obsdiss: List of observed dissimilarities, normalized
confdiss: List of configuration dissimilarities
conf: List of matrices of final configurations
gspace: Joint configurations aka group stimulus space
cweights: Configuration weights
stress: Stress-1 value
spp: Stress per point
sps: Stress per subject (matrix)
ndim: Number of dimensions
model: Type of smacof model
niter: Number of iterations
nobj: Number of objects
Author(s)
<NAME> and <NAME>
References
<NAME>., & <NAME>. (2009). Multidimensional scaling using majorization: The R package
smacof. Journal of Statistical Software, 31(3), 1-30, <https://www.jstatsoft.org/v31/i03/>
Examples
## Not run:
attach(Namib)
plot(indscal(DZ,HM,PT,Major,Trace))
## End(Not run)
KDE Create a kernel density estimate
Description
Turns a vector of numbers into an object of class KDE using a combination of the Botev (2010)
bandwidth selector and the Abramson (1982) adaptive kernel bandwidth modifier.
Usage
KDE(x, from = NA, to = NA, bw = NA, adaptive = TRUE, log = FALSE, n = 512, ...)
Arguments
x a vector of numbers
from minimum age of the time axis. If NULL, this is set automatically
to maximum age of the time axis. If NULL, this is set automatically
bw the bandwidth of the KDE. If NULL, bw will be calculated automatically using
botev()
adaptive boolean flag controlling if the adaptive KDE modifier of Abramson (1982) is
used
log transform the ages to a log scale if TRUE
n horizontal resolution of the density estimate
... optional arguments to be passed on to density
Value
an object of class KDE, i.e. a list containing the following items:
x: horizontal plot coordinates
y: vertical plot coordinates
bw: the base bandwidth of the density estimate
ages: the data values from the input to the KDE function
See Also
KDEs
Examples
data(Namib)
samp <- Namib$DZ$x[['N1']]
dens <- KDE(samp,0,3000,kernel="epanechnikov")
plot(dens)
KDEs Generate an object of class KDEs
Description
Convert a dataset of class distributional into an object of class KDEs for further processing by
the summaryplot function.
Usage
KDEs(
x,
from = NA,
to = NA,
bw = NA,
samebandwidth = TRUE,
adaptive = TRUE,
normalise = FALSE,
log = FALSE,
n = 512,
...
)
Arguments
x an object of class distributional
from minimum limit of the x-axis.
to maximum limit of the x-axis.
bw the bandwidth of the kernel density estimates. If bw = NA, the bandwidth will be
set automatically using botev()
samebandwidth boolean flag indicating whether the same bandwidth should be used for all
samples. If samebandwidth = TRUE and bw = NULL, then the function will use the
median bandwidth of all the samples.
adaptive boolean flag switching on the adaptive bandwidth modifier of Abramson (1982)
normalise boolean flag indicating whether or not the KDEs should all integrate to the same
value.
log boolean flag indicating whether the data should be plotted on a logarithmic
scale.
n horizontal resolution of the density estimates
... optional parameters to be passed on to density
Value
an object of class KDEs, i.e. a list containing the following items:
kdes: a named list with objects of class KDE
from: the beginning of the common time scale
to: the end of the common time scale
themax: the maximum probability density of all the KDEs
pch: the plot symbol to be used by plot.KDEs
xlabel: the x-axis label to be used by plot.KDEs
See Also
KDE
Examples
data(Namib)
KDEs <- KDEs(Namib$DZ,0,3000,pch=NA)
summaryplot(KDEs,ncol=3)
KS.diss Kolmogorov-Smirnov dissimilarity
Description
Returns the Kolmogorov-Smirnov dissimilarity between two samples
Usage
KS.diss(x, ...)
## Default S3 method:
KS.diss(x, y, ...)
## S3 method for class 'distributional'
KS.diss(x, ...)
Arguments
x the first sample as a vector
... optional arguments
y the second sample as a vector
Value
a scalar value representing the maximum vertical distance between the two cumulative distributions
Examples
data(Namib)
print(KS.diss(Namib$DZ$x[['N1']],Namib$DZ$x[['T8']]))
Kuiper.diss Kuiper dissimilarity
Description
Returns the Kuiper dissimilarity between two samples
Usage
Kuiper.diss(x, ...)
## Default S3 method:
Kuiper.diss(x, y, ...)
## S3 method for class 'distributional'
Kuiper.diss(x, ...)
Arguments
x the first sample as a vector
... optional arguments
y the second sample as a vector
Value
a scalar value representing the sum of the maximum vertical distances above and below the
cumulative distributions of x and y
Examples
data(Namib)
print(Kuiper.diss(Namib$DZ$x[['N1']],Namib$DZ$x[['T8']]))
lines.ternary Ternary line plotting
Description
Add lines to an existing ternary diagram
Usage
## S3 method for class 'ternary'
lines(x, ...)
Arguments
x an object of class ternary, or a three-column data frame or matrix
... optional arguments to the generic lines function
Examples
tern <- ternary(Namib$PT,'Q',c('KF','P'),c('Lm','Lv','Ls'))
plot(tern,pch=21,bg='red',labels=NULL)
middle <- matrix(c(0.01,0.49,0.01,0.49,0.98,0.02),2,3)
lines(ternary(middle))
MDS Multidimensional Scaling
Description
Performs classical or nonmetric Multidimensional Scaling analysis of provenance data
Usage
MDS(x, ...)
## Default S3 method:
MDS(x, classical = FALSE, k = 2, ...)
## S3 method for class 'compositional'
MDS(x, classical = FALSE, k = 2, ...)
## S3 method for class 'counts'
MDS(x, classical = FALSE, k = 2, ...)
## S3 method for class 'distributional'
MDS(x, classical = FALSE, k = 2, nb = 0, ...)
## S3 method for class 'varietal'
MDS(x, classical = FALSE, k = 2, nb = 0, ...)
Arguments
x an object of class distributional, compositional, counts, varietal or
diss
... optional arguments
If x has class distributional, ... is passed on to diss.distributional.
If x has class compositional, ... is passed on to diss.compositional.
If x has class counts, ... is passed on to diss.counts.
If x has class varietal, ... is passed on to diss.varietal.
Otherwise, ... is passed on to cmdscale (if classical=TRUE), to isoMDS (if
classical=FALSE).
classical boolean flag indicating whether classical (TRUE) or nonmetric (FALSE) MDS
should be used
k the desired dimensionality of the solution
nb number of bootstrap resamples. If nb>0, then plot.MDS(...) will visualise
the sampling uncertainty as polygons (inspired by Nordsvan et al. 2020). The
bigger nb, the slower the calculations. nb=10 seems a good compromise.
Value
an object of class MDS, i.e. a list containing the following items:
points: a two column vector of the fitted configuration
classical: a boolean flag indicating whether the MDS configuration was obtained by classical
(TRUE) or nonmetric (FALSE) MDS.
diss: the dissimilarity matrix used for the MDS analysis
stress: (only if classical=FALSE) the final stress achieved (in percent)
References
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>., 2020. Resampling
(detrital) zircon age distributions for accurate multidimensional scaling solutions. Earth-Science
Reviews, p.103149.
<NAME>., 2013, Multi-sample comparison of detrital age distributions. Chemical Geology
v.341, 140-146, doi:10.1016/j.chemgeo.2013.01.010
Examples
data(Namib)
plot(MDS(Namib$Major,classical=TRUE))
minsorting Assess settling equivalence of detrital components
Description
Models grain size distribution of minerals and rock fragments of different densities
Usage
minsorting(
X,
dens,
sname = NULL,
phi = 2,
sigmaphi = 1,
medium = "freshwater",
from = -2.25,
to = 5.5,
by = 0.25
)
Arguments
X an object of class compositional
dens a vector of mineral and rock densities
sname sample name; if unspecified, the first sample of the dataset will be used
phi the mean grain size of the sample in Krumbein’s phi units
sigmaphi the standard deviation of the grain size distribution, in phi units
medium the transport medium, one of either "air", "freshwater" or "seawater"
from the minimum grain size to be evaluated, in phi units
to the maximum grain size to be evaluated, in phi units
by the grain size interval of the output table, in phi units
Value
an object of class minsorting, i.e. a list with two tables:
mfract: the grain size distribution of each mineral (sum of the columns = 1)
mcomp: the composition of each grain size fraction (sum of the rows = 1)
Author(s)
<NAME> and <NAME>
References
<NAME>, <NAME> and <NAME>. "MinSORTING: An Excel worksheet for modelling mineral
grain-size distribution in sediments, with application to detrital geochronology and provenance
studies." Computers & Geosciences 59 (2013): 90-97.
Garzanti, E, Ando, S and Vezzoli, G. "Settling equivalence of detrital minerals and grain-size
dependence of sediment composition." Earth and Planetary Science Letters 273.1 (2008): 138-151.
See Also
restore
Examples
data(endmembers,densities)
distribution <- minsorting(endmembers,densities,sname='ophiolite',phi=2,
sigmaphi=1,medium="seawater",by=0.05)
plot(distribution,cumulative=FALSE)
Namib An example dataset
Description
A large dataset of provenance data from Namibia, comprising 14 sand samples from the Namib
Sand Sea and 2 samples from the Orange River.
Details
Namib is a list containing the following 6 items:
DZ: a distributional dataset containing the zircon U-Pb ages for ca. 100 grains from each sample,
as well as their (1-sigma) analytical uncertainties.
PT: a compositional dataset with the bulk petrography of the samples, i.e. the quartz (‘Q’),
K-feldspar (‘KF’), plagioclase (‘P’), and lithic fragments of metamorphic (‘Lm’), volcanic (‘Lv’)
and sedimentary (‘Ls’) origin.
HM: a compositional dataset containing the heavy mineral composition of the samples, comprised
of zircon (‘zr’), tourmaline (‘tm’), rutile (‘rt’), Ti-oxides (‘TiOx’), titanite (‘sph’), apatite (‘ap’),
epidote (‘ep’), garnet (‘gt’), staurolite (‘st’), andalusite (‘and’), kyanite (‘ky’), sillimanite (‘sil’),
amphibole (‘amp’), clinopyroxene (‘cpx’) and orthopyroxene (‘opx’).
PTHM: a compositional dataset combining the variables contained in PT and HM plus ‘mica’, ‘opaques’,
’turbids’ and ’other’ transparent heavy minerals (‘LgM’), normalised to 100.
Major: a compositional dataset listing the major element concentrations (in wt%) of the samples, including TiO2, P2O5 and MnO.
Trace: a compositional dataset listing the concentrations (in ppm) of Rb, Sr, Ba, Sc, Y, La, Ce, Pr,
Nd, Sm, Gd, Dy, Er, Yb, Th, U, Zr, Hf, V, Nb, Cr, Co, Ni, Cu, Zn, Ga and Pb.
Author(s)
<NAME> and <NAME>
References
Vermeesch, P. and Garzanti, E., Making geological sense of ’Big Data’ in sedimentary provenance
analysis, Chemical Geology 409 (2015) 20-27
Examples
samp <- Namib$DZ$x[['N1']]
dens <- KDE(samp,0,3000)
plot(dens)
PCA Principal Component Analysis
Description
Performs PCA of compositional data using a centred logratio distance
Usage
PCA(x, ...)
Arguments
x an object of class compositional
... optional arguments to R’s princomp function
Value
an object of class PCA, which is synonymous with the stats package’s prcomp class.
Examples
data(Namib)
plot(MDS(Namib$Major,classical=TRUE))
dev.new()
plot(PCA(Namib$Major),asp=1)
print("This example demonstrates the equivalence of classical MDS and PCA")
plot.CA Point-counting biplot
Description
Plot the results of a correspondence analysis as a biplot
Usage
## S3 method for class 'CA'
plot(x, labelcol = "black", vectorcol = "red", components = c(1, 2), ...)
Arguments
x an object of class CA
labelcol colour of the sample labels (may be a vector).
vectorcol colour of the vector loadings for the variables
components two-element vector of components to be plotted
... optional arguments of the generic biplot function
See Also
CA
Examples
data(Namib)
plot(CA(Namib$PT))
plot.compositional Plot a pie chart
Description
Plots an object of class compositional as a pie chart
Usage
## S3 method for class 'compositional'
plot(x, sname, annotate = TRUE, colmap = NULL, ...)
Arguments
x an object of class compositional
sname the sample name
annotate a boolean flag controlling if the pies of the pie-chart should be labelled
colmap an optional string with the name of one of R’s built-in colour palettes (e.g.,
heat.colors, terrain.colors, topo.colors, cm.colors), which are to be used for plot-
ting the data.
... optional parameters to be passed on to the graphics object
Examples
data(Namib)
plot(Namib$Major,'N1',colmap='heat.colors')
plot.distributional Plot continuous data as histograms or cumulative age distributions
Description
Plot one or several samples from a distributional dataset as a histogram or Cumulative Age
Distribution (CAD).
Usage
## S3 method for class 'distributional'
plot(
x,
snames = NULL,
annotate = TRUE,
CAD = FALSE,
pch = NA,
verticals = TRUE,
colmap = NULL,
...
)
Arguments
x an object of class distributional
snames a string or a vector of strings with the names of the samples that need plotting. If
snames is a vector, then the function will default to a CAD.
annotate boolean flag indicating whether the x- and y-axis should be labelled
CAD boolean flag indicating whether the data should be plotted as a cumulative age
distribution or a histogram. For multi-sample plots, the function will override
this value with TRUE.
pch an optional symbol to mark the sample points along the CAD
verticals boolean flag indicating if the horizontal lines of the CAD should be connected
by vertical lines
colmap an optional string with the name of one of R’s built-in colour palettes (e.g.,
heat.colors, terrain.colors, topo.colors, cm.colors), which are to be used for plot-
ting the data.
... optional arguments to the generic plot function
Examples
data(Namib)
plot(Namib$DZ,c('N1','N2'))
plot.GPA Plot a Procrustes configuration
Description
Plots the group configuration of a Generalised Procrustes Analysis
Usage
## S3 method for class 'GPA'
plot(x, pch = NA, pos = NULL, col = "black", bg = "white", cex = 1, ...)
Arguments
x an object of class GPA
pch plot symbol
pos position of the sample labels relative to the plot symbols if pch != NA
col plot colour (may be a vector)
bg background colour (may be a vector)
cex relative size of plot symbols
... optional arguments to the generic plot function
See Also
procrustes
Examples
data(Namib)
GPA <- procrustes(Namib$DZ,Namib$HM)
coast <- c('N1','N2','N3','N10','N11','N12','T8','T13')
snames <- names(Namib$DZ)
bgcol <- rep('yellow',length(snames))
bgcol[which(snames %in% coast)] <- 'red'
plot(GPA,pch=21,bg=bgcol)
plot.INDSCAL Plot an INDSCAL group configuration and source weights
Description
Given an object of class INDSCAL, generates two plots: the group configuration and the subject
weights. Together, these describe a 3-way MDS model.
Usage
## S3 method for class 'INDSCAL'
plot(
x,
asp = 1,
pch = NA,
pos = NULL,
col = "black",
bg = "white",
cex = 1,
xlab = "X",
ylab = "Y",
xaxt = "n",
yaxt = "n",
option = 2,
...
)
Arguments
x an object of class INDSCAL
asp the aspect ratio of the plot
pch plot symbol (may be a vector)
pos position of the sample labels relative to the plot symbols if pch != NA
col plot colour (may be a vector)
bg background colour (may be a vector)
cex relative size of plot symbols
xlab a string with the label of the x axis
ylab a string with the label of the y axis
xaxt if = ’s’, adds ticks to the x axis
yaxt if = ’s’, adds ticks to the y axis
option either:
0: only plot the group configuration, do not show the source weights
1: only show the source weights, do not plot the group configuration
2: show both the group configuration and source weights in separate windows
... optional arguments to the generic plot function
See Also
indscal
Examples
data(Namib)
coast <- c('N1','N2','N3','N10','N11','N12','T8','T13')
snames <- names(Namib$DZ)
pch <- rep(21,length(snames))
pch[which(snames %in% coast)] <- 22
plot(indscal(Namib$DZ,Namib$HM),pch=pch)
plot.KDE Plot a kernel density estimate
Description
Plots an object of class KDE
Usage
## S3 method for class 'KDE'
plot(x, pch = "|", xlab = "age [Ma]", ylab = "", ...)
Arguments
x an object of class KDE
pch the symbol used to show the samples. May be a vector. Set pch = NA to turn
them off.
xlab the label of the x-axis
ylab the label of the y-axis
... optional parameters to be passed on to the graphics object
See Also
KDE
Examples
data(Namib)
samp <- Namib$DZ$x[['N1']]
dens <- KDE(samp,from=0,to=3000)
plot(dens)
plot.KDEs Plot one or more kernel density estimates
Description
Plots an object of class KDEs
Usage
## S3 method for class 'KDEs'
plot(x, sname = NA, annotate = TRUE, pch = "|", ...)
Arguments
x an object of class KDEs
sname optional sample name. If sname=NA, all samples are shown on a summary plot
annotate add a time axis?
pch symbol to be used to mark the sample points along the x-axis. Change to NA to
omit.
... optional parameters to be passed on to the summaryplot function
See Also
KDEs summaryplot
Examples
data(Namib)
kdes <- KDEs(Namib$DZ)
plot(kdes,ncol=2)
plot.MDS Plot an MDS configuration
Description
Plots the coordinates of a multidimensional scaling analysis as an X-Y scatter plot or ‘map’ and, if
x$classical = FALSE, a Shepard plot.
Usage
## S3 method for class 'MDS'
plot(
x,
nnlines = FALSE,
pch = NA,
pos = NULL,
cex = 1,
col = "black",
bg = "white",
oma = rep(1, 4),
mar = rep(2, 4),
mgp = c(2, 1, 0),
xpd = NA,
Shepard = 2,
...
)
Arguments
x an object of class MDS
nnlines if TRUE, draws nearest neighbour lines
pch plot character (see ?plot for details). May be a vector.
pos position of the sample labels relative to the plot symbols if pch != NA
cex relative size of plot symbols (see ?par for details)
col plot colour (may be a vector)
bg background colour (may be a vector)
oma A vector of the form c(bottom, left, top, right) giving the size of the outer
margins in lines of text.
mar A numerical vector of the form c(bottom, left, top, right) that gives the
number of lines of margin to be specified on the four sides of the plot.
mgp The margin line (in mex units) for the axis title, axis labels and axis line. See
?par for further details.
xpd A logical value or NA. See ?par for further details.
Shepard either:
0: only plot the MDS configuration, do not show the Shepard plot
1: only show the Shepard plot, do not plot the MDS configuration
2: show both the MDS configuration and Shepard plot in separate windows
... optional arguments to the generic plot function
See Also
MDS
Examples
data(Namib)
mds <- MDS(Namib$DZ)
coast <- c('N1','N2','N3','N10','N11','N12','T8','T13')
snames <- names(Namib$DZ)
bgcol <- rep('yellow',length(snames))
bgcol[which(snames %in% coast)] <- 'red'
plot(mds,pch=21,bg=bgcol)
plot.minsorting Plot inferred grain size distributions
Description
Plot the grain size distributions of the different minerals under consideration
Usage
## S3 method for class 'minsorting'
plot(x, cumulative = FALSE, components = NULL, ...)
Arguments
x an object of class minsorting
cumulative boolean flag indicating whether the grain size distribution should be plotted as a
density or cumulative probability curve.
components string or list of strings with the names of a subcomposition that needs plotting
... optional parameters to be passed on to graphics::matplot (see ?par for details)
See Also
minsorting
Examples
data(endmembers,densities)
OPH <- subset(endmembers,select="ophiolite")
distribution <- minsorting(OPH,densities,phi=2,sigmaphi=1,
medium="air",by=0.05)
plot(distribution,components=c('F','px','opaques'))
plot.PCA Compositional biplot
Description
Plot the results of a principal components analysis as a biplot
Usage
## S3 method for class 'PCA'
plot(
x,
labelcol = "black",
vectorcol = "red",
choices = 1L:2L,
scale = 1,
pc.biplot = FALSE,
...
)
Arguments
x an object of class PCA
labelcol colour(s) of the sample labels (may be a vector).
vectorcol colour of the vector loadings for the variables
choices see the help pages of the generic biplot function.
scale see the help pages of the generic biplot function.
pc.biplot see the help pages of the generic biplot function.
... optional arguments of the generic biplot function
See Also
PCA
Examples
data(Namib)
plot(PCA(Namib$Major))
plot.ternary Plot a ternary diagram
Description
Plots triplets of compositional data on a ternary diagram
Usage
## S3 method for class 'ternary'
plot(
x,
type = "grid",
pch = NA,
pos = NULL,
labels = names(x),
showpath = FALSE,
bg = NA,
col = "cornflowerblue",
ticks = seq(0, 1, 0.25),
ticklength = 0.02,
lty = 2,
lwd = 1,
...
)
Arguments
x an object of class ternary, or a three-column data frame or matrix
type adds annotations to the ternary diagram, one of either empty, grid, QFL.descriptive,
QFL.folk or QFL.dickinson
pch plot character, see ?par for details (may be a vector)
pos position of the sample labels relative to the plot symbols if pch != NA
labels vector of strings to be added to the plot symbols
showpath if x has class SRDcorrected, and showpath==TRUE, the intermediate values
of the SRD correction will be plotted on the ternary diagram as well as the final
composition
bg background colour for the plot symbols (may be a vector)
col colour to be used for the background lines (if applicable)
ticks vector of tick values between 0 and 1
ticklength number between 0 and 1 to mark the length of the ticks
lty line type for the annotations (see type)
lwd line thickness for the annotations
... optional arguments to the generic points function
See Also
ternary
Examples
data(Namib)
tern <- ternary(Namib$PT,'Q',c('KF','P'),c('Lm','Lv','Ls'))
plot(tern,type='QFL.descriptive',pch=21,bg='red',labels=NULL)
points.ternary Ternary point plotting
Description
Add points to an existing ternary diagram
Usage
## S3 method for class 'ternary'
points(x, ...)
Arguments
x an object of class ternary, or a three-column data frame or matrix
... optional arguments to the generic points function
Examples
tern <- ternary(Namib$PT,'Q',c('KF','P'),c('Lm','Lv','Ls'))
plot(tern,pch=21,bg='red',labels=NULL)
# add the geometric mean composition as a yellow square:
gmean <- ternary(exp(colMeans(log(tern$x))))
points(gmean,pch=22,bg='yellow')
procrustes Generalised Procrustes Analysis of provenance data
Description
Given a number of input datasets, this function performs an MDS analysis on each of these and
then feeds the resulting configurations into the GPA() function.
Usage
procrustes(...)
Arguments
... a sequence of datasets of classes distributional, counts, compositional
and varietal OR a single object of class varietal.
Value
an object of class GPA, i.e. a list containing the following items:
points: a two column vector with the coordinates of the group configuration
labels: a list with the sample names
Author(s)
<NAME>
References
Gower, J.C. (1975). Generalized Procrustes analysis, Psychometrika, 40, 33-50.
See Also
GPA
Examples
data(Namib)
gpa1 <- procrustes(Namib$DZ,Namib$HM)
plot(gpa1)
data(SNSM)
gpa2 <- procrustes(SNSM$ap)
plot(gpa2)
provenance Menu-based interface for provenance
Description
For those less familiar with the syntax of the R programming language, the provenance() function
provides a user-friendly way to access the most important functionality in the form of a
menu-based query interface. Further details and examples are provided on
https://www.ucl.ac.uk/~ucfbpve/provenance/
provenance provides statistical tools to interpret large amounts of distributional (single grain
analyses) and compositional (mineralogical and bulk chemical) data from the command line, or
using a menu-based user interface.
Usage
provenance()
Details
A list of documented functions may be viewed by typing help(package='provenance'). Detailed
instructions are provided at https://www.ucl.ac.uk/~ucfbpve/provenance/ and in the
Sedimentary Geology paper by Vermeesch, Resentini and Garzanti (2016).
Author(s)
<NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., Resentini, A. and Garzanti, E., an R package for statistical provenance analysis,
Sedimentary Geology, doi:10.1016/j.sedgeo.2016.01.009.
Vermeesch, P., Resentini, A. and Garzanti, E., 2016, An R package for statistical provenance
analysis, Sedimentary Geology, 336, 14-25.
See Also
Useful links:
• https://www.ucl.ac.uk/~ucfbpve/provenance/
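The function takes no arguments and is interactive, so it is not normally run inside scripts; a minimal sketch of starting it from the R prompt:
## uncomment to launch the menu-based query interface:
# provenance()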
radialplot.counts Visualise point-counting data on a radial plot
Description
Implementation of a graphical device developed by Rex Galbraith to display several estimates of
the same quantity that have different standard errors.
Usage
## S3 method for class 'counts'
radialplot(
x,
num = 1,
den = 2,
from = NA,
to = NA,
t0 = NA,
sigdig = 2,
show.numbers = FALSE,
pch = 21,
levels = NA,
clabel = "",
bg = c("white", "red"),
title = TRUE,
...
)
Arguments
x an object of class counts
num index or name of the numerator variable
den index or name of the denominator variable
from minimum limit of the radial scale
to maximum limit of the radial scale
t0 central value
sigdig the number of significant digits of the numerical values reported in the title of
the graphical output.
show.numbers boolean flag (TRUE to show sample numbers)
pch plot character (default is a filled circle)
levels a vector with additional values to be displayed as different background colours
of the plot symbols.
clabel label of the colour legend
bg a vector of two background colours for the plot symbols. If levels=NA, then
only the first colour is used. If levels is a vector of numbers, then bg is used to
construct a colour ramp.
title add a title to the plot?
... additional arguments to the generic points function
Details
The radial plot (Galbraith, 1988, 1990) is a graphical device that was specifically designed to display
heteroscedastic data, and is constructed as follows. Consider a set of dates {t_1, ..., t_i, ..., t_n} and
uncertainties {s[t_1], ..., s[t_i], ..., s[t_n]}. Define z_i = z[t_i] to be a transformation of t_i (e.g.,
z_i = log[t_i]), and let s[z_i] be its propagated analytical uncertainty (i.e., s[z_i] = s[t_i]/t_i in the
case of a logarithmic transformation). Create a scatterplot of (x_i, y_i) values, where x_i = 1/s[z_i]
and y_i = (z_i - z_0)/s[z_i], where z_0 is some reference value such as the mean. The slope of a line
connecting the origin of this scatterplot with any of the (x_i, y_i)s is proportional to z_i and, hence,
the date t_i. These dates can be more easily visualised by drawing a radial scale at some convenient
distance from the origin and annotating it with labelled ticks at the appropriate angles. While the
angular position of each data point represents the date, its horizontal distance from the origin is
proportional to the precision. Imprecise measurements plot on the left hand side of the radial plot,
whereas precise age determinations are found further towards the right. Thus, radial plots allow the
observer to assess both the magnitude and the precision of quantitative data in one glance.
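The coordinate construction described above can be reproduced in a few lines of base R; the dates and uncertainties below are made up purely for illustration and are not part of the package:
t <- c(98,105,112,120,135) # hypothetical dates
st <- c(5,3,10,4,8) # their analytical uncertainties s[t]
z <- log(t) # transformation z = log[t]
sz <- st/t # propagated uncertainty s[z] = s[t]/t
z0 <- mean(z) # reference value (here the mean)
x <- 1/sz # horizontal coordinate: precision
y <- (z-z0)/sz # vertical coordinate: standardised estimate
plot(x,y,xlab='1/s[z]',ylab='standardised estimate')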
References
<NAME>., 1988. Graphical display of estimates having differing standard errors. Technomet-
rics, 30(3), pp.271-281.
<NAME>., 1990. The radial plot: graphical assessment of spread in ages. International Journal
of Radiation Applications and Instrumentation. Part D. Nuclear Tracks and Radiation Measure-
ments, 17(3), pp.207-214.
<NAME>. and <NAME>., 1993. Statistical models for mixed fission track ages. Nuclear
Tracks and Radiation Measurements, 21(4), pp.459-470.
Examples
data(Namib)
radialplot(Namib$PT,components=c('Q','P'))
read.compositional Read a .csv file with compositional data
Description
Reads a data table containing compositional data (e.g. chemical concentrations)
Usage
read.compositional(
fname,
method = NULL,
colmap = "rainbow",
sep = ",",
dec = ".",
row.names = 1,
header = TRUE,
check.names = FALSE,
...
)
Arguments
fname a string with the path to the .csv file
method either "bray" (for the Bray-Curtis distance) or "aitchison" (for Aitchison’s cen-
tral logratio distance). If omitted, the function defaults to ’aitchison’, unless
there are zeros present in the data.
colmap an optional string with the name of one of R’s built-in colour palettes (e.g.,
heat.colors, terrain.colors, topo.colors, cm.colors), which are to be used for plot-
ting the data.
sep the field separator character. Values on each line of the file are separated by this
character.
dec the character used in the file for decimal points.
row.names a vector of row names. This can be a vector giving the actual row names, or
a single number giving the column of the table which contains the row names, or
a character string giving the name of the table column containing the row names.
header a logical value indicating whether the file contains the names of the variables as
its first line.
check.names logical. If TRUE then the names of the variables in the frame are checked to
ensure that they are syntactically valid variable names.
... optional arguments to the built-in read.table function
Value
an object of class compositional, i.e. a list with the following items:
x: a data frame with the samples as rows and the categories as columns
method: either "aitchison" (for Aitchison’s centred logratio distance) or "bray" (for the Bray-Curtis
distance)
colmap: the colour map provided by the input argument
name: the name of the data object, extracted from the file path
Examples
fname <- system.file("Namib/Major.csv",package="provenance")
Major <- read.compositional(fname)
plot(PCA(Major))
read.counts Read a .csv file with point-counting data
Description
Reads a data table containing point-counting data (e.g. petrographic, heavy mineral, palaeontologi-
cal or palynological data)
Usage
read.counts(
fname,
method = "chisq",
colmap = "rainbow",
sep = ",",
dec = ".",
row.names = 1,
header = TRUE,
check.names = FALSE,
...
)
Arguments
fname a string with the path to the .csv file
method either "chisq" (for the chi-square distance) or "bray" (for the Bray-Curtis dis-
tance)
colmap an optional string with the name of one of R’s built-in colour palettes (e.g.,
heat.colors, terrain.colors, topo.colors, cm.colors), which are to be used for plot-
ting the data.
sep the field separator character. Values on each line of the file are separated by this
character.
dec the character used in the file for decimal points.
row.names a vector of row names. This can be a vector giving the actual row names, or
a single number giving the column of the table which contains the row names, or
a character string giving the name of the table column containing the row names.
header a logical value indicating whether the file contains the names of the variables as
its first line.
check.names logical. If TRUE then the names of the variables in the frame are checked to
ensure that they are syntactically valid variable names.
... optional arguments to the built-in read.table function
Value
an object of class counts, i.e. a list with the following items:
x: a data frame with the samples as rows and the categories as columns
colmap: the colour map provided by the input argument
name: the name of the data object, extracted from the file path
Examples
fname <- system.file("Namib/HM.csv",package="provenance")
HM <- read.counts(fname)
#plot(PCA(HM))
read.densities Read a .csv file with mineral and rock densities
Description
Reads a data table containing densities to be used for hydraulic sorting corrections (minsorting and
srd functions)
Usage
read.densities(
fname,
sep = ",",
dec = ".",
header = TRUE,
check.names = FALSE,
...
)
Arguments
fname a string with the path to the .csv file
sep the field separator character. Values on each line of the file are separated by this
character.
dec the character used in the file for decimal points.
header a logical value indicating whether the file contains the names of the variables as
its first line.
check.names logical. If TRUE then the names of the variables in the frame are checked to
ensure that they are syntactically valid variable names.
... optional arguments to the built-in read.table function
Value
a vector with mineral and rock densities
Examples
data(Namib,densities)
N8 <- subset(Namib$HM,select="N8")
distribution <- minsorting(N8,densities,phi=2,sigmaphi=1,medium="air",by=0.05)
plot(distribution)
read.distributional Read a .csv file with distributional data
Description
Reads a data table containing distributional data, i.e. lists of continuous data such as detrital zircon
U-Pb ages.
Usage
read.distributional(
fname,
errorfile = NA,
method = "KS",
xlab = "age [Ma]",
colmap = "rainbow",
sep = ",",
dec = ".",
header = TRUE,
check.names = FALSE,
...
)
Arguments
fname the path of a .csv file with the input data, arranged in columns.
errorfile the (optional) path of a .csv file with the standard errors of the input data, ar-
ranged by column in the same order as fname. Must be specified if the data are
to be compared with the Sircombe-Hazelton dissimilarity.
method an optional string specifying the dissimilarity measure which should be used for
comparing this with other datasets. Should be one of either "KS" (for Kolmogorov-
Smirnov), "Kuiper" (for Kuiper) or "SH" (for Sircombe and Hazelton). If
method = "SH", then errorfile should be specified. If method = "SH" and
errorfile is unspecified, then the program will default back to the Kolmogorov-
Smirnov dissimilarity.
xlab an optional string specifying the nature and units of the data. This string is used
to label kernel density estimates.
colmap an optional string with the name of one of R’s built-in colour palettes (e.g.,
heat.colors, terrain.colors, topo.colors, cm.colors), which are to be used for plot-
ting the data.
sep the field separator character. Values on each line of the file are separated by this
character.
dec the character used in the file for decimal points.
header a logical value indicating whether the file contains the names of the variables as
its first line.
check.names logical. If TRUE then the names of the variables in the frame are checked to
ensure that they are syntactically valid variable names.
... optional arguments to the built-in read.csv function
Value
an object of class distributional, i.e. a list with the following items:
x: a named list of vectors containing the numerical data for each sample
err: an (optional) named list of vectors containing the standard errors of x
method: either "KS" (for Kolmogorov-Smirnov), "Kuiper" (for the Kuiper statistic) or "SH" (for
Sircombe Hazelton)
breaks: a vector with the locations of the histogram bin edges
xlab: a string containing the label to be given to the x-axis on all plots
colmap: the colour map provided by the input argument
name: the name of the data object, extracted from the file path
Examples
agefile <- system.file("Namib/DZ.csv",package="provenance")
errfile <- system.file("Namib/DZerr.csv",package="provenance")
DZ <- read.distributional(agefile,errfile)
plot(KDE(DZ$x$N1))
read.varietal Read a .csv file with varietal data
Description
Reads a data table containing compositional data (e.g. chemical concentrations) for multiple grains
and multiple samples
Usage
read.varietal(
fname,
snames = NULL,
sep = ",",
dec = ".",
method = "KS",
check.names = FALSE,
row.names = 1,
...
)
Arguments
fname file name (character string)
snames either a vector of sample names, an integer marking the length of the sample
name prefix, or NULL. read.varietal assumes that the row names of the .csv
file consist of character strings marking the sample names, followed by a num-
ber.
sep the field separator character. Values on each line of the file are separated by this
character.
dec the character used in the file for decimal points.
method an optional string specifying the dissimilarity measure which should be used for
comparing this with other datasets. Should be one of either "KS" (for Kolmogorov-
Smirnov) or "Kuiper" (for Kuiper)
check.names logical. If TRUE then the names of the variables in the frame are checked to
ensure that they are syntactically valid variable names.
row.names a vector of row names or the number of the column which contains the row
names. See the documentation of the read.table function.
... optional arguments to the built-in read.csv function
Value
an object of class varietal, i.e. a list with the following items:
x: a compositional data table
snames: a vector of strings corresponding to the sample names
name: the name of the dataset, extracted from the file path
Examples
fn <- system.file("SNSM/Ttn_chem.csv",package="provenance")
Ttn <- read.varietal(fname=fn,snames=3)
plot(MDS(Ttn))
restore Undo the effect of hydraulic sorting
Description
Restore the detrital composition back to a specified source rock density (SRD)
Usage
restore(X, dens, target = 2.71)
Arguments
X an object of class compositional
dens a vector of rock and mineral densities
target the target density (in g/cm3)
Value
an object of class SRDcorrected, i.e. a daughter class of compositional containing the restored
composition, plus one additional member called restoration, containing the intermediate steps of the
SRD correction algorithm.
Author(s)
<NAME> and <NAME>
References
<NAME>, <NAME> and <NAME>. "Settling equivalence of detrital minerals and grain-size de-
pendence of sediment composition." Earth and Planetary Science Letters 273.1 (2008): 138-151.
See Also
minsorting
Examples
data(Namib,densities)
rescomp <- restore(Namib$PTHM,densities,2.71)
HMcomp <- c("zr","tm","rt","sph","ap","ep","gt",
"st","amp","cpx","opx")
amcomp <- amalgamate(rescomp,Plag="P",HM=HMcomp,Opq="opaques")
plot(ternary(amcomp),showpath=TRUE)
SH.diss Sircombe and Hazelton distance
Description
Calculates Sircombe and Hazelton’s L2 distance between the Kernel Functional Estimates (KFEs,
not to be confused with Kernel Density Estimates!) of two samples with specified analytical uncer-
tainties
Usage
SH.diss(x, i, j, c.con = 0)
Arguments
x an object of class distributional
i index of the first sample
j index of the second sample
c.con smoothing bandwidth of the kernel functional estimate
Value
a scalar value expressing the L2 distance between the KFEs of samples i and j
Author(s)
<NAME> and <NAME>
References
Sircombe, <NAME>., and <NAME>. "Comparison of detrital zircon age distributions by kernel
functional estimation." Sedimentary Geology 171.1 (2004): 91-111.
See Also
KS.diss
Examples
datfile <- system.file("Namib/DZ.csv",package="provenance")
errfile <- system.file("Namib/DZerr.csv",package="provenance")
DZ <- read.distributional(datfile,errfile)
d <- SH.diss(DZ,1,2)
print(d)
SNSM varietal data example
Description
A list of varietal datasets including detrital zircon (zr), apatite (ap) and titanite (tit) compositions
from the Sierra Nevada de Santa Marta, provided by L. Caracciolo (FAU Erlangen).
Author(s)
<NAME>, <NAME> and <NAME>.
Examples
plot(MDS(SNSM$tit))
subset Get a subset of provenance data
Description
Return a subset of provenance data according to some specified indices
Usage
## S3 method for class 'distributional'
subset(x, subset = NULL, select = NULL, ...)
## S3 method for class 'compositional'
subset(x, subset = NULL, components = NULL, select = NULL, ...)
## S3 method for class 'counts'
subset(x, subset = NULL, components = NULL, select = NULL, ...)
## S3 method for class 'varietal'
subset(x, subset = NULL, components = NULL, select = NULL, ...)
Arguments
x an object of class distributional, compositional, counts or varietal.
subset logical expression indicating elements or rows to keep: missing values are taken
as false.
select a vector of sample names
... optional arguments for the generic subset function
components vector of categories (column names) to keep
Value
an object of the same class as x
See Also
amalgamate, combine
Examples
data(Namib)
coast <- c("N1","N2","T8","T13","N12","N13")
ZTRcoast <- subset(Namib$HM,select=coast,components=c('gt','cpx','ep'))
DZcoast <- subset(Namib$DZ,select=coast)
summaryplot(ZTRcoast,KDEs(DZcoast),ncol=2)
summaryplot Joint plot of several provenance datasets
Description
Arranges kernel density estimates and pie charts in a grid format
Usage
summaryplot(..., ncol = 1, pch = NA)
Arguments
... a sequence of datasets of class compositional, distributional, counts or
KDEs.
ncol the number of columns
pch (optional) symbol to be used to mark the sample points along the x-axis of the
KDEs (if appropriate).
Value
a summary plot of all the data comprised of KDEs for the datasets of class KDEs, pie charts for those
of class compositional or counts and histograms for those of class distributional.
See Also
KDEs
Examples
data(Namib)
KDEs <- KDEs(Namib$DZ,0,3000)
summaryplot(KDEs,Namib$HM,Namib$PT,ncol=2)
ternary Define a ternary composition
Description
Create an object of class ternary
Usage
ternary(X, x = 1, y = 2, z = 3)
Arguments
X an object of class compositional OR a matrix or data frame with numerical
data
x string/number or a vector of strings/numbers indicating the variables/indices
making up the first subcomposition of the ternary system.
y second (set of) variables
z third (set of) variables
Value
an object of class ternary, i.e. a list containing:
x: a three column matrix (or vector) of ternary compositions.
and (if X is of class SRDcorrected)
restoration: a list of intermediate ternary compositions inherited from the SRD correction
See Also
restore
Examples
data(Namib)
tern <- ternary(Namib$PT,c('Q'),c('KF','P'),c('Lm','Lv','Ls'))
plot(tern,type="QFL")
ternary.ellipse Ternary confidence ellipse
Description
plot a 100(1 − α)% confidence region around the data or around its mean.
Usage
ternary.ellipse(x, ...)
## Default S3 method:
ternary.ellipse(x, alpha = 0.05, population = TRUE, ...)
## S3 method for class 'compositional'
ternary.ellipse(x, alpha = 0.05, population = TRUE, ...)
## S3 method for class 'counts'
ternary.ellipse(x, alpha = 0.05, population = TRUE, ...)
Arguments
x an object of class ternary
... optional formatting arguments
alpha cutoff level for the confidence ellipse
population show the standard deviation of the entire population or the standard error of the
mean?
Examples
data(Namib)
tern <- ternary(Namib$Major,'CaO','Na2O','K2O')
plot(tern)
ternary.ellipse(tern)
text.ternary Ternary text plotting
Description
Add text to an existing ternary diagram
Usage
## S3 method for class 'ternary'
text(x, labels = 1:nrow(x$x), ...)
Arguments
x an object of class ternary, or a three-column data frame or matrix
labels a character vector or expression specifying the text to be written
... optional arguments to the generic text function
Examples
data(Namib)
tern <- ternary(Namib$Major,'CaO','Na2O','K2O')
plot(tern,pch=21,bg='red',labels=NULL)
# add the geometric mean composition as a text label:
gmean <- ternary(exp(colMeans(log(tern$x))))
text(gmean,labels='geometric mean')
varietal2distributional
Convert varietal to distributional data
Description
Convert an object of class varietal either to a list of distributional objects by breaking it up into
separate elements, or to a single distributional object corresponding to the first principal component.
Usage
varietal2distributional(x, bycol = FALSE, plot = FALSE)
Arguments
x an object of class varietal.
bycol logical. If TRUE, returns a list of distributional objects (one for each element). If
FALSE, returns a single distributional object (containing the PC1 scores for each
sample).
plot logical. If TRUE, shows the PCA biplot that is used when bycol is FALSE.
Examples
Ttn_file <- system.file("SNSM/Ttn_chem.csv",package="provenance")
Ttn <- read.varietal(fname=Ttn_file,snames=3)
varietal2distributional(Ttn,bycol=FALSE,plot=TRUE)
Wasserstein.diss Wasserstein distance
Description
Returns the Wasserstein distance between two samples
Usage
Wasserstein.diss(x, ...)
## Default S3 method:
Wasserstein.diss(x, y, ...)
## S3 method for class 'distributional'
Wasserstein.diss(x, log = FALSE, ...)
## S3 method for class 'varietal'
Wasserstein.diss(x, package = "transport", verbose = FALSE, ...)
Arguments
x the first sample as a vector
... optional arguments to the transport::wasserstein() or T4transport::wasserstein()
functions. Warning: the latter function is very slow.
y the second sample as a vector
log logical. Take the logarithm of the data before calculating the distances?
package the name of the package that provides the 2D Wasserstein distance. Currently,
this can be either 'transport' or 'T4transport'.
verbose logical. If TRUE, gives progress updates during the construction of the dissimi-
larity matrix.
Value
a scalar value
Author(s)
The default S3 method was written by <NAME>, using modified code from <NAME>'s
transport package (the transport1d function), as implemented in IsoplotR.
Examples
data(Namib)
print(Wasserstein.diss(Namib$DZ$x[['N1']],Namib$DZ$x[['T8']])) |
jiebaR | cran | R | Package ‘jiebaR’
October 13, 2022
Type Package
Title Chinese Text Segmentation
Description Chinese text segmentation, keyword extraction and speech tagging
for R.
Version 0.11
Date 2019-12-13
Author <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
License MIT + file LICENSE
Depends R (>= 3.3), jiebaRD
Imports Rcpp (>= 0.12.1)
LinkingTo Rcpp (>= 0.12.1)
Suggests knitr,testthat,devtools,rmarkdown,roxygen2
URL https://github.com/qinwf/jiebaR/
BugReports https://github.com/qinwf/jiebaR/issues
VignetteBuilder knitr
NeedsCompilation yes
RoxygenNote 6.1.1
Repository CRAN
Date/Publication 2019-12-13 17:40:02 UTC
R topics documented:
<=.keyword... 2
<=.qse... 3
<=.segmen... 4
<=.simhas... 5
<=.tagge... 6
apply_lis... 6
DICTPAT... 7
distanc... 7
edit_dic... 8
file_codin... 9
filter_segmen... 10
fre... 10
get_id... 11
get_qsegmode... 12
get_tupl... 13
jieba... 13
keyword... 14
new_user_wor... 16
print.in... 16
segmen... 17
show_dictpat... 18
simhas... 18
simhash_dis... 19
taggin... 20
tobi... 21
vector_ta... 21
worke... 22
<=.keywords Keywords symbol
Description
Keywords symbol to find keywords.
Usage
## S3 method for class 'keywords'
jiebar <= code
## S3 method for class 'keywords'
jiebar[code]
Arguments
jiebar jiebaR Worker.
code A Chinese sentence or the path of a text file.
Author(s)
<NAME> <http://qinwenfeng.com>
Examples
## Not run:
words = "hello world"
test1 = worker("keywords",topn=1)
test1 <= words
## End(Not run)
<=.qseg Quick mode symbol
Description
Deprecated.
Usage
## S3 method for class 'qseg'
qseg <= code
## S3 method for class 'qseg'
qseg[code]
qseg
Arguments
qseg a qseg object
code a string
Format
qseg an environment
Details
Quick mode is deprecated, and is scheduled to be removed in v0.11.0. If you want to keep this
feature, please submit an issue on the GitHub page to let me know.
The quick mode symbol does segmentation, keyword extraction and speech tagging. This symbol will
initialize a quick_worker when it is first called, and will do segmentation or other types of work
immediately.
You can reset the default model settings with $, and this will change the default settings the next time
you use quick mode. If you only want to change a parameter temporarily, you can reset the settings via
quick_worker$. get_qsegmodel, set_qsegmodel, and reset_qsegmodel are also available for
adjusting the quick mode settings.
Author(s)
<NAME> <http://qinwenfeng.com>
See Also
set_qsegmodel worker
Examples
## Not run:
qseg <= "This is test"
qseg <= "This is the second test"
## End(Not run)
## Not run:
qseg <= "This is test"
qseg$detect = T
qseg
get_qsegmodel()
## End(Not run)
<=.segment Text segmentation symbol
Description
Text segmentation symbol to cut words.
Usage
## S3 method for class 'segment'
jiebar <= code
## S3 method for class 'segment'
jiebar[code]
Arguments
jiebar jiebaR Worker.
code A Chinese sentence or the path of a text file.
Author(s)
<NAME> <http://qinwenfeng.com>
Examples
## Not run:
words = "hello world"
test1 = worker()
test1 <= words
## End(Not run)
<=.simhash Simhash symbol
Description
Simhash symbol to compute simhash.
Usage
## S3 method for class 'simhash'
jiebar <= code
## S3 method for class 'simhash'
jiebar[code]
Arguments
jiebar jiebaR Worker.
code A Chinese sentence or the path of a text file.
Author(s)
<NAME> <http://qinwenfeng.com>
Examples
## Not run:
words = "hello world"
test1 = worker("simhash",topn=1)
test1 <= words
## End(Not run)
<=.tagger Tagger symbol
Description
Tagger symbol to tag words.
Usage
## S3 method for class 'tagger'
jiebar <= code
## S3 method for class 'tagger'
jiebar[code]
Arguments
jiebar jiebaR Worker.
code A Chinese sentence or the path of a text file.
Author(s)
<NAME> <http://qinwenfeng.com>
Examples
## Not run:
words = "hello world"
test1 = worker("tag")
test1 <= words
## End(Not run)
apply_list Apply list input to a worker
Description
Apply list input to a worker
Usage
apply_list(input, worker)
Arguments
input a list of characters
worker a worker
Examples
cutter = worker()
apply_list(list("this is test", "that is not test"), cutter)
apply_list(list("this is test", list("that is not test","ab c")), cutter)
DICTPATH The path of dictionary
Description
The path of the dictionary, which is used by segmentation and other functions.
Usage
DICTPATH
HMMPATH
USERPATH
IDFPATH
STOPPATH
Format
character
distance Hamming distance of words
Description
This function uses a simhash worker to extract the keywords from two inputs, and then computes
the Hamming distance between them.
Usage
distance(codel, coder, jiebar)
vector_distance(codel, coder, jiebar)
Arguments
codel For distance, a Chinese sentence or the path of a text file, For vector_distance,
a character vector of segmented words.
coder For distance, a Chinese sentence or the path of a text file, For vector_distance,
a character vector of segmented words.
jiebar jiebaR worker
Author(s)
<NAME>
References
http://en.wikipedia.org/wiki/Hamming_distance
See Also
worker
Examples
## Not run:
words = "hello world"
simhasher = worker("simhash", topn = 1)
simhasher <= words
distance("hello world" , "hello world!" , simhasher)
vector_distance(c("hello","world") , c("hello", "world","!") , simhasher)
## End(Not run)
edit_dict Edit default user dictionary
Description
Edit the default user dictionary.
Usage
edit_dict(name = "user")
Arguments
name the name of dictionary including user, system, stop_word.
Details
There are three columns in the system dictionary, separated by spaces. The first
column is the word, and the second column is the frequency of the word. The third column is the speech
tag, using labels compatible with ictclas.
There are two columns in the user dictionary. The first column is the word, and the second column
is the speech tag, using labels compatible with ictclas. The frequencies of words in the user dictionary are
set by user_weight in the worker function. If you want to provide the frequency of a new word, you can
put it in the system dictionary.
There is only one column in the stop words dictionary, and it contains the stop words.
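As a rough sketch (the file name and words below are made up; worker is documented later in this
manual), a user dictionary can also be written by hand and passed to a worker:
## Not run:
# one word per line, followed by an optional speech tag
writeLines(c("newword n", "otherword v"), "my_user.dict")
cutter = worker(user = "my_user.dict")
## End(Not run)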
References
The ictclas speech tag : http://t.cn/RAEj7e1
file_coding Files encoding detection
Description
This function detects the encoding of input files. You can also check encoding with checkenc
package which is on GitHub.
Usage
file_coding(file)
filecoding(file)
Arguments
file A file path.
Details
This function will choose the most likely encoding, and it will be more stable for a large input text
file.
Value
The encoding of file
Author(s)
<NAME>, <NAME>
References
https://github.com/adah1972/tellenc
See Also
https://github.com/qinwf/checkenc
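A minimal usage sketch (the file path is a placeholder for any text file on disk):
## Not run:
file_coding("./temp.txt")
## End(Not run)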
filter_segment Filter segmentation result
Description
This function helps remove some words in the segmentation result.
Usage
filter_segment(input, filter_words, unit = 50)
Arguments
input a string vector
filter_words a string vector of words to be removed.
unit the number of words per regular expression chunk; the default is 50. A long
list of words forms a big regular expression, which may or may not be accepted:
the POSIX standard only requires support for up to 256 bytes. So we use unit to
split the words into chunks.
Examples
filter_segment(c("abc","def"," ","."), c("abc"))
freq The frequency of words
Description
This function returns the frequency of words
Usage
freq(x)
Arguments
x a vector of words
Value
The frequency of words
Author(s)
<NAME>
Examples
freq(c("a","a","c"))
get_idf generate IDF dict
Description
Generate IDF dict from a list of documents.
Usage
get_idf(x, stop_word = STOPPATH, path = NULL)
Arguments
x a list of character
stop_word stopword path
path output path
Details
The input list contains multiple character vectors of words; each vector represents a document.
Stop words will be removed from the result.
If path is not NULL, the result will be written to that path.
Value
a data.frame or a file
See Also
https://en.wikipedia.org/wiki/Tf-idf#Inverse_document_frequency_2
Examples
get_idf(list(c("abc","def"),c("abc"," ")))
get_qsegmodel Set quick mode model
Description
Deprecated.
Usage
get_qsegmodel()
set_qsegmodel(qsegmodel)
reset_qsegmodel()
Arguments
qsegmodel a list which has the same structure as the return value of get_qsegmodel
Details
These functions get and modify the quick mode model. get_qsegmodel returns the default model
parameters. set_qsegmodel can modify the quick mode model using a list, which has the same
structure as the return value of get_qsegmodel. reset_qsegmodel resets the model to the original
jiebaR default model.
Author(s)
<NAME> <http://qinwenfeng.com>
See Also
qseg worker
Examples
## Not run:
qseg <= "This is test"
qseg <= "This is the second test"
## End(Not run)
## Not run:
qseg <= "This is test"
qseg$detect = T
qseg
get_qsegmodel()
model = get_qsegmodel()
model$detect = F
set_qsegmodel(model)
reset_qsegmodel()
## End(Not run)
get_tuple get tuple from the segmentation result
Description
get tuple from the segmentation result
Usage
get_tuple(x, size = 2, dataframe = T)
Arguments
x a character vector or list
size an integer >= 2
dataframe return data.frame
Examples
get_tuple(c("sd","sd","sd","rd"),2)
jiebaR A package for Chinese text segmentation
Description
This is a package for Chinese text segmentation, keyword extraction and speech tagging with Rcpp
and cppjieba.
Details
You can use a custom dictionary. jiebaR can also identify new words, but adding new words to the
dictionary ensures higher accuracy.
Author(s)
<NAME> <http://qinwenfeng.com>
References
CppJieba https://github.com/aszxqw/cppjieba;
See Also
JiebaR https://github.com/qinwf/jiebaR;
Examples
### Note: Can not display Chinese characters here.
## Not run:
words = "hello world"
engine1 = worker()
segment(words, engine1)
# "./temp.txt" is a file path
segment("./temp.txt", engine1)
engine2 = worker("hmm")
segment("./temp.txt", engine2)
engine2$write = T
segment("./temp.txt", engine2)
engine3 = worker(type = "mix", dict = "dict_path",symbol = T)
segment("./temp.txt", engine3)
## End(Not run)
## Not run:
### Keyword Extraction
engine = worker("keywords", topn = 1)
keywords(words, engine)
### Speech Tagging
tagger = worker("tag")
tagging(words, tagger)
### Simhash
simhasher = worker("simhash", topn = 1)
simhash(words, simhasher)
distance("hello world" , "hello world!" , simhasher)
show_dictpath()
## End(Not run)
keywords Keyword extraction
Description
The keyword extraction worker uses the MixSegment model to cut words and the TF-IDF algorithm to find
the keywords. dict, hmm, idf, stop_word and topn should be provided when initializing the jiebaR
worker.
Usage
keywords(code, jiebar)
vector_keywords(code, jiebar)
Arguments
code For keywords, a Chinese sentence or the path of a text file. For vector_keywords,
a character vector of segmented words.
jiebar jiebaR Worker.
Details
There is a symbol <= for this function.
Value
a vector of keywords with weight.
Author(s)
<NAME>
References
http://en.wikipedia.org/wiki/Tf-idf
See Also
<=.keywords worker
Examples
## Not run:
### Keyword Extraction
keys = worker("keywords", topn = 1)
keys <= "words of fun"
## End(Not run)
new_user_word Add user word
Description
Add user word
Usage
new_user_word(worker, words, tags = rep("n", length(words)))
Arguments
worker a jieba worker
words the new words
tags the new words tags, default "n"
Examples
cc = worker()
new_user_word(cc, "test")
new_user_word(cc, "do", "v")
print.inv Print worker settings
Description
These functions print the worker settings.
Usage
## S3 method for class 'inv'
print(x, ...)
## S3 method for class 'jieba'
print(x, ...)
## S3 method for class 'simhash'
print(x, ...)
## S3 method for class 'keywords'
print(x, ...)
## S3 method for class 'qseg'
print(x, ...)
Arguments
x The jiebaR Worker.
... Other arguments.
Author(s)
<NAME>
segment Chinese text segmentation function
Description
The function uses initialized engines for word segmentation. You can initialize multiple engines
simultaneously using worker(). Public settings of workers can be read and modified using $, such
as WorkerName$symbol = T. Some private settings are fixed when an engine is initialized, and you can
get them via WorkerName$PrivateVariable.
Usage
segment(code, jiebar, mod = NULL)
Arguments
code A Chinese sentence or the path of a text file.
jiebar jiebaR Worker.
mod change default result type, value can be "mix","hmm","query","full" or "mp"
Details
There are four kinds of models:
The maximum probability segmentation model uses a trie to construct a directed acyclic graph and
applies a dynamic programming algorithm. It is the core segmentation algorithm. dict and user should
be provided when initializing the jiebaR worker.
The Hidden Markov Model uses an HMM to determine the status set and observed set of words. The
default HMM model is based on the People's Daily language library. hmm should be provided when
initializing the jiebaR worker.
The MixSegment model uses both the maximum probability segmentation model and the Hidden Markov
Model to construct the segmentation. dict, hmm and user should be provided when initializing the jiebaR worker.
The QuerySegment model uses MixSegment to construct the segmentation and then enumerates all the
possible long words in the dictionary. dict, hmm and qmax should be provided when initializing the jiebaR
worker.
There is a symbol <= for this function.
See Also
<=.segment worker
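A short usage sketch, based on the worker() and segment() calls shown elsewhere in this manual
(the sentence is a placeholder; the mod argument temporarily overrides the default result type):
## Not run:
words = "hello world"
engine = worker()
segment(words, engine)
segment(words, engine, mod = "hmm")
## End(Not run)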
show_dictpath Show default path of dictionaries
Description
Show the default dictionaries’ path. HMMPATH, DICTPATH , IDFPATH, STOPPATH and USERPATH can
be changed in default environment.
Usage
show_dictpath()
Author(s)
<NAME>
simhash Simhash computation
Description
The simhash worker uses the keyword extraction worker to find the keywords and the simhash algorithm
to compute the simhash. dict, hmm, idf and stop_word should be provided when initializing the
jiebaR worker.
Usage
simhash(code, jiebar)
vector_simhash(code, jiebar)
Arguments
code For simhash, a Chinese sentence or the path of a text file. For vector_simhash,
a character vector of segmented words.
jiebar jiebaR Worker.
Details
There is a symbol <= for this function.
Author(s)
<NAME>
References
MS Charikar - Similarity Estimation Techniques from Rounding Algorithms
See Also
<=.simhash worker
Examples
## Not run:
### Simhash
words = "hello world"
simhasher = worker("simhash",topn=1)
simhasher <= words
distance("hello world" , "hello world!" , simhasher)
## End(Not run)
simhash_dist Compute Hamming distance of Simhash value
Description
Compute Hamming distance of Simhash value
Usage
simhash_dist(x, y)
simhash_dist_mat(x, y)
Arguments
x a character vector of simhash value
y a character vector of simhash value
Value
a character vector
Examples
simhash_dist("1","1")
simhash_dist("1","2")
tobin("1")
tobin("2")
simhash_dist_mat(c("1","12","123"),c("2","1"))
tagging Speech Tagging
Description
The function uses the speech tagging worker to cut words and tags each word after segmentation using
labels compatible with ictclas. dict, hmm and user should be provided when initializing the jiebaR
worker.
Usage
tagging(code, jiebar)
Arguments
code a Chinese sentence or the path of a text file
jiebar jiebaR Worker
Details
There is a symbol <= for this function.
Author(s)
<NAME>
References
The ictclas speech tag : http://t.cn/RAEj7e1
See Also
<=.tagger worker
Examples
## Not run:
words = "hello world"
### Speech Tagging
tagger = worker("tag")
tagger <= words
## End(Not run)
tobin simhash value to binary
Description
simhash value to binary
Usage
tobin(x)
Arguments
x simhash value
vector_tag Tag a character vector
Description
Tag a character vector
Usage
vector_tag(string, jiebar)
Arguments
string a character vector of segmented words.
jiebar jiebaR Worker.
Examples
## Not run:
cc = worker()
(res = cc["this is test"])
vector_tag(res, cc)
## End(Not run)
worker Initialize jiebaR worker
Description
This function can initialize jiebaR workers. You can initialize different kinds of workers including
mix, mp, hmm, query, full, tag, simhash, and keywords. see Detail for more information.
Usage
worker(type = "mix", dict = DICTPATH, hmm = HMMPATH,
user = USERPATH, idf = IDFPATH, stop_word = STOPPATH, write = T,
qmax = 20, topn = 5, encoding = "UTF-8", detect = T,
symbol = F, lines = 1e+05, output = NULL, bylines = F,
user_weight = "max")
Arguments
type The type of jiebaR workers including mix, mp, hmm, full, query, tag, simhash,
and keywords.
dict A path to main dictionary, default value is DICTPATH, and the value is used for
mix, mp, query, full, tag, simhash and keywords workers.
hmm A path to the Hidden Markov Model, default value is HMMPATH, and the value
is used for mix, hmm, query, full, tag, simhash and keywords workers.
user A path to user dictionary, default value is USERPATH, and the value is used for
mix, full, tag and mp workers.
idf A path to inverse document frequency, default value is IDFPATH, and the value
is used for simhash and keywords workers.
stop_word A path to stop word dictionary, default value is STOPPATH, and the value is used
for simhash, keywords, tagger and segment workers. Encoding of this file
is checked by file_coding, and it should be UTF-8 encoding. For segment
workers, the default STOPPATH will not be used, so you should provide another
file path.
write Whether to write the output to a file, or return the result as an object. This value
will only be used when the input is a file path. The default value is TRUE. The
value is used for segment and speech tagging workers.
qmax Max query length of words, and the value is used for query workers.
topn The number of keywords, and the value is used for simhash and keywords
workers.
encoding The encoding of the input file. If encoding detection is enabled, the value of
encoding will be ignored.
detect Whether to detect the encoding of the input file using the file_coding function. If
encoding detection is enabled, the value of encoding will be ignored.
symbol Whether to keep symbols in the sentence.
lines The maximum number of lines to read at a time when the input is a file. The value
is used for segmentation and speech tagging workers.
output A path to the output file; by default the worker will generate a file name from the
system time stamp. The value is used for segmentation and speech tagging workers.
bylines return the result by the lines of input files
user_weight the weight of the user dict words. "min" "max" or "median".
Details
The package uses initialized engines for word segmentation, and you can initialize multiple engines
simultaneously. You can also reset the public settings of a model using $, such as WorkerName$symbol =
T. Some private settings are fixed when an engine is initialized, and you can get them via WorkerName$PrivateVariable.
The maximum probability segmentation model uses a trie to construct a directed acyclic graph and
applies a dynamic programming algorithm. It is the core segmentation algorithm. dict and user should
be provided when initializing the jiebaR worker.
The Hidden Markov Model uses an HMM to determine the status set and observed set of words. The
default HMM model is based on the People's Daily language library. hmm should be provided when
initializing the jiebaR worker.
The MixSegment model uses both the maximum probability segmentation model and the Hidden Markov
Model to construct the segmentation. dict, hmm and user should be provided when initializing the jiebaR worker.
The QuerySegment model uses MixSegment to construct the segmentation and then enumerates all the
possible long words in the dictionary. dict, hmm and qmax should be provided when initializing the jiebaR
worker.
The FullSegment model enumerates all the possible words in the dictionary.
The speech tagging worker uses the MixSegment model to cut words and tags each word after segmentation
using labels compatible with ictclas. dict, hmm and user should be provided when initializing
the jiebaR worker.
The keyword extraction worker uses the MixSegment model to cut words and the TF-IDF algorithm to find
the keywords. dict, hmm, idf, stop_word and topn should be provided when initializing the jiebaR
worker.
The simhash worker uses the keyword extraction worker to find the keywords and the simhash algorithm
to compute the simhash. dict, hmm, idf and stop_word should be provided when initializing the
jiebaR worker.
Value
This function returns an environment containing segmentation settings and worker. Public settings
can be modified using $.
Examples
### Note: Can not display Chinese characters here.
## Not run:
words = "hello world"
engine1 = worker()
segment(words, engine1)
# "./temp.txt" is a file path
segment("./temp.txt", engine1)
engine2 = worker("hmm")
segment("./temp.txt", engine2)
engine2$write = T
segment("./temp.txt", engine2)
engine3 = worker(type = "mix", dict = "dict_path",symbol = T)
segment("./temp.txt", engine3)
## End(Not run)
## Not run:
### Keyword Extraction
engine = worker("keywords", topn = 1)
keywords(words, engine)
### Speech Tagging
tagger = worker("tag")
tagging(words, tagger)
### Simhash
simhasher = worker("simhash", topn = 1)
simhash(words, simhasher)
distance("hello world" , "hello world!" , simhasher)
show_dictpath()
## End(Not run) |
o2_err_hdlr_idx | ctan | TeX | Grammar symbols: Used cross reference. Reference of each grammar's symbol used within each rule's productions. The index uses the triple: rule name, its subrule number, and the symbol's position within the symbol string.
**2.** **Rerror:.**
Rerrors 1.1 **Rerrors 2.2**
**3.** **Rerrors:.**
Ro2_err_hdlr 1.1 **Rerrors 2.1**
**4.** **bad char:.**
Rerror 12.1
**5.** **bad cmd-opt:.**
Rerror 5.1
**6.** **bad eos:.**
Rerror 9.1 **Rerror 16.2**
**7.** **bad esc:.**
Rerror 10.1 **Rerror 15.2**
**8.** **bad filename:.**
Rerror 4.1 **Rerror 14.2**
**9.** **bad int-no:.**
Rerror 6.1
**10.** **bad int-no range:.**
Rerror 7.1
**11.** **bad univ-seq:.**
Rerror 13.1
**12.** **comment-overrun:.**
Rerror 11.1
**13.** **eog:.**
Ro2_err_hdlr 1.2 **Ro2_err_hdlr 2.1**
**14. file-inclusion:.**
**Error 14.1 - Error 15.1 - Error 16.1 - Error 17.1**
**15. nested files exceeded:.**
**Error 1.1**
**16. no end-of-code:.**
**Error 2.1**
**17. no filename:.**
**Error 3.1 - Error 17.2**
**18. no int present:.**
**Error 8.1**
**19. |+|:.**
**Error 18.1**
**20. Grammar Rules's First Sets.**
**21.** _Ro2_err_hdlr **# in set: 16.**
bad char bad cmd-opt bad eos bad esc bad filename bad int-no bad int-no range bad univ-seq comment-overrun eog file-inclusion nested files exceeded no end-of-code no filename no int present |+|
**22.** _Rerrors **# in set: 15.**
bad char bad cmd-opt bad eos bad esc bad filename bad int-no bad int-no range bad univ-seq comment-overrun file-inclusion nested files exceeded no end-of-code no filename no int present |+|
**23.** _Rerror **# in set: 15.**
bad char bad cmd-opt bad eos bad esc bad filename bad int-no bad int-no range bad univ-seq comment-overrun file-inclusion nested files exceeded no end-of-code no filename no int present |+|
**24. LR State Network.**
List of productions with their derived LR state lists. Their subrule number and symbol string indicate the specific production being derived. The "p" symbol indicates the production's list of derived states from its coloured state. Multiple lists within a production indicate one of two things:
1) a derived string that could not be merged due to an lr(1) conflict
2) a partially derived string merged into another derived lr state
A partially derived string is indicated by the "merged into" symbol \(\nearrow\) used as a superscript along with the merged-into state number.
**25.** **Ro2_err_hdlr.**
1 Rerrors eog
\(\triangleright\) 1 22 23
2 eog
\(\triangleright\) 1 2
**26.** **Rerrors.**
1 Rerror
\(\triangleright\) 1 25
2 Rerrors Rerror
\(\triangleright\) 1 22 24
## 27 Rerror.
1 nested files exceeded \(\triangleright\) 1 9 \(\triangleright\) 22\({}^{\nearrow}\)9
2 no end-of-code \(\triangleright\) 1 10 \(\triangleright\) 22\({}^{\nearrow}\)10
3 no filename \(\triangleright\) 1 11 \(\triangleright\) 22\({}^{\nearrow}\)11
4 bad filename \(\triangleright\) 1 12 \(\triangleright\) 1 12 \(\triangleright\) 22\({}^{\nearrow}\)12 \(\triangleright\) 22\({}^{\nearrow}\)13
6 bad int-no \(\triangleright\) 1 14 \(\triangleright\) 22\({}^{\nearrow}\)14
7 bad int-no range \(\triangleright\) 1 15 \(\triangleright\) 22\({}^{\nearrow}\)15
8 no int present \(\triangleright\) 1 16 \(\triangleright\) 22\({}^{\nearrow}\)16
9 bad eos \(\triangleright\) 1 17 \(\triangleright\) 22\({}^{\nearrow}\)17
10 bad esc \(\triangleright\) 1 18 \(\triangleright\) 22\({}^{\nearrow}\)18
11 comment-overrun \(\triangleright\) 1 19 \(\triangleright\) 22\({}^{\nearrow}\)19
12 bad char \(\triangleright\) 1 20 \(\triangleright\) 22\({}^{\nearrow}\)20
13 bad univ-seq \(\triangleright\) 1 21 \(\triangleright\) 22\({}^{\nearrow}\)21
14 file-inclusion bad filename \(\triangleright\) 1 4 6 \(\triangleright\) 22\({}^{\nearrow}\)4
15 file-inclusion bad esc \(\triangleright\) 1 4 8 \(\triangleright\) 22\({}^{\nearrow}\)4
16 file-inclusion bad eos \(\triangleright\) 1 4 7 \(\triangleright\) 22\({}^{\nearrow}\)4
17 file-inclusion no filename \(\triangleright\) 1 4 5
**29.** **Lr1 State's Follow sets and reducing lookahead sets.**
Notes on Follow set expressions:
1) The "follow set" for rule uses its literal name and tags its grammar rule rank number as a superscript.
Due to space limitations, part of the follow set information uses the rule's literal name while the follow set expressions refers to the rule's rank number. This \(<\) rule name, rule rank number \(>\) tuple allows you the reader to decifer the expressions. Transitions are represented by S\({}_{x}\)R\({}_{z}\) whereby S is the LR1 state identified by its "x" subscript where other transient calculations occur within the LR1 state network. R indicates the follow set rule with the subscript "z" as its grammar rank number that contributes to the follow set.
The \(\nearrow^{x}\) symbol indicates that a merge into state "x" has taken place. That is, the reduced subrule that depends on this follow set finds its follow set in 2 places: its birthing state that generated the sequence up to the merged into state, and the birthing state that generated the "merged into" state. So the rule's "follow set" calculation must also continue its calculation within the birth state generating the "x merged into" state.
\begin{tabular}{l c c} State: 1 & Follow Set contributors, merges, and transitions \\ \(\leftarrow\) & Follow set Rule & \(\rightarrow\) & follow set symbols contributors \\ Ro2\_err\_hdlr1 & & & \\ Local follow set yield: & & & \\ \(\leftarrow\) & Follow set Rule & \(\rightarrow\) & follow set symbols contributors \\ Rerrors2 & & R\({}_{1\cdot 1\cdot 1}\) R\({}_{2\cdot 2\cdot 1}\) & \\ Local follow set yield: & & & \\ file-inclusion, eog, |+|, nested files exceeded, no end-of-code, no filename, bad filename, bad cmd-opt, & & \\ bad int-no, bad int-no range, no int present, bad eos, bad esc, comment-overrun, bad char, bad univ-seq. & \\ \(\leftarrow\) & Follow set Rule & \(\rightarrow\) & follow set symbols contributors \\ Rerror3 & & R\({}_{2\cdot 1\cdot 1}\)\(\nearrow^{22}\) S\({}_{1}R_{2}\) & \\ Local follow set yield: & & & \\ \end{tabular}
\begin{tabular}{l c c} State: 22 & Follow Set contributors, merges, and transitions \\ \(\leftarrow\) & Follow set Rule & \(\rightarrow\) & follow set symbols contributors \\ Rerror3 & & R\({}_{2\cdot 2\cdot 2}\) S\({}_{1}R_{2}\) & \\ Local follow set yield: & & & \\ \end{tabular}
* 30. **Common Follow sets.**
* 31. **LA set: 1.** eolr.
* 32. **LA set: 2.** file-inclusion, eog, |+|, nested files exceeded, no end-of-code, no filename, bad filename, bad cmd-opt, bad int-no, bad int-no range, no int present, bad eos, bad esc, comment-overrun, bad char, bad univ-seq.
**33. Index.**
R1 --- Ro2_err_hdlr: 25.
R2 --- Rerrors: 26.
R3 --- Rerror: 27.
_Rerror_: 23.
_Rerrors_: 22.
_Ro2_err_hdlr: 21._
|
animgraph | rust | Rust | Module animgraph::character_vec3
===
Blender perspective
Constants
---
* BACK: Blender perspective
* DOWN: Blender perspective
* FORWARD: Blender perspective
* LEFT: Blender perspective
* RIGHT: Blender perspective
* UP: Blender perspective
Struct animgraph::ActiveStates
===
```
pub struct ActiveStates { /* private fields */ }
```
Implementations
---
### impl ActiveStates
#### pub fn get_active(&self) -> Option<StateIndex#### pub fn set_active(&mut self, index: StateIndex)
#### pub fn reset(&mut self)
#### pub fn get_start(&self) -> Option<StateIndex#### pub fn set_start(&mut self, index: StateIndex)
#### pub fn start_transition(&mut self, index: StateIndex)
#### pub fn reset_start(&mut self)
Trait Implementations
---
### impl Clone for ActiveStates
#### fn clone(&self) -> ActiveStates
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> ActiveStates
Returns the “default value” for a type.
Auto Trait Implementations
---
### impl RefUnwindSafe for ActiveStates
### impl Send for ActiveStates
### impl Sync for ActiveStates
### impl Unpin for ActiveStates
### impl UnwindSafe for ActiveStates
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::Alpha
===
```
pub struct Alpha(pub f32);
```
[0,1]
Tuple Fields
---
`0: f32`Implementations
---
### impl Alpha
#### pub fn is_nearly_zero(self) -> bool
#### pub fn is_nearly_one(self) -> bool
#### pub fn lerp<T: Add<Output = T> + Mul<f32, Output = T>>(self, a: T, b: T) -> T
### impl Alpha
#### pub fn inverse(self) -> Self
#### pub fn interpolate(self, a: Alpha, b: Alpha) -> Alpha
Trait Implementations
---
### impl Clone for Alpha
#### fn clone(&self) -> Alpha
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Alpha
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn from(value: Alpha) -> Self
Converts to this type from the input type.### impl FromFloatUnchecked for Alpha
#### fn from_f32(x: f32) -> Self
#### fn from_f64(x: f64) -> Self
#### fn into_f32(self) -> f32
#### fn into_f64(self) -> f64
### impl Mul<Alpha> for Alpha
#### type Output = Alpha
The resulting type after applying the `*` operator.#### fn mul(self, rhs: Self) -> Self::Output
Performs the `*` operation.
#### fn mul_assign(&mut self, rhs: Self)
Performs the `*=` operation.
#### fn cmp(&self, other: &Self) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
#### fn eq(&self, other: &Self) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<Alpha> for Alpha
#### fn partial_cmp(&self, other: &Self) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Eq for Alpha
Auto Trait Implementations
---
### impl RefUnwindSafe for Alpha
### impl Send for Alpha
### impl Sync for Alpha
### impl Unpin for Alpha
### impl UnwindSafe for Alpha
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> IOBuilder for Twhere
T: FromFloatUnchecked,
#### fn as_io(self, name: &str) -> IO
### impl<T> IOSlot<T> for Twhere
T: IOBuilder,
#### fn into_slot(self, name: &str) -> IO
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::AnimationClip
===
```
pub struct AnimationClip {
pub animation: AnimationId,
pub bone_group: BoneGroupId,
pub looping: bool,
pub start: Seconds,
pub duration: Seconds,
}
```
Fields
---
`animation: AnimationId``bone_group: BoneGroupId``looping: bool``start: Seconds``duration: Seconds`Implementations
---
### impl AnimationClip
#### pub const RESOURCE_TYPE: &str = "animation_clip"
#### pub fn init_timer(&self) -> SampleTimer
#### pub const IO_TYPE: IOType = _
Trait Implementations
---
### impl Clone for AnimationClip
#### fn clone(&self) -> AnimationClip
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> AnimationClip
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn resource_type() -> &'static str
#### fn build_content(&self, name: &str) -> Result<ResourceContent### impl Serialize for AnimationClip
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for AnimationClip
### impl Send for AnimationClip
### impl Sync for AnimationClip
### impl Unpin for AnimationClip
### impl UnwindSafe for AnimationClip
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ResourceType for Twhere
T: ResourceSettings + 'static,
#### fn get_resource(&self) -> &(dyn Any + 'static)
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::AnimationId
===
```
#[repr(transparent)]pub struct AnimationId(pub u32);
```
Tuple Fields
---
`0: u32`Trait Implementations
---
### impl Clone for AnimationId
#### fn clone(&self) -> AnimationId
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> AnimationId
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &AnimationId) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for AnimationId
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Eq for AnimationId
### impl StructuralEq for AnimationId
### impl StructuralPartialEq for AnimationId
Auto Trait Implementations
---
### impl RefUnwindSafe for AnimationId
### impl Send for AnimationId
### impl Sync for AnimationId
### impl Unpin for AnimationId
### impl UnwindSafe for AnimationId
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::BlendTree
===
```
pub struct BlendTree { /* private fields */ }
```
Implementations
---
### impl BlendTree
#### pub fn clear(&mut self)
#### pub fn is_empty(&self) -> bool
#### pub fn len(&self) -> usize
#### pub fn with_capacity(len: usize) -> Self
#### pub fn from_vec(tasks: Vec<BlendSample>) -> Self
#### pub fn into_inner(self) -> Vec<BlendSample>
#### pub fn get(&self) -> &[BlendSample]
##### Examples found in repository?
examples/third_person.rs (line 62)
```
fn main() -> anyhow::Result<()> {
let (locomotion_definition, default_locomotion_skeleton, default_locomotion_resources) =
locomotion_graph_example()?;
let mut locomotion = create_instance(
locomotion_definition,
default_locomotion_skeleton,
default_locomotion_resources,
);
let (action_definition, default_action_skeleton, default_action_resources) =
action_graph_example()?;
let mut action = create_instance(
action_definition,
default_action_skeleton,
default_action_resources,
);
// Parameter lookups can be done ahead of time using the definition which each graph has a reference to.
let locomotion_speed = locomotion
.definition()
.get_number_parameter::<f32>("locomotion_speed")
.expect("Valid parameter");
let action_sit = action
.definition()
.get_event_parameter("sit")
.expect("Valid parameter");
// Parameters are specific to a definition
locomotion_speed.set(&mut locomotion, 2.0);
    // Event has multiple states that mimic those of a state in a state machine (entering, entered, exiting, exited)
action_sit.set(&mut action, FlowState::Entered);
let delta_time = 0.1;
let mut context = DefaultRunContext::new(delta_time);
let resources = RESOURCES.lock().expect("Single threaded");
for frame in 1..=5 {
// Multiple graphs can be concatenated where the output of the first is set
// as a reference to the next. It's ultimately up to the next graph if it decides
// to blend with the reference task at all.
context.run(&mut action);
// In this example the second graph looks for an emitted event from previous graphs
        // to decide whether it should blend or not.
context.run_and_append(&mut locomotion);
// The resulting blend tree is all the active animations that could be sampled
        // even if they don't contribute to the final blend, for things like animation events.
// The tree can be evaluated from the last sample to form a trimmed blend stack.
println!("Frame #{frame}:");
println!("- Blend Tree:");
for (index, task) in context.tree.get().iter().enumerate() {
let n = index + 1;
match task {
BlendSample::Animation {
id,
normalized_time,
} => {
println!(
" #{n} Sample {} at t={normalized_time}",
&resources.animations[id.0 as usize].name
);
}
BlendSample::Blend(_, _, a, g) => {
if *g == BoneGroupId::All {
println!(" #{n} Blend a={a}");
} else {
println!(" #{n} Masked blend a={a}");
}
}
BlendSample::Interpolate(_, _, a) => {
println!(" #{n} Interpolate a={a}");
}
}
}
struct BlendStack<'a>(&'a GlobalResources);
impl<'a> BlendTreeVisitor for BlendStack<'a> {
fn visit(&mut self, tree: &BlendTree, sample: &BlendSample) {
match *sample {
BlendSample::Animation {
id,
normalized_time,
} => {
println!(
" Sample {} at t={normalized_time}",
&self.0.animations[id.0 as usize].name
);
}
BlendSample::Interpolate(x, y, a) => {
if a < 1.0 {
tree.visit(self, x);
}
if a > 0.0 {
tree.visit(self, y);
}
if a > 0.0 && a < 1.0 {
println!(" Interpolate a={a}");
}
}
BlendSample::Blend(x, y, a, g) => {
if g == BoneGroupId::All {
if a < 1.0 {
tree.visit(self, x);
}
if a > 0.0 {
tree.visit(self, y);
}
if a > 0.0 && a < 1.0 {
println!(" Interpolate a={a}");
}
} else {
tree.visit(self, x);
tree.visit(self, y);
println!(" Masked Blend a={a}");
}
}
}
}
}
println!("\n- Blend Stack:");
context.tree.visit_root(&mut BlendStack(&resources));
println!("");
}
Ok(())
}
```
#### pub fn get_reference_task(&mut self) -> Option<BlendSampleId>
#### pub fn set_reference_task(&mut self, task: Option<BlendSampleId>)
#### pub fn sample_animation_clip(
&mut self,
animation: AnimationId,
time: Alpha
) -> BlendSampleId
#### pub fn interpolate(
&mut self,
a: BlendSampleId,
b: BlendSampleId,
w: Alpha
) -> BlendSampleId
#### pub fn blend_masked(
&mut self,
a: BlendSampleId,
b: BlendSampleId,
w: Alpha
) -> BlendSampleId
#### pub fn append(
&mut self,
graph: &Graph,
layers: &LayerBuilder
) -> Result<Option<BlendSampleId>>
#### pub fn set(
&mut self,
graph: &Graph,
layers: &LayerBuilder
) -> Result<Option<BlendSampleId>>
#### pub fn apply_mask(
&mut self,
sample: BlendSampleId,
group: BoneGroupId
) -> BlendSampleId
#### pub fn visit<T: BlendTreeVisitor>(&self, visitor: &mut T, sample: BlendSampleId)
##### Examples found in repository?
examples/third_person.rs (line 102)
```
    fn visit(&mut self, tree: &BlendTree, sample: &BlendSample) {
match *sample {
BlendSample::Animation {
id,
normalized_time,
} => {
println!(
" Sample {} at t={normalized_time}",
&self.0.animations[id.0 as usize].name
);
}
BlendSample::Interpolate(x, y, a) => {
if a < 1.0 {
tree.visit(self, x);
}
if a > 0.0 {
tree.visit(self, y);
}
if a > 0.0 && a < 1.0 {
println!(" Interpolate a={a}");
}
}
BlendSample::Blend(x, y, a, g) => {
if g == BoneGroupId::All {
if a < 1.0 {
tree.visit(self, x);
}
if a > 0.0 {
tree.visit(self, y);
}
if a > 0.0 && a < 1.0 {
println!(" Interpolate a={a}");
}
} else {
tree.visit(self, x);
tree.visit(self, y);
println!(" Masked Blend a={a}");
}
}
}
}
```
#### pub fn visit_root<T: BlendTreeVisitor>(&self, visitor: &mut T)
##### Examples found in repository?
examples/third_person.rs (line 135)
(Expanded listing omitted here; it is the same `examples/third_person.rs` `main` function reproduced in full under `BlendTree::get` above.)
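For a smaller, self-contained illustration of the visitor API, the sketch below counts the animation samples reachable from the root. The trait signature and the `BlendSample` variants are taken from the `third_person.rs` listing above; the assumption that both are importable from the crate root is not confirmed by this excerpt.
```
use animgraph::{BlendSample, BlendTree, BlendTreeVisitor};

// Counts every animation sample reachable from the root of the blend tree.
struct CountAnimations(usize);

impl BlendTreeVisitor for CountAnimations {
    fn visit(&mut self, tree: &BlendTree, sample: &BlendSample) {
        match *sample {
            BlendSample::Animation { .. } => self.0 += 1,
            BlendSample::Interpolate(x, y, _) | BlendSample::Blend(x, y, _, _) => {
                // Recurse into both children, as the repository visitor does.
                tree.visit(self, x);
                tree.visit(self, y);
            }
        }
    }
}
// Usage: `context.tree.visit_root(&mut CountAnimations(0));`
```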
Trait Implementations
---
### impl Clone for BlendTree
#### fn clone(&self) -> BlendTree
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn default() -> BlendTree
Returns the “default value” for a type. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for BlendTree
### impl Send for BlendTree
### impl Sync for BlendTree
### impl Unpin for BlendTree
### impl UnwindSafe for BlendTree
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::BoneGroup
===
```
pub struct BoneGroup {
pub group: BoneGroupId,
pub weights: Vec<BoneWeight>,
}
```
Fields
---
`group: BoneGroupId`
`weights: Vec<BoneWeight>`
Implementations
---
### impl BoneGroup
#### pub const RESOURCE_TYPE: &str = "bone_group"
#### pub const IO_TYPE: IOType = _
#### pub fn new<T: AsRef<str>>(id: u16, bones: impl Iterator<Item = T>) -> Self
##### Examples found in repository?
examples/third_person.rs (line 328)
```
    pub fn get_cached(&mut self, serialized: SerializedResource) -> RuntimeResource {
match serialized {
SerializedResource::AnimationClip(name) => {
let looping = name.contains("looping");
let animation =
if let Some(index) = self.animations.iter().position(|x| x.name == name) {
AnimationId(index as _)
} else {
let index = AnimationId(self.animations.len() as _);
self.animations.push(Animation { name });
index
};
RuntimeResource::AnimationClip(AnimationClip {
animation: animation,
bone_group: BoneGroupId::All,
looping,
start: Seconds(0.0),
duration: Seconds(1.0),
})
}
SerializedResource::BoneGroup(mut group) => {
group.sort();
let mut bones = BoneGroup::new(0, group.as_slice().iter());
let res = if let Some(res) =
self.bone_groups.iter().find(|x| x.weights == bones.weights)
{
res.clone()
} else {
bones.group = BoneGroupId::Reference(self.bone_groups.len() as _);
let res = Arc::new(bones);
self.bone_groups.push(res.clone());
res
};
RuntimeResource::BoneGroup(res)
}
SerializedResource::Skeleton(map) => {
let mut skeleton = Skeleton::from_parent_map(&map);
let res = if let Some(res) = self
.skeletons
.iter()
.find(|x| x.bones == skeleton.bones && x.parents == skeleton.parents)
{
res.clone()
} else {
skeleton.id = SkeletonId(self.skeletons.len() as _);
let res = Arc::new(skeleton);
self.skeletons.push(res.clone());
res
};
RuntimeResource::Skeleton(res)
}
}
}
```
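In isolation, the constructor takes a numeric group id and any iterator of values convertible to `&str`; a minimal sketch (the bone names are illustrative):
```
use animgraph::BoneGroup;

// Build a group from bone names; the resulting entries end up in `weights`.
let upper_body = BoneGroup::new(0, ["spine", "neck", "head"].iter());
println!("{} weighted bones", upper_body.weights.len());
```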
Trait Implementations
---
### impl Clone for BoneGroup
#### fn clone(&self) -> BoneGroup
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> BoneGroup
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &BoneGroup) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl ResourceSettings for BoneGroup
#### fn resource_type() -> &'static str
#### fn build_content(&self, name: &str) -> Result<ResourceContent>
### impl Serialize for BoneGroup
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for BoneGroup
### impl Send for BoneGroup
### impl Sync for BoneGroup
### impl Unpin for BoneGroup
### impl UnwindSafe for BoneGroup
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ResourceType for Twhere
T: ResourceSettings + 'static,
#### fn get_resource(&self) -> &(dyn Any + 'static)
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::BoneWeight
===
```
pub struct BoneWeight {
pub bone: Id,
pub weight: f32,
}
```
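It is plain data pairing a bone id with a blend weight. A small illustration, assuming `Id` is the crate's id type seen as `Id::from_str(..)` in the examples (the bone name is arbitrary):
```
use animgraph::{BoneWeight, Id};

// A half-weighted entry for a single bone.
let w = BoneWeight { bone: Id::from_str("spine"), weight: 0.5 };
println!("{w:?}");
```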
Fields
---
`bone: Id`
`weight: f32`
Trait Implementations
---
### impl Clone for BoneWeight
#### fn clone(&self) -> BoneWeight
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> BoneWeight
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &BoneWeight) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for BoneWeight
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for BoneWeight
### impl Send for BoneWeight
### impl Sync for BoneWeight
### impl Unpin for BoneWeight
### impl UnwindSafe for BoneWeight
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::ConstId
===
```
#[repr(transparent)]
pub struct ConstId<const N: u64, const S: u64>(pub u64);
```
Tuple Fields
---
`0: u64`
Implementations
---
### impl<const N: u64, const S: u64> ConstId<N, S>
#### pub const NAMESPACE: [u64; 4] = _
#### pub const SEED: u64 = S
#### pub const fn from_str(value: &str) -> Self
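Because `from_str` is a `const fn`, ids can be computed at compile time. A sketch with arbitrary `N` and `S` values; the crate presumably defines its own aliases of this form (for example the `Id` type used as `Id::from_str(..)` in the examples), but that is not confirmed by this excerpt:
```
use animgraph::ConstId;

// Arbitrary namespace/seed constants, chosen for illustration only.
type DemoId = ConstId<1, 0>;

// Evaluated at compile time because `from_str` is a const fn.
const SIT: DemoId = DemoId::from_str("sit");

// Ids built from the same string compare equal.
assert_eq!(SIT, DemoId::from_str("sit"));
```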
Trait Implementations
---
### impl<const N: u64, const S: u64> Clone for ConstId<N, S>
#### fn clone(&self) -> ConstId<N, S>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Formats the value using the given formatter.
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl<const N: u64, const S: u64> PartialOrd<ConstId<N, S>> for ConstId<N, S>
#### fn partial_cmp(&self, other: &ConstId<N, S>) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
__S: Serializer,
Serialize this value into the given Serde serializer.
---
### impl<const N: u64, const S: u64> RefUnwindSafe for ConstId<N, S>
### impl<const N: u64, const S: u64> Send for ConstId<N, S>
### impl<const N: u64, const S: u64> Sync for ConstId<N, S>
### impl<const N: u64, const S: u64> Unpin for ConstId<N, S>
### impl<const N: u64, const S: u64> UnwindSafe for ConstId<N, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::DefaultRunContext
===
```
pub struct DefaultRunContext {
pub events: FlowEvents,
pub layers: LayerBuilder,
pub tree: BlendTree,
pub delta_time: f64,
}
```
Fields
---
`events: FlowEvents`
`layers: LayerBuilder`
`tree: BlendTree`
`delta_time: f64`
Implementations
---
### impl DefaultRunContext
#### pub fn new(delta_time: f64) -> Self
##### Examples found in repository?
examples/compiler_global.rs (line 79)
```
fn perform_runtime_test(definition: Arc<GraphDefinition>) {
// 3. Create the graph
let mut graph = definition
.clone()
.build_with_empty_skeleton(Arc::new(EmptyResourceProvider));
// 4. Query the graph
let event = definition.get_event_by_name(TEST_EVENT).unwrap();
assert!(event.get(&graph) == FlowState::Exited);
// 5. Run the graph
let mut context = DefaultRunContext::new(1.0);
context.run(&mut graph);
assert!(context.events.emitted.is_empty());
assert!(event.get(&graph) == FlowState::Entered);
// 6. Modify parameters
let a = definition.get_number_parameter::<f32>("a").unwrap();
a.set(&mut graph, 4.0);
context.run(&mut graph);
assert_eq!(&context.events.emitted, &[Id::from_str(TEST_EVENT)]);
}
```
examples/third_person.rs (line 44)
(Expanded listing omitted here; it is the same `examples/third_person.rs` `main` function reproduced in full under `BlendTree::get` above.)
#### pub fn run_without_blend(&mut self, graph: &mut Graph)
#### pub fn run_and_append(&mut self, graph: &mut Graph)
##### Examples found in repository?
examples/third_person.rs (line 54)
(Expanded listing omitted here; it is the same `examples/third_person.rs` `main` function reproduced in full under `BlendTree::get` above.)
#### pub fn run(&mut self, graph: &mut Graph)
##### Examples found in repository?
examples/compiler_global.rs (line 80)
(Expanded listing omitted here; it is the same `perform_runtime_test` function reproduced in full under `DefaultRunContext::new` above.)
examples/third_person.rs (line 50)
(Expanded listing omitted here; it is the same `examples/third_person.rs` `main` function reproduced in full under `BlendTree::get` above.)
#### pub fn clear(&mut self)
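In condensed form, the per-frame flow from the repository examples looks like the sketch below; `action` and `locomotion` stand for `Graph` instances built elsewhere, and the delta time is illustrative.
```
// `action` and `locomotion` are Graph instances created as in the examples above.
let mut context = DefaultRunContext::new(1.0 / 60.0);

// The first graph is evaluated into the context's blend tree ...
context.run(&mut action);
// ... and the second graph may blend against the first's output.
context.run_and_append(&mut locomotion);

// `context.tree` now holds the combined blend tree and `context.events` the emitted events.
for sample in context.tree.get() {
    let _ = sample; // sample animations or react to gameplay events here
}
```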
Trait Implementations
---
### impl Clone for DefaultRunContext
#### fn clone(&self) -> DefaultRunContext
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn default() -> DefaultRunContext
Returns the “default value” for a type. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for DefaultRunContext
### impl Send for DefaultRunContext
### impl Sync for DefaultRunContext
### impl Unpin for DefaultRunContext
### impl UnwindSafe for DefaultRunContext
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::EmptyResourceProvider
===
```
pub struct EmptyResourceProvider;
```
Trait Implementations
---
### impl Clone for EmptyResourceProvider
#### fn clone(&self) -> EmptyResourceProvider
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn initialize(
&self,
_entries: &mut [GraphResourceRef],
_definition: &GraphDefinition
)
#### fn get(&self, _index: GraphResourceRef) -> &dyn Any
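As in the `compiler_global.rs` example above, the provider is wrapped in an `Arc` when instantiating a graph that needs no external resources; `definition` below is assumed to be an `Arc<GraphDefinition>` produced elsewhere by the graph compiler.
```
use std::sync::Arc;
use animgraph::EmptyResourceProvider;

// `definition: Arc<GraphDefinition>` is assumed to come from the graph compiler.
let mut graph = definition
    .clone()
    .build_with_empty_skeleton(Arc::new(EmptyResourceProvider));
// `graph` is now ready to be driven by a DefaultRunContext.
```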
Auto Trait Implementations
---
### impl RefUnwindSafe for EmptyResourceProvider
### impl Send for EmptyResourceProvider
### impl Sync for EmptyResourceProvider
### impl Unpin for EmptyResourceProvider
### impl UnwindSafe for EmptyResourceProvider
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::Graph
===
```
pub struct Graph { /* private fields */ }
```
The persistent data of a `HierarchalStateMachine` evaluated by `FlowGraphRunner`
WIP Notes / Things likely to change if time permits:
* The ideal lifetime model would probably be arena-allocation based.
* Dyn traits might go away in favour of enum dispatch.
Implementations
---
### impl Graph
#### pub fn reset_graph(&mut self)
#### pub fn resources(&self) -> &Arc<dyn GraphResourceProvider>
#### pub fn skeleton(&self) -> &Arc<Skeleton>
#### pub fn transform_by_id(&self, bone: Id) -> Option<&Transform>
#### pub fn set_transforms(
&mut self,
bone_to_root: impl IntoIterator<Item = Transform>,
root_to_world: &Transform
)
#### pub fn get_root_to_world(&self) -> &Transform
#### pub fn pose(&self) -> &[Transform]
#### pub fn get_state(&self, index: StateIndex) -> FlowState
#### pub fn get_event(&self, index: Event) -> FlowState
#### pub fn set_event(&mut self, index: Event, state: FlowState) -> FlowState
#### pub fn is_state_active(&self, index: StateIndex) -> bool
#### pub fn with_timer_mut<T>(
&mut self,
index: IndexType,
f: impl FnOnce(&mut SampleTimer) -> T
) -> Option<T>
#### pub fn with_timer<T>(
&self,
index: IndexType,
f: impl FnOnce(&SampleTimer) -> T
) -> Option<T>
#### pub fn set_events_hook(&mut self, hook: Box<dyn FlowEventsHook>)
#### pub fn try_get_node(&self, index: NodeIndex) -> Option<&GraphNodeEntry>
#### pub fn get_bool(&self, value: GraphBoolean) -> bool
#### pub fn get_variable_boolean(&self, index: VariableIndex) -> bool
#### pub fn set_variable_boolean(&mut self, index: VariableIndex, input: bool)
#### pub fn get_number(&self, value: GraphNumber) -> f64
#### pub fn get_variable_number(&self, index: VariableIndex) -> f32
#### pub fn set_variable_number(&mut self, index: VariableIndex, input: f32)
#### pub fn set_variable_number_array<const N: usize>(
&mut self,
index: VariableIndex,
values: [f32; N]
)
#### pub fn get_variable_number_array<const N: usize>(
&self,
index: VariableIndex
) -> [f32; N]
#### pub fn get_vec3(&self, index: VariableIndex) -> Vec3
#### pub fn definition(&self) -> &Arc<GraphDefinition>
##### Examples found in repository?
examples/third_person.rs (line 29)
(Expanded listing omitted here; it is the same `examples/third_person.rs` `main` function reproduced in full under `BlendTree::get` above.)
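In condensed form, `definition()` is the entry point for resolving parameter handles, which are then applied back to this particular graph instance. A sketch assuming a `graph` built as in the examples; the parameter name is illustrative:
```
// Look the handle up once via the shared definition ...
let speed = graph
    .definition()
    .get_number_parameter::<f32>("locomotion_speed")
    .expect("parameter exists");
// ... then read or write it on this graph instance.
speed.set(&mut graph, 2.0);

// Read-only views of the evaluated pose.
let bone_count = graph.pose().len();
println!("{bone_count} bones in the active skeleton");
```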
#### pub fn iteration(&self) -> u64
#### pub fn get_first_transitioning_state(
&self,
machine: MachineIndex
) -> Option<StateIndex>
#### pub fn get_next_transitioning_state(
&self,
state: StateIndex
) -> Option<StateIndex>
#### pub fn get_machine_state(&self, machine: MachineIndex) -> Option<StateIndex>
#### pub fn get_resource<T: 'static>(&self, index: IndexType) -> Option<&T>
#### pub fn get_state_transition(
&self,
state: StateIndex
) -> Option<&GraphTransitionState>
#### pub fn get_state_transition_mut(
&mut self,
state: StateIndex
) -> Option<&mut GraphTransitionState>
Trait Implementations
---
### impl PoseGraph for Graph
#### fn sample_pose(
&self,
tasks: &mut BlendTree,
index: NodeIndex
) -> Result<Option<BlendSampleId>>
#### fn pose_parent(&self, index: NodeIndex) -> Option<&dyn PoseParent>
Auto Trait Implementations
---
### impl !RefUnwindSafe for Graph
### impl Send for Graph
### impl Sync for Graph
### impl Unpin for Graph
### impl !UnwindSafe for Graph
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::GraphCompiledNode
===
```
pub struct GraphCompiledNode {
pub type_id: IndexType,
pub value: Value,
}
```
Fields
---
`type_id: IndexType`
`value: Value`
Trait Implementations
---
### impl Clone for GraphCompiledNode
#### fn clone(&self) -> GraphCompiledNode
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn default() -> GraphCompiledNode
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for GraphCompiledNode
### impl Send for GraphCompiledNode
### impl Sync for GraphCompiledNode
### impl Unpin for GraphCompiledNode
### impl UnwindSafe for GraphCompiledNode
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::GraphDefinition
===
```
pub struct GraphDefinition { /* private fields */ }
```
Implementations
---
### impl GraphDefinition
#### pub fn get_constant_number(&self, index: ConstantIndex) -> f64
#### pub fn get_nodes(&self, range: Range<IndexType>) -> Option<&[GraphNodeEntry]>
#### pub fn get_node(&self, node_index: NodeIndex) -> Option<&GraphNodeEntry>
#### pub fn get_bool_parameter(&self, name: &str) -> Option<BoolMut>
#### pub fn get_number_parameter<T: FromFloatUnchecked>(
&self,
name: &str
) -> Option<NumberMut<T>>
##### Examples found in repository?
examples/compiler_global.rs (line 86)
(Expanded listing omitted here; it is the same `perform_runtime_test` function reproduced in full under `DefaultRunContext::new` above.)
examples/third_person.rs (line 30)
(Expanded listing omitted here; it is the same `examples/third_person.rs` `main` function reproduced in full under `BlendTree::get` above.)
#### pub fn get_vector_parameter(&self, name: &str) -> Option<VectorMut>
#### pub fn get_timer_parameter(&self, name: &str) -> Option<Timer>
#### pub fn get_event_parameter(&self, name: &str) -> Option<Event>
##### Examples found in repository?
examples/third_person.rs (line 34)
(Same `examples/third_person.rs` example as shown above.)
#### pub fn get_event_by_id(&self, id: Id) -> Option<Event>
#### pub fn get_event_by_name(&self, name: &str) -> Option<Event>
##### Examples found in repository?
examples/compiler_global.rs (line 75)
```
fn perform_runtime_test(definition: Arc<GraphDefinition>) {
// 3. Create the graph
let mut graph = definition
.clone()
.build_with_empty_skeleton(Arc::new(EmptyResourceProvider));
// 4. Query the graph
let event = definition.get_event_by_name(TEST_EVENT).unwrap();
assert!(event.get(&graph) == FlowState::Exited);
// 5. Run the graph
let mut context = DefaultRunContext::new(1.0);
context.run(&mut graph);
assert!(context.events.emitted.is_empty());
assert!(event.get(&graph) == FlowState::Entered);
// 6. Modify parameters
let a = definition.get_number_parameter::<f32>("a").unwrap();
a.set(&mut graph, 4.0);
context.run(&mut graph);
assert_eq!(&context.events.emitted, &[Id::from_str(TEST_EVENT)]);
}
```
#### pub fn resources_entries(&self) -> &[GraphResourceEntry]
#### pub fn resource_types(&self) -> &[String]
#### pub fn reset_parameters(&self, graph: &mut Graph)
#### pub fn build(
self: Arc<GraphDefinition>,
provider: Arc<dyn GraphResourceProvider>,
skeleton: Arc<Skeleton>
) -> Graph
##### Examples found in repository?
examples/third_person.rs (line 247)
```
fn create_instance(
definition: Arc<GraphDefinition>,
skeleton: Option<Arc<Skeleton>>,
resources: Arc<dyn GraphResourceProvider>,
) -> Graph {
if let Some(skeleton) = skeleton {
definition.build(resources.clone(), skeleton)
} else {
definition.build_with_empty_skeleton(resources.clone())
}
}
```
#### pub fn build_with_empty_skeleton(
self: Arc<GraphDefinition>,
provider: Arc<dyn GraphResourceProvider>
) -> Graph
##### Examples found in repository?
examples/third_person.rs (line 249)
(Same `create_instance` example as shown above.)
examples/compiler_global.rs (line 72)
(Same `perform_runtime_test` example as shown above.)
### impl GraphDefinition
#### pub fn desc(&self) -> &HierarchicalStateMachine
#### pub fn total_events(&self) -> usize
#### pub fn total_timers(&self) -> usize
#### pub fn total_boolean_variables(&self) -> usize
#### pub fn total_number_variables(&self) -> usize
#### pub fn get_event_id(&self, index: Event) -> Id
#### pub fn get_machine_states(&self, machine: MachineIndex) -> Option<Range<IndexType>>
#### pub fn get_branch(&self, branch_target: IndexType) -> Option<HierarchicalBranch>
#### pub fn get_state_branch(&self, state_index: StateIndex) -> Option<(IndexType, HierarchicalBranch)>
#### pub fn get_state_global_condition(&self, state_index: StateIndex) -> Option<GraphCondition>
#### pub fn get_transition_condition(&self, transition: IndexType) -> Option<GraphCondition>
#### pub fn get_transition(&self, transition: IndexType) -> Option<&HierarchicalTransition>
#### pub fn get_state_transitions(&self, state_index: StateIndex) -> Option<Range<IndexType>>
#### pub fn max_subconditions(&self) -> usize
#### pub fn get_subcondition(&self, index: IndexType) -> Option<GraphCondition>
Auto Trait Implementations
---
### impl !RefUnwindSafe for GraphDefinition
### impl Send for GraphDefinition
### impl Sync for GraphDefinition
### impl Unpin for GraphDefinition
### impl !UnwindSafe for GraphDefinition
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::GraphDefinitionBuilder
===
```
pub struct GraphDefinitionBuilder {
pub parameters: HashMap<String, GraphParameterEntry>,
pub desc: HierarchicalStateMachine,
pub compiled_nodes: Vec<GraphCompiledNode>,
pub node_types: Vec<String>,
pub resources: Vec<GraphResourceEntry>,
pub resource_types: Vec<String>,
pub constants: Vec<f64>,
pub events: Vec<Id>,
pub booleans: IndexType,
pub numbers: IndexType,
pub timers: IndexType,
}
```
Fields
---
`parameters: HashMap<String, GraphParameterEntry>`, `desc: HierarchicalStateMachine`, `compiled_nodes: Vec<GraphCompiledNode>`, `node_types: Vec<String>`, `resources: Vec<GraphResourceEntry>`, `resource_types: Vec<String>`, `constants: Vec<f64>`, `events: Vec<Id>`, `booleans: IndexType`, `numbers: IndexType`, `timers: IndexType`
Implementations
---
### impl GraphDefinitionBuilder
#### pub fn metrics(&self) -> GraphMetrics
#### pub fn build(self, registry: &dyn GraphNodeProvider) -> Result<Arc<GraphDefinition>, GraphBuilderError>
##### Examples found in repository?
examples/compiler_global.rs (line 56)
```
fn deserialize_definition(
serialized: serde_json::Value,
) -> anyhow::Result<Arc<GraphDefinition>> {
let deserialized: GraphDefinitionBuilder = serde_json::from_value(serialized)?;
// Validates the graph and deserializes the immutable nodes
let mut provider = GraphNodeRegistry::default();
add_default_constructors(&mut provider);
Ok(deserialized.build(&provider)?)
}
```
examples/third_person.rs (line 167)
```
fn locomotion_graph_example() -> anyhow::Result<(
Arc<GraphDefinition>,
Option<Arc<Skeleton>>,
Arc<dyn GraphResourceProvider>,
)> {
// The constructed data model can be serialized and reused
let locomotion_graph = create_locomotion_graph();
let serialized_locmotion_graph = serde_json::to_string_pretty(&locomotion_graph)?;
std::fs::write("locomotion.ag", serialized_locmotion_graph)?;
// The specific nodes allowed is decided by the compilation registry
let mut registry = NodeCompilationRegistry::default();
add_default_nodes(&mut registry);
// The resulting compilation contains additional debug information but only the builder is needed for the runtime
let locomotion_compilation = GraphDefinitionCompilation::compile(&locomotion_graph, ®istry)?;
let serialize_locomotion_definition =
serde_json::to_string_pretty(&locomotion_compilation.builder)?;
std::fs::write("locomotion.agc", serialize_locomotion_definition)?;
// The specific nodes instantiated at runtime is decided by the graph node registry
let mut graph_nodes = GraphNodeRegistry::default();
add_default_constructors(&mut graph_nodes);
// The builder validates the definition and instantiates the immutable graph nodes which processes the graph data
let locomotion_definition = locomotion_compilation.builder.build(&graph_nodes)?;
// Resources are currently application defined. SimpleResourceProvider and the implementation in this example is illustrative of possible use.
let default_locomotion_resources = Arc::new(SimpleResourceProvider::new_with_map(
&locomotion_definition,
RuntimeResource::Empty,
get_cached_resource,
)?);
// Lookup default skeleton to use since there are no actual resources to probe
let default_skeleton = default_locomotion_resources
.resources
.iter()
.find_map(|x| match x {
RuntimeResource::Skeleton(skeleton) => Some(skeleton.clone()),
_ => None,
});
Ok((
locomotion_definition,
default_skeleton,
default_locomotion_resources,
))
}
fn action_graph_example() -> anyhow::Result<(
Arc<GraphDefinition>,
Option<Arc<Skeleton>>,
Arc<dyn GraphResourceProvider>,
)> {
// The constructed data model can be serialized and reused
let action_graph = create_action_graph();
let serialized_action_graph = serde_json::to_string_pretty(&action_graph)?;
std::fs::write("action.ag", serialized_action_graph)?;
// The specific nodes allowed is decided by the compilation registry
let mut registry = NodeCompilationRegistry::default();
add_default_nodes(&mut registry);
// The resulting compilation contains additional debug information but only the builder is needed for the runtime
let action_compilation = GraphDefinitionCompilation::compile(&action_graph, ®istry)?;
let serialize_action_definition = serde_json::to_string_pretty(&action_compilation.builder)?;
std::fs::write("action.agc", serialize_action_definition)?;
// The specific nodes instantiated at runtime is decided by the graph node registry
let mut graph_nodes = GraphNodeRegistry::default();
add_default_constructors(&mut graph_nodes);
// The builder validates the definition and instantiates the immutable graph nodes which processes the graph data
let action_definition = action_compilation.builder.build(&graph_nodes)?;
// Resources are currently application defined. SimpleResourceProvider and the implementation in this example is illustrative of possible use.
let default_action_resources = Arc::new(SimpleResourceProvider::new_with_map(
&action_definition,
RuntimeResource::Empty,
get_cached_resource,
)?);
// Lookup default skeleton to use since there are no actual resources to probe
let default_skeleton = default_action_resources
.resources
.iter()
.find_map(|x| match x {
RuntimeResource::Skeleton(skeleton) => Some(skeleton.clone()),
_ => None,
});
Ok((
action_definition,
default_skeleton,
default_action_resources,
))
}
```
Trait Implementations
---
### impl Clone for GraphDefinitionBuilder
#### fn clone(&self) -> GraphDefinitionBuilder
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn default() -> GraphDefinitionBuilder
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for GraphDefinitionBuilder
### impl Send for GraphDefinitionBuilder
### impl Sync for GraphDefinitionBuilder
### impl Unpin for GraphDefinitionBuilder
### impl UnwindSafe for GraphDefinitionBuilder
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::GraphMetrics
===
```
pub struct GraphMetrics {
pub nodes: usize,
pub constants: usize,
pub numbers: usize,
pub bools: usize,
pub events: usize,
pub timers: usize,
pub machines: usize,
pub states: usize,
pub subconditions: usize,
pub expressions: usize,
pub resource_variables: Vec<IndexType>,
pub resource_types: Vec<String>,
pub current_node: NodeIndex,
}
```
Fields
---
`nodes: usize`, `constants: usize`, `numbers: usize`, `bools: usize`, `events: usize`, `timers: usize`, `machines: usize`, `states: usize`, `subconditions: usize`, `expressions: usize`, `resource_variables: Vec<IndexType>`, `resource_types: Vec<String>`, `current_node: NodeIndex`
Implementations
---
### impl GraphMetrics
#### pub fn validate_children(&self, range: &Range<IndexType>, context: &str) -> Result<(), GraphReferenceError>
#### pub fn validate_timer(&self, timer: &Timer, context: &str) -> Result<(), GraphReferenceError>
#### pub fn validate_resource<T: ResourceSettings>(&self, resource: &Resource<T>, context: &str) -> Result<(), GraphReferenceError>
#### pub fn validate_event(&self, event: &Event, context: &str) -> Result<(), GraphReferenceError>
#### pub fn validate_number(&self, number: GraphNumber, context: &str) -> Result<(), GraphReferenceError>
#### pub fn validate_number_ref<T: FromFloatUnchecked>(&self, number: &NumberRef<T>, context: &str) -> Result<(), GraphReferenceError>
#### pub fn validate_number_mut<T: FromFloatUnchecked>(&self, number: &NumberMut<T>, context: &str) -> Result<(), GraphReferenceError>
#### pub fn validate_bool_mut(&self, value: &BoolMut, context: &str) -> Result<(), GraphReferenceError>
#### pub fn validate_bool(&self, value: GraphBoolean, context: &str) -> Result<(), GraphReferenceError>
#### pub fn validate_condition_range(&self, a: IndexType, b: IndexType, context: &str) -> Result<(), GraphReferenceError>
#### pub fn validate_node_index(&self, node: Option<NodeIndex>, context: &str) -> Result<(), GraphReferenceError>
#### pub fn validate_vector_ref(&self, value: &VectorRef, context: &str) -> Result<(), GraphReferenceError>
#### pub fn validate_vector_mut(&self, value: &VectorMut, context: &str) -> Result<(), GraphReferenceError>
Trait Implementations
---
### impl Clone for GraphMetrics
#### fn clone(&self) -> GraphMetrics
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for GraphMetrics
### impl Send for GraphMetrics
### impl Sync for GraphMetrics
### impl Unpin for GraphMetrics
### impl UnwindSafe for GraphMetrics
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::GraphNodeConstructorMarker
===
```
pub struct GraphNodeConstructorMarker<'a, T: GraphNodeConstructor + 'a>(_);
```
Trait Implementations
---
### impl<'a, T: GraphNodeConstructor + 'a> GraphNodeBuilder for GraphNodeConstructorMarker<'a, T>
#### fn create(&self, name: &str, node: GraphCompiledNode, metrics: &GraphMetrics) -> Result<GraphNodeEntry>
Auto Trait Implementations
---
### impl<'a, T> RefUnwindSafe for GraphNodeConstructorMarker<'a, T>where
T: RefUnwindSafe,
### impl<'a, T> Send for GraphNodeConstructorMarker<'a, T>where
T: Sync,
### impl<'a, T> Sync for GraphNodeConstructorMarker<'a, T>where
T: Sync,
### impl<'a, T> Unpin for GraphNodeConstructorMarker<'a, T>
### impl<'a, T> UnwindSafe for GraphNodeConstructorMarker<'a, T>where
T: RefUnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::GraphNodeRegistry
===
```
pub struct GraphNodeRegistry {
pub builders: HashMap<String, Arc<dyn GraphNodeBuilder>>,
}
```
Fields
---
`builders: HashMap<String, Arc<dyn GraphNodeBuilder>>`
Implementations
---
### impl GraphNodeRegistry
#### pub fn add(&mut self, name: &str, provider: Arc<dyn GraphNodeBuilder>)
#### pub fn register<T: GraphNodeConstructor + 'static>(&mut self)
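A minimal sketch (not taken from the crate's own docs) of how a registry is typically populated before building a definition, following the pattern in the repository examples above; the `add_default_constructors` helper is the one used in those examples, and its import path is not shown in this listing.
```
use animgraph::GraphNodeRegistry;
// A minimal sketch: populate a node registry before building a GraphDefinition.
fn build_registry() -> GraphNodeRegistry {
    let mut graph_nodes = GraphNodeRegistry::default();
    // The repository examples call a helper for the built-in nodes; its import
    // path is not shown in this listing, so the call is left commented out:
    // add_default_constructors(&mut graph_nodes);
    // Custom node types implementing GraphNodeConstructor can be added with
    // `graph_nodes.register::<MyNode>()` (hypothetical node type).
    graph_nodes
}
```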
Trait Implementations
---
### impl Clone for GraphNodeRegistry
#### fn clone(&self) -> GraphNodeRegistry
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn default() -> GraphNodeRegistry
Returns the “default value” for a type.
#### fn create_nodes(&self, node_types: Vec<String>, compiled_nodes: Vec<GraphCompiledNode>, metrics: &mut GraphMetrics) -> Result<Vec<GraphNodeEntry>>
Auto Trait Implementations
---
### impl !RefUnwindSafe for GraphNodeRegistry
### impl !Send for GraphNodeRegistry
### impl !Sync for GraphNodeRegistry
### impl Unpin for GraphNodeRegistry
### impl !UnwindSafe for GraphNodeRegistry
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::GraphResourceEntry
===
```
pub struct GraphResourceEntry {
pub resource_type: IndexType,
pub initial: Option<Value>,
}
```
Fields
---
`resource_type: IndexType`, `initial: Option<Value>`
Trait Implementations
---
### impl Clone for GraphResourceEntry
#### fn clone(&self) -> GraphResourceEntry
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for GraphResourceEntry
### impl Send for GraphResourceEntry
### impl Sync for GraphResourceEntry
### impl Unpin for GraphResourceEntry
### impl UnwindSafe for GraphResourceEntry
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::GraphResourceRef
===
```
#[repr(transparent)]pub struct GraphResourceRef(pub u32);
```
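A minimal sketch (not from the crate docs): since the wrapper is a transparent `u32`, resource references can be constructed and compared directly.
```
use animgraph::GraphResourceRef;
fn main() {
    // GraphResourceRef is a #[repr(transparent)] wrapper around a u32 index.
    let first = GraphResourceRef(0);
    assert_eq!(first.0, 0);
}
```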
Tuple Fields
---
`0: u32`
Trait Implementations
---
### impl Clone for GraphResourceRef
#### fn clone(&self) -> GraphResourceRef
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> GraphResourceRef
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &GraphResourceRef) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for GraphResourceRef
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Eq for GraphResourceRef
### impl StructuralEq for GraphResourceRef
### impl StructuralPartialEq for GraphResourceRef
Auto Trait Implementations
---
### impl RefUnwindSafe for GraphResourceRef
### impl Send for GraphResourceRef
### impl Sync for GraphResourceRef
### impl Unpin for GraphResourceRef
### impl UnwindSafe for GraphResourceRef
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::GraphTime
===
```
pub struct GraphTime {
pub delta_time: Seconds,
pub time_scale: f32,
}
```
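A minimal sketch (not from the crate docs): `GraphTime` implements `Default`, so it can be built and adjusted field by field. Treating `time_scale` as a playback-rate multiplier is an assumption, since the exact semantics are not spelled out in this listing.
```
use animgraph::GraphTime;
fn main() {
    let mut time = GraphTime::default();
    // time_scale is a plain f32; 0.5 is assumed to halve the effective delta time.
    time.time_scale = 0.5;
    let _ = time;
}
```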
Fields
---
`delta_time: Seconds`, `time_scale: f32`
Trait Implementations
---
### impl Clone for GraphTime
#### fn clone(&self) -> GraphTime
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Self
Returns the “default value” for a type. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for GraphTime
### impl Send for GraphTime
### impl Sync for GraphTime
### impl Unpin for GraphTime
### impl UnwindSafe for GraphTime
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::GraphTransitionState
===
```
pub struct GraphTransitionState {
pub node: Option<NodeIndex>,
pub next: Option<StateIndex>,
pub transition_index: Option<IndexType>,
pub blend: Alpha,
pub completion: Alpha,
pub duration: Seconds,
}
```
Fields
---
`node: Option<NodeIndex>`, `next: Option<StateIndex>`, `transition_index: Option<IndexType>`, `blend: Alpha`, `completion: Alpha`, `duration: Seconds`
Implementations
---
### impl GraphTransitionState
#### pub fn get_state(&self) -> FlowState
#### pub fn init(
&mut self,
blend: Alpha,
completion: Alpha,
duration: Seconds,
transition_index: IndexType
)
#### pub fn complete(&mut self)
#### pub fn update(&mut self, delta_time: Seconds)
#### pub fn is_complete(&self) -> bool
#### pub fn reset(&mut self)
Trait Implementations
---
### impl Clone for GraphTransitionState
#### fn clone(&self) -> GraphTransitionState
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Self
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &GraphTransitionState) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for GraphTransitionState
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for GraphTransitionState
### impl Send for GraphTransitionState
### impl Sync for GraphTransitionState
### impl Unpin for GraphTransitionState
### impl UnwindSafe for GraphTransitionState
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::Id
===
```
#[repr(transparent)]pub struct Id(pub ConstId<0, 0x2614_9574_d5fa_c1fe>);
```
Tuple Fields
---
`0: ConstId<0, 0x2614_9574_d5fa_c1fe>`
Implementations
---
### impl Id
#### pub const EMPTY: Self = _
#### pub const fn from_bits(bits: u64) -> Self
#### pub const fn to_bits(self) -> u64
#### pub const fn is_empty(self) -> bool
#### pub const fn from_str(value: &str) -> Self
##### Examples found in repository?
examples/compiler_global.rs (line 90)
(Same `perform_runtime_test` example as shown above; it ends by comparing the emitted events against `Id::from_str(TEST_EVENT)`.)
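Since `from_str`, `from_bits`, `to_bits` and `is_empty` are all `const fn`, identifiers can also be built at compile time. A minimal sketch (not from the crate docs):
```
use animgraph::Id;
fn main() {
    // Identifiers can be created in const context and round-tripped through bits.
    const SIT: Id = Id::from_str("sit");
    assert!(!SIT.is_empty());
    assert_eq!(Id::from_bits(SIT.to_bits()), SIT);
}
```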
Trait Implementations
---
### impl Clone for Id
#### fn clone(&self) -> Id
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Id
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn cmp(&self, other: &Id) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
#### fn eq(&self, other: &Id) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<Id> for Id
#### fn partial_cmp(&self, other: &Id) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Eq for Id
### impl StructuralEq for Id
### impl StructuralPartialEq for Id
Auto Trait Implementations
---
### impl RefUnwindSafe for Id
### impl Send for Id
### impl Sync for Id
### impl Unpin for Id
### impl UnwindSafe for Id
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::Interpreter
===
```
pub struct Interpreter<'a> {
pub visitor: GraphVisitor<'a>,
pub layers: &'a mut LayerBuilder,
pub context: InterpreterContext,
}
```
Fields
---
`visitor: GraphVisitor<'a>`, `layers: &'a mut LayerBuilder`, `context: InterpreterContext`
Implementations
---
### impl<'a> Interpreter<'a>
#### pub fn run(
graph: &'a mut Graph,
definition: &'a GraphDefinition,
events: &'a mut FlowEvents,
layers: &'a mut LayerBuilder,
dt: f64
) -> Self
Auto Trait Implementations
---
### impl<'a> !RefUnwindSafe for Interpreter<'a>
### impl<'a> Send for Interpreter<'a>
### impl<'a> Sync for Interpreter<'a>
### impl<'a> Unpin for Interpreter<'a>
### impl<'a> !UnwindSafe for Interpreter<'a>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::InterpreterContext
===
```
pub struct InterpreterContext {
pub machine: MachineIndex,
pub state: StateIndex,
pub layer_weight: LayerWeight,
pub transition_weight: Alpha,
pub time: GraphTime,
}
```
Fields
---
`machine: MachineIndex`, `state: StateIndex`, `layer_weight: LayerWeight`, `transition_weight: Alpha`, `time: GraphTime`
Implementations
---
### impl InterpreterContext
#### pub fn delta_time(&self) -> Seconds
#### pub fn apply_time_scale(&mut self, factor: f32)
### impl InterpreterContext
#### pub fn new_active() -> Self
#### pub const fn new_inactive() -> Self
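A minimal sketch (not from the crate docs) of the constructors and time helpers listed above; treating `apply_time_scale` as a multiplier on the context's delta time is an assumption.
```
use animgraph::InterpreterContext;
fn main() {
    // Build an "active" context and scale its time step for a subtree.
    let mut ctx = InterpreterContext::new_active();
    ctx.apply_time_scale(0.5);
    let _dt = ctx.delta_time();
}
```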
Trait Implementations
---
### impl Clone for InterpreterContext
#### fn clone(&self) -> InterpreterContext
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Self
Returns the “default value” for a type. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for InterpreterContext
### impl Send for InterpreterContext
### impl Sync for InterpreterContext
### impl Unpin for InterpreterContext
### impl UnwindSafe for InterpreterContext
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::InvalidLayerError
===
```
pub struct InvalidLayerError;
```
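A minimal sketch (not from the crate docs): the unit struct implements `Error`, so it can be returned from fallible layer operations directly. The `check_layer` helper here is purely illustrative.
```
use animgraph::InvalidLayerError;
fn check_layer(valid: bool) -> Result<(), InvalidLayerError> {
    if valid { Ok(()) } else { Err(InvalidLayerError) }
}
fn main() {
    assert!(check_layer(true).is_ok());
    assert!(check_layer(false).is_err());
}
```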
Trait Implementations
---
### impl Clone for InvalidLayerError
#### fn clone(&self) -> InvalidLayerError
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬 This is a nightly-only experimental API (`error_generic_member_access`). Provides type-based access to context intended for error reports.
#### fn eq(&self, other: &InvalidLayerError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Copy for InvalidLayerError
### impl Eq for InvalidLayerError
### impl StructuralEq for InvalidLayerError
### impl StructuralPartialEq for InvalidLayerError
Auto Trait Implementations
---
### impl RefUnwindSafe for InvalidLayerError
### impl Send for InvalidLayerError
### impl Sync for InvalidLayerError
### impl Unpin for InvalidLayerError
### impl UnwindSafe for InvalidLayerError
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<E> Provider for Ewhere
E: Error + ?Sized,
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`provide_any`)Data providers should implement this method to provide *all* values they are able to provide by using `demand`.
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::Layer
===
```
pub struct Layer {
pub context: InterpreterContext,
pub layer_type: LayerType,
pub node: Option<NodeIndex>,
pub first_child: Option<usize>,
pub last_child: Option<usize>,
pub next_sibling: Option<usize>,
}
```
Fields
---
`context: InterpreterContext`, `layer_type: LayerType`, `node: Option<NodeIndex>`, `first_child: Option<usize>`, `last_child: Option<usize>`, `next_sibling: Option<usize>`
Implementations
---
### impl Layer
#### pub fn layer_weight(&self) -> Alpha
#### pub fn transition_weight(&self) -> Alpha
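A minimal sketch (not from the crate docs): `Layer` implements `Default`, and the two helpers above read the weights stored in the embedded `InterpreterContext`.
```
use animgraph::Layer;
fn main() {
    let layer = Layer::default();
    let _layer_weight = layer.layer_weight();
    let _transition_weight = layer.transition_weight();
}
```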
Trait Implementations
---
### impl Clone for Layer
#### fn clone(&self) -> Layer
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Layer
Returns the “default value” for a type. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for Layer
### impl Send for Layer
### impl Sync for Layer
### impl Unpin for Layer
### impl UnwindSafe for Layer
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::LayerBuilder
===
```
pub struct LayerBuilder {
pub layers: Vec<Layer>,
pub layer_pointer: usize,
}
```
Fields
---
`layers: Vec<Layer>`, `layer_pointer: usize`
Implementations
---
### impl LayerBuilder
#### pub fn clear(&mut self)
#### pub fn push_layer(
&mut self,
context: &InterpreterContext,
layer_type: LayerType,
node: Option<NodeIndex>
) -> usize
#### pub fn apply_layer_weight(&mut self, alpha: Alpha)
#### pub fn reset_layer_children(&mut self, context: &InterpreterContext)
#### pub fn pop_layer(&mut self, context: &InterpreterContext, parent: usize)
#### pub fn blend_layer<T: BlendContext>(&self, context: &mut T, layer: &Layer) -> Result<Option<T::Task>>
#### pub fn blend<T: BlendContext>(&self, context: &mut T) -> Result<Option<T::Task>>
Trait Implementations
---
### impl Clone for LayerBuilder
#### fn clone(&self) -> LayerBuilder
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn default() -> LayerBuilder
Returns the “default value” for a type. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for LayerBuilder
### impl Send for LayerBuilder
### impl Sync for LayerBuilder
### impl Unpin for LayerBuilder
### impl UnwindSafe for LayerBuilder
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::LayerWeight
===
```
pub struct LayerWeight(pub Alpha);
```
Tuple Fields
---
`0: Alpha`
Implementations
---
### impl LayerWeight
#### pub const ZERO: Self = _
#### pub const ONE: Self = _
#### pub fn is_nearly_zero(&self) -> bool
#### pub fn is_nearly_one(&self) -> bool
#### pub fn interpolate(a: Self, b: Self, t: Alpha) -> Self
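A minimal sketch (not from the crate docs), assuming `ZERO` and `ONE` carry the obvious weights:
```
use animgraph::LayerWeight;
fn main() {
    assert!(LayerWeight::ZERO.is_nearly_zero());
    assert!(LayerWeight::ONE.is_nearly_one());
    // interpolate(a, b, t) blends between two weights; constructing the Alpha
    // parameter `t` is not covered in this listing, so it is omitted here.
}
```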
Trait Implementations
---
### impl Clone for LayerWeight
#### fn clone(&self) -> LayerWeight
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn eq(&self, other: &LayerWeight) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Copy for LayerWeight
### impl Eq for LayerWeight
### impl StructuralEq for LayerWeight
### impl StructuralPartialEq for LayerWeight
Auto Trait Implementations
---
### impl RefUnwindSafe for LayerWeight
### impl Send for LayerWeight
### impl Sync for LayerWeight
### impl Unpin for LayerWeight
### impl UnwindSafe for LayerWeight
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::PoseBlendContext
===
```
pub struct PoseBlendContext<'a>(pub &'a Graph, pub &'a mut BlendTree);
```
Tuple Fields
---
`0: &'a Graph``1: &'a mut BlendTree`Trait Implementations
---
### impl<'a> BlendContext for PoseBlendContext<'a>
#### type Task = BlendSampleId
#### fn sample(&mut self, node: NodeIndex) -> Result<Option<Self::Task>>
#### fn apply_parent(
&mut self,
parent: NodeIndex,
child_task: Self::Task
) -> Result<Self::Task>
#### fn blend_layers(
&mut self,
a: Self::Task,
b: Self::Task,
w: Alpha
) -> Result<Self::Task>
#### fn interpolate(
&mut self,
a: Self::Task,
b: Self::Task,
w: Alpha
) -> Result<Self::Task>
Auto Trait Implementations
---
### impl<'a> !RefUnwindSafe for PoseBlendContext<'a>
### impl<'a> Send for PoseBlendContext<'a>
### impl<'a> Sync for PoseBlendContext<'a>
### impl<'a> Unpin for PoseBlendContext<'a>
### impl<'a> !UnwindSafe for PoseBlendContext<'a>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::SampleRange
===
```
#[repr(C)]pub struct SampleRange(pub Alpha, pub Alpha);
```
Tuple Fields
---
`0: Alpha``1: Alpha`Implementations
---
### impl SampleRange
#### pub fn inverse(self) -> Self
#### pub fn ordered(&self) -> (Alpha, Alpha)
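A small sketch of how a `SampleRange` is typically obtained and inspected. It assumes ranges come out of `SampleTime::samples()` rather than being constructed directly, and the comment on `inverse` is an assumption based on the method name.
```
use animgraph::SampleTime;

fn main() {
    // The second element is the wrapped-around remainder of a looping step, if any.
    let (range, wrapped) = SampleTime::ZERO.samples();
    let (_start, _end) = range.ordered(); // endpoints in ascending order
    let _reversed = range.inverse();      // presumably the same span with endpoints swapped
    let _ = wrapped;
}
```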
Trait Implementations
---
### impl Clone for SampleRange
#### fn clone(&self) -> SampleRange
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> SampleRange
Returns the “default value” for a type.
Auto Trait Implementations
---
### impl RefUnwindSafe for SampleRange
### impl Send for SampleRange
### impl Sync for SampleRange
### impl Unpin for SampleRange
### impl UnwindSafe for SampleRange
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::SampleTime
===
```
pub struct SampleTime {
pub t0: Alpha,
pub t1: Alpha,
}
```
Fields
---
`t0: Alpha``t1: Alpha`Implementations
---
### impl SampleTime
#### pub const ZERO: SampleTime = _
#### pub fn is_looping(&self) -> bool
#### pub fn samples(&self) -> (SampleRange, Option<SampleRange>)
#### pub fn time(&self) -> Alpha
#### pub fn step_clamped(&mut self, x: f32)
#### pub fn step_looping(&mut self, x: f32)
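A sketch of driving a `SampleTime` forward, based only on the signatures above; the comments contrasting clamping with wrapping are assumptions suggested by the method names.
```
use animgraph::SampleTime;

fn main() {
    let mut time = SampleTime::ZERO;
    time.step_clamped(0.25); // advance, presumably stopping at the end of the clip
    time.step_looping(0.9);  // advance, presumably wrapping past the end
    if time.is_looping() {
        // When the step wrapped, samples() reports the range before the wrap
        // and the remainder after it.
        let (_before, _after) = time.samples();
    }
    let _progress = time.time(); // current position as an Alpha
}
```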
Trait Implementations
---
### impl Clone for SampleTime
#### fn clone(&self) -> SampleTime
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> SampleTime
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl From<Alpha> for SampleTime
#### fn from(value: Alpha) -> Self
Converts to this type from the input type.
### impl PartialEq<SampleTime> for SampleTime
#### fn eq(&self, other: &SampleTime) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for SampleTime
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for SampleTime
Auto Trait Implementations
---
### impl RefUnwindSafe for SampleTime
### impl Send for SampleTime
### impl Sync for SampleTime
### impl Unpin for SampleTime
### impl UnwindSafe for SampleTime
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::SampleTimer
===
```
pub struct SampleTimer {
pub sample_time: SampleTime,
pub duration: Seconds,
pub looping: Option<NonZeroU32>,
}
```
Fields
---
`sample_time: SampleTime``duration: Seconds``looping: Option<NonZeroU32>`Implementations
---
### impl SampleTimer
#### pub const FIXED: SampleTimer = _
#### pub fn new(start: Seconds, duration: Seconds, looping: bool) -> SampleTimer
#### pub fn is_fixed(&self) -> bool
#### pub fn is_looping(&self) -> bool
#### pub fn set_looping(&mut self, value: bool)
#### pub fn tick(&mut self, delta_time: Seconds)
#### pub fn time(&self) -> Alpha
#### pub fn remaining(&self) -> Seconds
#### pub fn elapsed(&self) -> Seconds
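A sketch of a looping timer built from the constructor and methods above. Only the listed signatures are used; the comments about normalized progress and the effect of `set_looping(false)` are assumptions.
```
use animgraph::{SampleTimer, Seconds};

fn main() {
    // A two second looping timer starting at zero.
    let mut timer = SampleTimer::new(Seconds(0.0), Seconds(2.0), true);
    timer.tick(Seconds(0.5));
    let _progress = timer.time();       // position within the clip as an Alpha
    let _remaining = timer.remaining(); // Seconds left in the current pass
    let _elapsed = timer.elapsed();     // Seconds since the start
    timer.set_looping(false);
    assert!(!timer.is_looping());
}
```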
Trait Implementations
---
### impl Clone for SampleTimer
#### fn clone(&self) -> SampleTimer
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> SampleTimer
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn as_io(self, name: &str) -> IO
### impl PartialEq<SampleTimer> for SampleTimer
#### fn eq(&self, other: &SampleTimer) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for SampleTimer
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for SampleTimer
Auto Trait Implementations
---
### impl RefUnwindSafe for SampleTimer
### impl Send for SampleTimer
### impl Sync for SampleTimer
### impl Unpin for SampleTimer
### impl UnwindSafe for SampleTimer
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> IOSlot<T> for Twhere
T: IOBuilder,
#### fn into_slot(self, name: &str) -> IO
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::Seconds
===
```
pub struct Seconds(pub f32);
```
Tuple Fields
---
`0: f32`Implementations
---
### impl Seconds
#### pub fn normalized_offset_looping(&self, start: Seconds) -> Alpha
#### pub fn is_nearly_zero(&self) -> bool
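A short sketch exercising the arithmetic and ordering traits listed below for `Seconds`; it only uses operations confirmed by the trait implementations on this page.
```
use animgraph::Seconds;

fn main() {
    let a = Seconds(1.5);
    let b = Seconds::from(0.5_f32); // From<f32> is implemented
    assert!(b < a);                 // PartialOrd/Ord
    let total = a + b;              // Add
    let scaled = total * 2.0;       // Mul<f32>
    assert!(!scaled.is_nearly_zero());
}
```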
Trait Implementations
---
### impl Add<Seconds> for Seconds
#### type Output = Seconds
The resulting type after applying the `+` operator.
#### fn add(self, rhs: Self) -> Self::Output
Performs the `+` operation.
#### fn add_assign(&mut self, rhs: Self)
Performs the `+=` operation.
#### fn clone(&self) -> Seconds
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Seconds
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn from(value: Seconds) -> Self
Converts to this type from the input type.
### impl From<f32> for Seconds
#### fn from(value: f32) -> Self
Converts to this type from the input type.
### impl FromFloatUnchecked for Seconds
#### fn from_f32(x: f32) -> Self
#### fn from_f64(x: f64) -> Self
#### fn into_f32(self) -> f32
#### fn into_f64(self) -> f64
### impl Mul<f32> for Seconds
#### type Output = Seconds
The resulting type after applying the `*` operator.
#### fn mul(self, rhs: f32) -> Self::Output
Performs the `*` operation.
#### fn mul_assign(&mut self, rhs: f32)
Performs the `*=` operation.
### impl Ord for Seconds
#### fn cmp(&self, other: &Self) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
### impl PartialEq<Seconds> for Seconds
#### fn eq(&self, other: &Seconds) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<Seconds> for Seconds
#### fn partial_cmp(&self, other: &Self) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl Serialize for Seconds
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Eq for Seconds
### impl StructuralPartialEq for Seconds
Auto Trait Implementations
---
### impl RefUnwindSafe for Seconds
### impl Send for Seconds
### impl Sync for Seconds
### impl Unpin for Seconds
### impl UnwindSafe for Seconds
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> IOBuilder for Twhere
T: FromFloatUnchecked,
#### fn as_io(self, name: &str) -> IO
### impl<T> IOSlot<T> for Twhere
T: IOBuilder,
#### fn into_slot(self, name: &str) -> IO
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::SimpleResourceProvider
===
```
pub struct SimpleResourceProvider<T> {
pub resources: Vec<T>,
pub entries: Vec<GraphResourceRef>,
}
```
Fields
---
`resources: Vec<T>``entries: Vec<GraphResourceRef>`Implementations
---
### impl<T> SimpleResourceProvider<T#### pub fn new_with_map(
definition: &GraphDefinition,
empty_value: T,
map: impl Fn(&str, Value) -> Result<T>
) -> Result<Self, SimpleResourceProviderError>
##### Examples found in repository
examples/third_person.rs (lines 170-174)
```
fn locomotion_graph_example() -> anyhow::Result<(
Arc<GraphDefinition>,
Option<Arc<Skeleton>>,
Arc<dyn GraphResourceProvider>,
)> {
// The constructed data model can be serialized and reused
let locomotion_graph = create_locomotion_graph();
let serialized_locmotion_graph = serde_json::to_string_pretty(&locomotion_graph)?;
std::fs::write("locomotion.ag", serialized_locmotion_graph)?;
// The specific nodes allowed is decided by the compilation registry
let mut registry = NodeCompilationRegistry::default();
add_default_nodes(&mut registry);
// The resulting compilation contains additional debug information but only the builder is needed for the runtime
let locomotion_compilation = GraphDefinitionCompilation::compile(&locomotion_graph, ®istry)?;
let serialize_locomotion_definition =
serde_json::to_string_pretty(&locomotion_compilation.builder)?;
std::fs::write("locomotion.agc", serialize_locomotion_definition)?;
// The specific nodes instantiated at runtime is decided by the graph node registry
let mut graph_nodes = GraphNodeRegistry::default();
add_default_constructors(&mut graph_nodes);
// The builder validates the definition and instantiates the immutable graph nodes which processes the graph data
let locomotion_definition = locomotion_compilation.builder.build(&graph_nodes)?;
// Resources are currently application defined. SimpleResourceProvider and the implementation in this example is illustrative of possible use.
let default_locomotion_resources = Arc::new(SimpleResourceProvider::new_with_map(
&locomotion_definition,
RuntimeResource::Empty,
get_cached_resource,
)?);
// Lookup default skeleton to use since there are no actual resources to probe
let default_skeleton = default_locomotion_resources
.resources
.iter()
.find_map(|x| match x {
RuntimeResource::Skeleton(skeleton) => Some(skeleton.clone()),
_ => None,
});
Ok((
locomotion_definition,
default_skeleton,
default_locomotion_resources,
))
}
fn action_graph_example() -> anyhow::Result<(
Arc<GraphDefinition>,
Option<Arc<Skeleton>>,
Arc<dyn GraphResourceProvider>,
)> {
// The constructed data model can be serialized and reused
let action_graph = create_action_graph();
let serialized_locmotion_graph = serde_json::to_string_pretty(&action_graph)?;
std::fs::write("action.ag", serialized_locmotion_graph)?;
// The specific nodes allowed is decided by the compilation registry
let mut registry = NodeCompilationRegistry::default();
add_default_nodes(&mut registry);
// The resulting compilation contains additional debug information but only the builder is needed for the runtime
let action_compilation = GraphDefinitionCompilation::compile(&action_graph, ®istry)?;
let serialize_action_definition = serde_json::to_string_pretty(&action_compilation.builder)?;
std::fs::write("action.agc", serialize_action_definition)?;
// The specific nodes instantiated at runtime is decided by the graph node registry
let mut graph_nodes = GraphNodeRegistry::default();
add_default_constructors(&mut graph_nodes);
// The builder validates the definition and instantiates the immutable graph nodes which processes the graph data
let action_definition = action_compilation.builder.build(&graph_nodes)?;
// Resources are currently application defined. SimpleResourceProvider and the implementation in this example is illustrative of possible use.
let default_action_resources = Arc::new(SimpleResourceProvider::new_with_map(
&action_definition,
RuntimeResource::Empty,
get_cached_resource,
)?);
// Lookup default skeleton to use since there are no actual resources to probe
let default_skeleton = default_action_resources
.resources
.iter()
.find_map(|x| match x {
RuntimeResource::Skeleton(skeleton) => Some(skeleton.clone()),
_ => None,
});
Ok((
action_definition,
default_skeleton,
default_action_resources,
))
}
```
Trait Implementations
---
### impl<T: Clone> Clone for SimpleResourceProvider<T>
#### fn clone(&self) -> SimpleResourceProvider<T>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Formats the value using the given formatter.
&self,
entries: &mut [GraphResourceRef],
_definition: &GraphDefinition
)
#### fn get(&self, index: GraphResourceRef) -> &dyn Any
Auto Trait Implementations
---
### impl<T> RefUnwindSafe for SimpleResourceProvider<T>where
T: RefUnwindSafe,
### impl<T> Send for SimpleResourceProvider<T>where
T: Send,
### impl<T> Sync for SimpleResourceProvider<T>where
T: Sync,
### impl<T> Unpin for SimpleResourceProvider<T>where
T: Unpin,
### impl<T> UnwindSafe for SimpleResourceProvider<T>where
T: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Struct animgraph::Skeleton
===
```
pub struct Skeleton {
pub id: SkeletonId,
pub bones: Vec<Id>,
pub parents: Vec<Option<u16>>,
}
```
Fields
---
`id: SkeletonId``bones: Vec<Id>``parents: Vec<Option<u16>>`Implementations
---
### impl Skeleton
#### pub const RESOURCE_TYPE: &str = "skeleton"
#### pub const IO_TYPE: IOType = _
#### pub fn transform_by_id<'a>(
&self,
pose: &'a [Transform],
id: Id
) -> Option<&'a Transform>
#### pub fn from_parent_map<T: AsRef<str> + Ord>(bones: &BTreeMap<T, T>) -> Self
Produces a skeleton with topologically sorted bones; bones whose parent is empty or not present in the map are treated as roots, and any cyclic clusters are ignored.
##### Examples found in repository
examples/third_person.rs (line 343)
```
pub fn get_cached(&mut self, serialized: SerializedResource) -> RuntimeResource {
match serialized {
SerializedResource::AnimationClip(name) => {
let looping = name.contains("looping");
let animation =
if let Some(index) = self.animations.iter().position(|x| x.name == name) {
AnimationId(index as _)
} else {
let index = AnimationId(self.animations.len() as _);
self.animations.push(Animation { name });
index
};
RuntimeResource::AnimationClip(AnimationClip {
animation: animation,
bone_group: BoneGroupId::All,
looping,
start: Seconds(0.0),
duration: Seconds(1.0),
})
}
SerializedResource::BoneGroup(mut group) => {
group.sort();
let mut bones = BoneGroup::new(0, group.as_slice().iter());
let res = if let Some(res) =
self.bone_groups.iter().find(|x| x.weights == bones.weights)
{
res.clone()
} else {
bones.group = BoneGroupId::Reference(self.bone_groups.len() as _);
let res = Arc::new(bones);
self.bone_groups.push(res.clone());
res
};
RuntimeResource::BoneGroup(res)
}
SerializedResource::Skeleton(map) => {
let mut skeleton = Skeleton::from_parent_map(&map);
let res = if let Some(res) = self
.skeletons
.iter()
.find(|x| x.bones == skeleton.bones && x.parents == skeleton.parents)
{
res.clone()
} else {
skeleton.id = SkeletonId(self.skeletons.len() as _);
let res = Arc::new(skeleton);
self.skeletons.push(res.clone());
res
};
RuntimeResource::Skeleton(res)
}
}
}
```
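In addition to the repository example above, here is a minimal, hypothetical sketch of `from_parent_map`. It assumes the map keys are bone names and the values are their parents, with roots given an empty parent as described above; the bone names are made up for illustration.
```
use std::collections::BTreeMap;
use animgraph::Skeleton;

fn main() {
    // Hypothetical three-bone chain: each entry maps a bone to its parent.
    let mut bones = BTreeMap::new();
    bones.insert("hips", "");      // empty parent => treated as a root
    bones.insert("spine", "hips");
    bones.insert("head", "spine");
    let skeleton = Skeleton::from_parent_map(&bones);
    // The resulting bone list is topologically sorted, parents before children.
    let _ = skeleton.bones;
}
```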
Trait Implementations
---
### impl Clone for Skeleton
#### fn clone(&self) -> Skeleton
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Skeleton
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn resource_type() -> &'static str
#### fn build_content(&self, name: &str) -> Result<ResourceContent>
### impl Serialize for Skeleton
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for Skeleton
### impl Send for Skeleton
### impl Sync for Skeleton
### impl Unpin for Skeleton
### impl UnwindSafe for Skeleton
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ResourceType for Twhere
T: ResourceSettings + 'static,
#### fn get_resource(&self) -> &(dyn Any + 'static)
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::SkeletonId
===
```
#[repr(transparent)]pub struct SkeletonId(pub u32);
```
Tuple Fields
---
`0: u32`Trait Implementations
---
### impl Clone for SkeletonId
#### fn clone(&self) -> SkeletonId
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> SkeletonId
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &SkeletonId) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for SkeletonId
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Eq for SkeletonId
### impl StructuralEq for SkeletonId
### impl StructuralPartialEq for SkeletonId
Auto Trait Implementations
---
### impl RefUnwindSafe for SkeletonId
### impl Send for SkeletonId
### impl Sync for SkeletonId
### impl Unpin for SkeletonId
### impl UnwindSafe for SkeletonId
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct animgraph::Transform
===
```
pub struct Transform {
pub translation: Vec3A,
pub rotation: Quat,
pub scale: Vec3A,
}
```
Fields
---
`translation: Vec3A``rotation: Quat``scale: Vec3A`Implementations
---
### impl Transform
#### pub const IDENTITY: Self = _
#### pub fn character_up(&self) -> Vec3A
Blender perspective
#### pub fn character_right(&self) -> Vec3A
Blender perspective
#### pub fn character_forward(&self) -> Vec3A
Blender perspective
#### pub fn inverse(&self) -> Transform
#### pub fn transform_point3a(&self, point: Vec3A) -> Vec3A
#### pub fn transform_vector3a(&self, vector: Vec3A) -> Vec3A
#### pub fn transform_point3(&self, point: Vec3) -> Vec3
#### pub fn transform_vector3(&self, vector: Vec3) -> Vec3
#### pub fn lerp(&self, rhs: &Transform, s: f32) -> Self
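A sketch of composing and interpolating transforms with the methods above. It assumes the math types referenced in the fields (`Vec3`, `Vec3A`, `Quat`) are the glam types; that dependency and the field access on `translation` are assumptions, not confirmed by this page.
```
use animgraph::Transform;
// Assumption: the vector/quaternion types come from glam.
use glam::Vec3;

fn main() {
    let a = Transform::IDENTITY;
    let mut b = Transform::IDENTITY;
    b.translation.x = 2.0;         // fields are public
    let halfway = a.lerp(&b, 0.5); // interpolate between the two transforms
    let _p = halfway.transform_point3(Vec3::ZERO);
    let _combined = halfway.clone() * Transform::IDENTITY; // Mul<Transform> is implemented
}
```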
Trait Implementations
---
### impl Clone for Transform
#### fn clone(&self) -> Transform
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Self
Returns the “default value” for a type.
### impl From<Affine3A> for Transform
#### fn from(value: Affine3A) -> Self
Converts to this type from the input type.
### impl From<Mat4> for Transform
#### fn from(value: Mat4) -> Self
Converts to this type from the input type.
### impl Mul<Transform> for Transform
#### type Output = Transform
The resulting type after applying the `*` operator.
#### fn mul(self, rhs: Transform) -> Self::Output
Performs the `*` operation.
#### fn mul_assign(&mut self, rhs: Transform)
Performs the `*=` operation.
#### fn eq(&self, other: &Transform) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl StructuralPartialEq for Transform
Auto Trait Implementations
---
### impl RefUnwindSafe for Transform
### impl Send for Transform
### impl Sync for Transform
### impl Unpin for Transform
### impl UnwindSafe for Transform
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum animgraph::BlendSample
===
```
pub enum BlendSample {
Animation {
id: AnimationId,
normalized_time: f32,
},
Blend(BlendSampleId, BlendSampleId, f32, BoneGroupId),
Interpolate(BlendSampleId, BlendSampleId, f32),
}
```
Variants
---
### Animation
#### Fields
`id: AnimationId``normalized_time: f32`
### Blend(BlendSampleId, BlendSampleId, f32, BoneGroupId)
### Interpolate(BlendSampleId, BlendSampleId, f32)
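A sketch of constructing the variants directly. It assumes `AnimationId` is the public tuple struct used in the repository example earlier on this page; the concrete indices and weights are placeholders.
```
use animgraph::{AnimationId, BlendSample, BlendSampleId, BoneGroupId};

fn main() {
    // A single clip sampled a quarter of the way through.
    let clip = BlendSample::Animation { id: AnimationId(0), normalized_time: 0.25 };
    // A masked blend of the reference pose with the first task, weighted 50/50.
    let blend = BlendSample::Blend(
        BlendSampleId::Reference,
        BlendSampleId::Task(0),
        0.5,
        BoneGroupId::All,
    );
    let _ = (clip, blend);
}
```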
Trait Implementations
---
### impl Clone for BlendSample
#### fn clone(&self) -> BlendSample
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn eq(&self, other: &Self) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Auto Trait Implementations
---
### impl RefUnwindSafe for BlendSample
### impl Send for BlendSample
### impl Sync for BlendSample
### impl Unpin for BlendSample
### impl UnwindSafe for BlendSample
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum animgraph::BlendSampleId
===
```
pub enum BlendSampleId {
Reference,
Task(u16),
}
```
Variants
---
### Reference
### Task(u16)
Trait Implementations
---
### impl Clone for BlendSampleId
#### fn clone(&self) -> BlendSampleId
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> BlendSampleId
Returns the “default value” for a type.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &BlendSampleId) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Copy for BlendSampleId
### impl Eq for BlendSampleId
### impl StructuralEq for BlendSampleId
### impl StructuralPartialEq for BlendSampleId
Auto Trait Implementations
---
### impl RefUnwindSafe for BlendSampleId
### impl Send for BlendSampleId
### impl Sync for BlendSampleId
### impl Unpin for BlendSampleId
### impl UnwindSafe for BlendSampleId
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum animgraph::BoneGroupId
===
```
pub enum BoneGroupId {
All,
Reference(u16),
}
```
Variants
---
### All
### Reference(u16)
Trait Implementations
---
### impl Clone for BoneGroupId
#### fn clone(&self) -> BoneGroupId
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> BoneGroupId
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &BoneGroupId) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for BoneGroupId
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Eq for BoneGroupId
### impl StructuralEq for BoneGroupId
### impl StructuralPartialEq for BoneGroupId
Auto Trait Implementations
---
### impl RefUnwindSafe for BoneGroupId
### impl Send for BoneGroupId
### impl Sync for BoneGroupId
### impl Unpin for BoneGroupId
### impl UnwindSafe for BoneGroupId
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Enum animgraph::ConditionExpression
===
```
pub enum ConditionExpression {
Never,
Always,
UnaryTrue(GraphBoolean),
UnaryFalse(GraphBoolean),
Equal(GraphBoolean, GraphBoolean),
NotEqual(GraphBoolean, GraphBoolean),
Like(GraphNumber, GraphNumber),
NotLike(GraphNumber, GraphNumber),
Contains(NumberRange, GraphNumber),
NotContains(NumberRange, GraphNumber),
Ordering(Ordering, GraphNumber, GraphNumber),
NotOrdering(Ordering, GraphNumber, GraphNumber),
AllOf(IndexType, IndexType, bool),
NoneOf(IndexType, IndexType, bool),
AnyTrue(IndexType, IndexType, bool),
AnyFalse(IndexType, IndexType, bool),
ExclusiveOr(IndexType, IndexType, bool),
ExclusiveNot(IndexType, IndexType, bool),
}
```
Variants
---
### Never
### Always
### UnaryTrue(GraphBoolean)
### UnaryFalse(GraphBoolean)
### Equal(GraphBoolean, GraphBoolean)
### NotEqual(GraphBoolean, GraphBoolean)
### Like(GraphNumber, GraphNumber)
### NotLike(GraphNumber, GraphNumber)
### Contains(NumberRange, GraphNumber)
### NotContains(NumberRange, GraphNumber)
### Ordering(Ordering, GraphNumber, GraphNumber)
### NotOrdering(Ordering, GraphNumber, GraphNumber)
### AllOf(IndexType, IndexType, bool)
### NoneOf(IndexType, IndexType, bool)
### AnyTrue(IndexType, IndexType, bool)
### AnyFalse(IndexType, IndexType, bool)
### ExclusiveOr(IndexType, IndexType, bool)
### ExclusiveNot(IndexType, IndexType, bool)
Implementations
---
### impl ConditionExpression
#### pub const fn not(self) -> Self
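A short sketch of building an expression from the variants above and negating it with `not`. `GraphBoolean::Always` is taken from the `GraphBoolean` enum documented later on this page; the comment on `not` is an assumption based on the name.
```
use animgraph::{ConditionExpression, GraphBoolean};

fn main() {
    let always = ConditionExpression::UnaryTrue(GraphBoolean::Always);
    let negated = always.not(); // presumably flips the expression to its logical negation
    let _ = negated;
}
```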
Trait Implementations
---
### impl Clone for ConditionExpression
#### fn clone(&self) -> ConditionExpression
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> ConditionExpression
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for ConditionExpression
### impl Send for ConditionExpression
### impl Sync for ConditionExpression
### impl Unpin for ConditionExpression
### impl UnwindSafe for ConditionExpression
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Enum animgraph::FlowState
===
```
pub enum FlowState {
Exited,
Entering,
Entered,
Exiting,
}
```
Variants
---
### Exited
### Entering
### Entered
### Exiting
Implementations
---
### impl FlowState
#### pub fn reset(&mut self)
#### pub fn is_active(&self) -> bool
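A minimal sketch of the two helper methods; the comments on which variants count as active and on what `reset` does are assumptions inferred from the names, not confirmed by this page.
```
use animgraph::FlowState;

fn main() {
    let mut state = FlowState::Entering;
    let _was_active = state.is_active(); // presumably true for Entering/Entered
    state.reset();                       // presumably returns the state to its inactive default
}
```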
Trait Implementations
---
### impl Clone for FlowState
#### fn clone(&self) -> FlowState
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> FlowState
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl FromStr for FlowState
#### type Err = ()
The associated error which can be returned from parsing.
#### fn from_str(s: &str) -> Result<Self, Self::Err>
Parses a string `s` to return a value of this type.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &FlowState) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for FlowState
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Eq for FlowState
### impl StructuralEq for FlowState
### impl StructuralPartialEq for FlowState
Auto Trait Implementations
---
### impl RefUnwindSafe for FlowState
### impl Send for FlowState
### impl Sync for FlowState
### impl Unpin for FlowState
### impl UnwindSafe for FlowState
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Enum animgraph::FlowStatus
===
```
pub enum FlowStatus {
Initialized,
Updating,
Transitioning,
Interrupted,
Deactivated,
}
```
Variants
---
### Initialized
### Updating
### Transitioning
### Interrupted
### Deactivated
Trait Implementations
---
### impl Clone for FlowStatus
#### fn clone(&self) -> FlowStatus
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &FlowStatus) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for FlowStatus
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Eq for FlowStatus
### impl StructuralEq for FlowStatus
### impl StructuralPartialEq for FlowStatus
Auto Trait Implementations
---
### impl RefUnwindSafe for FlowStatus
### impl Send for FlowStatus
### impl Sync for FlowStatus
### impl Unpin for FlowStatus
### impl UnwindSafe for FlowStatus
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Enum animgraph::GraphBoolean
===
```
pub enum GraphBoolean {
Never,
Always,
Variable(VariableIndex),
Condition(IndexType),
QueryMachineActive(MachineIndex),
QueryMachineState(FlowState, MachineIndex),
QueryStateActive(StateIndex),
QueryState(FlowState, StateIndex),
QueryEventActive(Event),
QueryEvent(FlowState, Event),
}
```
Variants
---
### Never
### Always
### Variable(VariableIndex)
### Condition(IndexType)
### QueryMachineActive(MachineIndex)
### QueryMachineState(FlowState, MachineIndex)
### QueryStateActive(StateIndex)
### QueryState(FlowState, StateIndex)
### QueryEventActive(Event)
### QueryEvent(FlowState, Event)
Trait Implementations
---
### impl Clone for GraphBoolean
#### fn clone(&self) -> GraphBoolean
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> GraphBoolean
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &GraphBoolean) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for GraphBoolean
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Eq for GraphBoolean
### impl StructuralEq for GraphBoolean
### impl StructuralPartialEq for GraphBoolean
Auto Trait Implementations
---
### impl RefUnwindSafe for GraphBoolean
### impl Send for GraphBoolean
### impl Sync for GraphBoolean
### impl Unpin for GraphBoolean
### impl UnwindSafe for GraphBoolean
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for T where
T: for<'de> Deserialize<'de>,
Enum animgraph::GraphBuilderError
===
```
pub enum GraphBuilderError {
ExpectedNodes(usize, usize),
ExpectedConstants(usize, usize),
ExpectedTransitions(usize, usize),
ExpectedResources(usize, usize),
InitializationFailed(Error),
NodeInitializationFailed(String, Error),
MissingNodeBuilder(String),
StateMachineValidationFailed(StateMachineValidationError),
ParameterReferenceError(GraphReferenceError),
ResourceTypeIndexOutOfRange,
}
```
Variants
---
### ExpectedNodes(usize, usize)
### ExpectedConstants(usize, usize)
### ExpectedTransitions(usize, usize)
### ExpectedResources(usize, usize)
### InitializationFailed(Error)
### NodeInitializationFailed(String, Error)
### MissingNodeBuilder(String)
### StateMachineValidationFailed(StateMachineValidationError)
### ParameterReferenceError(GraphReferenceError)
### ResourceTypeIndexOutOfRange
Trait Implementations
---
### impl Debug for GraphBuilderError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`)
Provides type based access to context intended for error reports.
#### fn from(source: Error) -> Self
Converts to this type from the input type.
### impl From<GraphBuilderError> for CompileError
#### fn from(source: GraphBuilderError) -> Self
Converts to this type from the input type.
### impl From<StateMachineValidationError> for GraphBuilderError
#### fn from(source: StateMachineValidationError) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for GraphBuilderError
### impl Send for GraphBuilderError
### impl Sync for GraphBuilderError
### impl Unpin for GraphBuilderError
### impl !UnwindSafe for GraphBuilderError
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<E> Provider for Ewhere
E: Error + ?Sized,
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`provide_any`)Data providers should implement this method to provide *all* values they are able to provide by using `demand`.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum animgraph::GraphCondition
===
```
pub enum GraphCondition {
Expression(ConditionExpression),
DebugBreak(ConditionExpression),
}
```
Variants
---
### Expression(ConditionExpression)
### DebugBreak(ConditionExpression)
Implementations
---
### impl GraphCondition
#### pub const fn has_debug_break(&self) -> bool
#### pub fn expression(&self) -> &ConditionExpression
#### pub const fn not(self) -> Self
#### pub const fn debug_break(self) -> Self
#### pub const fn new(expression: ConditionExpression) -> Self
#### pub const fn always() -> Self
#### pub const fn never() -> Self
#### pub const fn all_of(range: Range<IndexType>) -> Self
#### pub const fn none_of(range: Range<IndexType>) -> Self
#### pub const fn exlusive_or(range: Range<IndexType>) -> Self
#### pub const fn any_true(range: Range<IndexType>) -> Self
#### pub const fn any_false(range: Range<IndexType>) -> Self
#### pub const fn like(a: GraphNumber, b: GraphNumber) -> Self
#### pub const fn contains_exclusive(
range: (GraphNumber, GraphNumber),
x: GraphNumber
) -> Self
#### pub const fn contains_inclusive(
range: (GraphNumber, GraphNumber),
x: GraphNumber
) -> Self
#### pub const fn not_like(a: GraphNumber, b: GraphNumber) -> Self
#### pub const fn strictly_less(a: GraphNumber, b: GraphNumber) -> Self
#### pub const fn greater_or_equal(a: GraphNumber, b: GraphNumber) -> Self
#### pub const fn strictly_greater(a: GraphNumber, b: GraphNumber) -> Self
#### pub const fn less_or_equal(a: GraphNumber, b: GraphNumber) -> Self
#### pub const fn unary_true(a: GraphBoolean) -> Self
#### pub const fn unary_false(a: GraphBoolean) -> Self
#### pub const fn equal(a: GraphBoolean, b: GraphBoolean) -> Self
#### pub const fn not_equal(a: GraphBoolean, b: GraphBoolean) -> Self
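A small, hedged sketch of composing conditions with the const constructors listed above. Only construction is shown; the evaluation semantics belong to the graph runtime.
```
use animgraph::GraphCondition;
fn main() {
// Constant conditions from the const constructors listed above.
let yes = GraphCondition::always();
let no = GraphCondition::never();
// `not` wraps the underlying expression; `debug_break` marks the
// condition so that evaluating it can trap in a debugger.
let inverted = yes.not();
let trapped = no.debug_break();
println!("inverted traps: {}", inverted.has_debug_break());
println!("trapped traps: {}", trapped.has_debug_break());
}
```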
Trait Implementations
---
### impl Clone for GraphCondition
#### fn clone(&self) -> GraphCondition
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for GraphCondition
### impl Send for GraphCondition
### impl Sync for GraphCondition
### impl Unpin for GraphCondition
### impl UnwindSafe for GraphCondition
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Enum animgraph::GraphDebugBreak
===
```
pub enum GraphDebugBreak {
Condition {
condition_index: IndexType,
},
}
```
Variants
---
### Condition
#### Fields
`condition_index: IndexType`
Trait Implementations
---
### impl Clone for GraphDebugBreak
#### fn clone(&self) -> GraphDebugBreak
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &GraphDebugBreak) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Eq for GraphDebugBreak
### impl StructuralEq for GraphDebugBreak
### impl StructuralPartialEq for GraphDebugBreak
Auto Trait Implementations
---
### impl RefUnwindSafe for GraphDebugBreak
### impl Send for GraphDebugBreak
### impl Sync for GraphDebugBreak
### impl Unpin for GraphDebugBreak
### impl UnwindSafe for GraphDebugBreak
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum animgraph::GraphExpression
===
```
pub enum GraphExpression {
Expression(GraphNumberExpression),
DebugBreak(GraphNumberExpression),
}
```
Variants
---
### Expression(GraphNumberExpression)
### DebugBreak(GraphNumberExpression)
Implementations
---
### impl GraphExpression
#### pub const fn has_debug_break(&self) -> bool
#### pub fn expression(&self) -> &GraphNumberExpression
Trait Implementations
---
### impl Clone for GraphExpression
#### fn clone(&self) -> GraphExpression
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for GraphExpression
### impl Send for GraphExpression
### impl Sync for GraphExpression
### impl Unpin for GraphExpression
### impl UnwindSafe for GraphExpression
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Enum animgraph::GraphNodeEntry
===
```
pub enum GraphNodeEntry {
Process(Box<dyn GraphNode>),
Pose(Box<dyn PoseNode>),
PoseParent(Box<dyn PoseParent>),
}
```
Variants
---
### Process(Box<dyn GraphNode>)
### Pose(Box<dyn PoseNode>)
### PoseParent(Box<dyn PoseParent>)
Implementations
---
### impl GraphNodeEntry
#### pub fn visit_node(&self, visitor: &mut GraphVisitor<'_>)
#### pub fn as_pose(&self) -> Option<&dyn PoseNode>
#### pub fn as_pose_parent(&self) -> Option<&dyn PoseParent>
Auto Trait Implementations
---
### impl !RefUnwindSafe for GraphNodeEntry
### impl Send for GraphNodeEntry
### impl Sync for GraphNodeEntry
### impl Unpin for GraphNodeEntry
### impl !UnwindSafe for GraphNodeEntry
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum animgraph::GraphNumber
===
```
pub enum GraphNumber {
Zero,
One,
Iteration,
Projection(Projection, VariableIndex),
Constant(ConstantIndex),
Variable(VariableIndex),
Expression(IndexType),
}
```
Variants
---
### Zero
### One
### Iteration
### Projection(Projection, VariableIndex)
### Constant(ConstantIndex)
### Variable(VariableIndex)
### Expression(IndexType)
Trait Implementations
---
### impl Clone for GraphNumber
#### fn clone(&self) -> GraphNumber
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> GraphNumber
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &GraphNumber) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for GraphNumber
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Eq for GraphNumber
### impl StructuralEq for GraphNumber
### impl StructuralPartialEq for GraphNumber
Auto Trait Implementations
---
### impl RefUnwindSafe for GraphNumber
### impl Send for GraphNumber
### impl Sync for GraphNumber
### impl Unpin for GraphNumber
### impl UnwindSafe for GraphNumber
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Enum animgraph::GraphNumberExpression
===
```
pub enum GraphNumberExpression {
Binary(NumberOperation, GraphNumber, GraphNumber),
}
```
Variants
---
### Binary(NumberOperation, GraphNumber, GraphNumber)
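A brief illustrative sketch of building such an expression from the documented variants (`GraphNumber::Zero`/`One` and `NumberOperation::Add` are documented elsewhere on this page; evaluation itself is performed by the graph runtime):
```
use animgraph::{GraphNumber, GraphNumberExpression, NumberOperation};
fn main() {
// A binary expression describing "0 + 1"; it is plain data and is
// only evaluated later by the graph runtime.
let expr = GraphNumberExpression::Binary(
NumberOperation::Add,
GraphNumber::Zero,
GraphNumber::One,
);
if let GraphNumberExpression::Binary(op, a, b) = &expr {
println!("{:?} applied to {:?} and {:?}", op, a, b);
}
}
```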
Trait Implementations
---
### impl Clone for GraphNumberExpression
#### fn clone(&self) -> GraphNumberExpression
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for GraphNumberExpression
### impl Send for GraphNumberExpression
### impl Sync for GraphNumberExpression
### impl Unpin for GraphNumberExpression
### impl UnwindSafe for GraphNumberExpression
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Enum animgraph::GraphParameterEntry
===
```
pub enum GraphParameterEntry {
Boolean(IndexType, bool),
Number(IndexType, f64),
Vector(IndexType, [f64; 3]),
Timer(IndexType),
Event(IndexType),
}
```
Variants
---
### Boolean(IndexType, bool)
### Number(IndexType, f64)
### Vector(IndexType, [f64; 3])
### Timer(IndexType)
### Event(IndexType)
Trait Implementations
---
### impl Clone for GraphParameterEntry
#### fn clone(&self) -> GraphParameterEntry
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for GraphParameterEntry
### impl Send for GraphParameterEntry
### impl Sync for GraphParameterEntry
### impl Unpin for GraphParameterEntry
### impl UnwindSafe for GraphParameterEntry
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Enum animgraph::GraphReferenceError
===
```
pub enum GraphReferenceError {
InvalidConstant(String),
InvalidNumber(String),
InvalidNumberExpression(String),
InvalidVector(String),
InvalidBool(String),
InvalidNode(String),
InvalidExpressionList(String),
MissingSubExpressions(String),
MissingMachineReference(String),
MissingStateReference(String),
MissingEventReference(String),
MissingTimerReference(String),
MissingResourceReference(String),
InvalidResourceReferenceType(String),
BackReferencingChildren(String),
MissingChildReferences(String),
}
```
Variants
---
### InvalidConstant(String)
### InvalidNumber(String)
### InvalidNumberExpression(String)
### InvalidVector(String)
### InvalidBool(String)
### InvalidNode(String)
### InvalidExpressionList(String)
### MissingSubExpressions(String)
### MissingMachineReference(String)
### MissingStateReference(String)
### MissingEventReference(String)
### MissingTimerReference(String)
### MissingResourceReference(String)
### InvalidResourceReferenceType(String)
### BackReferencingChildren(String)
### MissingChildReferences(String)
Trait Implementations
---
### impl Debug for GraphReferenceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`)
Provides type based access to context intended for error reports.
#### fn from(source: GraphReferenceError) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl RefUnwindSafe for GraphReferenceError
### impl Send for GraphReferenceError
### impl Sync for GraphReferenceError
### impl Unpin for GraphReferenceError
### impl UnwindSafe for GraphReferenceError
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<E> Provider for Ewhere
E: Error + ?Sized,
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`provide_any`)Data providers should implement this method to provide *all* values they are able to provide by using `demand`.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum animgraph::LayerType
===
```
pub enum LayerType {
Endpoint,
List,
StateMachine,
}
```
Variants
---
### Endpoint
### List
### StateMachine
Trait Implementations
---
### impl Clone for LayerType
#### fn clone(&self) -> LayerType
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> LayerType
Returns the “default value” for a type.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &LayerType) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Copy for LayerType
### impl Eq for LayerType
### impl StructuralEq for LayerType
### impl StructuralPartialEq for LayerType
Auto Trait Implementations
---
### impl RefUnwindSafe for LayerType
### impl Send for LayerType
### impl Sync for LayerType
### impl Unpin for LayerType
### impl UnwindSafe for LayerType
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Enum animgraph::NumberOperation
===
```
pub enum NumberOperation {
Add,
Subtract,
Divide,
Multiply,
Modulus,
}
```
Variants
---
### Add
### Subtract
### Divide
### Multiply
### Modulus
Trait Implementations
---
### impl Clone for NumberOperation
#### fn clone(&self) -> NumberOperation
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &NumberOperation) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for NumberOperation
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for NumberOperation
### impl StructuralPartialEq for NumberOperation
Auto Trait Implementations
---
### impl RefUnwindSafe for NumberOperation
### impl Send for NumberOperation
### impl Sync for NumberOperation
### impl Unpin for NumberOperation
### impl UnwindSafe for NumberOperation
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Enum animgraph::NumberRange
===
```
pub enum NumberRange {
Exclusive(GraphNumber, GraphNumber),
Inclusive(GraphNumber, GraphNumber),
}
```
Variants
---
### Exclusive(GraphNumber, GraphNumber)
### Inclusive(GraphNumber, GraphNumber)
Trait Implementations
---
### impl Clone for NumberRange
#### fn clone(&self) -> NumberRange
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for NumberRange
### impl Send for NumberRange
### impl Sync for NumberRange
### impl Unpin for NumberRange
### impl UnwindSafe for NumberRange
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Enum animgraph::Projection
===
```
pub enum Projection {
Length,
Horizontal,
Vertical,
Forward,
Back,
Right,
Left,
Up,
Down,
}
```
Variants
---
### Length
### Horizontal
### Vertical
### Forward
### Back
### Right
### Left
### Up
### Down
Implementations
---
### impl Projection
#### pub fn character_projected(self, vec: Vec3) -> f32
Blender perspective
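A rough usage sketch, assuming `Vec3` here is the `glam` vector type used elsewhere in this crate (the `CHARACTER_*` constants are `glam` types):
```
use animgraph::Projection;
use glam::Vec3;
fn main() {
// Project a character-space vector onto the forward axis and take
// its length, both using the crate's Blender-style convention.
let v = Vec3::new(0.0, 1.0, 0.0);
let forward = Projection::Forward.character_projected(v);
let length = Projection::Length.character_projected(v);
println!("forward component: {}, length: {}", forward, length);
}
```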
Trait Implementations
---
### impl Clone for Projection
#### fn clone(&self) -> Projection
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &Projection) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for Projection
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Eq for Projection
### impl StructuralEq for Projection
### impl StructuralPartialEq for Projection
Auto Trait Implementations
---
### impl RefUnwindSafe for Projection
### impl Send for Projection
### impl Sync for Projection
### impl Unpin for Projection
### impl UnwindSafe for Projection
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Enum animgraph::SimpleResourceProviderError
===
```
pub enum SimpleResourceProviderError {
ResourceTypesMismatch(String),
DeserializationError(Error),
ContextError(Error),
}
```
Variants
---
### ResourceTypesMismatch(String)
### DeserializationError(Error)
### ContextError(Error)
Trait Implementations
---
### impl Debug for SimpleResourceProviderError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`)
Provides type based access to context intended for error reports.
#### fn from(source: Error) -> Self
Converts to this type from the input type.
### impl From<Error> for SimpleResourceProviderError
#### fn from(source: Error) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for SimpleResourceProviderError
### impl Send for SimpleResourceProviderError
### impl Sync for SimpleResourceProviderError
### impl Unpin for SimpleResourceProviderError
### impl !UnwindSafe for SimpleResourceProviderError
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<E> Provider for Ewhere
E: Error + ?Sized,
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`provide_any`)Data providers should implement this method to provide *all* values they are able to provide by using `demand`.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
Constant animgraph::CHARACTER_BASIS
===
```
pub const CHARACTER_BASIS: Mat3A;
```
Blender perspective
Constant animgraph::CHARACTER_FORWARD
===
```
pub const CHARACTER_FORWARD: Vec3A;
```
Blender perspective
Constant animgraph::CHARACTER_RIGHT
===
```
pub const CHARACTER_RIGHT: Vec3A;
```
Blender perspective
Constant animgraph::CHARACTER_UP
===
```
pub const CHARACTER_UP: Vec3A;
```
Blender perspective
Package ‘BwQuant’
October 12, 2022
Type Package
Title Bandwidth Selectors for Local Linear Quantile Regression
Version 0.1.0
Date 2022-01-31
Depends R (>= 2.6), quantreg, KernSmooth, nleqslv
Description
Bandwidth selectors for local linear quantile regression, including cross-validation and plug-
in methods. The local linear quantile regression estimate is also implemented.
Language en-GB
License GPL-2
Encoding UTF-8
Biarch true
NeedsCompilation yes
Author <NAME> [aut, cre]
(<https://orcid.org/0000-0003-0306-8142>),
<NAME> [aut] (<https://orcid.org/0000-0003-0046-0819>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-02-08 16:00:11 UTC
R topics documented:
BwQuant-packag... 2
bwC... 2
bwP... 3
bwR... 4
bwY... 5
llq... 6
BwQuant-package Bandwidth selectors for local linear quantile regression
Description
The R package BwQuant implements different bandwidth selectors for local linear quantile regression, including selectors based on rule-of-thumb, plug-in and cross-validation techniques.
Author(s)
<NAME> <<EMAIL>> and <NAME> <<EMAIL>>
Maintainer: <NAME> <<EMAIL>>
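A minimal illustration of the intended workflow on simulated data (bwPI selects the bandwidth here, but bwRT, bwYJ or bwCV could be used instead):
set.seed(1234)
x=runif(100)
y=10*(x^4+x^2-x)+rexp(100)
tau=0.5
h=bwPI(x,y,tau)
fit=llqr(x,y,tau,seq(0,1,length=101),h)
plot(x,y)
lines(fit$x.values,fit$y.values)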
bwCV Computing the cross-validation bandwidth proposed by Abberger
(1998)
Description
Function to compute a bandwidth for local linear quantile regression following the cross-validation
criteria presented by Abberger (1998).
Usage
bwCV(x, y, hseq, tau)
Arguments
x numeric vector of x data.
y numeric vector of y data. This must be the same length as x.
hseq sequence of values where the cross-validation function will be evaluated.
tau the quantile order where the regression function is to be estimated. It must be a
number strictly between 0 and 1.
Details
The cross-validation function is evaluated at each element of hseq. Then, the cross-validation
selector will be the element of hseq that minimizes the cross-validation function.
Value
Returns a number with the chosen bandwidth.
Author(s)
<NAME> and <NAME>.
References
<NAME>. (1998). Cross-validation in nonparametric quantile regression. Allgemeines Statistis-
ches Archiv, 82, 149-161.
<NAME>. (2002). Variable data driven bandwidth choice in nonparametric quantile regression.
Technical Report.
See Also
The obtained bandwidth can be used in the function llqr to produce a local linear estimate of the
tau-quantile regression function.
Examples
set.seed(1234)
x=runif(100)
y=10*(x^4+x^2-x)+rexp(100)
hseq=seq(0.05,0.8,length=21)
tau=0.25
bwCV(x,y,hseq,tau)
bwPI Computing the plug-in bandwidth proposed by Conde-Amboage and
Sanchez-Sellero (2018)
Description
Function to compute a bandwidth selector for local linear quantile regression following the plug-in
rule proposed in Section 2.2 of Conde-Amboage and Sanchez-Sellero (2018).
Usage
bwPI(x, y, tau)
Arguments
x numeric vector of x data.
y numeric vector of y data. This must be the same length as x.
tau the quantile order where the regression function is to be estimated. It must be a
number strictly between 0 and 1.
Value
Returns a bandwidth for a local linear estimate of the tau-quantile regression function.
Author(s)
<NAME> and <NAME>.
References
Conde-Amboage, M. and Sanchez-Sellero, C. (2018). A plug-in bandwidth selector for nonpara-
metric quantile regression. TEST, 28, 423-450. <doi:10.1007/s11749-018-0582-6>.
See Also
The obtained bandwidth can be used in the function llqr to produce a local linear estimate of the
tau-quantile regression function.
Examples
set.seed(1234)
x=runif(100)
y=10*(x^4+x^2-x)+rexp(100)
tau=0.25
bwPI(x,y,tau)
bwRT Computing a bandwidth using a rule of thumb
Description
Function to compute a bandwidth selector for local linear quantile regression following the rule of
thumb presented in Section 2.1 of Conde-Amboage and Sanchez-Sellero (2018).
Usage
bwRT(x, y, tau)
Arguments
x numeric vector of x data.
y numeric vector of y data. This must be the same length as x.
tau the quantile order where the regression function is to be estimated. It must be a
number strictly between 0 and 1.
Value
Returns a bandwidth for a local linear estimate of the tau-quantile regression function.
Author(s)
<NAME> and <NAME>.
References
Conde-Amboage, M. and Sanchez-Sellero, C. (2018). A plug-in bandwidth selector for nonpara-
metric quantile regression. TEST, 28, 423-450. <doi:10.1007/s11749-018-0582-6>.
See Also
The obtained bandwidth can be used in the function llqr to produce a local linear estimate of the
tau-quantile regression function.
Examples
set.seed(1234)
x=runif(100)
y=10*(x^4+x^2-x)+rexp(100)
tau=0.25
bwRT(x,y,tau)
bwYJ Computing the plug-in bandwidth proposed by Yu and Jones (1998)
Description
Function to compute a bandwidth selector for local linear quantile regression following the plug-in
rule proposed by Yu and Jones (1998).
Usage
bwYJ(x, y, tau)
Arguments
x numeric vector of x data.
y numeric vector of y data. This must be the same length as x.
tau the quantile order where the regression function is to be estimated. It must be a
number strictly between 0 and 1.
Value
Returns a bandwidth for a local linear estimate of the tau-quantile regression function.
Author(s)
<NAME> and <NAME>.
References
<NAME>., <NAME>. and <NAME>. (1995). An effective bandwidth selector for local least
squares regression. Journal of the American Statistical Association. 90, 1257-1270.
<NAME>. and <NAME>. (1998). Local linear quantile regression. Journal of the American Statistical
Association, 93, 228-237.
See Also
The obtained bandwidth can be used in the function llqr to produce a local linear estimate of the
tau-quantile regression function.
Examples
set.seed(1234)
x=runif(100)
y=10*(x^4+x^2-x)+rexp(100)
tau=0.25
bwYJ(x,y,tau)
llqr Fitting a local linear quantile regression model
Description
Function that estimates the quantile regression function using a local linear kernel smoother.
Usage
llqr(x, y, tau, t, h)
Arguments
x numeric vector of x data.
y numeric vector of y data. This must be the same length as x.
tau the quantile order where the regression function is to be estimated. It must be a
number strictly between 0 and 1.
t the values of x at which the quantile regression model is to be estimated.
h the bandwidth parameter.
Value
A list with the following components:
x.values the given points at which the evaluation occurs.
y.values the estimated values of the quantile regression function at the given x.values.
Author(s)
<NAME> and <NAME>.
References
<NAME>., <NAME>. and <NAME>. (1994). Robust nonparametric function estimation. Scandina-
vian Journal of Statistics, 21, 433-446.
<NAME>. and <NAME>. (1998). Local linear quantile regression. Journal of the American Statistical
Association, 93, 228-237.
See Also
The argument h with the bandwidth parameter can be fixed to some arbitrary value or chosen by
one of the procedures implemented in the functions bwCV, bwPI, bwRT or bwYJ.
Examples
set.seed(1234)
x=runif(100)
y=10*(x^4+x^2-x)+rexp(100)
tau=0.25
h=bwPI(x,y,tau)
t=seq(0,1,length=101)
m=llqr(x,y,tau,t,h)
plot(x,y)
lines(m$x.values,m$y.values)
Raspberry Pi Model Zero
===
[![CircleCI](https://circleci.com/gh/nerves-project/nerves_system_rpi0.svg?style=svg)](https://circleci.com/gh/nerves-project/nerves_system_rpi0)
[![Hex version](https://img.shields.io/hexpm/v/nerves_system_rpi0.svg "Hex version")](https://hex.pm/packages/nerves_system_rpi0)
This is the base Nerves System configuration for the Raspberry Pi Zero and Raspberry Pi Zero W.
If you are *not* interested in [Gadget Mode](http://www.linux-usb.org/gadget/)
then check out
[nerves_system_rpi](https://github.com/nerves-project/nerves_system_rpi). That system configures the USB port in host mode by default and is probably more appropriate for your setup.
![Fritzing Raspberry Pi Zero image](assets/images/raspberry-pi-model-zero.png)
[Image credit](#fritzing)
| Feature | Description |
| --- | --- |
| CPU | 1 GHz ARM1176JZF-S |
| Memory | 512 MB |
| Storage | MicroSD |
| Linux kernel | 4.19 w/ Raspberry Pi patches |
| IEx terminal | OTG USB serial port (`ttyGS0`). Can be changed to HDMI or UART. |
| GPIO, I2C, SPI | Yes - [Elixir Circuits](https://github.com/elixir-circuits) |
| ADC | No |
| PWM | Yes, but no Elixir support |
| UART | 1 available - `ttyAMA0` |
| Camera | Yes - via rpi-userland |
| Ethernet | No |
| WiFi | Supported on the Pi Zero W |
| Bluetooth | Not supported yet |
| Audio | HDMI/Stereo out |
Using
---
The most common way of using this Nerves System is to create a project with `mix nerves.new` and to export `MIX_TARGET=rpi0`. See the [Getting started guide](https://hexdocs.pm/nerves/getting-started.html#creating-a-new-nerves-app)
for more information.
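For example, a typical first project looks like this (the application name `hello_nerves` is just a placeholder):
```
mix nerves.new hello_nerves
cd hello_nerves
export MIX_TARGET=rpi0
mix deps.get
mix firmware
# insert a MicroSD card, then:
mix firmware.burn
```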
If you need custom modifications to this system for your device, clone this repository and update as described in [Making custom systems](https://hexdocs.pm/nerves/systems.html#customizing-your-own-nerves-system)
If you're new to Nerves, check out the
[nerves_init_gadget](https://github.com/nerves-project/nerves_init_gadget)
project for creating a starter project for the Raspberry Pi Zero or Zero W. It will get you started with the basics like bringing up the virtual Ethernet interface, initializing the writable application data partition, and enabling ssh-based firmware updates.
Console and kernel message configuration
---
The goal of this image is to use the OTG port for console access. If you're debugging the boot process, you'll want to use the Raspberry Pi's UART pins on the GPIO connector or the HDMI output. This is enabled by updating the
`cmdline.txt` file. This may be overridden with a custom `fwup.conf` file if you don't want to rebuild this system. Add the following to your `cmdline.txt`:
```
console=ttyAMA0,115200 console=tty1 ...
```
If you'd like the IEx prompt to come out the UART pins (`ttyAMA0`) or HDMI
(`tty1`), then modify `rootfs_overlay/etc/erlinit.config` as well.
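For instance, a sketch of the relevant change, assuming erlinit's `-c`/`--ctty` option selects the console device (the other options in the shipped file are omitted here):
```
# Send the IEx console to the UART pins instead of the OTG serial port
-c ttyAMA0
```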
Supported OTG USB modes
---
The base image activates the `dwc2` overlay, which allows the Pi Zero to appear as a device (aka gadget mode). When plugged into a host computer via the OTG port, the Pi Zero will appear as a composite Ethernet and serial device. The virtual serial port provides access to the IEx prompt and the Ethernet device can be used for firmware updates, Erlang distribution, and anything else running over IP.
Supported WiFi devices
---
The base image includes drivers for the onboard Raspberry Pi Zero W wifi module
(`brcmfmac` driver). Due to the USB port being placed in gadget mode, this system does not support USB WiFi adapters.
Audio
---
The Raspberry Pi has many options for audio output. This system supports the HDMI and stereo audio jack output. The Linux ALSA drivers are used for audio output.
To try it out, run:
```
:os.cmd('espeak -ven+f5 -k5 -w /tmp/out.wav Hello')
:os.cmd('aplay -q /tmp/out.wav')
```
The general Raspberry Pi audio documentation mostly applies to Nerves. For example, to force audio out the HDMI port, run:
```
:os.cmd('amixer cset numid=3 2')
```
Change the last argument to `amixer` to `1` to output to the stereo output jack.
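In other words:
```
:os.cmd('amixer cset numid=3 1')
```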
Provisioning devices
---
This system supports storing provisioning information in a small key-value store outside of any filesystem. Provisioning is an optional step and reasonable defaults are provided if this is missing.
Provisioning information can be queried using the Nerves.Runtime KV store's
[`Nerves.Runtime.KV.get/1`](https://hexdocs.pm/nerves_runtime/Nerves.Runtime.KV.html#get/1)
function.
Keys used by this system are:
| Key | Example Value | Description |
| --- | --- | --- |
| `nerves_serial_number` | `"12345678"` | By default, this string is used to create unique hostnames and Erlang node names. If unset, it defaults to part of the Raspberry Pi's device ID. |
The normal procedure would be to set these keys once in manufacturing or before deployment and then leave them alone.
For example, to provision a serial number on a running device, run the following and reboot:
```
iex> cmd("fw_setenv nerves_serial_number 12345678")
```
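Reading the value back at runtime then looks roughly like this:
```
iex> Nerves.Runtime.KV.get("nerves_serial_number")
"12345678"
```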
This system supports setting the serial number offline. To do this, set the
`NERVES_SERIAL_NUMBER` environment variable when burning the firmware. If you're programming MicroSD cards using `fwup`, the commandline is:
```
sudo NERVES_SERIAL_NUMBER=12345678 fwup path_to_firmware.fw
```
Serial numbers are stored on the MicroSD card so if the MicroSD card is replaced, the serial number will need to be reprogrammed. The numbers are stored in a U-boot environment block. This is a special region that is separate from the application partition so reformatting the application partition will not lose the serial number or any other data stored in this block.
Additional key value pairs can be provisioned by overriding the default provisioning.conf file location by setting the environment variable
`NERVES_PROVISIONING=/path/to/provisioning.conf`. The default provisioning.conf will set the `nerves_serial_number`, if you override the location to this file,
you will be responsible for setting this yourself.
Linux kernel and RPi firmware/userland
---
There's a subtle coupling between the `nerves_system_br` version and the Linux kernel version used here. `nerves_system_br` provides the versions of
`rpi-userland` and `rpi-firmware` that get installed. I prefer to match them to the Linux kernel to avoid any issues. Unfortunately, none of these are tagged by the Raspberry Pi Foundation so I either attempt to match what's in Raspbian or take versions of the repositories that have similar commit times.
Installation
---
If you're new to Nerves, check out the
[nerves_init_gadget](https://github.com/fhunleth/nerves_init_gadget) project for creating a starter project for the Raspberry Pi Zero or Zero W. It will get you started with the basics like bringing up the virtual Ethernet interface,
initializing the application partition, and enabling ssh-based firmware updates.
Linux kernel configuration notes
---
The Linux kernel compiled for Nerves is a stripped down version of the default Raspberry Pi Linux kernel. This is done to remove unnecessary features, select some Nerves-specific features, and to save space. To reproduce the kernel configuration found here, do the following (this is somewhat tedious):
1. Start with `arch/arm/configs/bcmrpi_defconfig`. This is the kernel configuration used in the official Raspberry Pi images.
2. Turn off all filesystems except for `ext4`, `squashfs`, `tmpfs`, `proc`,
`sysfs`, and `vfat`. Squashfs only needs ZLIB support.
3. `vfat` needs to default to `utf8`. Enable native language support for
`ascii`, `utf-8`, `ISO 8859-1`, codepage 437, and codepage 850.
4. Disable all network drivers and wireless LAN drivers except for Broadcom FullMAC WLAN.
5. Disable PPP and SLIP.
6. Disable the WiFi drivers in the Staging drivers menus.
7. Disable TV, AM/FM, Media USB adapters, DVB Frontends and Remote controller support in the Multimedia support menus.
8. Go to `Device Drivers->Sound card support`. Disable `USB sound devices` in ALSA. Disable `Open Sound System`.
9. Go to `Device Drivers->Graphics support`. Disable `DisplayLink`
10. Disable everything in `HID support` (NOTE: revisit for Bluetooth)
11. Disable everything in input device support (can't plug it in anyway)
12. In the `Device Drivers > USB support` menu, enable gadget mode and disable all host mode. It should be possible to completely disable USB host mode if all of the USB drivers in previous steps were disabled. See `DesignWare USB2 Core Support->DWC Mode Selection` and select `CDC Composite Device (Ethernet and ACM)`. If you want dual mode USB host/gadget support, you'll need to reenable a few things. There have been unresolved issues in the past with dual mode support. It's possible that they are fixed, but be sure to test. They were noticed on non-Mac platforms.
13. In `Kernel Features`, select `Preemptible Kernel (Low-Latency Desktop)`,
disable the memory allocator for compressed pages.
14. In `Userspace binary formats`, disable support for MISC binaries.
15. In `Networking support`, disable Amateur Radio support, CAN bus subsystem,
IrDA subsystem, Bluetooth, WiMAX, Plan 9, and NFC. (TBD - this may be too
harsh, please open issues if you're using any of these and it's the only
reason for you to create a custom system.)
16. In `Networking options`, disable IPsec, SCTP, Asynchronous Transfer Mode,
802.1d Ethernet Bridging, L2TP, VLAN, Appletalk, 6LoWPAN, 802.15.4, DNS
Resolver, B.A.T.M.A.N, Open vSwitch, MPLS, and the Packet Generator in Network
testing.
17. In `Networking support->Wireless`, enable "use statically compiled regulatory
rules database". Build in `cfg80211` and `mac80211`. Turn off `mac80211` mesh
networking and LED triggers. Turn off `cfg80211` wireless extensions
compatibility.
18. In `Kernel hacking`, disable KGDB, and Magic SysRq key.
19. In Device Drivers, disable MTD support. In Block devices, disable everything
but Loopback and RAM block device. Disable SCSI device support. Disable RAID
and LVM.
20. In `Enable the block layer`, deselect everything but the PC BIOS partition
type (i.e., no Mac partition support, etc.).
21. In `Enable loadable module support`, select "Trim unused exported kernel
symbols". NOTE: If you're having trouble with an out-of-tree kernel module
build, try deselecting this!
22. In `General Setup`, turn off `initramfs/initrd` support, Kernel .config support, and OProfile.
23. In `Device Drivers -> I2C -> Hardware Bus Support` compile the module into the kernel and disable everything but `BCM2708 BSC` support.
24. In `Device Drivers -> SPI` compile in the BCM2835 SPI controller and User mode SPI device driver support.
25. In `Device Drivers -> Dallas's 1-wire support`, disable everything but the
GPIO 1-Wire master and the thermometer slave. (NOTE: Why is the thermometer
compiled in? This seems historical.)
26. Disable `Hardware Monitoring support`, `Sonics Silicon Backplane support`
27. In `Device Drivers -> Character devices -> Serial drivers`, disable 8250 and
SC16IS7xx support. Disable the RAW driver.
28. In `Networking support->Network options`, disable `IP: kernel level autoconfiguration`
29. In `Networking support->Network options->TCP: advanced congestion control`
disable everything except for `CUBIC TCP`.
30. Disable `Real Time Clock`.
31. Disable everything in `Cryptographic API` and `Library routines` that can be
disabled. Sometimes you need to make multiple passes.
32. Disable EEPROM 93CX6 support, PPS support, all GPIO expanders, Speakup core,
Media staging drivers, STMicroelectronics STMPE, anything "Wolfson".
33. Disable most ALSA for SoC audio support and codecs. NOTE: We probably should
support a few, but I have no clue which ones are most relevant and there are
tons of device drivers in the list.
34. Disable IIO and UIO.
35. Disable NXP PCA9685 PWM driver
[Image credit](#fritzing): This image is from the [Fritzing](http://fritzing.org/home/) parts library. |
github.com/corazawaf/coraza-caddy | go | Go | README
[¶](#section-readme)
---
### Coraza WAF Caddy Module
[![Tests](https://github.com/corazawaf/coraza-caddy/actions/workflows/tests.yml/badge.svg)](https://github.com/corazawaf/coraza-caddy/actions/workflows/tests.yml)
[![](https://img.shields.io/badge/godoc-reference-blue.svg)](https://pkg.go.dev/github.com/corazawaf/coraza-caddy)
[![](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)
[OWASP Coraza](https://github.com/corazawaf/coraza) Caddy Module provides Web Application Firewall capabilities for Caddy.
OWASP Coraza WAF is 100% compatible with OWASP Coreruleset and Modsecurity syntax.
#### Plugin syntax
```
coraza_waf {
directives `
SecAction "id:1,pass,log"
`
include /path/to/config.conf
}
```
Sample usage:
Important: `order coraza_waf first` must always be included in your Caddyfile for the Coraza module to work
```
{
order coraza_waf first
}
http://127.0.0.1:8080 {
coraza_waf {
directives `
SecAction "id:1,pass,log"
SecRule REQUEST_URI "/test5" "id:2, deny, log, phase:1"
SecRule REQUEST_URI "/test6" "id:4, deny, log, phase:3"
`
include file1.conf
include file2.conf
include /some/path/*.conf
}
reverse_proxy http://192.168.1.15:8080
}
```
#### Build Caddy with Coraza WAF
Run:
```
xcaddy build --with github.com/corazawaf/coraza-caddy
```
#### Testing
You may run the test suite by executing:
```
$ git clone https://github.com/corazawaf/coraza-caddy
$ cd coraza-caddy
$ go test ./...
```
#### Using OWASP Core Ruleset
Clone the [coreruleset repository](https://github.com/coreruleset/coreruleset) and download the default Coraza configuration from the [Coraza repository](https://raw.githubusercontent.com/corazawaf/coraza/v2/master/coraza.conf-recommended), then add the following to your coraza_waf directive:
```
include caddypath/coraza.conf-recommended
include caddypath/coreruleset/crs-setup.conf.example
include caddypath/coreruleset/rules/*.conf
```
#### Known Issues
#### FAQ
Documentation
[¶](#section-documentation)
---
There is no documentation for this package. |
ark-relations | rust | Rust | Crate ark_relations
===
Core interface for working with various relations that are useful in zkSNARKs. At the moment, we only implement APIs for working with Rank-1 Constraint Systems (R1CS).
Modules
---
r1cs: Core interface for working with Rank-1 Constraint Systems (R1CS).
Macros
---
lc: Generate a `LinearCombination` from arithmetic expressions involving `Variable`s.
ns: Generate a `Namespace` with name `name` from `ConstraintSystem` `cs`. `name` must be a `&'static str`.
Module ark_relations::r1cs
===
Core interface for working with Rank-1 Constraint Systems (R1CS).
Macros
---
info_span: Constructs a span at the info level.
Structs
---
ConstraintMatrices: The A, B and C matrices of a Rank-One `ConstraintSystem`. Also contains metadata on the structure of the constraint system and the matrices.
ConstraintSystem: A Rank-One `ConstraintSystem`. Enforces constraints of the form `⟨a_i, z⟩ ⋅ ⟨b_i, z⟩ = ⟨c_i, z⟩`, where `a_i`, `b_i`, and `c_i` are linear combinations over variables, and `z` is the concrete assignment to these variables.
LcIndex: An opaque counter for symbolic linear combinations.
LinearCombination: A linear combination of variables according to associated coefficients.
Namespace: A namespaced `ConstraintSystemRef`.
Enums
---
ConstraintSystemRef: A shared reference to a constraint system that can be stored in high level variables.
OptimizationGoal: Defines the parameter to optimize for a `ConstraintSystem`.
SynthesisError: An error that could occur during circuit synthesis contexts, such as CRS generation, proving or verification.
SynthesisMode: Defines the mode of operation of a `ConstraintSystem`.
Variable: Represents the different kinds of variables present in a constraint system.
Traits
---
ConstraintSynthesizer: Computations are expressed in terms of rank-1 constraint systems (R1CS). The `generate_constraints` method is called to generate constraints for both CRS generation and for proving.
Field: The interface for a generic field. Types implementing `Field` support common field operations such as addition, subtraction, multiplication, and inverses.
ToConstraintField: Types that can be converted to a vector of `F` elements. Useful for specifying how public inputs to a constraint system should be represented inside that constraint system.
Type Definitions
---
Matrix: A sparse representation of constraint matrices.
Result: A result type specialized to `SynthesisError`.
Macro ark_relations::lc
===
```
macro_rules! lc {
() => { ... };
}
```
Generate a `LinearCombination` from arithmetic expressions involving
`Variable`s.
Macro ark_relations::ns
===
```
macro_rules! ns {
($cs:expr, $name:expr) => { ... };
}
```
Generate a `Namespace` with name `name` from `ConstraintSystem` `cs`.
`name` must be a `&'static str`. |
RBaseX | cran | R | Package ‘RBaseX’
December 2, 2022
Type Package
Title 'BaseX' Client
Version 1.1.2
Date 2022-12-02
Description 'BaseX' <https://basex.org> is an XML database engine and a compliant 'XQuery 3.1'
processor with full support of 'W3C Update Facility'. This package is a full client-implementation
of the client/server protocol for 'BaseX' and provides functionalities to create, manipulate and
query on XML-data.
License MIT + file LICENSE
Encoding UTF-8
RoxygenNote 7.2.2
Imports R6, RCurl, pingr, rex, httr, stringr, dplyr, openssl,
magrittr, tibble, data.table
Suggests testthat, glue
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
URL https://github.com/BenEngbers/RBaseX
SystemRequirements Needs a running BaseX server instance. The testuser
with credentials ('Test'/'testBasex') should have admin rights.
Repository CRAN
NeedsCompilation no
Date/Publication 2022-12-02 13:40:02 UTC
R topics documented:
Ad... 2
Bin... 3
Clos... 4
Comman... 5
Contex... 6
Creat... 7
Execut... 8
Ful... 9
GetIntercep... 10
GetSucces... 10
Inf... 11
input_to_ra... 11
Mor... 12
NewBasexClien... 13
Nex... 13
Option... 14
pu... 15
putBinar... 16
Quer... 17
QueryClas... 17
RBase... 20
Replac... 24
RestoreIntercep... 25
result2fram... 25
result2tibbl... 26
SetIntercep... 26
SetSucces... 27
SocketClas... 27
Stor... 28
Updatin... 29
Add Add
Description
Adds a new resource to the opened database.
Usage
Add(session, path, input)
Arguments
session BasexClient instance-ID
path Path
input Additional input (optional)
Details
The input can be a UTF-8 encoded XML document, a binary resource, or any other data (such as
JSON or CSV) that can be successfully converted to a resource by the server. The utility-function
input_to_raw can be used to convert an arbitrary character vector to a stream. This method returns
self invisibly, thus making it possible to chain together multiple method calls.
Value
A list with two items
• info Additional info
• success Boolean, indicating if the command was completed successfully
Examples
## Not run:
Add(Session, "test", "<xml>Add</xml>")
## End(Not run)
Bind Bind
Description
Binds a value to a variable.
Usage
Bind(query_obj, ...)
Arguments
query_obj QueryClass instance-ID
... Binding Information
Details
Binding information can be provided in the following ways:
• name, value Name and value for a variable.
• name, value, type Name, value and type for a variable.
• name, list(value) Name, list of values.
• name, list(value), list(type) Name, list of values, list of types.
For a list of possible types see https://docs.basex.org/wiki/Java_Bindings#Data_Types
This method returns self invisibly, thus making it possible to chain together multiple method calls.
Value
Boolean value which indicates if the operation was executed successfully
Examples
## Not run:
query_obj <- Query(Session,
"declare variable $name external; for $i in 1 to 2 return element { $name } { $i }")
Bind(query_obj, "$name", "number")
print(Execute(query_obj))
query_obj <- Query(Session,
"declare variable $name external; for $i in 3 to 4 return element { $name } { $i }")
Bind(query_obj, "$name", "number", "xs:string")
print(Execute(query_obj))
query_obj <- Query(Session,
"declare variable $name external;
for $t in collection('TestDB/Books')/book where $t/@author = $name
return $t/@title/string()")
Bind(query_obj, "$name", list("Walmsley", "Wickham"))
print(Execute(query_obj))
query_obj <- Query(Session,
"declare variable $name external;
for $t in collection('TestDB/Books')/book where $t/@author = $name
return $t/@title/string()")
Bind(query_obj, "$name", list("Walmsley", "Wickham"), list("xs:string", "xs:string"))
print(Execute(query_obj))
## End(Not run)
Close Close
Description
Closes and unregisters the query with the specified ID
Usage
Close(query_obj)
Arguments
query_obj QueryClass instance-ID
Details
This method returns self invisibly, thus making it possible to chain together multiple method calls.
Value
This function returns a list with the following items:
• info Info
• success A boolean, indicating if the command was completed successfully
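As an added illustration (not part of the original entry, assuming an open Session created with NewBasexClient), a typical create/execute/close sequence looks like:
## Not run:
query_obj <- Query(Session, "for $i in 1 to 3 return <xml>{ $i }</xml>")
print(Execute(query_obj))
Close(query_obj) # unregister the query on the server
## End(Not run)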
Command Command
Description
Executes a database command or a query.
Usage
Command(...)
Arguments
... The command or query to be executed. When used to execute a command, a
SessionID and a string which contains the command, are to be passed. When
used to execute a query, the QueryClass instance-ID is passed.
Details
For a list of database commands see https://docs.basex.org/wiki/Commands
’BaseX’ can be used in a Standard mode or Query mode.
In the standard mode of the Clients, a database command can be sent to the server using the Command()
function of the Session. The query mode of the Clients allows you to bind external variables
to a query and evaluate the query in an iterative manner.
Value
When used to execute commands in the Standard mode, this function returns a list with the following
items:
• result
• info Additional info
• success A boolean, indicating if the command was completed successfully
When used to execute a query, it returns the result as a list.
Examples
## Not run:
Session <- NewBasexClient(user = <username>, password = "<password>")
print(Command(Session, "info")$info)
query_txt <- paste("for $i in 1 to 2", "return <xml>Text { $i }</xml>", sep = " ")
query_obj <- Query(Session, query_txt)
print(Command(query_obj))
## End(Not run)
Context Context
Description
Binds a value to the context. The type will be ignored if the string is empty. The function returns
no value.
Usage
Context(query_obj, value, type)
Arguments
query_obj QueryClass instance-ID
value Value that should be bound to the context
type The type will be ignored when the string is empty
Details
The type that is provided to the context, should be one of the standard-types. An alternative way is
to parse the document information. This method returns self invisibly, thus making it possible to
chain together multiple method calls.
Examples
## Not run:
ctxt_query_txt <- "for $t in .//text() return string-length($t)"
ctxt_query <- Query(Session, ctxt_query_txt)
ctxt_txt <- paste0("<xml>",
"<txt>Hi</txt>",
"<txt>World</txt>",
"</xml>")
Context(ctxt_query, ctxt_txt, type = "document-node()")
print(Execute(ctxt_query)) ## returns "2" "5"
ctxt_query_txt <- "for $t in parse-xml(.)//text() return string-length($t)"
Context(ctxt_query, ctxt_txt)
print(Execute(ctxt_query))
## End(Not run)
Create Create
Description
Creates a new database with the specified name and input (may be empty).
Usage
Create(session, name, input)
Arguments
session BasexClient instance-ID
name Database name
input Additional input, may be empty
Details
The input can be a UTF-8 encoded XML document, a binary resource, or any other data (such
as JSON or CSV) that can be successfully converted to a resource by the server. ’Check’ is a
convenience command that combines OPEN and CREATE DB: If a database with the name input
exists, and if there is no existing file or directory with the same name that has a newer timestamp,
the database is opened. Otherwise, a new database is created; if the specified input points to an
existing resource, it is stored as initial content. This method returns self invisibly, thus making it
possible to chain together multiple method calls.
Value
A list with two items
• info Additional info
• success A boolean, indicating if the command was completed successfully
Examples
## Not run:
Create(, "test", "<xml>Create test</xml>")
Execute(Session, "Check test")
Create(Session, "test2",
"https://raw.githubusercontent.com/BaseXdb/basex/master/basex-api/src/test/resources/first.xml")
Create(Session, "test3", "/home/username/Test.xml")
## End(Not run)
Execute Execute
Description
Executes a database command or a query.
Usage
Execute(...)
Arguments
... The command or query to be executed. When used to execute a command, a
SessionID and a string which contains the command, are to be passed. When
used to execute a query, the QueryClass instance-ID is passed.
Details
The ’Execute’ command is deprecated and has been renamed to ’Command’. ’Execute’ is being
kept for convenience.
Value
When used to execute commands in the Standard mode, this function returns a list with the following
items:
• result
• info Additional info
• success A boolean, indicating if the command was completed successfully
When used to execute a query, it returns the result as a list.
Examples
## Not run:
Session <- NewBasexClient(user = <username>, password = "<password>")
print(Execute(Session, "info")$info)
query_txt <- paste("for $i in 1 to 2", "return <xml>Text { $i }</xml>", sep = " ")
query_obj <- Query(Session, query_txt)
print(Execute(query_obj))
## End(Not run)
Full Title Full
Description
Executes a query and returns a list of vectors, each one representing a result as a string, prefixed
by the ’XDM’ (XPath Data Model) Meta Data <https://www.xdm.org/>. Meta Data and results are
separated by a ’|’.
Usage
Full(query_obj)
Arguments
query_obj QueryClass instance-ID
Examples
## Not run:
query_txt <- "collection('/TestDB/Test.xml')"
query_obj <- Query(Session, query_txt)
print(Full(query_obj))
## Return
[[1]]
[1] "2f" "/TestDB/Test.xml"
[[2]]
[1] "3c" "Line_1 line=\"1\">Content 1</Line_1"
[[3]]
[1] "2f" "/TestDB/Test.xml"
[[4]]
[1] "3c" "Line_2 line=\"2\">Content 2</Line_2"
## End(Not run)
GetIntercept GetIntercept
Description
Current value for session$Intercept
Usage
GetIntercept(session)
Arguments
session BasexClient instance-ID
Value
Current value
GetSuccess GetSuccess
Description
Current value from session$Success
Usage
GetSuccess(session)
Arguments
session BasexClient instance-ID
Value
Current value
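An added sketch (assuming an existing Session) showing how both status accessors are typically read after a command:
## Not run:
Command(Session, "info")
GetIntercept(Session) # current Intercept setting
GetSuccess(Session)   # status of the last operation on the socket
## End(Not run)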
Info Info
Description
Returns a string with query compilation and profiling info.
Usage
Info(query_obj)
Arguments
query_obj QueryClass instance-ID
Details
If the query object has not been executed yet, an empty string is returned.
Value
This function returns a list with the following items:
• Info Info
• success A boolean, indicating if the command was completed successfully
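An added sketch (assuming an existing Session): Info() is usually called after the query has been executed:
## Not run:
query_obj <- Query(Session, "for $i in 1 to 2 return $i")
Execute(query_obj)
print(Info(query_obj)) # compilation and profiling info
## End(Not run)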
input_to_raw input_to_raw
Description
Converts a length-1 character vector to a ’raw’ vector.
Usage
input_to_raw(input)
Arguments
input Character vector length 1
Details
If input is a reference to a file, the number of bytes corresponding to its size is read. If it is a URL,
the URL is read and converted to a ’Raw’ vector. The function does not catch errors.
Value
’Raw’ vector
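A minimal added sketch converting a plain character string before passing it to Add (the file and URL behaviour described above is not shown; an open Session is assumed):
## Not run:
raw_input <- input_to_raw("<xml>Add</xml>")
Add(Session, "test", raw_input)
## End(Not run)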
More More
Description
Indicates if there are any other results in the query-result.
Usage
More(query_obj)
Arguments
query_obj QueryClass instance-ID
Value
Boolean
Examples
## Not run:
Query_1 <- Query(Session, "collection('/TestDB/Test.xml')")
iterResult <- c()
while (More(Query_1)) {
iterResult <- c(iterResult, Next(Query_1))
}
print(iterResult)
[[1]]
[1] "0d" "<Line_1 line=\"1\">Content 1</Line_1>"
[[2]]
[1] "0d" "<Line_2 line=\"2\">Content 2</Line_2>"
## End(Not run)
NewBasexClient Title
Description
Create a BaseX-client
Usage
NewBasexClient(host = "localhost", port = 1984, user, password)
Arguments
host, port Host name and port-number
user, password User credentials
Details
This creates a BaseX-client. By default it listens to port 1984 on localhost. Username and password
should be changed after the installation of ’BaseX’.
Value
BasexClient-instance
Examples
## Not run:
session <- NewBasexClient(user = <username>, password = "<password>")
## End(Not run)
Next Next
Description
Returns the next result when iterating over a query
Usage
Next(query_obj)
Arguments
query_obj QueryClass instance-ID
Examples
## Not run:
Query_1 <- Query(Session, "collection('TestDB/Test.xml')")
iterResult <- c()
while (More(Query_1)) {
iterResult <- c(iterResult, Next(Query_1))
}
print(iterResult)
[[1]]
[1] "0d" "<Line_1 line=\"1\">Content 1</Line_1>"
[[2]]
[1] "0d" "<Line_2 line=\"2\">Content 2</Line_2>"
## End(Not run)
Options Options
Description
Returns a string with all query serialization parameters, which can be assigned to the serializer
option.
Usage
Options(query_obj)
Arguments
query_obj QueryClass instance-ID
Details
For a list of possibe types see https://docs.basex.org/wiki/Java_Bindings#Data_Types
Value
This function returns a list with the following items:
• Options Options
• success A boolean, indicating if the command was completed successfully
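An added sketch (assuming an existing Session) showing how the serialization parameters can be inspected after a query has been executed:
## Not run:
query_obj <- Query(Session, "collection('TestDB/Test.xml')")
Execute(query_obj)
print(Options(query_obj))
## End(Not run)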
put Put
Description
Adds or replaces a resource with the specified input.
Usage
put(session, path, input)
Arguments
session BasexClient instance-ID
path Path where to store the data
input The content to add or to use as a replacement
Details
The input can be a UTF-8 encoded XML document, a binary resource, or any other data (such as
JSON or CSV) that can be successfully converted to a resource by the server. This method returns
self invisibly, thus making it possible to chain together multiple method calls.
Value
A list with two items
• info Additional info
• success A boolean, indicating if the command was completed successfully
Examples
## Not run:
put(Session, "test", "<xml>Create test</xml>")
## End(Not run)
putBinary putBinary
Description
Store or replace a binary resource in the opened database.
Usage
putBinary(session, path, input)
Arguments
session BasexClient instance-ID
path Path where to store the data
input Additional input, may be empty
Details
Use the database-command retrieve to retrieve the resource. The input can be a UTF-8 encoded
XML document, a binary resource, or any other data (such as JSON or CSV) that can be successfully
converted to a resource by the server. This method returns self invisibly, thus making it possible to
chain together multiple method calls.
Value
A list with two items
• info Additional info
• success A boolean, indicating if the command was completed successfully
Examples
## Not run:
Execute(Session, "DROP DB BinBase")
testBin <- Execute(Session, "Check BinBase")
bais <- raw()
for (b in 252:255) bais <- c(bais, c(b)) %>% as.raw()
test <- putBinary(Session, "test.bin", bais)
print(test$success)
baos <- Execute(Session, "BINARY GET test.bin")
print(bais)
print(baos$result)
## End(Not run)
Query Query
Description
Creates a new query instance and returns its ID.
Usage
Query(session, query_string)
Arguments
session BasexClient instance-ID
query_string query string
Details
If paste0() is used to create a multi-line statement, the lines must be separated by a space or a
newline \n-character.
Value
Query_ID
Examples
## Not run:
query_txt <- "for $i in 1 to 2 return <xml>Text { $i }</xml>"
query_obj <- Query(Session, query_txt)
print(Execute(query_obj))
## End(Not run)
QueryClass QueryClass
Description
The client can be used in ’standard’ mode and in ’query’ mode. Query mode is used to define
queries, binding variables and for iterative evaluation.
Methods
Public methods:
• QueryClass$new()
• QueryClass$ExecuteQuery()
• QueryClass$Bind()
• QueryClass$Context()
• QueryClass$Full()
• QueryClass$More()
• QueryClass$Next()
• QueryClass$Info()
• QueryClass$Options()
• QueryClass$Updating()
• QueryClass$Close()
• QueryClass$clone()
Method new(): Initialize a new instance from QueryClass
Usage:
QueryClass$new(query, Parent)
Arguments:
query Query-string
Parent The ’Parent’ for this QueryClass-instance
Details: QueryClass-instances can only be created by calling the ’Query’-method from the
’BasexClient’-class.
Method ExecuteQuery(): Executes a query.
Usage:
QueryClass$ExecuteQuery()
Method Bind(): Binds a value to a variable.
Usage:
QueryClass$Bind(...)
Arguments:
... Binding Information
query_obj QueryClass instance-ID
Details: When using the primitive functions, this function can be chained.
Method Context(): Binds a value to the context. The type will be ignored if the string is empty.
Usage:
QueryClass$Context(value, type)
Arguments:
value Value that should be bound to the context
type The type will be ignored when the string is empty
Details: When using the primitive functions, this function can be chained.
Method Full(): Executes a query and returns a vector with all resulting items as strings, prefixed
by the ’XDM’ (XPath Data Model) Meta Data <https://www.xdm.org/>.
Usage:
QueryClass$Full()
Method More(): Indicates if there are any other results in the query-result.
Usage:
QueryClass$More()
Method Next(): Returns the next result when iterating over a query
Usage:
QueryClass$Next()
Method Info(): Returns a string with query compilation and profiling info.
Usage:
QueryClass$Info()
Method Options(): Returns a string with all query serialization parameters, which can e.g. be
assigned to the serializer option.
Usage:
QueryClass$Options()
Method Updating(): Check if the query contains updating expressions.
Usage:
QueryClass$Updating()
Method Close(): Closes and unregisters the query with the specified ID
Usage:
QueryClass$Close()
Details: When using the primitive functions, this function can be chained.
Method clone(): The objects of this class are cloneable with this method.
Usage:
QueryClass$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
RBaseX RBaseX
Description
’BaseX’ is a robust, high-performance XML database engine and a highly compliant XQuery 3.1
processor with full support of the W3C Update and Full Text extensions.
The client can be used in ’standard’ mode and in ’query’ mode. Standard Mode is used for connecting
to a server and sending commands.
Details
’RBaseX’ was developed using R6. For most of the public methods in the R6-classes, wrapper-functions
are created. The differences in performance between R6-methods and wrapper-functions
are minimal and slightly in favour of the R6-version.
It is easy to use the R6-calls instead of the wrapper-functions. The only important difference is that
in order to execute a query, you have to call ExecuteQuery() on a queryObject.
Methods
Public methods:
• BasexClient$new()
• BasexClient$Command()
• BasexClient$Execute()
• BasexClient$Query()
• BasexClient$Create()
• BasexClient$Add()
• BasexClient$put()
• BasexClient$Replace()
• BasexClient$putBinary()
• BasexClient$Store()
• BasexClient$set_intercept()
• BasexClient$restore_intercept()
• BasexClient$get_intercept()
• BasexClient$get_socket()
• BasexClient$set_success()
• BasexClient$get_success()
• BasexClient$clone()
Method new(): Initialize a new client-session
Usage:
BasexClient$new(host, port = 1984L, username, password)
Arguments:
host, port, username, password Host-information and user-credentials
Method Command(): Execute a command
Usage:
BasexClient$Command(command)
Arguments:
command Command
Details: For a list of database commands see https://docs.basex.org/wiki/Commands
Method Execute(): Execute a command
Usage:
BasexClient$Execute(command)
Arguments:
command Command
Details: For a list of database commands see https://docs.basex.org/wiki/Commands.
This function is replaced by ’Command’ and is obsolete.
Method Query(): Create a new query-object
Usage:
BasexClient$Query(query_string)
Arguments:
query_string Query-string
Details: A query-object has two fields. ’queryObject’ is an ID for the new created ’QueryClass’-
instance. ’success’ holds the status from the last executed operation on the queryObject.
Returns: ID for the created query-object
Method Create(): Create a new database
Usage:
BasexClient$Create(name, input)
Arguments:
name Name
input Initial content, Optional
Details: Initial content can be offered as string, URL or file.
Method Add(): Add a new resource at the specified path
Usage:
BasexClient$Add(path, input)
Arguments:
path Path
input File, directory or XML-string
Method put(): Add or replace a resource, addressed by path
Usage:
BasexClient$put(path, input)
Arguments:
path Path
input File, directory or XML-string
Method Replace(): Replace a resource, addressed by path. This function is deprecated and has
been replaced by ’put’.
Usage:
BasexClient$Replace(path, input)
Arguments:
path Path
input File, directory or XML-string
Method putBinary(): Store binary content
Usage:
BasexClient$putBinary(path, input)
Arguments:
path Path
input File, directory or XML-string
Details: Binary content can be retrieved by executing a retrieve-command
Method Store(): Store binary content
Usage:
BasexClient$Store(path, input)
Arguments:
path Path
input File, directory or XML-string
Details: Binary content can be retrieved by executing a retrieve-command. This function is
deprecated and has been replaced by ’putBinary’.
Method set_intercept(): Toggles between using the ’success’-field returned by the Execute
command and using regular error-handling (try-catch).
Usage:
BasexClient$set_intercept(Intercept)
Arguments:
Intercept Boolean
Method restore_intercept(): Restore the Intercept toggle to its original value
Usage:
BasexClient$restore_intercept()
Method get_intercept(): Get current Intercept
Usage:
BasexClient$get_intercept()
Method get_socket(): Get the socket-ID
Usage:
BasexClient$get_socket()
Returns: Socket-ID,
Method set_success(): Set the success status from the last operation on the socket
Usage:
BasexClient$set_success(Success)
Arguments:
Success Boolean
Details: This function is intended to be used by instances from the QueryClass
Method get_success(): Get the success status from the last operation on the socket
Usage:
BasexClient$get_success()
Returns: Boolean,
Method clone(): The objects of this class are cloneable with this method.
Usage:
BasexClient$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
## Not run:
Session <- BasexClient$new("localhost", 1984L, username = "<username>", password = "<password>")
Session$Execute("Check test")
Session$Execute("delete /")
# Add resource
Session$Add("test.xml", "<root/>")
# Bindings -----
query_txt <- "declare variable $name external; for $i in 1 to 3 return element { $name } { $i }"
query_obj <- Session$Query(query_txt)
query_obj$queryObject$Bind("$name", "number")
print(query_obj$queryObject$ExecuteQuery())
## End(Not run)
Replace Replace
Description
Replaces a resource with the specified input.
Usage
Replace(session, path, input)
Arguments
session BasexClient instance-ID
path Path where to store the data
input Replacement
Details
The ’Replace’ command is deprecated and has been renamed to ’Put’. ’Replace’ is being kept for
convenience.
The input can be a UTF-8 encoded XML document, a binary resource, or any other data (such as
JSON or CSV) that can be successfully converted to a resource by the server. This method returns
self invisibly, thus making it possible to chain together multiple method calls.
Value
A list with two items
• info Additional info
• success A boolean, indicating if the command was completed successfully
Examples
## Not run:
Replace(Session, "test", "<xml>Create test</xml>")
## End(Not run)
RestoreIntercept RestoreIntercept
Description
Restore Intercept to original new value
Usage
RestoreIntercept(session)
Arguments
session BasexClient instance-ID
Details
This method returns self invisibly, thus making it possible to chain together multiple method calls.
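An added sketch of the intended pattern (assuming an existing Session, and assuming that with Intercept enabled a failing command is reported via the success field rather than as an R error):
## Not run:
SetIntercept(Session, TRUE)
result <- Command(Session, "OPEN NonExistingDB")
if (!result$success) print(result$info)
RestoreIntercept(Session)
## End(Not run)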
result2frame result2frame
Description
Converts the query-result to a frame. The query-result is either a list (sequence) or an array. If it is
a list, ’cols’ is needed to determine the number of columns.
Usage
result2frame(...)
Arguments
... Query-result
Value
Return result from query as dataframe
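An added sketch, assuming an existing Session and a 'TestDB/Books' collection as in the Bind examples; the query returns a flat sequence with two values per book, so cols = 2:
## Not run:
query_txt <- paste("for $b in collection('TestDB/Books')/book",
                   "return ($b/@author/string(), $b/@title/string())")
result <- Execute(Query(Session, query_txt))
books_df <- result2frame(result, cols = 2)
## End(Not run)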
result2tibble result2tibble
Description
Converts the query-result to a tibble. The query-result is either a list (sequence) or an array. If it is
a list, ’cols’ is needed to determine the number of columns.
Usage
result2tibble(...)
Arguments
... Query-result
Value
Return result from query as tibble
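Analogously, an added sketch for the tibble variant, under the same assumptions (and the same query_txt) as the result2frame sketch above:
## Not run:
result <- Execute(Query(Session, query_txt))
books_tbl <- result2tibble(result, cols = 2)
## End(Not run)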
SetIntercept SetIntercept
Description
Assign a new value to session$Intercept
Usage
SetIntercept(session, intercept)
Arguments
session BasexClient instance-ID
intercept New Intercept value
Details
This method returns self invisibly, thus making it possible to chain together multiple method calls.
Examples
## Not run:
SetIntercept(Session, TRUE)
## End(Not run)
SetSuccess SetSuccess
Description
Assign a new value to session$Success
Usage
SetSuccess(session, success)
Arguments
session BasexClient instance-ID
success Success-indicator for the last operation on the socket
Examples
## Not run:
SetSuccess(Session, TRUE)
## End(Not run)
SocketClass SocketClass
Description
All methods that are used by BasexClient and QueryClass
Methods
Public methods:
• SocketClass$new()
• SocketClass$finalize()
• SocketClass$handShake()
• SocketClass$write_Byte()
• SocketClass$clone()
Method new(): Initialize a new socket
Usage:
SocketClass$new(host, port = 1984L, username, password)
Arguments:
host, port, username, password Host-information and credentials
Method finalize(): When releasing the session-object, close the socketConnection
Usage:
SocketClass$finalize()
Method handShake(): Send input to the socket and return the response
Usage:
SocketClass$handShake(input)
Arguments:
input Input
Details: Input is a raw vector, built up by converting all input to raw and concatenating the
results
Method write_Byte(): Write 1 byte to the socket
Usage:
SocketClass$write_Byte(Byte)
Arguments:
Byte A vector length 1
Method clone(): The objects of this class are cloneable with this method.
Usage:
SocketClass$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Store Store
Description
Stores a binary resource in the opened database.
Usage
Store(session, path, input)
Arguments
session BasexClient instance-ID
path Path where to store the data
input Additional input, may be empty
Details
The ’Store’ command is deprecated and has been renamed to ’putBinary’. ’Store’ is being kept for
convenience.
Use the database-command retrieve to retrieve the resource. The input can be a UTF-8 encoded
XML document, a binary resource, or any other data (such as JSON or CSV) that can be successfully
converted to a resource by the server. This method returns self invisibly, thus making it possible to
chain together multiple method calls.
Value
A list with two items
• info Additional info
• success A boolean, indicating if the command was completed successfully
Examples
## Not run:
Execute(Session, "DROP DB BinBase")
testBin <- Execute(Session, "Check BinBase")
bais <- raw()
for (b in 252:255) bais <- c(bais, c(b)) %>% as.raw()
test <- Store(Session, "test.bin", bais)
print(test$success)
baos <- Execute(Session, "binary get test.bin")
print(bais)
print(baos$result)
## End(Not run)
Updating Updating
Description
Check if the query contains updating expressions.
Usage
Updating(query_obj)
Arguments
query_obj Query instance-ID
Details
Returns TRUE if the query contains updating expressions; FALSE otherwise.
Value
This function returns a list with the following items:
• result Result
• success A boolean, indicating if the command was completed successfully |
github.com/ovh/cds/tools/smtpmock | go | Go | None
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [type Client](#Client)
* + [func NewClient(url string) Client](#NewClient)
* [type Message](#Message)
* [type SigninRequest](#SigninRequest)
* [type SigninResponse](#SigninResponse)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [Client](https://github.com/ovh/cds/blob/ea3de2fd6be3/tools/smtpmock/client.go#L17) [¶](#Client)
```
type Client interface {
Signin(token [string](/builtin#string)) ([SigninResponse](#SigninResponse), [error](/builtin#error))
GetMessages() ([][Message](#Message), [error](/builtin#error))
GetRecipientMessages(email [string](/builtin#string)) ([][Message](#Message), [error](/builtin#error))
GetRecipientLatestMessage(email [string](/builtin#string)) ([Message](#Message), [error](/builtin#error))
}
```
####
func [NewClient](https://github.com/ovh/cds/blob/ea3de2fd6be3/tools/smtpmock/client.go#L13) [¶](#NewClient)
```
func NewClient(url [string](/builtin#string)) [Client](#Client)
```
####
type [Message](https://github.com/ovh/cds/blob/ea3de2fd6be3/tools/smtpmock/message.go#L3) [¶](#Message)
```
type Message struct {
FromAgent [string](/builtin#string) `json:"from-agent"`
RemoteAddress [string](/builtin#string) `json:"remote-address"`
User [string](/builtin#string) `json:"user"`
From [string](/builtin#string) `json:"from"`
To [string](/builtin#string) `json:"to"`
Content [string](/builtin#string) `json:"content"`
ContentDecoded [string](/builtin#string) `json:"content-decoded"`
}
```
####
type [SigninRequest](https://github.com/ovh/cds/blob/ea3de2fd6be3/tools/smtpmock/signin.go#L3) [¶](#SigninRequest)
```
type SigninRequest struct {
SigninToken [string](/builtin#string) `json:"signin-token"`
}
```
####
type [SigninResponse](https://github.com/ovh/cds/blob/ea3de2fd6be3/tools/smtpmock/signin.go#L7) [¶](#SigninResponse)
```
type SigninResponse struct {
SessionToken [string](/builtin#string) `json:"session-token"`
}
``` |
CLVTools | cran | R | Package ‘CLVTools’
October 12, 2022
Title Tools for Customer Lifetime Value Estimation
Version 0.9.0
Date 2022-01-07
Depends R (>= 3.5.0), methods
Description A set of state-of-the-art probabilistic modeling approaches to derive estimates of
individual customer lifetime values (CLV).
Commonly, probabilistic approaches focus on modelling 3 processes, i.e. individuals' attrition,
transaction, and spending process.
Latent customer attrition models, which are also known as ``buy-'til-you-die models'', model the
attrition as well as the transaction process.
They are used to make inferences and predictions about transactional patterns of individual
customers such as their future purchase behavior.
Moreover, these models have also been used to predict individuals' long-term engagement in
activities such as playing an online game or posting to a social media platform. The spending
process is usually modelled by a separate probabilistic model. Combining these results yields
lifetime value estimates for individual customers.
This package includes fast and accurate implementations of various probabilistic models for
non-contractual settings (e.g., grocery purchases or hotel visits). All implementations support
time-invariant covariates, which can be used to control for e.g., socio-demographics. If such an
extension has been proposed in literature, we further provide the possibility to control for
time-varying covariates to control for e.g., seasonal patterns.
Currently, the package includes the following latent attrition models to model individuals'
attrition and transaction process:
[1] the Pareto/NBD model (Pareto/Negative-Binomial-Distribution),
[2] the Extended Pareto/NBD model (Pareto/Negative-Binomial-Distribution with time-varying covariates),
[3] the BG/NBD model (Beta-Gamma/Negative-Binomial-Distribution) and
[4] the GGom/NBD model (Gamma-Gompertz/Negative-Binomial-Distribution).
Further, we provide an implementation of the Gamma/Gamma model to model the spending
process of individuals.
Imports data.table (>= 1.12.0), ggplot2 (>= 3.2.0), lubridate (>=
1.7.8), Matrix (>= 1.2-17), MASS, optimx (>= 2019-12.02),
Rcpp(>= 0.12.12), stats, utils
Suggests covr, knitr, rmarkdown, testthat
License GPL-3
URL https://github.com/bachmannpatrick/CLVTools
BugReports https://github.com/bachmannpatrick/CLVTools/issues
NeedsCompilation yes
SystemRequirements C++11
LinkingTo Rcpp, RcppArmadillo (>= 0.9.500.2.0), RcppGSL (>= 0.3.7)
LazyLoad yes
Encoding UTF-8
Collate 'CLVTools.R' 'RcppExports.R' 'all_generics.R'
'class_clv_time.R' 'class_clv_data.R' 'class_clv_model.R'
'class_clv_fitted.R' 'class_clv_fitted_transactions.R'
'class_clv_model_nocorrelation.R' 'class_clv_model_bgnbd.R'
'class_clv_bgnbd.R' 'class_clv_fitted_transactions_staticcov.R'
'class_clv_data_staticcovariates.R'
'class_clv_model_bgnbd_staticcov.R'
'class_clv_bgnbd_staticcov.R'
'class_clv_data_dynamiccovariates.R'
'class_clv_fitted_spending.R'
'class_clv_fitted_transactions_dynamiccov.R'
'class_clv_model_gg.R' 'class_clv_gg.R'
'class_clv_model_ggomnbd_nocov.R' 'class_clv_ggomnbd.R'
'class_clv_model_ggomnbd_staticcov.R'
'class_clv_ggomnbd_staticcov.R'
'class_clv_model_withcorrelation.R' 'class_clv_model_pnbd.R'
'class_clv_model_pnbd_staticcov.R'
'class_clv_model_pnbd_dynamiccov.R' 'class_clv_pnbd.R'
'class_clv_pnbd_dynamiccov.R' 'class_clv_pnbd_staticcov.R'
'class_clv_time_date.R' 'class_clv_time_datetime.R'
'class_clv_time_days.R' 'class_clv_time_hours.R'
'class_clv_time_weeks.R' 'class_clv_time_years.R'
'clv_template_controlflow_estimate.R'
'clv_template_controlflow_pmf.R'
'clv_template_controlflow_predict.R' 'data.R'
'f_DoExpectation.R' 'f_clvdata_inputchecks.R'
'f_clvfitted_inputchecks.R' 'f_generics_clvfitted.R'
'f_generics_clvfitted_estimate.R'
'f_generics_clvfittedspending.R'
'f_generics_clvfittedtransactions.R'
'f_generics_clvfittedtransactionsdyncov.R'
'f_generics_clvfittedtransactionsstaticcov.R'
'f_generics_clvfittedtransactionsstaticcov_estimate.R'
'f_generics_clvpnbddyncov.R' 'f_interface_bgbb.R'
'f_interface_bgnbd.R' 'f_interface_clvdata.R'
'f_interface_gg.R' 'f_interface_ggomnbd.R' 'f_interface_pmf.R'
'f_interface_pnbd.R' 'f_interface_predict_clvfittedspending.R'
'f_interface_predict_clvfittedtransactions.R'
'f_interface_setdynamiccovariates.R'
'f_interface_setstaticcovariates.R' 'f_s3generics_clvdata.R'
'f_s3generics_clvdata_dynamiccov.R'
'f_s3generics_clvdata_plot.R'
'f_s3generics_clvdata_staticcov.R' 'f_s3generics_clvfitted.R'
'f_s3generics_clvfittedspending_plot.R'
'f_s3generics_clvfittedtransactions_plot.R'
'f_s3generics_clvfittedtransactions_staticcov.R'
'f_s3generics_clvtime.R' 'interlayer_callLL.R'
'interlayer_callnextinterlayer.R' 'interlayer_constraints.R'
'interlayer_correlation.R' 'interlayer_manager.R'
'interlayer_regularization.R' 'pnbd_dyncov_ABCD.R'
'pnbd_dyncov_BkSum.R' 'pnbd_dyncov_CET.R' 'pnbd_dyncov_DECT.R'
'pnbd_dyncov_LL.R' 'pnbd_dyncov_createwalks.R'
'pnbd_dyncov_expectation.R' 'pnbd_dyncov_makewalks.R'
'pnbd_dyncov_palive.R'
RoxygenNote 7.1.2
VignetteBuilder knitr
Author <NAME> [cre, aut],
<NAME> [aut],
<NAME> [aut],
<NAME> [aut],
<NAME> [aut],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-01-09 01:32:48 UTC
R topics documented:
CLVTools-packag... 4
apparelDynCo... 6
apparelStaticCo... 6
apparelTran... 7
as.clv.dat... 7
as.data.frame.clv.dat... 9
as.data.table.clv.dat... 10
bgb... 11
bgnb... 13
bgnbd_CE... 17
bgnbd_expectatio... 19
bgnbd_L... 20
bgnbd_PAliv... 21
bgnbd_pm... 23
cdno... 24
clvdat... 24
fitted.clv.fitte... 27
g... 28
ggomnb... 30
ggomnbd_CE... 34
ggomnbd_expectatio... 35
ggomnbd_L... 36
ggomnbd_PAliv... 37
gg_L... 39
nobs.clv.dat... 40
nobs.clv.fitte... 40
plot.clv.dat... 41
plot.clv.fitted.spendin... 44
plot.clv.fitted.transaction... 46
pm... 50
pnb... 51
pnbd_CE... 57
pnbd_DER... 59
pnbd_expectatio... 61
pnbd_L... 62
pnbd_PAliv... 63
pnbd_pm... 65
predict.clv.fitted.spendin... 66
predict.clv.fitted.transaction... 67
SetDynamicCovariate... 71
SetStaticCovariate... 73
subset.clv.dat... 75
summary.clv.fitte... 76
vcov.clv.fitte... 79
CLVTools-package Customer Lifetime Value Tools
Description
CLVTools is a toolbox for various probabilistic customer attrition models for non-contractual
settings. It provides a framework, which is capable of unifying different probabilistic customer
attrition models. This package provides tools to estimate the number of future transactions of
individual customers as well as the probability of customers being alive in future periods. Further,
the average spending by customers can be estimated. Multiplying the future transactions conditional
on being alive and the predicted individual spending per transaction results in an individual CLV
value.
The implemented models require transactional data from non-contractual businesses (i.e. customers’
purchase history).
Author(s)
Maintainer: <NAME> <<EMAIL>>
Authors:
• <NAME> <<EMAIL>>
• <NAME> <<EMAIL>>
• <NAME> <<EMAIL>>
• <NAME> <<EMAIL>>
• <NAME> <<EMAIL>>
See Also
Development for CLVTools can be followed via the GitHub repository at https://github.com/
bachmannpatrick/CLVTools.
Examples
data("cdnow")
# Create a CLV data object, split data in estimation and holdout sample
clv.data.cdnow <- clvdata(data.transactions = cdnow, date.format = "ymd",
time.unit = "week", estimation.split = 39, name.id = "Id")
# summary of data
summary(clv.data.cdnow)
# Fit a PNBD model without covariates on the first 39 periods
pnbd.cdnow <- pnbd(clv.data.cdnow,
start.params.model = c(r=0.5, alpha=8, s=0.5, beta=10))
# inspect fit
summary(pnbd.cdnow)
# Predict 10 periods (weeks) ahead from estimation end
# and compare to actuals in this period
pred.out <- predict(pnbd.cdnow, prediction.end = 10)
# Plot the fitted model to the actual repeat transactions
plot(pnbd.cdnow)
apparelDynCov Time-varying Covariates for the Apparel Retailer Dataset
Description
This simulated data contains direct marketing information on all 250 customers in the "apparelTrans"
dataset. This information can be used as time-varying covariates.
Usage
data("apparelDynCov")
Format
A data.table with 20500 rows and 5 variables
Id Customer Id
Cov.Date Date of contextual factor
Marketing Direct marketing variable: number of times a customer was contacted with direct mar-
keting in this time period
Gender 0=male, 1=female
Channel Acquisition channel: 0=online, 1=offline
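As an added sketch (not part of the original entry), these covariates are typically attached to a transaction data object with SetDynamicCovariates; the argument names below follow the SetDynamicCovariates help page and should be verified there:
## Not run:
data("apparelTrans")
data("apparelDynCov")
clv.data.apparel <- clvdata(apparelTrans, date.format = "ymd",
                            time.unit = "w", estimation.split = 40)
clv.data.dyn.cov <- SetDynamicCovariates(clv.data.apparel,
                                         data.cov.life = apparelDynCov,
                                         data.cov.trans = apparelDynCov,
                                         names.cov.life = c("Marketing", "Gender", "Channel"),
                                         names.cov.trans = c("Marketing", "Gender", "Channel"),
                                         name.date = "Cov.Date")
## End(Not run)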
apparelStaticCov Time-invariant Covariates for the Apparel Retailer Dataset
Description
This simulated data contains additional demographic information on all 250 customers in the
"apparelTrans" dataset. This information can be used as time-invariant covariates.
Usage
data("apparelStaticCov")
Format
A data.table with 250 rows and 3 variables:
Id Customer Id
Gender 0=male, 1=female
Channel Acquisition channel: 0=online, 1=offline
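As an added sketch (not part of the original entry), static covariates are attached with SetStaticCovariates, mirroring the bgnbd example further below; the argument names follow the SetStaticCovariates help page and should be verified there:
## Not run:
data("apparelTrans")
data("apparelStaticCov")
clv.data.apparel <- clvdata(apparelTrans, date.format = "ymd",
                            time.unit = "w", estimation.split = 40)
clv.data.static.cov <- SetStaticCovariates(clv.data.apparel,
                                           data.cov.life = apparelStaticCov,
                                           data.cov.trans = apparelStaticCov,
                                           names.cov.life = c("Gender", "Channel"),
                                           names.cov.trans = c("Gender", "Channel"))
## End(Not run)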
apparelTrans Apparel Retailer Dataset
Description
This is a simulated dataset containing the entire purchase history of customers who made their first
purchase at an apparel retailer on January 3rd 2005. In total the dataset contains 250 customers who
made 3648 transactions between January 2005 and mid July 2006.
Usage
data("apparelTrans")
Format
A data.table with 2353 rows and 3 variables:
Id Customer Id
Date Date of purchase
Price Price of purchase
as.clv.data Coerce to clv.data object
Description
Functions to coerce transaction data to a clv.data object.
Usage
as.clv.data(
x,
date.format = "ymd",
time.unit = "weeks",
estimation.split = NULL,
name.id = "Id",
name.date = "Date",
name.price = "Price",
...
)
## S3 method for class 'data.frame'
as.clv.data(
x,
date.format = "ymd",
time.unit = "weeks",
estimation.split = NULL,
name.id = "Id",
name.date = "Date",
name.price = "Price",
...
)
## S3 method for class 'data.table'
as.clv.data(
x,
date.format = "ymd",
time.unit = "weeks",
estimation.split = NULL,
name.id = "Id",
name.date = "Date",
name.price = "Price",
...
)
Arguments
x Transaction data.
date.format Character string that indicates the format of the date variable in the data used.
See details.
time.unit What time unit defines a period. May be abbreviated, capitalization is ignored.
See details.
estimation.split
Indicates the length of the estimation period. See details.
name.id Column name of the customer id in x.
name.date Column name of the transaction date in x.
name.price Column name of price in x. NULL if no spending data is present.
... Ignored
Details
See section "Details" of clvdata for more details on parameters and usage.
Examples
data(cdnow)
# Turn data.table of transaction data into a clv.data object,
# using default date format and column names but no holdout period
clv.cdnow <- as.clv.data(cdnow)
as.data.frame.clv.data
Coerce to a Data Frame
Description
Extract a copy of the transaction data stored in the given clv.data object into a data.frame.
Usage
## S3 method for class 'clv.data'
as.data.frame(
x,
row.names = NULL,
optional = NULL,
Ids = NULL,
sample = c("full", "estimation", "holdout"),
...
)
Arguments
x An object of class clv.data.
row.names Ignored
optional Ignored
Ids Character vector of customer ids for which transactions should be extracted.
NULL extracts all.
sample Name of sample for which transactions should be extracted, either "estimation",
"holdout", or "full" (default).
... Ignored
Value
A data.frame with columns Id, Date, and Price (if present).
Examples
data("cdnow")
clv.data.cdnow <- clvdata(data.transactions = cdnow,
date.format="ymd",
time.unit = "w",
estimation.split = 37)
# Extract all transaction data (all Ids, estimation and holdout period)
df.trans <- as.data.frame(clv.data.cdnow)
# Extract transaction data of estimation period
df.trans <- as.data.frame(clv.data.cdnow, sample="estimation")
# Extract transaction data of Ids "1", "2", and "999"
# (estimation and holdout period)
df.trans <- as.data.frame(clv.data.cdnow, Ids = c("1", "2", "999"))
# Extract transaction data of Ids "1", "2", and "999" in estimation period
df.trans <- as.data.frame(clv.data.cdnow, Ids = c("1", "2", "999"),
sample="estimation")
as.data.table.clv.data
Coerce to a Data Table
Description
Extract a copy of the transaction data stored in the given clv.data object into a data.table.
Usage
## S3 method for class 'clv.data'
as.data.table(
x,
keep.rownames = FALSE,
Ids = NULL,
sample = c("full", "estimation", "holdout"),
...
)
Arguments
x An object of class clv.data.
keep.rownames Ignored
Ids Character vector of customer ids for which transactions should be extracted.
NULL extracts all.
sample Name of sample for which transactions should be extracted, either "estimation",
"holdout", or "full" (default).
... Ignored
Value
A data.table with columns Id, Date, and Price (if present).
Examples
library(data.table)
data("cdnow")
clv.data.cdnow <- clvdata(data.transactions = cdnow,
date.format="ymd",
time.unit = "w",
estimation.split = 37)
# Extract all transaction data (all Ids, estimation and holdout period)
dt.trans <- as.data.table(clv.data.cdnow)
# Extract transaction data of estimation period
dt.trans <- as.data.table(clv.data.cdnow, sample="estimation")
# Extract transaction data of Ids "1", "2", and "999"
# (estimation and holdout period)
dt.trans <- as.data.table(clv.data.cdnow, Ids = c("1", "2", "999"))
# Extract transaction data of Ids "1", "2", and "999" in estimation period
dt.trans <- as.data.table(clv.data.cdnow, Ids = c("1", "2", "999"),
sample="estimation")
bgbb BG/BB models - Work In Progress
Description
Fits BG/BB models on transactional data without and with static covariates. Not yet implemented.
Usage
## S4 method for signature 'clv.data'
bgbb(
clv.data,
start.params.model = c(),
optimx.args = list(),
verbose = TRUE,
...
)
## S4 method for signature 'clv.data.static.covariates'
bgbb(
clv.data,
start.params.model = c(),
optimx.args = list(),
verbose = TRUE,
names.cov.life = c(),
names.cov.trans = c(),
start.params.life = c(),
start.params.trans = c(),
names.cov.constr = c(),
start.params.constr = c(),
reg.lambdas = c(),
...
)
## S4 method for signature 'clv.data.dynamic.covariates'
bgbb(
clv.data,
start.params.model = c(),
optimx.args = list(),
verbose = TRUE,
names.cov.life = c(),
names.cov.trans = c(),
start.params.life = c(),
start.params.trans = c(),
names.cov.constr = c(),
start.params.constr = c(),
reg.lambdas = c(),
...
)
Arguments
clv.data The data object on which the model is fitted.
start.params.model
Named start parameters containing the optimization start parameters for the
model without covariates.
optimx.args Additional arguments to control the optimization which are forwarded to optimx::optimx.
If multiple optimization methods are specified, only the result of the last method
is further processed.
verbose Show details about the running of the function.
... Ignored
names.cov.life Which of the set Lifetime covariates should be used. Missing parameter indi-
cates all covariates shall be used.
names.cov.trans
Which of the set Transaction covariates should be used. Missing parameter
indicates all covariates shall be used.
start.params.life
Named start parameters containing the optimization start parameters for all life-
time covariates.
start.params.trans
Named start parameters containing the optimization start parameters for all trans-
action covariates.
names.cov.constr
Which covariates should be forced to use the same parameters for the lifetime
and transaction process. The covariates need to be present as both, lifetime and
transaction covariates.
start.params.constr
Named start parameters containing the optimization start parameters for the con-
straint covariates.
reg.lambdas Named lambda parameters used for the L2 regularization of the lifetime and the
transaction covariate parameters. Lambdas have to be >= 0.
Value
No value is returned.
bgnbd BG/NBD models
Description
Fits BG/NBD models on transactional data without and with static covariates.
Usage
## S4 method for signature 'clv.data'
bgnbd(
clv.data,
start.params.model = c(),
optimx.args = list(),
verbose = TRUE,
...
)
## S4 method for signature 'clv.data.static.covariates'
bgnbd(
clv.data,
start.params.model = c(),
optimx.args = list(),
verbose = TRUE,
names.cov.life = c(),
names.cov.trans = c(),
start.params.life = c(),
start.params.trans = c(),
names.cov.constr = c(),
start.params.constr = c(),
reg.lambdas = c(),
...
)
Arguments
clv.data The data object on which the model is fitted.
start.params.model
Named start parameters containing the optimization start parameters for the
model without covariates.
optimx.args Additional arguments to control the optimization which are forwarded to optimx::optimx.
If multiple optimization methods are specified, only the result of the last method
is further processed.
verbose Show details about the running of the function.
... Ignored
names.cov.life Which of the set Lifetime covariates should be used. Missing parameter indi-
cates all covariates shall be used.
names.cov.trans
Which of the set Transaction covariates should be used. Missing parameter
indicates all covariates shall be used.
start.params.life
Named start parameters containing the optimization start parameters for all life-
time covariates.
start.params.trans
Named start parameters containing the optimization start parameters for all trans-
action covariates.
names.cov.constr
Which covariates should be forced to use the same parameters for the lifetime
and transaction process. The covariates need to be present as both, lifetime and
transaction covariates.
start.params.constr
Named start parameters containing the optimization start parameters for the con-
straint covariates.
reg.lambdas Named lambda parameters used for the L2 regularization of the lifetime and the
transaction covariate parameters. Lambdas have to be >= 0.
Details
Model parameters for the BG/NBD model are r, alpha, a, and b.
r: shape parameter of the Gamma distribution of the purchase process.
alpha: scale parameter of the Gamma distribution of the purchase process.
a: shape parameter of the Beta distribution of the dropout process.
b: shape parameter of the Beta distribution of the dropout process.
If no start parameters are given, r = 1, alpha = 3, a = 1, b = 3 is used. All model start parameters are
required to be > 0. If no start values are given for the covariate parameters, 0.1 is used.
Note that the DERT expression has not been derived (yet) and it consequently is not possible to
calculate values for DERT and CLV.
The BG/NBD model: The BG/NBD is an "easy" alternative to the Pareto/NBD model that is
easier to implement. The BG/NBD model slightly adapts the behavioral "story" associated with
the Pareto/NBD model in order to simplify the implementation. The BG/NBD model uses
beta-geometric and exponential gamma mixture distributions to model customer behavior. The
key difference to the Pareto/NBD model is that a customer can only churn right after a transaction.
This simplifies computations significantly, but it has the drawback that a customer cannot
churn until he/she makes a transaction. The Pareto/NBD model assumes that a customer can churn
at any time.
BG/NBD model with static covariates: The standard BG/NBD model captures heterogeneity
solely through the Gamma and Beta mixing distributions. However, exogenous knowledge, such as
customer demographics, is often available. This supplementary knowledge may explain part of the
heterogeneity among the customers and therefore increase the predictive accuracy of the model.
In addition, we can rely on these parameter estimates for inference, i.e. to identify and quantify the
effects of contextual factors on the two underlying purchase and attrition processes. For technical
details we refer to the technical note by Fader and Hardie (2007).
The likelihood function is the likelihood function associated with the basic model where alpha,
a, and b are replaced with alpha = alpha_0*exp(-g1z1), a = a_0*exp(g2z2), and b = b_0*exp(g3z3)
while r remains unchanged. Note that in the current implementation, we constrain the covariate
parameters and data for the lifetime process to be equal (g2 = g3 and z2 = z3).
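As a purely illustrative sketch (not the package's internal code), the individual-level parameters
implied by this specification can be computed as follows; the parameter values, coefficient vectors,
and covariate matrices below are hypothetical:
# Illustrative only: individual-level BG/NBD parameters under static covariates
alpha0 <- 5; a0 <- 0.8; b0 <- 2.5                        # hypothetical model parameters (r unchanged)
z.trans <- cbind(Gender = c(1, 0), Channel = c(0, 1))    # hypothetical transaction covariates
z.life  <- z.trans                                       # lifetime covariates (z2 = z3)
g.trans <- c(Gender = 0.2, Channel = -0.1)               # hypothetical transaction coefficients
g.life  <- c(Gender = 0.3, Channel = 0.1)                # hypothetical lifetime coefficients (g2 = g3)
alpha.i <- alpha0 * exp(-as.vector(z.trans %*% g.trans)) # per-customer alpha
a.i     <- a0 * exp(as.vector(z.life %*% g.life))        # per-customer a
b.i     <- b0 * exp(as.vector(z.life %*% g.life))        # per-customer b, same coefficients and data as a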
Value
Depending on the data object on which the model was fit, bgnbd returns either an object of class
clv.bgnbd or clv.bgnbd.static.cov.
The function summary can be used to obtain and print a summary of the results. The generic accessor
functions coefficients, vcov, fitted, logLik, AIC, BIC, and nobs are available.
References
Fader PS, Hardie BGS, Lee KL (2005). ““Counting Your Customers” the Easy Way: An Alternative
to the Pareto/NBD Model” Marketing Science, 24(2), 275-284.
Fader PS, Hardie BGS (2013). “Overcoming the BG/NBD Model’s #NUM! Error Problem” URL
http://brucehardie.com/notes/027/bgnbd_num_error.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and
BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS, Lee KL (2007). “Creating a Fit Histogram for the BG/NBD Model” URL
https://www.brucehardie.com/notes/014/bgnbd_fit_histogram.pdf
See Also
clvdata to create a clv data object, SetStaticCovariates to add static covariates to an existing
clv data object.
gg to fit customer’s average spending per transaction with the Gamma-Gamma model
predict to predict expected transactions, probability of being alive, and customer lifetime value
for every customer
plot to plot the unconditional expectation as predicted by the fitted model
pmf for the probability to make exactly x transactions in the estimation period, given by the proba-
bility mass function (PMF).
The generic functions vcov, summary, fitted.
Examples
data("apparelTrans")
clv.data.apparel <- clvdata(apparelTrans, date.format = "ymd",
time.unit = "w", estimation.split = 40)
# Fit standard bgnbd model
bgnbd(clv.data.apparel)
# Give initial guesses for the model parameters
bgnbd(clv.data.apparel,
start.params.model = c(r=0.5, alpha=15, a = 2, b=5))
# pass additional parameters to the optimizer (optimx)
# Use Nelder-Mead as optimization method and print
# detailed information about the optimization process
apparel.bgnbd <- bgnbd(clv.data.apparel,
optimx.args = list(method="Nelder-Mead",
control=list(trace=6)))
# estimated coefs
coef(apparel.bgnbd)
# summary of the fitted model
summary(apparel.bgnbd)
# predict CLV etc for holdout period
predict(apparel.bgnbd)
# predict CLV etc for the next 15 periods
predict(apparel.bgnbd, prediction.end = 15)
# To estimate the bgnbd model with static covariates,
# add static covariates to the data
data("apparelStaticCov")
clv.data.static.cov <-
SetStaticCovariates(clv.data.apparel,
data.cov.life = apparelStaticCov,
names.cov.life = c("Gender", "Channel"),
data.cov.trans = apparelStaticCov,
names.cov.trans = c("Gender", "Channel"))
# Fit bgnbd with static covariates
bgnbd(clv.data.static.cov)
# Give initial guesses for both covariate parameters
bgnbd(clv.data.static.cov, start.params.trans = c(Gender=0.75, Channel=0.7),
start.params.life = c(Gender=0.5, Channel=0.5))
# Use regularization
bgnbd(clv.data.static.cov, reg.lambdas = c(trans = 5, life=5))
# Force the same coefficient to be used for both covariates
bgnbd(clv.data.static.cov, names.cov.constr = "Gender",
start.params.constr = c(Gender=0.5))
# Fit model only with the Channel covariate for life but
# keep all trans covariates as is
bgnbd(clv.data.static.cov, names.cov.life = c("Channel"))
bgnbd_CET BG/NBD: Conditional Expected Transactions
Description
Calculates the expected number of transactions in a given time period based on a customer’s past
transaction behavior and the BG/NBD model parameters.
• bgnbd_nocov_CET Conditional Expected Transactions without covariates
• bgnbd_staticcov_CET Conditional Expected Transactions with static covariates
Usage
bgnbd_nocov_CET(r, alpha, a, b, dPeriods, vX, vT_x, vT_cal)
bgnbd_staticcov_CET(
r,
alpha,
a,
b,
dPeriods,
vX,
vT_x,
vT_cal,
vCovParams_trans,
vCovParams_life,
mCov_trans,
mCov_life
)
Arguments
r shape parameter of the Gamma distribution of the purchase process
alpha scale parameter of the Gamma distribution of the purchase process
a shape parameter of the Beta distribution of the lifetime process
b shape parameter of the Beta distribution of the lifetime process
dPeriods number of periods to predict
vX Frequency vector of length n counting the numbers of purchases.
vT_x Recency vector of length n.
vT_cal Vector of length n indicating the total number of periods of observation.
vCovParams_trans
Vector of estimated parameters for the transaction covariates.
vCovParams_life
Vector of estimated parameters for the lifetime covariates.
mCov_trans Matrix containing the covariates data affecting the transaction process. One
column for each covariate.
mCov_life Matrix containing the covariates data affecting the lifetime process. One column
for each covariate.
Details
mCov_trans is a matrix containing the covariates data of the time-invariant covariates that affect
the transaction process. Each column represents a different covariate. For every column, a gamma
parameter needs to be added to vCovParams_trans at the respective position.
mCov_life is a matrix containing the covariates data of the time-invariant covariates that affect
the lifetime process. Each column represents a different covariate. For every column, a gamma
parameter needs to be added to vCovParams_life at the respective position.
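A minimal illustrative call with hypothetical parameter values and made-up customer summary
statistics (not real estimates) could look as follows:
# Illustrative only: hypothetical parameters and per-customer summary statistics
vX     <- c(0, 2, 5)      # number of repeat purchases per customer
vT_x   <- c(0, 12.5, 30)  # recency per customer
vT_cal <- c(32, 32, 32)   # periods of observation per customer
bgnbd_nocov_CET(r = 0.5, alpha = 5, a = 0.8, b = 2.5,
                dPeriods = 10, vX = vX, vT_x = vT_x, vT_cal = vT_cal)
# With static covariates: one gamma parameter per covariate column
mCov <- cbind(Gender = c(1, 0, 1), Channel = c(0, 1, 1))
bgnbd_staticcov_CET(r = 0.5, alpha = 5, a = 0.8, b = 2.5,
                    dPeriods = 10, vX = vX, vT_x = vT_x, vT_cal = vT_cal,
                    vCovParams_trans = c(0.2, -0.1), vCovParams_life = c(0.3, 0.1),
                    mCov_trans = mCov, mCov_life = mCov)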
Value
Returns a vector containing the conditional expected transactions for the existing customers in the
BG/NBD model.
References
Fader PS, Hardie BGS, Lee KL (2005). ““Counting Your Customers” the Easy Way: An Alternative
to the Pareto/NBD Model” Marketing Science, 24(2), 275-284.
Fader PS, Hardie BGS (2013). “Overcoming the BG/NBD Model’s #NUM! Error Problem” URL
http://brucehardie.com/notes/027/bgnbd_num_error.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and
BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS, Lee KL (2007). “Creating a Fit Histogram for the BG/NBD Model” URL
https://www.brucehardie.com/notes/014/bgnbd_fit_histogram.pdf
bgnbd_expectation BG/NBD: Unconditional Expectation
Description
Computes the expected number of repeat transactions in the interval (0, vT_i] for a randomly se-
lected customer, where 0 is defined as the point when the customer came alive.
Usage
bgnbd_nocov_expectation(r, alpha, a, b, vT_i)
bgnbd_staticcov_expectation(r, vAlpha_i, vA_i, vB_i, vT_i)
Arguments
r shape parameter of the Gamma distribution of the purchase process
alpha scale parameter of the Gamma distribution of the purchase process
a shape parameter of the Beta distribution of the lifetime process
b shape parameter of the Beta distribution of the lifetime process
vT_i Number of periods since the customer came alive
vAlpha_i Vector of individual parameters alpha
vA_i Vector of individual parameters a
vB_i Vector of individual parameters b
Value
Returns the expected transaction values according to the chosen model.
References
Fader PS, Hardie BGS, Lee KL (2005). ““Counting Your Customers” the Easy Way: An Alternative
to the Pareto/NBD Model” Marketing Science, 24(2), 275-284.
Fader PS, Hardie BGS (2013). “Overcoming the BG/NBD Model’s #NUM! Error Problem” URL
http://brucehardie.com/notes/027/bgnbd_num_error.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and
BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS, Lee KL (2007). “Creating a Fit Histogram for the BG/NBD Model” URL
https://www.brucehardie.com/notes/014/bgnbd_fit_histogram.pdf
bgnbd_LL BG/NBD: Log-Likelihood functions
Description
Calculates the Log-Likelihood values for the BG/NBD model with and without covariates.
The function bgnbd_nocov_LL_ind calculates the individual log-likelihood values for each cus-
tomer for the given parameters.
The function bgnbd_nocov_LL_sum calculates the log-likelihood value summed across customers
for the given parameters.
The function bgnbd_staticcov_LL_ind calculates the individual log-likelihood values for each
customer for the given parameters and covariates.
The function bgnbd_staticcov_LL_sum calculates the log-likelihood value summed across cus-
tomers for the given parameters and covariates.
Usage
bgnbd_nocov_LL_ind(vLogparams, vX, vT_x, vT_cal)
bgnbd_nocov_LL_sum(vLogparams, vX, vT_x, vT_cal)
bgnbd_staticcov_LL_ind(vParams, vX, vT_x, vT_cal, mCov_life, mCov_trans)
bgnbd_staticcov_LL_sum(vParams, vX, vT_x, vT_cal, mCov_life, mCov_trans)
Arguments
vLogparams vector with the BG/NBD model parameters at log scale. See Details.
vX Frequency vector of length n counting the numbers of purchases.
vT_x Recency vector of length n.
vT_cal Vector of length n indicating the total number of periods of observation.
vParams vector with the parameters for the BG/NBD model at log scale and the static
covariates at original scale. See Details.
mCov_life Matrix containing the covariates data affecting the lifetime process. One column
for each covariate.
mCov_trans Matrix containing the covariates data affecting the transaction process. One
column for each covariate.
Details
vLogparams is a vector with model parameters r, alpha_0, a, b at log-scale, in this order.
vParams is a vector with the BG/NBD model parameters at log scale, followed by the parameters
for the lifetime covariates at original scale, and then the parameters for the transaction covariates
at original scale.
mCov_trans is a matrix containing the covariates data of the time-invariant covariates that affect
the transaction process. Each column represents a different covariate. For every column, a gamma
parameter needs to be added to vParams at the respective position.
mCov_life is a matrix containing the covariates data of the time-invariant covariates that affect
the lifetime process. Each column represents a different covariate. For every column, a gamma
parameter needs to be added to vParams at the respective position.
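The following sketch shows how the parameter vectors described above would be assembled and
passed; all values are hypothetical and purely illustrative:
# Illustrative only: assembling the parameter vectors with hypothetical values
vX <- c(0, 2, 5); vT_x <- c(0, 12.5, 30); vT_cal <- c(32, 32, 32)
vLogparams <- log(c(r = 0.5, alpha = 5, a = 0.8, b = 2.5))   # model parameters at log scale
bgnbd_nocov_LL_sum(vLogparams = vLogparams, vX = vX, vT_x = vT_x, vT_cal = vT_cal)
# With covariates: log-scale model parameters, then lifetime, then transaction coefficients
mCov <- cbind(Gender = c(1, 0, 1))
vParams <- c(vLogparams, 0.3, 0.2)  # one lifetime and one transaction coefficient (original scale)
bgnbd_staticcov_LL_sum(vParams = vParams, vX = vX, vT_x = vT_x, vT_cal = vT_cal,
                       mCov_life = mCov, mCov_trans = mCov)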
Value
Returns the respective Log-Likelihood value(s) for the BG/NBD model with or without covariates.
References
Fader PS, Hardie BGS, Lee KL (2005). ““Counting Your Customers” the Easy Way: An Alternative
to the Pareto/NBD Model” Marketing Science, 24(2), 275-284.
Fader PS, Hardie BGS (2013). “Overcoming the BG/NBD Model’s #NUM! Error Problem” URL
http://brucehardie.com/notes/027/bgnbd_num_error.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and
BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS, Lee KL (2007). “Creating a Fit Histogram for the BG/NBD Model” URL
https://www.brucehardie.com/notes/014/bgnbd_fit_histogram.pdf
bgnbd_PAlive BG/NBD: Probability of Being Alive
Description
Calculates the probability of a customer being alive at the end of the calibration period, based on a
customer’s past transaction behavior and the BG/NBD model parameters.
• bgnbd_nocov_PAlive P(alive) for the BG/NBD model without covariates
• bgnbd_staticcov_PAlive P(alive) for the BG/NBD model with static covariates
Usage
bgnbd_nocov_PAlive(r, alpha, a, b, vX, vT_x, vT_cal)
bgnbd_staticcov_PAlive(
r,
alpha,
a,
b,
vX,
vT_x,
vT_cal,
vCovParams_trans,
vCovParams_life,
mCov_trans,
mCov_life
)
Arguments
r shape parameter of the Gamma distribution of the purchase process
alpha scale parameter of the Gamma distribution of the purchase process
a shape parameter of the Beta distribution of the lifetime process
b shape parameter of the Beta distribution of the lifetime process
vX Frequency vector of length n counting the numbers of purchases.
vT_x Recency vector of length n.
vT_cal Vector of length n indicating the total number of periods of observation.
vCovParams_trans
Vector of estimated parameters for the transaction covariates.
vCovParams_life
Vector of estimated parameters for the lifetime covariates.
mCov_trans Matrix containing the covariates data affecting the transaction process. One
column for each covariate.
mCov_life Matrix containing the covariates data affecting the lifetime process. One column
for each covariate.
Details
mCov_trans is a matrix containing the covariates data of the time-invariant covariates that affect
the transaction process. Each column represents a different covariate. For every column, a gamma
parameter needs to be added to vCovParams_trans at the respective position.
mCov_life is a matrix containing the covariates data of the time-invariant covariates that affect
the lifetime process. Each column represents a different covariate. For every column, a gamma
parameter needs to be added to vCovParams_life at the respective position.
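A minimal illustrative call without covariates, using hypothetical parameters and made-up summary
statistics:
# Illustrative only: P(alive) for three hypothetical customers
bgnbd_nocov_PAlive(r = 0.5, alpha = 5, a = 0.8, b = 2.5,
                   vX = c(0, 2, 5), vT_x = c(0, 12.5, 30), vT_cal = c(32, 32, 32))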
Value
Returns a vector with the PAlive for each customer.
References
Fader PS, Hardie BGS, Lee KL (2005). ““Counting Your Customers” the Easy Way: An Alternative
to the Pareto/NBD Model” Marketing Science, 24(2), 275-284.
Fader PS, Hardie BGS (2013). “Overcoming the BG/NBD Model’s #NUM! Error Problem” URL
http://brucehardie.com/notes/027/bgnbd_num_error.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and
BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS, Lee KL (2007). “Creating a Fit Histogram for the BG/NBD Model” URL
https://www.brucehardie.com/notes/014/bgnbd_fit_histogram.pdf
bgnbd_pmf BG/NBD: Probability Mass Function (PMF)
Description
Calculate P(X(t)=x), the probability that a randomly selected customer makes exactly x transactions
in the interval (0, t].
Usage
bgnbd_nocov_PMF(r, alpha, a, b, x, vT_i)
bgnbd_staticcov_PMF(r, x, vAlpha_i, vA_i, vB_i, vT_i)
Arguments
r shape parameter of the Gamma distribution of the purchase process
alpha scale parameter of the Gamma distribution of the purchase process
a shape parameter of the Beta distribution of the lifetime process
b shape parameter of the Beta distribution of the lifetime process
x The number of transactions to calculate the probability for (unsigned integer).
vT_i Number of periods since the customer came alive.
vAlpha_i Vector of individual parameters alpha
vA_i Vector of individual parameters a
vB_i Vector of individual parameters b
Value
Returns a vector of probabilities.
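A minimal illustrative call with hypothetical parameter values:
# Illustrative only: probability of exactly 2 repeat transactions by t = 20 and t = 30 periods
bgnbd_nocov_PMF(r = 0.5, alpha = 5, a = 0.8, b = 2.5, x = 2, vT_i = c(20, 30))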
References
Fader PS, Hardie BGS, Lee KL (2005). ““Counting Your Customers” the Easy Way: An Alternative
to the Pareto/NBD Model” Marketing Science, 24(2), 275-284.
Fader PS, Hardie BGS (2013). “Overcoming the BG/NBD Model’s #NUM! Error Problem” URL
http://brucehardie.com/notes/027/bgnbd_num_error.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and
BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS, Lee KL (2007). “Creating a Fit Histogram for the BG/NBD Model” URL
https://www.brucehardie.com/notes/014/bgnbd_fit_histogram.pdf
cdnow CDNOW dataset
Description
A dataset containing the entire purchase history up to the end of June 1998 of the cohort of 23,570
individuals who made their first-ever purchase at CDNOW in the first quarter of 1997.
Usage
data("cdnow")
Format
A data.table with 6696 rows and 4 variables:
Id Customer Id
Date Date of purchase
CDs Amount of CDs purchased
Price Price of purchase
References
Fader, Peter S. and Bruce G.S. Hardie (2001), "Forecasting Repeat Sales at CDNOW: A Case
Study," Interfaces, 31 (May-June), Part 2 of 2, 94-107.
clvdata Create an object for transactional data required to estimate CLV
Description
Creates a data object that contains the prepared transaction data and that is used as input for model
fitting. The transaction data may be split in an estimation and holdout sample if desired. The model
then will only be fit on the estimation sample.
If covariates should be used when fitting a model, covariate data can be added to an object returned
from this function.
Usage
clvdata(
data.transactions,
date.format,
time.unit,
estimation.split = NULL,
name.id = "Id",
name.date = "Date",
name.price = "Price"
)
Arguments
data.transactions
Transaction data as data.frame or data.table. See details.
date.format Character string that indicates the format of the date variable in the data used.
See details.
time.unit What time unit defines a period. May be abbreviated, capitalization is ignored.
See details.
estimation.split
Indicates the length of the estimation period. See details.
name.id Column name of the customer id in data.transactions.
name.date Column name of the transaction date in data.transactions.
name.price Column name of price in data.transactions. NULL if no spending data is
present.
Details
data.transactions A data.frame or data.table with customers’ purchase history. Every trans-
action record consists of a purchase date and a customer id. Optionally, the price of the transaction
may be included to also allow for prediction of future customer spending.
time.unit The definition of a single period. Currently available are "hours", "days", "weeks",
and "years". May be abbreviated.
date.format A single format to use when parsing any date that is given as character input. This
includes the dates given in data.transaction, estimation.split, or as an input to any other
function at a later point, such as prediction.end in predict. The function parse_date_time of
package lubridate is used to parse inputs and hence all formats it accepts in argument orders
can be used. For example, a date of format "year-month-day" (i.e., "2010-06-17") is indicated with
"ymd". Other combinations such as "dmy", "dym", "ymd HMS", or "HMS dmy" are possible as well.
estimation.split May be specified as either the number of periods since the first transaction or
the timepoint (either as character, Date, or POSIXct) at which the estimation period ends. The
indicated timepoint itself will be part of the estimation sample. If no value is provided or it is set to
NULL, the whole dataset will be used for fitting the model (no holdout sample).
Aggregation of Transactions:
Multiple transactions by the same customer that occur on the minimally representable temporal
resolution are aggregated to a single transaction with their spending summed. For time units days
and any other coarser Date-based time units (i.e. weeks, years), this means that transactions
on the same day are combined. When using finer time units such as hours which are based on
POSIXct, transactions on the same second are aggregated.
For the definition of repeat-purchases, combined transactions are viewed as a single transaction.
Hence, repeat-transactions are determined from the aggregated transactions.
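As a purely illustrative sketch of this aggregation rule (not the package's internal code), same-day
transactions of a customer could be combined as follows for daily or coarser time units; the small
data.table below is made up:
# Illustrative only: combine same-day transactions per customer and sum their spending
library(data.table)
dt.trans <- data.table(Id    = c("c1", "c1", "c1", "c2"),
                       Date  = as.Date(c("2005-01-03", "2005-01-03", "2005-02-10", "2005-01-05")),
                       Price = c(10, 5, 20, 7))
dt.trans[, .(Price = sum(Price)), by = .(Id, Date)]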
Value
An object of class clv.data. See the class definition clv.data for more details about the returned
object.
The function summary can be used to obtain and print a summary of the data. The generic accessor
function nobs is available to read out the number of customers.
See Also
SetStaticCovariates to add static covariates
SetDynamicCovariates for how to add dynamic covariates
plot to plot the repeat transactions
summary to summarize the transaction data
pnbd to fit Pareto/NBD models on a clv.data object
Examples
data("cdnow")
# create clv data object with weekly periods
# and no splitting
clv.data.cdnow <- clvdata(data.transactions = cdnow,
date.format="ymd",
time.unit = "weeks")
# same but split after 37 periods
clv.data.cdnow <- clvdata(data.transactions = cdnow,
date.format="ymd",
time.unit = "w",
estimation.split = 37)
# same but estimation end on the 15th Oct 1997
clv.data.cdnow <- clvdata(data.transactions = cdnow,
date.format="ymd",
time.unit = "w",
estimation.split = "1997-10-15")
# summary of the transaction data
summary(clv.data.cdnow)
# plot the total number of transactions per period
plot(clv.data.cdnow)
# create data with the weekly periods defined to
# start on Mondays
## Not run:
# set start of week to Monday
oldopts <- options("lubridate.week.start"=1)
# create clv.data while Monday is the beginning of the week
clv.data.cdnow <- clvdata(data.transactions = cdnow,
date.format="ymd",
time.unit = "weeks")
# Dynamic covariates now have to be supplied for every Monday
# set week start to what it was before
options(oldopts)
## End(Not run)
fitted.clv.fitted Extract Unconditional Expectation
Description
Extract the unconditional expectation (future transactions unconditional on being "alive") from a
fitted clv model. This is the unconditional expectation data that is used when plotting the fitted
model.
Usage
## S3 method for class 'clv.fitted'
fitted(object, prediction.end = NULL, verbose = FALSE, ...)
Arguments
object A fitted clv model for which the unconditional expectation is desired.
prediction.end Until what point in time to predict. This can be the number of periods (numeric)
or a form of date/time object. See details.
verbose Show details about the running of the function.
... Ignored
Details
prediction.end indicates until when to predict or plot and can be given as either a point in time (of
class Date, POSIXct, or character) or the number of periods. If prediction.end is of class char-
acter, the date/time format set when creating the data object is used for parsing. If prediction.end
is the number of periods, the end of the fitting period serves as the reference point from which pe-
riods are counted. Only full periods may be specified. If prediction.end is omitted or NULL, it
defaults to the end of the holdout period if present and to the end of the estimation period otherwise.
The first prediction period is defined to start right after the end of the estimation period. If for
example weekly time units are used and the estimation period ends on Sunday 2019-01-01, then
the first day of the first prediction period is Monday 2019-01-02. Each prediction period includes a
total of 7 days and the first prediction period therefore will end on, and include, Sunday 2019-01-
08. Subsequent prediction periods again start on Mondays and end on Sundays. If prediction.end
indicates a timepoint on which to end, this timepoint is included in the prediction period.
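A brief usage sketch based on the CDNOW data used elsewhere in this manual; the model choice
and the split point are arbitrary and purely illustrative:
data("cdnow")
clv.cdnow <- clvdata(cdnow, date.format = "ymd", time.unit = "w", estimation.split = 37)
est.pnbd <- pnbd(clv.cdnow)
# Unconditional expectation per period until the end of the holdout period
fitted(est.pnbd)
# Until 15 periods after the end of the estimation period
fitted(est.pnbd, prediction.end = 15)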
Value
A data.table which contains the following columns:
period.until The timepoint that marks the end (up until and including) of the period to which
the data in this row refers.
period.num The number of this period.
expectation The value of the unconditional expectation for the period that ends on period.until.
See Also
plot to plot the unconditional expectation
gg Gamma/Gamma Spending model
Description
Fits the Gamma-Gamma model on a given object of class clv.data to predict customers’ mean
spending per transaction.
Usage
## S4 method for signature 'clv.data'
gg(
clv.data,
start.params.model = c(),
optimx.args = list(),
remove.first.transaction = TRUE,
verbose = TRUE,
...
)
Arguments
clv.data The data object on which the model is fitted.
start.params.model
Named start parameters containing the optimization start parameters for the
model without covariates.
optimx.args Additional arguments to control the optimization which are forwarded to optimx::optimx.
If multiple optimization methods are specified, only the result of the last method
is further processed.
remove.first.transaction
Whether customers' first transactions are removed. If TRUE, all zero-repeaters
are excluded from model fitting.
verbose Show details about the running of the function.
... Ignored
Details
Model parameters for the G/G model are p, q, and gamma.
p: shape parameter of the Gamma distribution of the spending process.
q: shape parameter of the Gamma distribution to account for customer heterogeneity.
gamma: scale parameter of the Gamma distribution to account for customer heterogeneity.
If no start parameters are given, 1.0 is used for all model parameters. All parameters are required to
be > 0.
The Gamma-Gamma model cannot be estimated for data that contains negative prices. Customers
with a mean spending of zero or a transaction count of zero are ignored during model fitting.
The G/G model: The G/G model allows predicting the monetary value of future customer trans-
actions. Usually, the G/G model is used in combination with a probabilistic model predicting
customer transactions, such as the Pareto/NBD or the BG/NBD model.
Value
An object of class clv.gg is returned.
The function summary can be used to obtain and print a summary of the results. The generic accessor
functions coefficients, vcov, fitted, logLik, AIC, BIC, and nobs are available.
References
Colombo R, Jiang W (1999). “A stochastic RFM model.” Journal of Interactive Marketing, 13(3),
2-12.
Fader PS, Hardie BGS, Lee KL (2005). “RFM and CLV: Using Iso-Value Curves for Customer Base
Analysis.” Journal of Marketing Research, 42(4), 415-430.
Fader PS, Hardie BGS (2013). “The Gamma-Gamma Model of Monetary Value.” URL
http://www.brucehardie.com/notes/025/gamma_gamma.pdf.
See Also
clvdata to create a clv data object.
plot to plot diagnostics of the transaction data, incl. of spending.
predict to predict expected mean spending for every customer.
plot to plot the density of customer’s mean transaction value compared to the model’s prediction.
Examples
data("apparelTrans")
clv.data.apparel <- clvdata(apparelTrans, date.format = "ymd",
time.unit = "w", estimation.split = 40)
# Fit the gg model
gg(clv.data.apparel)
# Give initial guesses for the model parameters
gg(clv.data.apparel,
start.params.model = c(p=0.5, q=15, gamma=2))
# pass additional parameters to the optimizer (optimx)
# Use Nelder-Mead as optimization method and print
# detailed information about the optimization process
apparel.gg <- gg(clv.data.apparel,
optimx.args = list(method="Nelder-Mead",
control=list(trace=6)))
# estimated coefs
coef(apparel.gg)
# summary of the fitted model
summary(apparel.gg)
# Plot model vs empirical distribution
plot(apparel.gg)
# predict mean spending and compare against
# actuals in the holdout period
predict(apparel.gg)
ggomnbd Gamma-Gompertz/NBD model
Description
Fits Gamma-Gompertz/NBD models on transactional data with static and without covariates.
Usage
## S4 method for signature 'clv.data'
ggomnbd(
clv.data,
start.params.model = c(),
optimx.args = list(),
verbose = TRUE,
...
)
## S4 method for signature 'clv.data.static.covariates'
ggomnbd(
clv.data,
start.params.model = c(),
optimx.args = list(),
verbose = TRUE,
names.cov.life = c(),
names.cov.trans = c(),
start.params.life = c(),
start.params.trans = c(),
names.cov.constr = c(),
start.params.constr = c(),
reg.lambdas = c(),
...
)
Arguments
clv.data The data object on which the model is fitted.
start.params.model
Named start parameters containing the optimization start parameters for the
model without covariates.
optimx.args Additional arguments to control the optimization which are forwarded to optimx::optimx.
If multiple optimization methods are specified, only the result of the last method
is further processed.
verbose Show details about the running of the function.
... Ignored
names.cov.life Which of the set Lifetime covariates should be used. Missing parameter indi-
cates all covariates shall be used.
names.cov.trans
Which of the set Transaction covariates should be used. Missing parameter
indicates all covariates shall be used.
start.params.life
Named start parameters containing the optimization start parameters for all life-
time covariates.
start.params.trans
Named start parameters containing the optimization start parameters for all trans-
action covariates.
names.cov.constr
Which covariates should be forced to use the same parameters for the lifetime
and transaction process. The covariates need to be present as both, lifetime and
transaction covariates.
start.params.constr
Named start parameters containing the optimization start parameters for the con-
straint covariates.
reg.lambdas Named lambda parameters used for the L2 regularization of the lifetime and the
transaction covariate parameters. Lambdas have to be >= 0.
Details
Model parameters for the GGompertz/NBD model are r, alpha, beta, b and s.
r: shape parameter of the Gamma distribution of the purchase process. The smaller r, the stronger
the heterogeneity of the purchase process.
alpha: scale parameter of the Gamma distribution of the purchase process.
beta: scale parameter for the Gamma distribution for the lifetime process.
b: scale parameter of the Gompertz distribution (constant across customers).
s: shape parameter of the Gamma distribution for the lifetime process. The smaller s, the stronger
the heterogeneity of customer lifetimes.
If no start parameters are given, r = 1, alpha = 1, beta = 1, b = 1, s = 1 is used. All model start
parameters are required to be > 0. If no start values are given for the covariate parameters, 0.1 is
used.
Note that the DERT expression has not been derived (yet) and it is consequently not possible to
calculate values for DERT and CLV.
The Gamma-Gompertz/NBD model: There are two key differences of the gamma/Gompertz/NBD
(GGompertz/NBD) model relative to the well-known Pareto/NBD model: (i) its probability density
function can exhibit a mode at zero or an interior mode, and (ii) it can be skewed to the right or to
the left. Therefore, the GGompertz/NBD model is more flexible than the Pareto/NBD model. Ac-
cording to Bemmaor and Glady (2012), it can indicate substantial differences in expected residual
lifetimes compared to the Pareto/NBD. The GGompertz/NBD tends to be appropriate when firms
are reputed and their offerings are differentiated.
Value
Depending on the data object on which the model was fit, ggomnbd returns either an object of class
clv.ggomnbd or clv.ggomnbd.static.cov.
The function summary can be used to obtain and print a summary of the results. The generic accessor
functions coefficients, vcov, fitted, logLik, AIC, BIC, and nobs are available.
References
Bemmaor AC, Glady N (2012). “Modeling Purchasing Behavior with Sudden “Death”: A Flexible
Customer Lifetime Model” Management Science, 58(5), 1012-1021.
See Also
clvdata to create a clv data object, SetStaticCovariates to add static covariates to an existing
clv data object.
gg to fit customer’s average spending per transaction with the Gamma-Gamma model
predict to predict expected transactions, probability of being alive, and customer lifetime value
for every customer
plot to plot the unconditional expectation as predicted by the fitted model
pmf for the probability to make exactly x transactions in the estimation period, given by the proba-
bility mass function (PMF).
The generic functions vcov, summary, fitted.
Examples
data("apparelTrans")
clv.data.apparel <- clvdata(apparelTrans, date.format = "ymd",
time.unit = "w", estimation.split = 40)
# Fit standard ggomnbd model
ggomnbd(clv.data.apparel)
# Give initial guesses for the model parameters
ggomnbd(clv.data.apparel,
start.params.model = c(r=0.5, alpha=15, b=5, beta=10, s=0.5))
# pass additional parameters to the optimizer (optimx)
# Use Nelder-Mead as optimization method and print
# detailed information about the optimization process
apparel.ggomnbd <- ggomnbd(clv.data.apparel,
optimx.args = list(method="Nelder-Mead",
control=list(trace=6)))
# estimated coefs
coef(apparel.ggomnbd)
# summary of the fitted model
summary(apparel.ggomnbd)
# predict CLV etc for holdout period
predict(apparel.ggomnbd)
# predict CLV etc for the next 15 periods
predict(apparel.ggomnbd, prediction.end = 15)
# To estimate the ggomnbd model with static covariates,
# add static covariates to the data
data("apparelStaticCov")
clv.data.static.cov <-
SetStaticCovariates(clv.data.apparel,
data.cov.life = apparelStaticCov,
names.cov.life = c("Gender", "Channel"),
data.cov.trans = apparelStaticCov,
names.cov.trans = c("Gender", "Channel"))
# Fit ggomnbd with static covariates
ggomnbd(clv.data.static.cov)
# Give initial guesses for both covariate parameters
ggomnbd(clv.data.static.cov, start.params.trans = c(Gender=0.75, Channel=0.7),
start.params.life = c(Gender=0.5, Channel=0.5))
# Use regularization
ggomnbd(clv.data.static.cov, reg.lambdas = c(trans = 5, life=5))
# Force the same coefficient to be used for both covariates
ggomnbd(clv.data.static.cov, names.cov.constr = "Gender",
start.params.constr = c(Gender=0.5))
# Fit model only with the Channel covariate for life but
# keep all trans covariates as is
ggomnbd(clv.data.static.cov, names.cov.life = c("Channel"))
ggomnbd_CET GGompertz/NBD: Conditional Expected Transactions
Description
Calculates the expected number of transactions in a given time period based on a customer’s past
transaction behavior and the GGompertz/NBD model parameters.
• ggomnbd_nocov_CET Conditional Expected Transactions without covariates
• ggomnbd_staticcov_CET Conditional Expected Transactions with static covariates
Usage
ggomnbd_nocov_CET(r, alpha_0, b, s, beta_0, dPeriods, vX, vT_x, vT_cal)
ggomnbd_staticcov_CET(
r,
alpha_0,
b,
s,
beta_0,
dPeriods,
vX,
vT_x,
vT_cal,
vCovParams_trans,
vCovParams_life,
mCov_life,
mCov_trans
)
Arguments
r shape parameter of the Gamma distribution of the purchase process. The smaller
r, the stronger the heterogeneity of the purchase process.
alpha_0 scale parameter of the Gamma distribution of the purchase process.
b scale parameter of the Gompertz distribution (constant across customers)
s shape parameter of the Gamma distribution for the lifetime process. The smaller
s, the stronger the heterogeneity of customer lifetimes.
beta_0 scale parameter for the Gamma distribution for the lifetime process
dPeriods number of periods to predict
vX Frequency vector of length n counting the numbers of purchases.
vT_x Recency vector of length n.
vT_cal Vector of length n indicating the total number of periods of observation.
vCovParams_trans
Vector of estimated parameters for the transaction covariates.
vCovParams_life
Vector of estimated parameters for the lifetime covariates.
mCov_life Matrix containing the covariates data affecting the lifetime process. One column
for each covariate.
mCov_trans Matrix containing the covariates data affecting the transaction process. One
column for each covariate.
Details
mCov_trans is a matrix containing the covariates data of the time-invariant covariates that affect
the transaction process. Each column represents a different covariate. For every column, a gamma
parameter needs to be added to vCovParams_trans at the respective position.
mCov_life is a matrix containing the covariates data of the time-invariant covariates that affect
the lifetime process. Each column represents a different covariate. For every column, a gamma
parameter needs to be added to vCovParams_life at the respective position.
Value
Returns a vector containing the conditional expected transactions for the existing customers in the
GGompertz/NBD model.
References
Bemmaor AC, Glady N (2012). “Modeling Purchasing Behavior with Sudden “Death”: A Flexible
Customer Lifetime Model” Management Science, 58(5), 1012-1021.
ggomnbd_expectation GGompertz/NBD: Unconditional Expectation
Description
Computes the expected number of repeat transactions in the interval (0, vT_i] for a randomly se-
lected customer, where 0 is defined as the point when the customer came alive.
Usage
ggomnbd_nocov_expectation(r, alpha_0, b, s, beta_0, vT_i)
ggomnbd_staticcov_expectation(r, b, s, vAlpha_i, vBeta_i, vT_i)
Arguments
r shape parameter of the Gamma distribution of the purchase process. The smaller
r, the stronger the heterogeneity of the purchase process.
alpha_0 scale parameter of the Gamma distribution of the purchase process.
b scale parameter of the Gompertz distribution (constant across customers)
s shape parameter of the Gamma distribution for the lifetime process. The smaller
s, the stronger the heterogeneity of customer lifetimes.
beta_0 scale parameter for the Gamma distribution for the lifetime process
vT_i Number of periods since the customer came alive
vAlpha_i Vector of individual parameters alpha
vBeta_i Vector of individual parameters beta
Value
Returns the expected transaction values according to the chosen model.
References
Bemmaor AC, Glady N (2012). “Modeling Purchasing Behavior with Sudden “Death”: A Flexible
Customer Lifetime Model” Management Science, 58(5), 1012-1021.
ggomnbd_LL GGompertz/NBD: Log-Likelihood functions
Description
Calculates the Log-Likelihood values for the GGompertz/NBD model with and without covariates.
The function ggomnbd_nocov_LL_ind calculates the individual log-likelihood values for each cus-
tomer for the given parameters.
The function ggomnbd_nocov_LL_sum calculates the log-likelihood value summed across customers
for the given parameters.
The function ggomnbd_staticcov_LL_ind calculates the individual log-likelihood values for each
customer for the given parameters and covariates.
The function ggomnbd_staticcov_LL_sum calculates the log-likelihood value summed across cus-
tomers for the given parameters and covariates.
Usage
ggomnbd_nocov_LL_ind(vLogparams, vX, vT_x, vT_cal)
ggomnbd_nocov_LL_sum(vLogparams, vX, vT_x, vT_cal)
ggomnbd_staticcov_LL_ind(vParams, vX, vT_x, vT_cal, mCov_life, mCov_trans)
ggomnbd_staticcov_LL_sum(vParams, vX, vT_x, vT_cal, mCov_life, mCov_trans)
Arguments
vLogparams vector with the GGompertz/NBD model parameters at log scale. See Details.
vX Frequency vector of length n counting the numbers of purchases.
vT_x Recency vector of length n.
vT_cal Vector of length n indicating the total number of periods of observation.
vParams vector with the parameters for the GGompertz/NBD model at log scale and the
static covariates at original scale. See Details.
mCov_life Matrix containing the covariates data affecting the lifetime process. One column
for each covariate.
mCov_trans Matrix containing the covariates data affecting the transaction process. One
column for each covariate.
Details
vLogparams is a vector with model parameters r, alpha_0, b, s, beta_0 at log-scale, in this or-
der.
vParams is a vector with the GGompertz/NBD model parameters at log scale, followed by the
parameters for the lifetime covariates at original scale, and then the parameters for the transaction
covariates at original scale.
mCov_trans is a matrix containing the covariates data of the time-invariant covariates that affect
the transaction process. Each column represents a different covariate. For every column, a gamma
parameter needs to be added to vParams at the respective position.
mCov_life is a matrix containing the covariates data of the time-invariant covariates that affect
the lifetime process. Each column represents a different covariate. For every column, a gamma
parameter needs to be added to vParams at the respective position.
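The following sketch shows the parameter ordering described above; all values are hypothetical
and purely illustrative:
# Illustrative only: hypothetical parameters and per-customer summary statistics
vX <- c(0, 2, 5); vT_x <- c(0, 12.5, 30); vT_cal <- c(32, 32, 32)
vLogparams <- log(c(r = 0.5, alpha_0 = 5, b = 0.05, s = 0.5, beta_0 = 10))  # r, alpha_0, b, s, beta_0
ggomnbd_nocov_LL_sum(vLogparams = vLogparams, vX = vX, vT_x = vT_x, vT_cal = vT_cal)
# With covariates: log-scale model parameters, then lifetime, then transaction coefficients
mCov <- cbind(Gender = c(1, 0, 1))
vParams <- c(vLogparams, 0.3, 0.2)  # one lifetime and one transaction coefficient (original scale)
ggomnbd_staticcov_LL_sum(vParams = vParams, vX = vX, vT_x = vT_x, vT_cal = vT_cal,
                         mCov_life = mCov, mCov_trans = mCov)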
Value
Returns the respective Log-Likelihood value(s) for the GGompertz/NBD model with or without
covariates.
References
Bemmaor AC, Glady N (2012). “Modeling Purchasing Behavior with Sudden “Death”: A Flexible
Customer Lifetime Model” Management Science, 58(5), 1012-1021.
ggomnbd_PAlive GGompertz/NBD: Probability of Being Alive
Description
Calculates the probability of a customer being alive at the end of the calibration period, based on a
customer’s past transaction behavior and the GGompertz/NBD model parameters.
• ggomnbd_nocov_PAlive P(alive) for the GGompertz/NBD model without covariates
• ggomnbd_staticcov_PAlive P(alive) for the GGompertz/NBD model with static covariates
Usage
ggomnbd_staticcov_PAlive(
r,
alpha_0,
b,
s,
beta_0,
vX,
vT_x,
vT_cal,
vCovParams_trans,
vCovParams_life,
mCov_life,
mCov_trans
)
ggomnbd_nocov_PAlive(r, alpha_0, b, s, beta_0, vX, vT_x, vT_cal)
Arguments
r shape parameter of the Gamma distribution of the purchase process. The smaller
r, the stronger the heterogeneity of the purchase process.
alpha_0 scale parameter of the Gamma distribution of the purchase process.
b scale parameter of the Gompertz distribution (constant across customers)
s shape parameter of the Gamma distribution for the lifetime process. The smaller
s, the stronger the heterogeneity of customer lifetimes.
beta_0 scale parameter for the Gamma distribution for the lifetime process
vX Frequency vector of length n counting the numbers of purchases.
vT_x Recency vector of length n.
vT_cal Vector of length n indicating the total number of periods of observation.
vCovParams_trans
Vector of estimated parameters for the transaction covariates.
vCovParams_life
Vector of estimated parameters for the lifetime covariates.
mCov_life Matrix containing the covariates data affecting the lifetime process. One column
for each covariate.
mCov_trans Matrix containing the covariates data affecting the transaction process. One
column for each covariate.
Details
mCov_trans is a matrix containing the covariates data of the time-invariant covariates that affect
the transaction process. Each column represents a different covariate. For every column, a gamma
parameter needs to be added to vCovParams_trans at the respective position.
mCov_life is a matrix containing the covariates data of the time-invariant covariates that affect
the lifetime process. Each column represents a different covariate. For every column, a gamma
parameter needs to be added to vCovParams_life at the respective position.
Value
Returns a vector with the PAlive for each customer.
References
Bemmaor AC, Glady N (2012). “Modeling Purchasing Behavior with Sudden “Death”: A Flexible
Customer Lifetime Model” Management Science, 58(5), 1012-1021.
gg_LL Gamma-Gamma: Log-Likelihood Function
Description
Calculates the Log-Likelihood value for the Gamma-Gamma model.
Usage
gg_LL(vLogparams, vX, vM_x)
Arguments
vLogparams a vector containing the log of the parameters p, q, gamma
vX frequency vector of length n counting the numbers of purchases
vM_x the observed average spending for every customer during the calibration time.
Details
vLogparams is a vector with the parameters for the Gamma-Gamma model. It has three parameters
(p, q, gamma). The scale parameter for each transaction is distributed across customers according
to a gamma distribution with parameters q (shape) and gamma (scale).
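A minimal illustrative call with hypothetical parameter values and made-up customer data:
# Illustrative only: log-scale parameters p, q, gamma and per-customer data
vLogparams <- log(c(p = 2, q = 3, gamma = 10))
gg_LL(vLogparams = vLogparams,
      vX   = c(1, 4, 7),          # number of purchases per customer
      vM_x = c(25.3, 40.0, 33.1)) # observed mean spending per customer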
Value
Returns the Log-Likelihood value for the Gamma-Gamma model.
References
Colombo R, Jiang W (1999). “A stochastic RFM model.” Journal of Interactive Marketing, 13(3),
2-12.
Fader PS, Hardie BGS, Lee KL (2005). “RFM and CLV: Using Iso-Value Curves for Customer Base
Analysis.” Journal of Marketing Research, 42(4), 415-430.
Fader PS, Hardie BGS (2013). “The Gamma-Gamma Model of Monetary Value.” URL
http://www.brucehardie.com/notes/025/gamma_gamma.pdf.
nobs.clv.data Number of observations
Description
The number of observations is defined as the number of unique customers in the transaction data.
Usage
## S3 method for class 'clv.data'
nobs(object, ...)
Arguments
object An object of class clv.data.
... Ignored
Value
The number of customers.
nobs.clv.fitted Number of observations
Description
The number of observations is defined as the number of unique customers for which the model was
fit.
Usage
## S3 method for class 'clv.fitted'
nobs(object, ...)
Arguments
object An object of class clv.fitted.
... Ignored
Value
The number of customers.
plot.clv.data Plot Diagnostics for the Transaction data in a clv.data Object
Description
Depending on the value of parameter which, one of the following plots will be produced. Note that
the sample parameter determines the period for which the selected plot is made (either estimation,
holdout, or full).
Tracking Plot: Plot the aggregated repeat transactions per period over the given time-horizon
(prediction.end). See Details for the definition of plotting periods.
Frequency Plot: Plot the distribution of transactions or repeat transactions per customer, after
aggregating transactions of the same customer on a single time point. Note that if trans.bins is
changed, label.remaining usually needs to be adapted as well.
Spending Plot: Plot the empirical density of either customer’s average spending per transaction
or the value of every transaction in the data, after aggregating transactions of the same customer
on a single time point. Note that in all cases this includes all transactions and not only repeat-
transactions.
Interpurchase Time Plot: Plot the empirical density of customer’s mean time (in number of
periods) between transactions, after aggregating transactions of the same customer on a single
time point. Note that customers without repeat-transactions are removed.
Usage
## S3 method for class 'clv.data'
plot(
x,
which = c("tracking", "frequency", "spending", "interpurchasetime"),
prediction.end = NULL,
cumulative = FALSE,
trans.bins = 0:9,
count.repeat.trans = TRUE,
count.remaining = TRUE,
label.remaining = "10+",
mean.spending = TRUE,
sample = c("estimation", "full", "holdout"),
geom = "line",
color = "black",
plot = TRUE,
verbose = TRUE,
...
)
Arguments
x The clv.data object to plot
which Which plot to produce, either "tracking", "frequency", "spending" or "interpur-
chasetime". May be abbreviated but only one may be selected. Defaults to
"tracking".
prediction.end "tracking": Until what point in time to plot. This can be the number of periods
(numeric) or a form of date/time object. See details.
cumulative "tracking": Whether the cumulative actual repeat transactions should be plotted.
trans.bins "frequency": Vector of integers indicating the number of transactions (x axis)
for which the customers should be counted.
count.repeat.trans
"frequency": Whether repeat transactions (TRUE, default) or all transactions
(FALSE) should be counted.
count.remaining
"frequency": Whether the customers which are not captured with trans.bins
should be counted in a separate last bar.
label.remaining
"frequency": Label for the last bar, if count.remaining=TRUE.
mean.spending "spending": Whether customer’s mean spending per transaction (TRUE, default)
or the value of every transaction in the data (FALSE) should be plotted.
sample Name of the sample for which the plot should be made, either "estimation",
"full", or "holdout". Defaults to "estimation". Not for "tracking".
geom The geometric object of ggplot2 to display the data. Forwarded to ggplot2::stat_density.
Not for "tracking" and "frequency".
color Color of resulting geom object in the plot. Not for "tracking".
plot Whether a plot should be created or only the assembled data returned.
verbose Show details about the running of the function.
... Forwarded to ggplot2::stat_density ("spending", "interpurchasetime") or ggplot2::geom_bar
("frequency"). Not for "tracking".
Details
prediction.end indicates until when to predict or plot and can be given as either a point in time (of
class Date, POSIXct, or character) or the number of periods. If prediction.end is of class char-
acter, the date/time format set when creating the data object is used for parsing. If prediction.end
is the number of periods, the end of the fitting period serves as the reference point from which pe-
riods are counted. Only full periods may be specified. If prediction.end is omitted or NULL, it
defaults to the end of the holdout period if present and to the end of the estimation period otherwise.
The first prediction period is defined to start right after the end of the estimation period. If for
example weekly time units are used and the estimation period ends on Sunday 2019-01-01, then
the first day of the first prediction period is Monday 2019-01-02. Each prediction period includes a
total of 7 days and the first prediction period therefore will end on, and include, Sunday 2019-01-
08. Subsequent prediction periods again start on Mondays and end on Sundays. If prediction.end
indicates a timepoint on which to end, this timepoint is included in the prediction period.
If there are no repeat transactions until prediction.end, only the time for which there is data is
plotted. If the data is returned (i.e. with argument plot=FALSE), the respective rows contain NA in
column Number of Repeat Transactions.
Value
An object of class ggplot from package ggplot2 is returned by default. If plot=FALSE, the data
that would have been used to create the plot is returned. Depending on which plot was selected, this
is a data.table which contains some of the following columns:
Id Customer Id
period.until The timepoint that marks the end (up until and including) of the period to which
the data in this row refers.
Number of Repeat Transactions
The number of actual repeat transactions in the period that ends at period.until.
Spending Spending as defined by parameter mean.spending.
mean.interpurchase.time
Mean number of periods between transactions per customer, excluding cus-
tomers with no repeat-transactions.
num.transactions
The number of (repeat) transactions, depending on count.repeat.trans.
num.customers The number of customers.
See Also
ggplot2::stat_density and ggplot2::geom_bar for possible arguments to ...
plot to plot fitted transaction models
plot to plot fitted spending models
Examples
data("cdnow")
clv.cdnow <- clvdata(cdnow, time.unit="w",estimation.split=37,
date.format="ymd")
### TRACKING PLOT
# Plot the actual repeat transactions
plot(clv.cdnow)
# same, explicitly
plot(clv.cdnow, which="tracking")
# plot cumulative repeat transactions
plot(clv.cdnow, cumulative=TRUE)
# Don't automatically plot but tweak further
library(ggplot2) # for ggtitle()
gg.cdnow <- plot(clv.cdnow)
# change Title
gg.cdnow + ggtitle("CDnow repeat transactions")
# Don't return a plot but only the data from
# which it would have been created
dt.plot.data <- plot(clv.cdnow, plot=FALSE)
### FREQUENCY PLOT
plot(clv.cdnow, which="frequency")
# Bins from 0 to 15, all remaining in bin labelled "16+"
plot(clv.cdnow, which="frequency", trans.bins=0:15,
label.remaining="16+")
# Count all transactions, not only repeat
# Note that the bins have to be adapted to start from 1
plot(clv.cdnow, which="frequency", count.repeat.trans = FALSE,
trans.bins=1:9)
### SPENDING DENSITY
# plot customer's average transaction value
plot(clv.cdnow, which="spending", mean.spending = TRUE)
# distribution of the values of every transaction
plot(clv.cdnow, which="spending", mean.spending = FALSE)
### INTERPURCHASE TIME DENSITY
# plot as small points, in blue
plot(clv.cdnow, which="interpurchasetime",
geom="point", color="blue", size=0.02)
plot.clv.fitted.spending
Plot expected and actual mean spending per transaction
Description
Compares the density of the observed average spending per transaction (empirical distribution) to
the model’s distribution of mean transaction spending (weighted by the actual number of transac-
tions). See plot.clv.data to plot more nuanced diagnostics for the transaction data only.
Usage
## S3 method for class 'clv.fitted.spending'
plot(x, n = 256, verbose = TRUE, ...)
## S4 method for signature 'clv.fitted.spending'
plot(x, n = 256, verbose = TRUE, ...)
Arguments
x The fitted spending model to plot
n Number of points at which the empirical and model density are calculated.
Should be a power of two.
verbose Show details about the running of the function.
... Ignored
Value
An object of class ggplot from package ggplot2 is returned by default.
References
Colombo R, Jiang W (1999). “A stochastic RFM model.” Journal of Interactive Marketing, 13(3),
2-12.
Fader PS, Hardie BGS, Lee KL (2005). “RFM and CLV: Using Iso-Value Curves for Customer Base
Analysis.” Journal of Marketing Research, 42(4), 415-430.
Fader PS, Hardie BGS (2013). “The Gamma-Gamma Model of Monetary Value.” URL
http://www.brucehardie.com/notes/025/gamma_gamma.pdf.
See Also
plot for transaction models
plot for transaction diagnostics of clv.data objects
Examples
data("cdnow")
clv.cdnow <- clvdata(cdnow,
date.format="ymd",
time.unit = "week",
estimation.split = "1997-09-30")
est.gg <- gg(clv.data = clv.cdnow)
# Compare empirical to theoretical distribution
plot(est.gg)
## Not run:
# Modify the created plot further
library(ggplot2)
gg.cdnow <- plot(est.gg)
gg.cdnow + ggtitle("CDnow Spending Distribution")
## End(Not run)
plot.clv.fitted.transactions
Plot Diagnostics for a Fitted Transaction Model
Description
Depending on the value of parameter which, one of the following plots will be produced. See
plot.clv.data to plot more nuanced diagnostics for the transaction data only.
Tracking Plot: Plot the actual repeat transactions and overlay it with the repeat transaction as
predicted by the fitted model. Currently, following previous literature, the in-sample unconditional
expectation is plotted in the holdout period. In the future, we might add the option to also plot
the summed CET for the holdout period as an alternative evaluation metric. Note that only whole
periods can be plotted and that the prediction end might not exactly match prediction.end. See
the Note section for more details.
PMF Plot: Plot the actual and expected number of customers which made a given number of
repeat transaction in the estimation period. The expected number is based on the PMF of the fitted
model, the probability to make exactly a given number of repeat transactions in the estimation
period. For each bin, the expected number is the sum of all customers’ individual PMF value.
Note that if trans.bins is changed, label.remaining needs to be adapted as well.
Usage
## S3 method for class 'clv.fitted.transactions'
plot(
x,
which = c("tracking", "pmf"),
prediction.end = NULL,
cumulative = FALSE,
trans.bins = 0:9,
calculate.remaining = TRUE,
label.remaining = "10+",
newdata = NULL,
transactions = TRUE,
label = NULL,
plot = TRUE,
verbose = TRUE,
...
)
## S4 method for signature 'clv.fitted.transactions'
plot(
x,
which = c("tracking", "pmf"),
prediction.end = NULL,
cumulative = FALSE,
trans.bins = 0:9,
calculate.remaining = TRUE,
label.remaining = "10+",
newdata = NULL,
transactions = TRUE,
label = NULL,
plot = TRUE,
verbose = TRUE,
...
)
Arguments
x The fitted transaction model for which to produce diagnostic plots
which Which plot to produce, either "tracking" or "pmf". May be abbreviated but only
one may be selected. Defaults to "tracking".
prediction.end "tracking": Until what point in time to plot. This can be the number of periods
(numeric) or a form of date/time object. See details.
cumulative "tracking": Whether the cumulative expected (and actual) transactions should
be plotted.
trans.bins "pmf": Vector of positive integer numbers (>=0) indicating the number of repeat
transactions (x axis) to plot.
calculate.remaining
"pmf": Whether the probability for the remaining number of transactions not in
trans.bins should be calculated.
label.remaining
"pmf": Label for the last bar, if calculate.remaining=TRUE.
newdata An object of class clv.data for which the plotting should be made with the fitted
model. If none or NULL is given, the plot is made for the data on which the
model was fit.
transactions Whether the actual observed repeat transactions should be plotted.
label Character string to label the model in the legend.
plot Whether a plot is created or only the assembled data is returned.
verbose Show details about the running of the function.
... Ignored
Details
prediction.end indicates until when to predict or plot and can be given as either a point in time (of
class Date, POSIXct, or character) or the number of periods. If prediction.end is of class char-
acter, the date/time format set when creating the data object is used for parsing. If prediction.end
is the number of periods, the end of the fitting period serves as the reference point from which pe-
riods are counted. Only full periods may be specified. If prediction.end is omitted or NULL, it
defaults to the end of the holdout period if present and to the end of the estimation period otherwise.
The first prediction period is defined to start right after the end of the estimation period. If for
example weekly time units are used and the estimation period ends on Sunday 2019-01-01, then
the first day of the first prediction period is Monday 2019-01-02. Each prediction period includes a
total of 7 days and the first prediction period therefore will end on, and include, Sunday 2019-01-
08. Subsequent prediction periods again start on Mondays and end on Sundays. If prediction.end
indicates a timepoint on which to end, this timepoint is included in the prediction period.
The newdata argument has to be a clv data object of the exact same class as the data object on which
the model was fit. In case the model was fit with covariates, newdata needs to contain identically
named covariate data.
The use case for newdata is mainly two-fold: First, to estimate model parameters only on a sample
of the data and then use the fitted model object to predict or plot for the full data set provided
through newdata. Second, for models with dynamic covariates, to provide a clv data object with
covariates that reach further into the future than those contained in the data on which the model was
estimated, which allows predicting or plotting further ahead. When providing newdata, some models
might require additional steps that can significantly increase runtime.
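As an illustration of the first use case, a minimal sketch (assuming the cdnow data and the pnbd() call from the examples below, and that the raw transaction data contains an Id column) fits the model on a sample of customers and then plots it for the full data set via newdata:
data("cdnow")
clv.full <- clvdata(cdnow, time.unit="w",
estimation.split=37, date.format="ymd")
# fit only on a random subset of customers (illustrative sampling step)
sample.ids <- sample(unique(cdnow$Id), size=1000)
clv.sample <- clvdata(cdnow[cdnow$Id %in% sample.ids, ],
time.unit="w", estimation.split=37, date.format="ymd")
pnbd.sample <- pnbd(clv.sample)
# plot the model fitted on the sample against the full data set
plot(pnbd.sample, newdata=clv.full)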
Value
An object of class ggplot from package ggplot2 is returned by default. If plot=FALSE, the data
that would have been used to create the plot is returned. Depending on which plot was selected, this
is a data.table which contains the following columns:
For the Tracking plot:
period.until The timepoint that marks the end (up until and including) of the period to which
the data in this row refers.
Actual The actual number of repeat transactions in the period that ends at period.until.
Only if transactions=TRUE.
"Name of Model" or "label"
The value of the unconditional expectation for the period that ends on period.until.
For the PMF plot:
num.transactions
The number of observed repeat transactions in the estimation period (as ordered
factor).
actual.num.customers
The actual number of customers which have the respective number of repeat
transactions. Only if transactions=TRUE.
expected.customers
The number of customers which are expected to have the respective number of
repeat transactions, as by the fitted model.
Note
Because the unconditional expectation for a period is derived as the difference of the cumulative
expectations calculated at the beginning and at end of the period, all timepoints for which the
expectation is calculated are required to be spaced exactly 1 time unit apart.
If prediction.end does not coincide with the start of a time unit, the last timepoint for which the
expectation is calculated and plotted therefore is not prediction.end but the start of the first time
unit after prediction.end.
See Also
plot.clv.fitted.spending for diagnostics of spending models
plot.clv.data for transaction diagnostics of clv.data objects
pmf for the values on which the PMF plot is based
Examples
data("cdnow")
# Fit ParetoNBD model on the CDnow data
pnbd.cdnow <- pnbd(clvdata(cdnow, time.unit="w",
estimation.split=37,
date.format="ymd"))
## TRACKING PLOT
# Plot actual repeat transaction, overlayed with the
# expected repeat transactions as by the fitted model
plot(pnbd.cdnow)
# Plot cumulative expected transactions of only the model
plot(pnbd.cdnow, cumulative=TRUE, transactions=FALSE)
# Plot until 2001-10-21
plot(pnbd.cdnow, prediction.end = "2001-10-21")
# Plot until 2001-10-21, as date
plot(pnbd.cdnow,
prediction.end = lubridate::dym("21-2001-10"))
# Plot 15 time units after end of estimation period
plot(pnbd.cdnow, prediction.end = 15)
# Save the data generated for plotting
# (period, actual transactions, expected transactions)
plot.out <- plot(pnbd.cdnow, prediction.end = 15)
# A ggplot object is returned that can be further tweaked
library("ggplot2")
gg.pnbd.cdnow <- plot(pnbd.cdnow)
gg.pnbd.cdnow + ggtitle("PNBD on CDnow")
## PMF PLOT
plot(pnbd.cdnow, which="pmf")
# For transactions 0 to 15, also have
# to change label for remaining
plot(pnbd.cdnow, which="pmf", trans.bins=0:15,
label.remaining="16+")
# For transactions 0 to 15 bins, no remaining
plot(pnbd.cdnow, which="pmf", trans.bins=0:15,
calculate.remaining=FALSE)
pmf Probability Mass Function
Description
Calculate P(X(t)=x), the probability to make exactly x repeat transactions in the interval (0, t]. This
interval is in the estimation period and excludes values of t=0. Note that here t is defined as the
observation period T.cal which differs by customer.
Usage
## S4 method for signature 'clv.fitted.transactions'
pmf(object, x = 0:5)
Arguments
object The fitted transaction model.
x Vector of positive integer numbers (>=0) indicating the number of repeat trans-
actions x for which the PMF should be calculated.
Value
Returns a data.table with ids and depending on x, multiple columns of PMF values, each column
for one value in x.
Id customer identification
pmf.x.Y PMF values for Y number of transactions
See Also
The model fitting functions pnbd,bgnbd, ggomnbd.
plot to visually compare the PMF values against actuals.
Examples
data("cdnow")
# Fit the ParetoNBD model on the CDnow data
pnbd.cdnow <- pnbd(clvdata(cdnow, time.unit="w",
estimation.split=37,
date.format="ymd"))
# Calculate the PMF for 0 to 10 transactions
# in the estimation period
pmf(pnbd.cdnow, x=0:10)
# Compare vs. actuals (CBS in estimation period):
# x mean(pmf) actual percentage of x
# 0 0.616514 1432/2357= 0.6075519
# 1 0.168309 436/2357 = 0.1849809
# 2 0.080971 208/2357 = 0.0882478
# 3 0.046190 100/2357 = 0.0424268
# 4 0.028566 60/2357 = 0.0254561
# 5 0.018506 36/2357 = 0.0152737
# 6 0.012351 27/2357 = 0.0114552
# 7 0.008415 21/2357 = 0.0089096
# 8 0.005822 5/2357 = 0.0021213
# 9 0.004074 4/2357 = 0.0016971
# 10 0.002877 7/2357 = 0.0029699
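The comparison above can be reproduced along these lines (a minimal sketch, assuming the pnbd.cdnow object from this example; the column selection uses data.table syntax to drop the Id column):
library(data.table)
dt.pmf <- pmf(pnbd.cdnow, x=0:10)
# mean PMF value per number of repeat transactions across all customers
colMeans(dt.pmf[, !"Id"])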
pnbd Pareto/NBD models
Description
Fits Pareto/NBD models on transactional data with and without covariates.
Usage
## S4 method for signature 'clv.data'
pnbd(
clv.data,
start.params.model = c(),
use.cor = FALSE,
start.param.cor = c(),
optimx.args = list(),
verbose = TRUE,
...
)
## S4 method for signature 'clv.data.static.covariates'
pnbd(
clv.data,
start.params.model = c(),
use.cor = FALSE,
start.param.cor = c(),
optimx.args = list(),
verbose = TRUE,
names.cov.life = c(),
names.cov.trans = c(),
start.params.life = c(),
start.params.trans = c(),
names.cov.constr = c(),
start.params.constr = c(),
reg.lambdas = c(),
...
)
## S4 method for signature 'clv.data.dynamic.covariates'
pnbd(
clv.data,
start.params.model = c(),
use.cor = FALSE,
start.param.cor = c(),
optimx.args = list(),
verbose = TRUE,
names.cov.life = c(),
names.cov.trans = c(),
start.params.life = c(),
start.params.trans = c(),
names.cov.constr = c(),
start.params.constr = c(),
reg.lambdas = c(),
...
)
Arguments
clv.data The data object on which the model is fitted.
start.params.model
Named start parameters containing the optimization start parameters for the
model without covariates.
use.cor Whether the correlation between the transaction and lifetime process should be
estimated.
start.param.cor
Start parameter for the optimization of the correlation.
optimx.args Additional arguments to control the optimization which are forwarded to optimx::optimx.
If multiple optimization methods are specified, only the result of the last method
is further processed.
verbose Show details about the running of the function.
... Ignored
names.cov.life Which of the set Lifetime covariates should be used. Missing parameter indi-
cates all covariates shall be used.
names.cov.trans
Which of the set Transaction covariates should be used. Missing parameter
indicates all covariates shall be used.
start.params.life
Named start parameters containing the optimization start parameters for all life-
time covariates.
start.params.trans
Named start parameters containing the optimization start parameters for all trans-
action covariates.
names.cov.constr
Which covariates should be forced to use the same parameters for the lifetime
and transaction process. The covariates need to be present as both, lifetime and
transaction covariates.
start.params.constr
Named start parameters containing the optimization start parameters for the con-
straint covariates.
reg.lambdas Named lambda parameters used for the L2 regularization of the lifetime and the
transaction covariate parameters. Lambdas have to be >= 0.
Details
Model parameters for the Pareto/NBD model are alpha, r, beta, and s.
s: shape parameter of the Gamma distribution for the lifetime process. The smaller s, the stronger
the heterogeneity of customer lifetimes.
beta: rate parameter for the Gamma distribution for the lifetime process.
r: shape parameter of the Gamma distribution of the purchase process. The smaller r, the stronger
the heterogeneity of the purchase process.
alpha: rate parameter of the Gamma distribution of the purchase process.
Based on these parameters, the average purchase rate while customers are active is r/alpha and the
average dropout rate is s/beta.
Ideally, the starting parameters for r and s represent your best guess concerning the heterogeneity of
customers in their buy and die rate. If covariates are included in the model, additional parameters
for the covariates affecting the attrition and the purchase process are part of the model.
If no start parameters are given, 1.0 is used for all model parameters and 0.1 for covariate parame-
ters. The model start parameters are required to be > 0.
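As a small numeric illustration (a sketch, assuming a fitted model object such as pnbd.cdnow from the pmf examples above and that coef() returns the parameters under the names r, alpha, s and beta), the implied average rates can be derived from the estimated coefficients:
params <- coef(pnbd.cdnow)
# average purchase rate per time unit while a customer is alive
params["r"] / params["alpha"]
# average dropout rate per time unit
params["s"] / params["beta"]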
The Pareto/NBD model: The Pareto/NBD is the first model addressing the issue of modeling
customer purchases and attrition simultaneously for non-contractual settings. The model uses
a Pareto distribution, a combination of an Exponential and a Gamma distribution, to explicitly
model customers’ (unobserved) attrition behavior in addition to customers’ purchase process.
In general, the Pareto/NBD model consists of two parts. A first process models the purchase be-
havior of customers as long as the customers are active. A second process models customers’
attrition. Customers live (and buy) for a certain unknown time until they become inactive and
"die". Customer attrition is unobserved. Inactive customers may not be reactivated. For techni-
cal details we refer to the original paper by Schmittlein, Morrison and Colombo (1987) and the
detailed technical note of Fader and Hardie (2005).
Pareto/NBD model with static covariates: The standard Pareto/NBD model captures heterogeneity
solely using Gamma distributions. However, often exogenous knowledge, such as
for example customer demographics, is available. The supplementary knowledge may explain
part of the heterogeneity among the customers and therefore increase the predictive accuracy of
the model. In addition, we can rely on these parameter estimates for inference, i.e. identify and
quantify effects of contextual factors on the two underlying purchase and attrition processes. For
technical details we refer to the technical note by Fader and Hardie (2007).
Pareto/NBD model with dynamic covariates: In many real-world applications customer pur-
chase and attrition behavior may be influenced by covariates that vary over time. In consequence,
the timing of a purchase and the corresponding value of a covariate at that time become relevant.
Time-varying covariates can affect customers on an aggregate level as well as on an individual level:
In the first case, all customers are affected simultaneously, in the latter case a covariate is only rel-
evant for a particular customer. For technical details we refer to the paper by Bachmann, Meierer
and Näf (2020).
Value
Depending on the data object on which the model was fit, pnbd returns either an object of class
clv.pnbd, clv.pnbd.static.cov, or clv.pnbd.dynamic.cov.
The function summary can be used to obtain and print a summary of the results. The generic accessor
functions coefficients, vcov, fitted, logLik, AIC, BIC, and nobs are available.
Note
The Pareto/NBD model with dynamic covariates can currently not be fit with data that has a tem-
poral resolution of less than one day (data that was built with time unit hours).
References
Schmittlein DC, Morrison DG, Colombo R (1987). “Counting Your Customers: Who-Are They and What Will They Do Next?” Management Science, 33(1), 1-24.
<NAME>, <NAME>, <NAME> (2021). “The Role of Time-Varying Contextual Factors in Latent Attrition Models for Customer Base Analysis” Marketing Science 40(4). 783-809.
Fader PS, Hardie BGS (2005). “A Note on Deriving the Pareto/NBD Model and Related Expressions.” URL http://www.brucehardie.com/notes/009/pareto_nbd_derivations_2005-11-05.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS (2020). “Deriving an Expression for P(X(t)=x) Under the Pareto/NBD Model.” URL https://www.brucehardie.com/notes/012/pareto_NBD_pmf_derivation_rev.pdf
See Also
clvdata to create a clv data object, SetStaticCovariates to add static covariates to an existing
clv data object.
gg to fit customer’s average spending per transaction with the Gamma-Gamma model
predict to predict expected transactions, probability of being alive, and customer lifetime value
for every customer
plot to plot the unconditional expectation as predicted by the fitted model
pmf for the probability to make exactly x transactions in the estimation period, given by the proba-
bility mass function (PMF).
The generic functions vcov, summary, fitted.
SetDynamicCovariates to add dynamic covariates on which the pnbd model can be fit.
Examples
data("apparelTrans")
clv.data.apparel <- clvdata(apparelTrans, date.format = "ymd",
time.unit = "w", estimation.split = 40)
# Fit standard pnbd model
pnbd(clv.data.apparel)
# Give initial guesses for the model parameters
pnbd(clv.data.apparel,
start.params.model = c(r=0.5, alpha=15, s=0.5, beta=10))
# pass additional parameters to the optimizer (optimx)
# Use Nelder-Mead as optimization method and print
# detailed information about the optimization process
apparel.pnbd <- pnbd(clv.data.apparel,
optimx.args = list(method="Nelder-Mead",
control=list(trace=6)))
# estimated coefs
coef(apparel.pnbd)
# summary of the fitted model
summary(apparel.pnbd)
# predict CLV etc for holdout period
predict(apparel.pnbd)
# predict CLV etc for the next 15 periods
predict(apparel.pnbd, prediction.end = 15)
# Estimate correlation as well
pnbd(clv.data.apparel, use.cor = TRUE)
# To estimate the pnbd model with static covariates,
# add static covariates to the data
data("apparelStaticCov")
clv.data.static.cov <-
SetStaticCovariates(clv.data.apparel,
data.cov.life = apparelStaticCov,
names.cov.life = c("Gender", "Channel"),
data.cov.trans = apparelStaticCov,
names.cov.trans = c("Gender", "Channel"))
# Fit pnbd with static covariates
pnbd(clv.data.static.cov)
# Give initial guesses for both covariate parameters
pnbd(clv.data.static.cov, start.params.trans = c(Gender=0.75, Channel=0.7),
start.params.life = c(Gender=0.5, Channel=0.5))
# Use regularization
pnbd(clv.data.static.cov, reg.lambdas = c(trans = 5, life=5))
# Force the same coefficient to be used for both covariates
pnbd(clv.data.static.cov, names.cov.constr = "Gender",
start.params.constr = c(Gender=0.5))
# Fit model only with the Channel covariate for life but
# keep all trans covariates as is
pnbd(clv.data.static.cov, names.cov.life = c("Channel"))
# Add dynamic covariate data to the data object
## Not run:
data("apparelDynCov")
clv.data.dyn.cov <-
SetDynamicCovariates(clv.data = clv.data.apparel,
data.cov.life = apparelDynCov,
data.cov.trans = apparelDynCov,
names.cov.life = c("Marketing", "Gender", "Channel"),
names.cov.trans = c("Marketing", "Gender", "Channel"),
name.date = "Cov.Date")
# Fit PNBD with dynamic covariates
pnbd(clv.data.dyn.cov)
# The same fitting options as for the
# static covariate are available
pnbd(clv.data.dyn.cov, reg.lambdas = c(trans=10, life=2))
## End(Not run)
pnbd_CET Pareto/NBD: Conditional Expected Transactions
Description
Calculates the expected number of transactions in a given time period based on a customer’s past
transaction behavior and the Pareto/NBD model parameters.
• pnbd_nocov_CET Conditional Expected Transactions without covariates
• pnbd_staticcov_CET Conditional Expected Transactions with static covariates
Usage
pnbd_nocov_CET(r, alpha_0, s, beta_0, dPeriods, vX, vT_x, vT_cal)
pnbd_staticcov_CET(
r,
alpha_0,
s,
beta_0,
dPeriods,
vX,
vT_x,
vT_cal,
vCovParams_trans,
vCovParams_life,
mCov_trans,
mCov_life
)
Arguments
r shape parameter of the Gamma distribution of the purchase process. The smaller
r, the stronger the heterogeneity of the purchase process
alpha_0 rate parameter of the Gamma distribution of the purchase process
s shape parameter of the Gamma distribution for the lifetime process. The smaller
s, the stronger the heterogeneity of customer lifetimes
beta_0 rate parameter for the Gamma distribution for the lifetime process.
dPeriods number of periods to predict
vX Frequency vector of length n counting the numbers of purchases.
vT_x Recency vector of length n.
vT_cal Vector of length n indicating the total number of periods of observation.
vCovParams_trans
Vector of estimated parameters for the transaction covariates.
vCovParams_life
Vector of estimated parameters for the lifetime covariates.
mCov_trans Matrix containing the covariates data affecting the transaction process. One
column for each covariate.
mCov_life Matrix containing the covariates data affecting the lifetime process. One column
for each covariate.
Details
mCov_trans is a matrix containing the covariates data of the time-invariant covariates that affect
the transaction process. Each column represents a different covariate. For every column a gamma
parameter needs to be added to vCovParams_trans at the respective position.
mCov_life is a matrix containing the covariates data of the time-invariant covariates that affect
the lifetime process. Each column represents a different covariate. For every column a gamma
parameter needs to be added to vCovParams_life at the respective position.
Value
Returns a vector containing the conditional expected transactions for the existing customers in the
Pareto/NBD model.
References
Schmittlein DC, Morrison DG, Colombo R (1987). “Counting Your Customers: Who-Are They and What Will They Do Next?” Management Science, 33(1), 1-24.
<NAME>, <NAME>, <NAME> (2021). “The Role of Time-Varying Contextual Factors in Latent Attrition Models for Customer Base Analysis” Marketing Science 40(4). 783-809.
Fader PS, Hardie BGS (2005). “A Note on Deriving the Pareto/NBD Model and Related Expressions.” URL http://www.brucehardie.com/notes/009/pareto_nbd_derivations_2005-11-05.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS (2020). “Deriving an Expression for P(X(t)=x) Under the Pareto/NBD Model.” URL https://www.brucehardie.com/notes/012/pareto_NBD_pmf_derivation_rev.pdf
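A minimal sketch of a direct call (all parameter values and customer summaries below are made up for illustration; the functions are assumed to be exported by the CLVTools package):
library(CLVTools)
# toy customer summaries for 3 customers
vX <- c(0, 2, 5) # number of repeat purchases
vT_x <- c(0, 10, 30) # recency
vT_cal <- c(35, 35, 35) # length of the observation period
pnbd_nocov_CET(r=0.5, alpha_0=10, s=0.6, beta_0=12,
dPeriods=10, vX=vX, vT_x=vT_x, vT_cal=vT_cal)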
pnbd_DERT Pareto/NBD: Discounted Expected Residual Transactions
Description
Calculates the discounted expected residual transactions.
• pnbd_nocov_DERT Discounted expected residual transactions for the Pareto/NBD model with-
out covariates
• pnbd_staticcov_DERT Discounted expected residual transactions for the Pareto/NBD model
with static covariates
Usage
pnbd_nocov_DERT(
r,
alpha_0,
s,
beta_0,
continuous_discount_factor,
vX,
vT_x,
vT_cal
)
pnbd_staticcov_DERT(
r,
alpha_0,
s,
beta_0,
continuous_discount_factor,
vX,
vT_x,
vT_cal,
mCov_life,
mCov_trans,
vCovParams_life,
vCovParams_trans
)
Arguments
r shape parameter of the Gamma distribution of the purchase process. The smaller
r, the stronger the heterogeneity of the purchase process
alpha_0 rate parameter of the Gamma distribution of the purchase process
s shape parameter of the Gamma distribution for the lifetime process. The smaller
s, the stronger the heterogeneity of customer lifetimes
beta_0 rate parameter for the Gamma distribution for the lifetime process.
continuous_discount_factor
continuous discount factor to use
vX Frequency vector of length n counting the numbers of purchases.
vT_x Recency vector of length n.
vT_cal Vector of length n indicating the total number of periods of observation.
mCov_life Matrix containing the covariates data affecting the lifetime process. One column
for each covariate.
mCov_trans Matrix containing the covariates data affecting the transaction process. One
column for each covariate.
vCovParams_life
Vector of estimated parameters for the lifetime covariates.
vCovParams_trans
Vector of estimated parameters for the transaction covariates.
Details
mCov_trans is a matrix containing the covariates data of the time-invariant covariates that affect
the transaction process. Each column represents a different covariate. For every column a gamma
parameter needs to be added to vCovParams_trans at the respective position.
mCov_life is a matrix containing the covariates data of the time-invariant covariates that affect
the lifetime process. Each column represents a different covariate. For every column a gamma
parameter needs to be added to vCovParams_life at the respective position.
Value
Returns a vector with the DERT for each customer.
References
Schmittlein DC, Morrison DG, Colombo R (1987). “Counting Your Customers: Who-Are They and What Will They Do Next?” Management Science, 33(1), 1-24.
<NAME>, <NAME>, <NAME> (2021). “The Role of Time-Varying Contextual Factors in Latent Attrition Models for Customer Base Analysis” Marketing Science 40(4). 783-809.
Fader PS, Hardie BGS (2005). “A Note on Deriving the Pareto/NBD Model and Related Expressions.” URL http://www.brucehardie.com/notes/009/pareto_nbd_derivations_2005-11-05.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS (2020). “Deriving an Expression for P(X(t)=x) Under the Pareto/NBD Model.” URL https://www.brucehardie.com/notes/012/pareto_NBD_pmf_derivation_rev.pdf
pnbd_expectation Pareto/NBD: Unconditional Expectation
Description
Computes the expected number of repeat transactions in the interval (0, vT_i] for a randomly se-
lected customer, where 0 is defined as the point when the customer came alive.
Usage
pnbd_nocov_expectation(r, s, alpha_0, beta_0, vT_i)
pnbd_staticcov_expectation(r, s, vAlpha_i, vBeta_i, vT_i)
Arguments
r shape parameter of the Gamma distribution of the purchase process. The smaller
r, the stronger the heterogeneity of the purchase process
s shape parameter of the Gamma distribution for the lifetime process. The smaller
s, the stronger the heterogeneity of customer lifetimes
alpha_0 rate parameter of the Gamma distribution of the purchase process
beta_0 rate parameter for the Gamma distribution for the lifetime process.
vT_i Number of periods since the customer came alive
vAlpha_i Vector of individual parameters alpha
vBeta_i Vector of individual parameters beta
Value
Returns the expected transaction values according to the chosen model.
References
Schmittlein DC, Morrison DG, Colombo R (1987). “Counting Your Customers: Who-Are They and What Will They Do Next?” Management Science, 33(1), 1-24.
<NAME>, <NAME>, <NAME> (2021). “The Role of Time-Varying Contextual Factors in Latent Attrition Models for Customer Base Analysis” Marketing Science 40(4). 783-809.
Fader PS, Hardie BGS (2005). “A Note on Deriving the Pareto/NBD Model and Related Expressions.” URL http://www.brucehardie.com/notes/009/pareto_nbd_derivations_2005-11-05.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS (2020). “Deriving an Expression for P(X(t)=x) Under the Pareto/NBD Model.” URL https://www.brucehardie.com/notes/012/pareto_NBD_pmf_derivation_rev.pdf
pnbd_LL Pareto/NBD: Log-Likelihood functions
Description
Calculates the Log-Likelihood values for the Pareto/NBD model with and without covariates.
The function pnbd_nocov_LL_ind calculates the individual log-likelihood values for each customer
for the given parameters.
The function pnbd_nocov_LL_sum calculates the log-likelihood value summed across customers for
the given parameters.
The function pnbd_staticcov_LL_ind calculates the individual log-likelihood values for each cus-
tomer for the given parameters and covariates.
The function pnbd_staticcov_LL_sum calculates the log-likelihood value summed across customers
for the given parameters and covariates.
Usage
pnbd_nocov_LL_ind(vLogparams, vX, vT_x, vT_cal)
pnbd_nocov_LL_sum(vLogparams, vX, vT_x, vT_cal)
pnbd_staticcov_LL_ind(vParams, vX, vT_x, vT_cal, mCov_life, mCov_trans)
pnbd_staticcov_LL_sum(vParams, vX, vT_x, vT_cal, mCov_life, mCov_trans)
Arguments
vLogparams vector with the Pareto/NBD model parameters at log scale. See Details.
vX Frequency vector of length n counting the numbers of purchases.
vT_x Recency vector of length n.
vT_cal Vector of length n indicating the total number of periods of observation.
vParams vector with the parameters for the Pareto/NBD model at log scale and the static
covariates at original scale. See Details.
mCov_life Matrix containing the covariates data affecting the lifetime process. One column
for each covariate.
mCov_trans Matrix containing the covariates data affecting the transaction process. One
column for each covariate.
Details
vLogparams is a vector with model parameters r, alpha_0, s, beta_0 at log-scale, in this order.
vParams is a vector with the Pareto/NBD model parameters at log scale, followed by the parameters
for the lifetime covariates at original scale and then followed by the parameters for the transaction
covariates at original scale.
mCov_trans is a matrix containing the covariates data of the time-invariant covariates that affect
the transaction process. Each column represents a different covariate. For every column a gamma
parameter needs to be added to vParams at the respective position.
mCov_life is a matrix containing the covariates data of the time-invariant covariates that affect
the lifetime process. Each column represents a different covariate. For every column a gamma
parameter needs to be added to vParams at the respective position.
Value
Returns the respective Log-Likelihood value(s) for the Pareto/NBD model with or without covari-
ates.
References
Schmittlein DC, Morrison DG, Colombo R (1987). “Counting Your Customers: Who-Are They and What Will They Do Next?” Management Science, 33(1), 1-24.
<NAME>, <NAME>, <NAME> (2021). “The Role of Time-Varying Contextual Factors in Latent Attrition Models for Customer Base Analysis” Marketing Science 40(4). 783-809.
Fader PS, Hardie BGS (2005). “A Note on Deriving the Pareto/NBD Model and Related Expressions.” URL http://www.brucehardie.com/notes/009/pareto_nbd_derivations_2005-11-05.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS (2020). “Deriving an Expression for P(X(t)=x) Under the Pareto/NBD Model.” URL https://www.brucehardie.com/notes/012/pareto_NBD_pmf_derivation_rev.pdf
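A minimal sketch of evaluating the summed log-likelihood directly (parameter values and customer summaries are made up for illustration). Note that the parameter vector is passed at log scale and in the order r, alpha_0, s, beta_0, as described in the Details:
vLogparams <- log(c(r=0.5, alpha_0=10, s=0.6, beta_0=12))
vX <- c(0, 2, 5)
vT_x <- c(0, 10, 30)
vT_cal <- c(35, 35, 35)
pnbd_nocov_LL_sum(vLogparams=vLogparams, vX=vX, vT_x=vT_x, vT_cal=vT_cal)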
pnbd_PAlive Pareto/NBD: Probability of Being Alive
Description
Calculates the probability of a customer being alive at the end of the calibration period, based on a
customer’s past transaction behavior and the Pareto/NBD model parameters.
• pnbd_nocov_PAlive P(alive) for the Pareto/NBD model without covariates
• pnbd_staticcov_PAlive P(alive) for the Pareto/NBD model with static covariates
Usage
pnbd_nocov_PAlive(r, alpha_0, s, beta_0, vX, vT_x, vT_cal)
pnbd_staticcov_PAlive(
r,
alpha_0,
s,
beta_0,
vX,
vT_x,
vT_cal,
vCovParams_trans,
vCovParams_life,
mCov_trans,
mCov_life
)
Arguments
r shape parameter of the Gamma distribution of the purchase process. The smaller
r, the stronger the heterogeneity of the purchase process
alpha_0 rate parameter of the Gamma distribution of the purchase process
s shape parameter of the Gamma distribution for the lifetime process. The smaller
s, the stronger the heterogeneity of customer lifetimes
beta_0 rate parameter for the Gamma distribution for the lifetime process.
vX Frequency vector of length n counting the numbers of purchases.
vT_x Recency vector of length n.
vT_cal Vector of length n indicating the total number of periods of observation.
vCovParams_trans
Vector of estimated parameters for the transaction covariates.
vCovParams_life
Vector of estimated parameters for the lifetime covariates.
mCov_trans Matrix containing the covariates data affecting the transaction process. One
column for each covariate.
mCov_life Matrix containing the covariates data affecting the lifetime process. One column
for each covariate.
Details
mCov_trans is a matrix containing the covariates data of the time-invariant covariates that affect
the transaction process. Each column represents a different covariate. For every column a gamma
parameter needs to be added to vCovParams_trans at the respective position.
mCov_life is a matrix containing the covariates data of the time-invariant covariates that affect
the lifetime process. Each column represents a different covariate. For every column a gamma
parameter needs to be added to vCovParams_life at the respective position.
Value
Returns a vector with the PAlive for each customer.
References
Schmittlein DC, Morrison DG, Colombo R (1987). “Counting Your Customers: Who-Are They and What Will They Do Next?” Management Science, 33(1), 1-24.
<NAME>, <NAME>, <NAME> (2021). “The Role of Time-Varying Contextual Factors in Latent Attrition Models for Customer Base Analysis” Marketing Science 40(4). 783-809.
Fader PS, Hardie BGS (2005). “A Note on Deriving the Pareto/NBD Model and Related Expressions.” URL http://www.brucehardie.com/notes/009/pareto_nbd_derivations_2005-11-05.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS (2020). “Deriving an Expression for P(X(t)=x) Under the Pareto/NBD Model.” URL https://www.brucehardie.com/notes/012/pareto_NBD_pmf_derivation_rev.pdf
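A minimal sketch of the static covariate case (all values are made up for illustration) showing that one parameter per covariate column is expected in vCovParams_trans and vCovParams_life:
vX <- c(0, 2, 5)
vT_x <- c(0, 10, 30)
vT_cal <- c(35, 35, 35)
# one row per customer, one column per covariate (here: Gender, Channel)
mCov <- cbind(Gender=c(0, 1, 1), Channel=c(1, 0, 1))
pnbd_staticcov_PAlive(r=0.5, alpha_0=10, s=0.6, beta_0=12,
vX=vX, vT_x=vT_x, vT_cal=vT_cal,
vCovParams_trans=c(0.2, -0.1), vCovParams_life=c(0.1, 0.3),
mCov_trans=mCov, mCov_life=mCov)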
pnbd_pmf Pareto/NBD: Probability Mass Function (PMF)
Description
Calculate P(X(t)=x), the probability that a randomly selected customer makes exactly x transactions
in the interval (0, t].
Usage
pnbd_nocov_PMF(r, alpha_0, s, beta_0, x, vT_i)
pnbd_staticcov_PMF(r, s, x, vAlpha_i, vBeta_i, vT_i)
Arguments
r shape parameter of the Gamma distribution of the purchase process. The smaller
r, the stronger the heterogeneity of the purchase process
alpha_0 rate parameter of the Gamma distribution of the purchase process
s shape parameter of the Gamma distribution for the lifetime process. The smaller
s, the stronger the heterogeneity of customer lifetimes
beta_0 rate parameter for the Gamma distribution for the lifetime process.
x The number of transactions to calculate the probability for (unsigned integer).
vT_i Number of periods since the customer came alive.
vAlpha_i Vector of individual parameters alpha.
vBeta_i Vector of individual parameters beta.
Value
Returns a vector of probabilities.
References
Schmittlein DC, Morrison DG, Colombo R (1987). “Counting Your Customers: Who-Are They and What Will They Do Next?” Management Science, 33(1), 1-24.
<NAME>, <NAME>, <NAME> (2021). “The Role of Time-Varying Contextual Factors in Latent Attrition Models for Customer Base Analysis” Marketing Science 40(4). 783-809.
Fader PS, Hardie BGS (2005). “A Note on Deriving the Pareto/NBD Model and Related Expressions.” URL http://www.brucehardie.com/notes/009/pareto_nbd_derivations_2005-11-05.pdf.
Fader PS, Hardie BGS (2007). “Incorporating time-invariant covariates into the Pareto/NBD and BG/NBD models.” URL http://www.brucehardie.com/notes/019/time_invariant_covariates.pdf.
Fader PS, Hardie BGS (2020). “Deriving an Expression for P(X(t)=x) Under the Pareto/NBD Model.” URL https://www.brucehardie.com/notes/012/pareto_NBD_pmf_derivation_rev.pdf
predict.clv.fitted.spending
Predict customers’ future spending
Description
Predict customers' future mean spending per transaction and compare it to the actual mean spending
in the holdout period.
Usage
## S3 method for class 'clv.fitted.spending'
predict(object, newdata = NULL, verbose = TRUE, ...)
## S4 method for signature 'clv.fitted.spending'
predict(object, newdata = NULL, verbose = TRUE, ...)
Arguments
object A fitted spending model for which prediction is desired.
newdata A clv data object for which predictions should be made with the fitted model. If
none or NULL is given, predictions are made for the data on which the model
was fit.
verbose Show details about the running of the function.
... Ignored
Details
If newdata is provided, the individual customer statistics underlying the model are calculated the
same way as when the model was fit initially. Hence, if remove.first.transaction was TRUE,
this will be applied to newdata as well.
Value
An object of class data.table with columns:
Id The respective customer identifier
actual.mean.spending
Actual mean spending per transaction in the holdout period. Only if there is a
holdout period, otherwise it is not reported.
predicted.mean.spending
The mean spending per transaction as predicted by the fitted spending model.
See Also
models to predict spending: gg.
models to predict transactions: pnbd, bgnbd, ggomnbd.
predict for transaction models
Examples
data("apparelTrans")
# Fit gg model on data
apparel.holdout <- clvdata(apparelTrans, time.unit="w",
estimation.split=37, date.format="ymd")
apparel.gg <- gg(apparel.holdout)
# Predict customers' future mean spending per transaction
predict(apparel.gg)
predict.clv.fitted.transactions
Predict CLV from a fitted transaction model
Description
Probabilistic customer attrition models predict in general three expected characteristics for every
customer:
• "conditional expected transactions" (CET), which is the number of transactions to expect from
a customer during the prediction period,
• "probability of a customer being alive" (PAlive) at the end of the estimation period and
• "discounted expected residual transactions" (DERT) for every customer, which is the total num-
ber of transactions for the residual lifetime of a customer discounted to the end of the estima-
tion period. In the case of time-varying covariates, instead of DERT, "discounted expected
conditional transactions" (DECT) is predicted. DECT does only cover a finite time horizon in
contrast to DERT. For continuous.discount.factor=0, DECT corresponds to CET.
In order to derive a monetary value such as CLV, customer spending has to be considered. If
the clv.data object contains spending information, customer spending can be predicted using a
Gamma/Gamma spending model for parameter predict.spending and the predicted CLV is be
calculated (if the transaction model supports DERT/DECT). In this case, the prediction additionally
contains the following two columns:
• "predicted.mean.spending", the mean spending per transaction as predicted by the spending
model.
• "CLV", the customer lifetime value. CLV is the product of DERT/DECT and predicted spend-
ing.
Usage
## S3 method for class 'clv.fitted.transactions'
predict(
object,
newdata = NULL,
prediction.end = NULL,
predict.spending = gg,
continuous.discount.factor = 0.1,
verbose = TRUE,
...
)
## S4 method for signature 'clv.fitted.transactions'
predict(
object,
newdata = NULL,
prediction.end = NULL,
predict.spending = gg,
continuous.discount.factor = 0.1,
verbose = TRUE,
...
)
Arguments
object A fitted clv transaction model for which prediction is desired.
newdata A clv data object for which predictions should be made with the fitted model. If
none or NULL is given, predictions are made for the data on which the model
was fit.
prediction.end Until what point in time to predict. This can be the number of periods (numeric)
or a form of date/time object. See details.
predict.spending
Whether and how to predict spending and based on it also CLV, if possible. See
details.
continuous.discount.factor
continuous discount factor to use to calculate DERT/DECT
verbose Show details about the running of the function.
... Ignored
Details
predict.spending indicates whether to predict customers’ spending and if so, the spending model
to use. Accepted inputs are either a logical (TRUE/FALSE), a method to fit a spending model (i.e.
gg), or an already fitted spending model. If provided TRUE, a Gamma-Gamma model is fit with
default options. If argument newdata is provided, the spending model is fit on newdata. Predicting
spending is only possible if the transaction data contains spending information. See examples for
illustrations of valid inputs.
The newdata argument has to be a clv data object of the exact same class as the data object on which
the model was fit. In case the model was fit with covariates, newdata needs to contain identically
named covariate data.
The use case for newdata is mainly two-fold: First, to estimate model parameters only on a sample
of the data and then use the fitted model object to predict or plot for the full data set provided
through newdata. Second, for models with dynamic covariates, to provide a clv data object with
covariates that reach further into the future than those contained in the data on which the model was
estimated, which allows predicting or plotting further ahead. When providing newdata, some models
might require additional steps that can significantly increase runtime.
prediction.end indicates until when to predict or plot and can be given as either a point in time (of
class Date, POSIXct, or character) or the number of periods. If prediction.end is of class char-
acter, the date/time format set when creating the data object is used for parsing. If prediction.end
is the number of periods, the end of the fitting period serves as the reference point from which pe-
riods are counted. Only full periods may be specified. If prediction.end is omitted or NULL, it
defaults to the end of the holdout period if present and to the end of the estimation period otherwise.
The first prediction period is defined to start right after the end of the estimation period. If for
example weekly time units are used and the estimation period ends on Sunday 2019-01-01, then
the first day of the first prediction period is Monday 2019-01-02. Each prediction period includes a
total of 7 days and the first prediction period therefore will end on, and include, Sunday 2019-01-
08. Subsequent prediction periods again start on Mondays and end on Sundays. If prediction.end
indicates a timepoint on which to end, this timepoint is included in the prediction period.
continuous.discount.factor allows adjusting the discount rate used to estimate the discounted
expected transactions (DERT/DECT). The default value is 0.1 (=10%). Note that a continuous rate
needs to be provided.
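As a sketch of such a conversion (assuming weekly time units with 52 periods per year and the apparel.pnbd object from the examples below), an annual discrete discount rate can be turned into a continuous per-period rate:
annual.discount.rate <- 0.10
# continuous weekly rate such that exp(rate * 52) equals 1 + annual rate
cont.weekly.rate <- log(1 + annual.discount.rate) / 52
predict(apparel.pnbd, continuous.discount.factor=cont.weekly.rate)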
Value
An object of class data.table with columns:
Id The respective customer identifier
period.first First timepoint of prediction period
period.last Last timepoint of prediction period
period.length Number of time units covered by the period indicated by period.first and
period.last (including both ends).
PAlive Probability to be alive at the end of the estimation period
CET The Conditional Expected Transactions
DERT or DECT Discounted Expected Residual Transactions or Discounted Expected Condi-
tional Transactions for dynamic covariates models
actual.x Actual number of transactions until prediction.end. Only if there is a holdout
period and the prediction ends in it, otherwise it is not reported.
actual.total.spending
Actual total spending until prediction.end. Only if there is a holdout period and
the prediction ends in it, otherwise it is not reported.
predicted.mean.spending
The mean spending per transaction as predicted by the spending model.
predicted.CLV Customer Lifetime Value based on DERT/DECT and predicted.mean.spending.
See Also
models to predict transactions: pnbd, bgnbd, ggomnbd.
models to predict spending: gg.
predict for spending models
Examples
data("apparelTrans")
# Fit pnbd standard model on data, WITH holdout
apparel.holdout <- clvdata(apparelTrans, time.unit="w",
estimation.split=37, date.format="ymd")
apparel.pnbd <- pnbd(apparel.holdout)
# Predict until the end of the holdout period
predict(apparel.pnbd)
# Predict until 10 periods (weeks in this case) after
# the end of the 37 weeks fitting period
predict(apparel.pnbd, prediction.end = 10) # ends on 2010-11-28
# Predict until 31st Dec 2016 with the timepoint as a character
predict(apparel.pnbd, prediction.end = "2016-12-31")
# Predict until 31st Dec 2016 with the timepoint as a Date
predict(apparel.pnbd, prediction.end = lubridate::ymd("2016-12-31"))
# Predict future transactions but not spending and CLV
predict(apparel.pnbd, predict.spending = FALSE)
# Predict spending by fitting a Gamma-Gamma model
predict(apparel.pnbd, predict.spending = gg)
# Fit a spending model separately and use it to predict spending
apparel.gg <- gg(apparel.holdout, remove.first.transaction = FALSE)
predict(apparel.pnbd, predict.spending = apparel.gg)
# Fit pnbd standard model WITHOUT holdout
pnc <- pnbd(clvdata(apparelTrans, time.unit="w", date.format="ymd"))
# This fails, because without holdout, a prediction.end is required
## Not run:
predict(pnc)
## End(Not run)
# But it works if providing a prediction.end
predict(pnc, prediction.end = 10) # ends on 2016-12-17
SetDynamicCovariates Add Dynamic Covariates to a CLV data object
Description
Add dynamic covariate data to an existing data object of class clv.data. The returned object can
be used to fit models with dynamic covariates.
No covariate data can be added to a clv data object which already has any covariate set.
At least 1 covariate is needed for both processes and no categorical covariate may be of only a single
category.
Usage
SetDynamicCovariates(
clv.data,
data.cov.life,
data.cov.trans,
names.cov.life,
names.cov.trans,
name.id = "Id",
name.date = "Date"
)
Arguments
clv.data CLV data object to add the covariates data to.
data.cov.life Dynamic covariate data as data.frame or data.table for the lifetime process.
data.cov.trans Dynamic covariate data as data.frame or data.table for the transaction pro-
cess.
names.cov.life Vector with names of the columns in data.cov.life that contain the covariates.
names.cov.trans
Vector with names of the columns in data.cov.trans that contain the covari-
ates.
name.id Name of the column to find the Id data for both, data.cov.life and data.cov.trans.
name.date Name of the column to find the Date data for both, data.cov.life and data.cov.trans.
Details
data.cov.life and data.cov.trans are data.frames or data.tables that each contain exactly
1 row for every combination of timepoint and customer. For each customer appearing in the transac-
tion data there needs to be covariate data at every timepoint that marks the start of a period as defined
by time.unit. It has to range from the start of the estimation sample (timepoint.estimation.start)
until the end of the period in which the end of the holdout sample (timepoint.holdout.end) falls.
See the provided data apparelDynCov for illustration. Covariates of class character or factor
are converted to k-1 numeric dummies.
Date as character If the Date column in the covariate data is of type character, the date.format
given when creating the clv.data object is used for parsing.
Value
An object of class clv.data.dynamic.covariates. See the class definition clv.data.dynamic.covariates
for more details about the returned object.
Examples
## Not run:
data("apparelTrans")
data("apparelDynCov")
# Create a clv data object without covariates
clv.data.apparel <- clvdata(apparelTrans, time.unit="w",
date.format="ymd")
# Add dynamic covariate data
clv.data.dyn.cov <-
SetDynamicCovariates(clv.data.apparel,
data.cov.life = apparelDynCov,
names.cov.life = c("Marketing", "Gender", "Channel"),
data.cov.trans = apparelDynCov,
names.cov.trans = c("Marketing", "Gender", "Channel"),
name.id = "Id",
name.date = "Cov.Date")
# summary output about covariates data
summary(clv.data.dyn.cov)
# fit pnbd model with dynamic covariates
pnbd(clv.data.dyn.cov)
## End(Not run)
SetStaticCovariates Add Static Covariates to a CLV data object
Description
Add static covariate data to an existing data object of class clv.data. The returned object can then
be used to fit models with static covariates.
No covariate data can be added to a clv data object which already has any covariate set.
At least 1 covariate is needed for both processes and no categorical covariate may be of only a single
category.
Usage
SetStaticCovariates(
clv.data,
data.cov.life,
data.cov.trans,
names.cov.life,
names.cov.trans,
name.id = "Id"
)
Arguments
clv.data CLV data object to add the covariates data to.
data.cov.life Static covariate data as data.frame or data.table for the lifetime process.
data.cov.trans Static covariate data as data.frame or data.table for the transaction process.
names.cov.life Vector with names of the columns in data.cov.life that contain the covariates.
names.cov.trans
Vector with names of the columns in data.cov.trans that contain the covari-
ates.
name.id Name of the column to find the Id data for both, data.cov.life and data.cov.trans.
Details
data.cov.life and data.cov.trans are data.frames or data.tables that each contain exactly
one single row of covariate data for every customer appearing in the transaction data. Covariates of
class character or factor are converted to k-1 numeric dummy variables.
Value
An object of class clv.data.static.covariates. See the class definition clv.data.static.covariates
for more details about the returned object.
Examples
data("apparelTrans")
data("apparelStaticCov")
# Create a clv data object without covariates
clv.data.apparel <- clvdata(apparelTrans, time.unit="w",
date.format="ymd")
# Add static covariate data
clv.data.apparel.cov <-
SetStaticCovariates(clv.data.apparel,
data.cov.life = apparelStaticCov,
names.cov.life = "Gender",
data.cov.trans = apparelStaticCov,
names.cov.trans = "Gender",
name.id = "Id")
# more summary output
summary(clv.data.apparel.cov)
# fit model with static covariates
pnbd(clv.data.apparel.cov)
subset.clv.data Subsetting clv.data
Description
Returns the subset of the transaction data stored within the given clv.data object which meets the
given conditions. The expressions are forwarded to the data.table of transactions. Columns available
for subsetting and selecting are Id, Date, and Price (if present).
Usage
## S3 method for class 'clv.data'
subset(x, subset, select, sample = c("full", "estimation", "holdout"), ...)
Arguments
x clv.data to subset
subset logical expression indicating rows to keep
select expression indicating columns to keep
sample Name of the sample for which transactions should be extracted.
... further arguments passed to data.table::subset
Value
A copy of the data.table of selected transactions. May contain columns Id, Date, and Price.
See Also
data.table’s subset
Examples
library(data.table) # for between()
data(cdnow)
clv.cdnow <- clvdata(cdnow,
date.format="ymd",
time.unit = "week",
estimation.split = "1997-09-30")
# all transactions of customer "1"
subset(clv.cdnow, Id=="1")
subset(clv.cdnow, subset = Id=="1")
# all transactions of customer "111" in the estimation period...
subset(clv.cdnow, Id=="111", sample="estimation")
# ... and in the holdout period
subset(clv.cdnow, Id=="111", sample="holdout")
# all transactions of customers "1", "2", and "999"
subset(clv.cdnow, Id %in% c("1","2","999"))
# all transactions on "1997-02-16"
subset(clv.cdnow, Date == "1997-02-16")
# all transactions between "1997-02-01" and "1997-02-16"
subset(clv.cdnow, Date >= "1997-02-01" & Date <= "1997-02-16")
# same using data.table's between
subset(clv.cdnow, between(Date, "1997-02-01","1997-02-16"))
# all transactions with a value between 50 and 100
subset(clv.cdnow, Price >= 50 & Price <= 100)
# same using data.table's between
subset(clv.cdnow, between(Price, 50, 100))
# only keep Id of transactions on "1997-02-16"
subset(clv.cdnow, Date == "1997-02-16", "Id")
summary.clv.fitted Summarizing a fitted CLV model
Description
Summary method for fitted CLV models that provides statistics about the estimated parameters
and information about the optimization process. If multiple optimization methods were used (for
example if specified in parameter optimx.args), all information here refers to the last method/row
of the resulting optimx object.
Usage
## S3 method for class 'clv.fitted'
summary(object, ...)
## S3 method for class 'clv.fitted.transactions.static.cov'
summary(object, ...)
## S3 method for class 'summary.clv.fitted'
print(
x,
digits = max(3L, getOption("digits") - 3L),
signif.stars = getOption("show.signif.stars"),
...
)
Arguments
object A fitted CLV model
... Ignored for summary, forwarded to printCoefmat for print.
x an object of class "summary.clv.no.covariates", usually a result of a call to
summary.clv.no.covariates.
digits the number of significant digits to use when printing.
signif.stars logical. If TRUE, ‘significance stars’ are printed for each coefficient.
Value
This function computes and returns a list of summary information of the fitted model given in
object. It returns a list of class summary.clv.no.covariates that contains the following compo-
nents:
name.model the name of the fitted model.
call The call used to fit the model.
tp.estimation.start
Date or POSIXct indicating when the fitting period started.
tp.estimation.end
Date or POSIXct indicating when the fitting period ended.
estimation.period.in.tu
Length of fitting period in time.units.
time.unit Time unit that defines a single period.
coefficients a px4 matrix with columns for the estimated coefficients, its standard error, the
t-statistic and corresponding (two-sided) p-value.
estimated.LL the value of the log-likelihood function at the found solution.
AIC Akaike’s An Information Criterion for the fitted model.
BIC Schwarz’ Bayesian Information Criterion for the fitted model.
KKT1 Karush-Kuhn-Tucker optimality conditions of the first order, as returned by op-
timx.
KKT2 Karush-Kuhn-Tucker optimality conditions of the second order, as returned by
optimx.
fevals The number of calls to the log-likelihood function during optimization.
method The last method used to obtain the final solution.
additional.options
A list of additional options used for model fitting.
Correlation Whether the correlation between the purchase and the attrition pro-
cess was estimated.
estimated.param.cor Correlation coefficient measuring the correlation between
the two processes, if used.
For models fits with static covariates, the list additionally is of class summary.clv.static.covariates
and the list in additional.options contains the following elements:
additional.options
Regularization Whether L2 regularization for parameters of contextual factors
was used.
lambda.life The regularization lambda used for the parameters of the Lifetime
process, if used.
lambda.trans The regularization lambda used for the parameters of the Trans-
action process, if used.
Constraint covs Whether any covariate parameters were forced to be the same
for both processes.
Constraint params Name of the covariate parameters which were constraint,
if used.
See Also
The model fitting functions pnbd.
Function coef will extract the coefficients matrix including summary statistics and function
vcov will extract the vcov from the returned summary object.
Examples
data("apparelTrans")
# Fit pnbd standard model, no covariates
clv.data.apparel <- clvdata(apparelTrans, time.unit="w",
estimation.split=40, date.format="ymd")
pnbd.apparel <- pnbd(clv.data.apparel)
# summary about model fit
summary(pnbd.apparel)
# Add static covariate data
data("apparelStaticCov")
data.apparel.cov <-
SetStaticCovariates(clv.data.apparel,
data.cov.life = apparelStaticCov,
names.cov.life = "Gender",
data.cov.trans = apparelStaticCov,
names.cov.trans = "Gender",
name.id = "Id")
# fit model with covariates and regularization
pnbd.apparel.cov <- pnbd(data.apparel.cov,
reg.lambdas = c(life=2, trans=4))
# additional summary about covariate parameters
# and used regularization
summary(pnbd.apparel.cov)
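# A short sketch continuing the example above: as noted in See Also,
# the coefficient matrix and the variance-covariance matrix can be
# extracted from the returned summary object
smry.cov <- summary(pnbd.apparel.cov)
coef(smry.cov)
vcov(smry.cov)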
vcov.clv.fitted Calculate Variance-Covariance Matrix for CLV Models fitted with
Maximum Likelihood Estimation
Description
Returns the variance-covariance matrix of the parameters of the fitted model object. The variance-
covariance matrix is derived from the Hessian that results from the optimization procedure. First,
the Moore-Penrose generalized inverse of the Hessian is used to obtain an estimate of the variance-
covariance matrix. Next, because some parameters may be transformed for the purpose of restricting
their value during the log-likelihood estimation, the variance estimates are adapted to be comparable
to the reported coefficient estimates. If the result is not positive definite, Matrix::nearPD is used with
standard settings to find the nearest positive definite matrix.
If multiple estimation methods were used, the Hessian of the last method is used.
Usage
## S3 method for class 'clv.fitted'
vcov(object, ...)
Arguments
object a fitted clv model object
... Ignored
Value
A matrix of the estimated covariances between the parameters of the model. The row and column
names correspond to the parameter names given by the coef method.
See Also
MASS::ginv, Matrix::nearPD
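A minimal usage sketch (assuming a fitted model such as pnbd.apparel from the summary examples above):
vcov(pnbd.apparel)
# standard errors as the square root of the diagonal
sqrt(diag(vcov(pnbd.apparel)))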
Package ‘devFunc’
October 13, 2022
Type Package
Title Clear and Condense Argument Check for User-Defined Functions
Version 0.1
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description A concise check of the format of one or multiple input argu-
ments (data type, length or value) is provided. Since multiple input arguments can be tested si-
multaneously, a lengthy list of checks at the beginning of your func-
tion can be avoided, hereby enhancing the readability and maintainability of your code.
License GPL-3
Encoding UTF-8
Depends R (>= 3.3.0)
Imports plyr (>= 1.8.4), stringr (>= 1.1.0)
LazyData true
RoxygenNote 6.0.1
NeedsCompilation no
Repository CRAN
Date/Publication 2018-01-24 18:30:38 UTC
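To illustrate the intended use, a minimal sketch of condensed argument checks at the top of a user-defined function (the function scaleVector and its arguments are hypothetical; the individual check functions are documented below):
library(devFunc)
scaleVector <- function(x, center, verbose) {
checkNumVec(list(x, center), c('x', 'center'))
checkLogicVec(list(verbose), 'verbose')
checkLength(list(center, verbose), 1)
if (verbose) message('Centering a vector of length ', length(x))
x - center
}
scaleVector(c(1, 2, 3), center = 2, verbose = TRUE)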
R topics documented:
checkCharVec . . . 2
checkIntVec . . . 2
checkLength . . . 3
checkLogicVec . . . 4
checkNumOrIntVec . . . 5
checkNumVec . . . 6
checkRanges . . . 6
checkValues . . . 7
checkCharVec Checking if all elements of a list are all character vectors
Description
Checking if all elements of a list are all character vectors
Usage
checkCharVec(listChar, namesListElements = NULL)
Arguments
listChar A list of the vectors of which one wishes to check if their data type is character
namesListElements
Character vector containing the names of the variables of which the data type
is checked. Optional parameter, with as default value NULL. This argument
should be used when the variable of which the data type is checked is not an ob-
ject that was provided as an argument to the function, or when the list elements
of the first argument do not have a name attached to it.
Value
No value is returned if all vectors have the character data type. If not, an error message is thrown
for each element of the list that does not pertain to the character data type.
Examples
arg1 <- 'something'
checkCharVec(list(arg1))
checkCharVec(list('somethingElse', TRUE))
arg2 <- 2
checkCharVec(list(arg2))
checkCharVec(list(arg2, TRUE, 5L))
checkIntVec Checking if all elements of a list are all integer vectors
Description
Checking if all elements of a list are all integer vectors
Usage
checkIntVec(listInt, namesListElements = NULL)
Arguments
listInt A list of the vectors of which one wishes to check if their data type is integer
namesListElements
Character vector containing the names of the variables of which the data type
is checked. Optional parameter, with as default value NULL. This argument
should be used when the variable of which the data type is checked is not an ob-
ject that was provided as an argument to the function, or when the list elements
of the first argument do not have a name attached to it.
Value
No value is returned if all vectors have the integer data type. If not, an error message is thrown for
each element of the list that does not pertain to the integer data type.
Examples
arg1 <- 1L
checkIntVec(list(arg1))
checkIntVec(list(1L, TRUE, 2L))
arg2 <- 'R'
checkIntVec(list(arg2))
checkIntVec(list(arg2, TRUE, 2))
checkLength Checking if the length of the different elements of a list corresponds to
what one expects.
Description
Checking if the length of the different elements of a list corresponds to what one expects.
Usage
checkLength(listObjects, lengthObjects)
Arguments
listObjects List of vectors, of irrespective data type.
lengthObjects Numeric vector, either of the same length as the ’listObjects’ argument, or of
length 1, but in the latter case, it will be tested whether or not the length of every
element of the ’listObjects’ argument equals this one value.
Value
No value is returned if all vectors correspond to the length against which it is tested. An error
message is thrown when the length does not corresponds for at least one element of the list.
Examples
arg1 <- 'something'
checkLength(list(arg1), 1)
checkLength(list('somethingElse', TRUE), 1)
checkLength(list('somethingElse', TRUE), c(1, 1))
arg2 <- 2:5
checkLength(list(arg1, arg2), c(1, 4))
checkLength(list(arg1, arg2), 1)
checkLogicVec Checking if all elements of a list are all logical vectors
Description
Checking if all elements of a list are all logical vectors
Usage
checkLogicVec(listLogic, namesListElements = NULL)
Arguments
listLogic A list of the vectors of which one wishes to check if their data type is logical
namesListElements
Character vector containing the names of the variables of which the data type
is checked. Optional parameter, with as default value NULL. This argument
should be used when the variable of which the data type is checked is not an ob-
ject that was provided as an argument to the function, or when the list elements
of the first argument do not have a name attached to it.
Value
No value is returned if all vectors have the logical data type. If not, an error message is thrown for
each element of the list that does not pertain to the logical data type.
Examples
arg1 <- TRUE
checkLogicVec(list(arg1))
checkLogicVec(list(TRUE, T, 2))
checkLogicVec(list(TRUE, T, 2), c('Var1', 'Var2', 'Var3'))
arg2 <- 0.8
checkLogicVec(list(arg2))
checkLogicVec(list(arg2, 'T', 2))
checkNumOrIntVec Checking if all elements of a list are all integer or numeric vectors
Description
Checking if all elements of a list are all integer or numeric vectors
Usage
checkNumOrIntVec(listNumOrInt, namesListElements = NULL)
Arguments
listNumOrInt A list of the vectors of which one wishes to check if their data type is integer or numeric.
namesListElements
Character vector containing the names of the variables of which the data type
is checked. Optional parameter, with as default value NULL. This argument
should be used when the variable of which the data type is checked is not an ob-
ject that was provided as an argument to the function, or when the list elements
of the first argument do not have a name attached to it.
Value
No value is returned if all vectors have the integer or numeric data type. If not, an error message is
thrown for each element of the list that does not pertain to the integer or numeric data type.
Examples
arg1 <- 1L
checkNumOrIntVec(list(arg1))
arg1 <- 1
checkNumOrIntVec(list(arg1))
checkNumOrIntVec(list(1L, TRUE, 2L))
checkNumOrIntVec(list(1L, TRUE, 2L), c('Var1', 'Var2', 'Var3'))
arg2 <- 'R'
checkNumOrIntVec(list(arg2))
checkNumOrIntVec(list(arg2, TRUE, 2))
checkNumVec Checking if all elements of a list are all numeric vectors
Description
Checking if all elements of a list are all numeric vectors
Usage
checkNumVec(listNum, namesListElements = NULL)
Arguments
listNum A list of the vectors of which one wishes to check if their data type is numeric
namesListElements
Character vector containing the names of the variables of which the data type
is checked. Optional parameter, with as default value NULL. This argument
should be used when the variable of which the data type is checked is not an ob-
ject that was provided as an argument to the function, or when the list elements
of the first argument do not have a name attached to it.
Value
No value is returned if all vectors have the numeric data type. If not, an error message is thrown for
each element of the list that does not pertain to the numeric data type.
Examples
arg1 <- 2
checkNumVec(list(arg1))
checkNumVec(list(TRUE, T, 2))
checkNumVec(list(TRUE, T, 2), c('Var1', 'Var2', 'Var3'))
arg2 <- 0.8
checkNumVec(list(arg2))
checkNumVec(list(arg2, 'T', 2))
checkRanges Checking if the value of a numeric or integer variable (of length 1) is
located within a certain range.
Description
Checking if the value of a numeric or integer variable (of length 1) is located within a certain range.
Usage
checkRanges(listObjects, listRanges)
Arguments
listObjects List of numeric or integer vectors, of each of length 1. It contains the list of
variables of which one wants to test its value against a vector of valid values.
This argument is obligatory.
listRanges List of character vectors, each character vector should be of length 2 or 4, while
the ’listRanges’ list should be of the same length as the ’listObjects’ argument.
It contains the values against which one wants to test the ’listObjects’ argument.
This argument is obligatory.
Value
No value is returned if every element of the ’listObjects’ argument is contained within the corresponding range of the ’listRanges’ argument. An error message is thrown when this is not the case for at
least one of the elements of the ’listObjects’ argument. Note that each element of the ’listRanges’
argument should have the following structure. The first element of the character vector, as well as
the third element if the character vector is of length 4, should be either ’>’, ’>=’, ’<’ or ’<=’. In
case the length of the character vector is 4, the first and the third element should point in opposite
directions (some form of ’>’ combined with some form of ’<’). The second and fourth element
should be a numeric value coerced to a character. If the character vector is of length 2 (4), then the
range is bounded from below or (and) above only.
Examples
someValue <- 2
checkRanges(list(someValue), list(c('<', 3)))
someValue <- '2'
checkRanges(list(someValue), list(c('<', 3)))
checkRanges(list(someValue), list(c(1.5, 3)))
someValue <- 6
someOtherValue <- 5
checkRanges(list(someValue, someOtherValue), list(c('>=', 2.5), c('>=', 2.5, '<=', 5)))
checkRanges(list(someValue, someOtherValue), list(c('>=', 2.5), c('>=', 2.5, '<', 5)))
checkRanges(list(someValue, someOtherValue), list(c('>=', 2.5, '<=', 5), c('>=', 2.5, '<', 5)))
checkValues Checking if the value of vectors (of length 1) is authorized.
Description
Checking if the value of vectors (of length 1) is authorized.
Usage
checkValues(listObjects, listValues)
Arguments
listObjects List of vectors, of irrespective data type and each of length 1. It contains the list
of variables of which one wants to test its value against a vector of valid values.
This argument is obligatory.
listValues List of vectors, of irrespective data type and of the same length as the ’listO-
bjects’ argument. It contains the values against which one wants to test the
’listObjects’ argument. This argument is obligatory.
Value
No value is returned if the value of every element of the ’listObjects’ argument is authorized. An error
message is thrown when at least one of the elements of ’listObjects’ contains an invalid value,
as stipulated by the ’listValues’ argument.
Examples
lossType <- 'absolute'
checkValues(list(lossType), list(c('absolute', 'quadratic')))
checkValues(list(lossType), list(c('absolute', 'quadratic'), c('test', 'test2')))
#The next error message is weird, since it does not return the real name of the listObject
#that was found to be wrong.
lossType <- 'absolute55'
listObjects <- list(lossType)
listValues <- list(c('absolute', 'quadratic'))
checkValues(listObjects, listValues)
#Now it is ok...
checkValues(list(lossType), list(c('absolute', 'quadratic')))
element-loading
===
> An element-loading component for Vue.js.
Demo
---
<http://element-component.github.io/element-loading>

Installation
---
```
npm i element-loading -D
```
Usage
---
```
import Vue from 'vue'
import ElLoading from 'element-loading'
import 'element-theme-default/dist/loading.css'

Vue.use(ElLoading)
```
### Service

Loading can also be invoked as a service. After importing the Loading service, call it where needed:
```
Loading.service(options);
```
The `options` parameter is the configuration object for the Loading, detailed in the table below. `Loading.service` returns a Loading instance, which can be closed by calling its `close` method:
```
let loadingInstance = Loading.service(options);
loadingInstance.close();
```
Note that a full-screen Loading invoked as a service is a singleton: if a new full-screen Loading is requested before the previous one is closed, no new instance is created; instead, the existing full-screen Loading instance is returned:
```
let loadingInstance1 = Loading.service({ fullscreen: true });
let loadingInstance2 = Loading.service({ fullscreen: true });
console.log(loadingInstance1 === loadingInstance2); // true
```
In this case, calling the `close` method on either of them closes the full-screen Loading.

If you have imported Element in full, a global method `$loading` is registered on Vue.prototype. It is called as `this.$loading(options)` and also returns a Loading instance.
### Options
| Option | Description | Type | Accepted values | Default |
| --- | --- | --- | --- | --- |
| target | The DOM node the Loading mask should cover. Accepts a DOM object or a string; if a string is passed, it is used as the argument to `document.querySelector` to obtain the corresponding DOM node | object/string | — | document.body |
| body | Same as the `body` modifier of the `v-loading` directive | boolean | — | false |
| fullscreen | Same as the `fullscreen` modifier of the `v-loading` directive | boolean | — | true |
| lock | Same as the `lock` modifier of the `v-loading` directive | boolean | — | false |
| text | Loading text displayed below the spinner | string | — | — |
| customClass | Custom class name for the Loading | string | — | — |
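As a sketch combining the options above (the selector `#my-table` is purely illustrative), a scoped, non-fullscreen loading mask could be created and closed like this:

```
// cover a specific element instead of the whole page
let loadingInstance = Loading.service({
  target: '#my-table',   // passed to document.querySelector
  fullscreen: false,
  text: 'Loading data...'
});
// ... once the data has arrived ...
loadingInstance.close();
```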
Development
---
```
make dev

## test
make test

## build
make build
```
License
===
[MIT](https://opensource.org/licenses/MIT)
Readme
---
### Keywords
* element
* vue
* component
RumbleDB is a querying engine that allows you to query your large, messy datasets with ease and productivity. It covers the entire data pipeline: clean up, structure, normalize, validate, convert to an efficient binary format, and feed it right into Machine Learning estimators and models, all within the JSONiq language.
RumbleDB supports JSON-like datasets including JSON, JSON Lines, Parquet, Avro, SVM, CSV, ROOT as well as text files, of any size from kB to at least the two-digit TB range (we have not found the limit yet).
RumbleDB is both good at handling small amounts of data on your laptop (in which case it simply runs locally and efficiently in a single-thread) as well as large amounts of data by spreading computations on your laptop cores, or onto a large cluster (in which case it leverages Spark automagically).
RumbleDB can also be used to easily and efficiently convert data from one format to another, including from JSON to Parquet thanks to JSound validation.
It runs on many local or distributed filesystems such as HDFS, S3, Azure blob storage, and HTTP (read-only), and of course your local drive as well. You can use any of these file systems to store your datasets, but also to store and share your queries and functions as library modules with other users, worldwide or within your institution, who can import them with just one line of code. You can also output the results of your query or the log to these filesystems (as long as you have write access).
With RumbleDB, queries can be written in the tailor-made and expressive JSONiq language. Users can write their queries declaratively and start with just a few lines. No need for complex JSON parsing machinery as JSONiq supports the JSON data model natively.
The core of RumbleDB lies in JSONiq's FLWOR expressions, the semantics of which map beautifully to DataFrames and Spark SQL. Likewise expression semantics is seamlessly translated to transformations on RDDs or DataFrames, depending on whether a structure is recognized or not. Transformations are not exposed as function calls, but are completely hidden behind JSONiq queries, giving the user the simplicity of an SQL-like language and the flexibility needed to query heterogeneous, tree-like data that does not fit in DataFrames.
This documentation provides you with instructions on how to get started, examples of data sets and queries that can be executed locally or on a cluster, links to JSONiq reference and tutorials, notes on the function library implemented so far, and instructions on how to compile RumbleDB from scratch.
Please note that this is a (maturing) beta version. We welcome bug reports in the GitHub issues section.
Below, you will find instructions to get started with RumbleDB, either in an online sandbox or on your own computer, which among others will allow you to query any files stored on your local disk. In short, there are four possibilities to get started:
* Use one of our online sandboxes (Jupyter notebook or simple sandbox page)
* Run the standalone RumbleDB jar (new and experimental) with Java on your laptop
* Install Spark yourself on your laptop (for more control on Spark parameters) and use a small RumbleDB jar with spark-submit
* Use our docker image on your laptop (go to the "Run with docker" section on the left menu)
## Method 0: you want to play with RumbleDB without installing anything
If you really want to start writing queries right now, there is a public sandbox here that will just work and guide you. You only need to have a Google account to be able to execute them, as this exposes our Jupyter notebook via the Colab environment. You are also free to download and use this notebook with any other provider or even your own local Jupyter and it will work just the same: the queries are all shipped to our own, small public backend no matter what.
If you do not have a Google account, you can also use our simpler sandbox page without Jupyter, here where you can type small queries and see the results.
With the sandboxes above, you can only inline your data in the query or access a dataset with an HTTP URL.
Once you want to take it to the next level and query your own data on your laptop, you will find instructions below to use RumbleDB on your own computer manually, which among others will allow you to query any files stored on your local disk. And then, you can take a leap of faith and use RumbleDB on a large cluster (Amazon EMR, your company's cluster, etc).
## Method 1: with the large, standalone, RumbleDB jar (experimental)
RumbleDB works with both Java 8 and Java 11. You can check the Java version that is configured on your machine with:
`java -version`
If you do not have Java, you can download version 8 or 11 from AdoptOpenJDK.
Do make sure it is not Java 17, which will not work.
### Download RumbleDB
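The standalone jar can be downloaded from the RumbleDB releases page on GitHub; for example (a sketch: the exact file name may differ for other versions, this assumes the 1.21.0 standalone jar used below):

```
wget https://github.com/RumbleDB/rumble/releases/download/v1.21.0/rumbledb-1.21.0-standalone.jar
```

You can then run a first query with: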
```
java -jar rumbledb-1.21.0-standalone.jar run -q '1+1'
```
or launch a JSONiq shell with:
```
java -jar rumbledb-1.21.0-standalone.jar repl
```
If you run out of memory, you can allocate more memory to Java with an additional Java parameter, e.g., -Xmx10g
Scroll down this page skipping the method 2 section in order to continue.
## Method 1 bis: with Homebrew
It is also possible to use RumbleDB with brew, however there is currently no way to adjust memory usage. To install RumbleDB with brew, type the commands:
```
brew tap rumbledb/rumble
brew install --build-from-source rumble
```
```
rumbledb run -q '1+1'
```
Then, launch a JSONiq shell with:
`rumbledb repl`
Scroll down this page skipping the method 2 section in order to continue.
## Method 2: Install Spark locally yourself and use a compact RumbleDB jar
This method gives you more control about the Spark configuration than the experimental standalone jar, in particular you can increase the memory used, change the number of cores, and so on.
If you use Linux, <NAME> also kindly contributed an installation script for Linux users that roughly takes care of what is described below for you.
### Install Spark
RumbleDB requires an Apache Spark installation on Linux, Mac or Windows.
It is straightforward to directly download it, unpack it and put it at a location of your choosing. We recommend to pick Spark 3.2.2. Let us call this location SPARK_HOME (it is a good idea, in fact to also define an environment variable SPARK_HOME pointing to the absolute path of this location).
What you need to do then is to add the subdirectory "bin" within the unpacked directory to the PATH variable. On macOS this is done by adding
```
export SPARK_HOME=/path/to/spark-3.2.2-bin-hadoop3.2
export PATH=$SPARK_HOME/bin:$PATH
```
(with SPARK_HOME appropriately set to match your unzipped Spark directory) to the file .zshrc in your home directory, then making sure to force the change with
`. ~/.zshrc`
in the shell. In Windows, changing the PATH variable is done in the control panel. In Linux, it is similar to macOS.
As an alternative, users who love the command line can also install Spark with a package management system instead, such as brew (on macOS) or apt-get (on Ubuntu). However, these might be less predictable than a raw download.
You can test that Spark was correctly installed with:
```
spark-submit --version
```
Spark 3+ is documented to work with both Java 8 and Java 11. If there is an issue with the Java version, RumbleDB will inform you with an appropriate error message. You can check the Java version that is configured on your machine with:
`java -version`
### Download the small version of the RumbleDB jar
If you use Spark 3.2+, use rumbledb-1.21.0-for-spark-3.2.jar.
If you use Spark 3.3+, use rumbledb-1.21.0-for-spark-3.3.jar.
These jars do not embed Spark, since you chose to set it up separately. They will work with your Spark installation with the spark-submit command.
## Create some data set
Create, in the same directory as RumbleDB to keep it simple, a file data.json and put the following content inside. This is a small list of JSON objects in the JSON Lines format.
```
{ "product" : "broiler", "store number" : 1, "quantity" : 20 }
{ "product" : "toaster", "store number" : 2, "quantity" : 100 }
{ "product" : "toaster", "store number" : 2, "quantity" : 50 }
{ "product" : "toaster", "store number" : 3, "quantity" : 50 }
{ "product" : "blender", "store number" : 3, "quantity" : 100 }
{ "product" : "blender", "store number" : 3, "quantity" : 150 }
{ "product" : "socks", "store number" : 1, "quantity" : 500 }
{ "product" : "socks", "store number" : 2, "quantity" : 10 }
{ "product" : "shirt", "store number" : 3, "quantity" : 10 }
```
If you want to later try a bigger version of this data, you can also download a larger version with 100,000 objects from here. Wait, no, in fact you do not even need to download it: you can simply replace the file path in the queries below with "https://rumbledb.org/samples/products-small.json" and it will just work! RumbleDB feels just at home on the Web.
RumbleDB also scales without any problems to datasets that have millions or (on a cluster) billions of objects, although of course, for billions of objects HDFS or S3 are a better idea than the Web to store your data, for obvious reasons.
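For instance, counting the objects in that Web-hosted sample should return 100000, the number of objects mentioned above (a quick sanity check; the count() function is described further down):

```
count(json-file("https://rumbledb.org/samples/products-small.json"))
```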
In the JSON Lines format that this simple dataset uses, you just need to make sure you have one object on each line (this is different from a plain JSON file, which has a single JSON value and can be indented). Of course, RumbleDB can read plain JSON files, too (with json-doc()), but below we will show you how to read JSON Line files, which is how JSON data scales.
## Running simple queries locally
If you used installation method 1 (the standalone jar), in a shell, from the directory where the RumbleDB .jar lies, type, all on one line:
```
java -jar rumbledb.jar repl
```
If you used installation method 2 (manual Spark setup), in a shell, from the directory where the RumbleDB .jar lies, type, all on one line:
```
spark-submit rumbledb.jar repl
```
The RumbleDB shell appears:
```
____ __ __ ____ ____
/ __ __ ______ ___ / /_ / /__ / __ \/ __ )
/ /_/ / / / / __ `__ \/ __ \/ / _ \/ / / / __ | The distributed JSONiq engine
/ _, _/ /_/ / / / / / / /_/ / / __/ /_/ / /_/ / 1.21.0 "Hawthorn blossom" beta
/_/ |_|__,_/_/ /_/ /_/_.___/_/___/_____/_____/
Master: local[*]
Item Display Limit: 200
Output Path: -
Log Path: -
Query Path : -
rumble$
```
You can now start typing simple queries like the following few examples. Press three times the return key to execute a query.
`"Hello, World"`
or
`1 + 1`
or
`(3 * 4) div 5`
The above queries do not actually use Spark. Spark is used when the I/O workload can be parallelized. The following query should output the file created above.
```
json-file("data.json")
```
json-file() reads its input in parallel, and thus will also work on your machine with MB or GB files (for TB files, a cluster will be preferable). You should specify a minimum number of partitions, here 10 (note that this is a bit ridiculous for our tiny example, but it is very relevant for larger files), as locally no parallelization will happen if you do not specify this number.
```
for $i in json-file("data.json", 10)
return $i
```
The above creates a very simple Spark job and executes it. More complex queries will create several Spark jobs. But you will not see anything of it: this is all done behind the scenes. If you are curious, you can go to localhost:4040 in your browser while your query is running (it will not be available once the job is complete) and look at what is going on behind the scenes.
Data can be filtered with the where clause. Again, below the hood, a Spark transformation will be used:
```
for $i in json-file("data.json", 10)
where $i.quantity gt 99
return $i
```
RumbleDB also supports grouping and aggregation, like so:
```
for $i in json-file("data.json", 10)
let $quantity := $i.quantity
group by $product := $i.product
return { "product" : $product, "total-quantity" : sum($quantity) }
```
RumbleDB also supports ordering. Note that clauses (where, let, group by, order by) can appear in any order. The only constraint is that the first clause should be a for or a let clause.
```
for $i in json-file("data.json", 10)
let $quantity := $i.quantity
group by $product := $i.product
let $sum := sum($quantity)
order by $sum descending
return { "product" : $product, "total-quantity" : $sum }
```
Finally, RumbleDB can also parallelize data provided within the query, exactly like Sparks' parallelize() creation:
```
for $i in parallelize((
{ "product" : "broiler", "store number" : 1, "quantity" : 20 },
{ "product" : "toaster", "store number" : 2, "quantity" : 100 },
{ "product" : "toaster", "store number" : 2, "quantity" : 50 },
{ "product" : "toaster", "store number" : 3, "quantity" : 50 },
{ "product" : "blender", "store number" : 3, "quantity" : 100 },
{ "product" : "blender", "store number" : 3, "quantity" : 150 },
{ "product" : "socks", "store number" : 1, "quantity" : 500 },
{ "product" : "socks", "store number" : 2, "quantity" : 10 },
{ "product" : "shirt", "store number" : 3, "quantity" : 10 }
), 10)
let $quantity := $i.quantity
group by $product := $i.product
let $sum := sum($quantity)
order by $sum descending
return { "product" : $product, "total-quantity" : $sum }
```
Mind the double parentheses, as the whole sequence of objects is passed as a single argument to parallelize.
## Further steps
Further steps could involve:
* Learning JSONiq. More details can be found in the JSONiq section of this documentation and in the JSONiq specification and tutorials.
* Storing some data on S3, creating a Spark cluster on Amazon EMR (or Azure blob storage and Azure, etc), and querying the data with RumbleDB. More details are found in the cluster section of this documentation.
* Using RumbleDB with Jupyter notebooks. For this, you can run RumbleDB as a server with a simple command, and get started by downloading the main JSONiq tutorial as a Jupyter notebook and just clicking your way through it. More details are found in the Jupyter notebook section of this documentation. Jupyter notebooks work both locally and on a cluster.
* Write JSONiq code, and share it on the Web, as others can import it from HTTP in just one line from within their queries (no package publication or installation required) or specify an HTTP URL as an input query to RumbleDB!
## Starting the HTTP server
RumbleDB can be run as an HTTP server that listens for queries. In order to do so, you can use the --server and --port parameters:
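For example, with the spark-submit setup from above (the standalone jar works the same way with java -jar):

```
spark-submit rumbledb.jar serve -p 8001
```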
This command will not return until you force it to (Ctrl+C on Linux and Mac). This is because the server has to run permanently to listen to incoming requests.
Most users will not have to do anything beyond running the above command. For most of them, the next step would be to open a Jupyter notebook that connects to this server automatically.
This HTTP server is built as a basic server for the single-user use case, i.e., the user runs their own RumbleDB server on their laptop or cluster, and connects to it via their Jupyter notebook, one query at a time. Some of our users have more advanced needs, or have a larger user base, and typically prefer to implement their own HTTP server, launching RumbleDB queries either via the public RumbleDB Java API (like the basic HTTP server does -- so its code can serve as a demo of the Java API) or via the RumbleDB CLI.
Caution! Launching a server always has security implications, especially as RumbleDB can read from and write to your disk, so make sure your firewall is activated. In later versions, we may support authentication tokens.
## Testing that it works (not necessary for most end users)
The HTTP server is meant not to be used directly by end users, but instead to make it possible to integrate RumbleDB in other languages and environments, such as Python and Jupyter notebooks.
To test that the server is running, you can try the following address in your browser, assuming you have a query stored locally at /tmp/query.jq. All queries have to go to the /jsoniq path.
```
http://localhost:8001/jsoniq?query-path=/tmp/query.jq
```
The request returns a JSON object, and the resulting sequence of items is in the values array.
```
{ "values" : [ "foo", "bar" ] }
```
Almost all parameters from the command line are exposed as HTTP parameters.
A query can also be submitted in the request body:
```
curl -X POST --data '1+1' http://localhost:8001/jsoniq
```
## Use with Jupyter notebooks
With the HTTP server running, if you have installed Python and Jupyter notebooks (for example with the Anaconda data science package that does all of it automatically), you can create a RumbleDB magic by just executing the following code in a cell:
```
!pip install rumbledb
%load_ext rumbledb
%env RUMBLEDB_SERVER=http://localhost:8001/jsoniq
```
Where, of course, you need to adapt the port (8001) to the one you picked previously.
Then, you can execute queries in subsequent cells with:
`%jsoniq 1 + 1`
or on multiple lines:
```
%%jsoniq
for $doc in json-file("my-file")
where $doc.foo eq "bar"
return $doc
```
## Use with clusters
You can also let RumbleDB run as an HTTP server on the master node of a cluster, e.g. on Amazon EMR or Azure. You just need to:
* Create the cluster (it is usually just the push of a few buttons in Amazon or Azure)
* Wait for a few minutes
* Make sure that your own IP has incoming access to EMR machines by configuring the security group properly. You usually only need to do so the first time you set up a cluster (if your IP address remains the same), because the security group configuration will be reused for future EMR clusters.
Then there are two options
### With SSH tunneling
* Connect to the master with SSH with an extra parameter for securely tunneling the HTTP connection (for example
```
-L 8001:localhost:8001
```
or any port of your choosing)
* Download the RumbleDB jar to the master node:
  wget https://github.com/RumbleDB/rumble/releases/download/v1.21.0/rumbledb-1.21.0.jar
* Launch the HTTP server on the master node (it will be accessible under `http://localhost:8001/jsoniq`):
  spark-submit rumbledb-1.21.0.jar serve -p 8001
* Then use Jupyter notebooks in the same way you would do it locally (it magically works because of the tunneling)
### With the EC2 hostname
There is also another way that does not need any tunnelling: you can specify the hostname of your EC2 machine (copied over from the EC2 dashboard) with the --host parameter.
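A sketch of such a command, based on the --host parameter listed in the parameter table further down (replace <ec2-hostname> with the hostname copied from the EC2 dashboard):

```
spark-submit rumbledb-1.21.0.jar serve -p 8001 --host <ec2-hostname>
```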
You also need to make sure in your EMR security group that the chosen port (e.g., 8001) is accessible from the machine in which you run your Jupyter notebook. Then, you can point your Jupyter notebook on this machine to
```
http://<ec2-hostname>:8001/jsoniq
```
.
Be careful not to open this port to the whole world, as queries can be sent that read and write to the EC2 machine and anything it has access to (like S3).
RumbleDB is able to read a variety of formats from a variety of file systems.
We support functions to read JSON, JSON Lines, Parquet, CSV, Text and ROOT files from various storage layers such as S3 and HDFS, Azure blob storage. We run most of our tests on Amazon EMR with S3 or HDFS, as well as locally on the local file system, but we welcome feedback on other setups.
## Supported formats
### JSON
A JSON file containing a single JSON object (or value) can be read with json-doc(). It will not spread access in any way, so that the files should be reasonably small. json-doc() can read JSON files even if the object or value is spread over multiple lines.
```
json-doc("file.json")
```
returns the (single) JSON value read from the supplied JSON file. This will also work for structures spread over multiple lines, as the read is local and not sharded.
json-doc() also works with an HTTP URI.
### JSON Lines
JSON Lines files are files that have one JSON object (or value) per line. Such files can thus become very large, up to billions or even trillions of JSON objects.
JSON Lines files are read with the json-file() function. json-file() exists in unary and binary. The first parameter specifies the JSON file (or set of JSON files) to read. The second, optional parameter specifies the minimum number of partitions. It is recommended to use it in a local setup, as the default is only one partition, which does not fully use the parallelism. If the input is on HDFS, then blocks are taken as splits by default. This is also similar to Spark's textFile().
json-file() also works with an HTTP URI, however, it will download the file completely and then parallelize, because HTTP does not support blocks. As a consequence, it can only be used for reasonable sizes.
If a host and port are set:
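The call then looks as follows (a sketch mirroring the HDFS example shown for structured-json-file() further down):

```
for $my-json in json-file("hdfs://host:port/directory/file.json")
where $my-json.property eq "some value"
return $my-json
```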
For a set of files:
```
for $my-json in json-file("/absolute/directory/file-*.json")
where $my-json.property eq "some value"
return $my-json
```
If a working directory is set:
```
for $my-json in json-file("*.json")
where $my-json.property eq "some value"
return $my-json
```
In some cases, JSON Lines files are highly structured, meaning that all objects have the same fields and these fields are associated with values with the same types. In this case, RumbleDB will be faster navigating such files if you open them with the function structured-json-file().
structured-json-file() parses one or more json files that follow JSON-lines format and returns a sequence of objects. This enables better performance with fully structured data and is recommended to use only when such data is available.
Warning: when the data has multiple types for the same field, this field and contained values will be treated as strings. This is also similar to Spark's spark.read.json().
```
for $my-structured-json in structured-json-file("hdfs://host:port/directory/structured-file.json")
where $my-structured-json.property eq "some value"
return $my-structured-json
```
### Text
Text files can be read into a sequence of string items, one string per line. RumbleDB can open files that have billions or potentially even trillions of lines with the function text-file().
text-file() exists in unary and binary. The first parameter specifies the text file (or set of text files) to read and return as a sequence of strings.
The second, optional parameter specifies the minimum number of partitions. It is recommended to use it in a local setup, as the default is only one partition, which does not fully use the parallelism. If the input is on HDFS, then blocks are taken as splits by default. This is also similar to Spark's textFile().
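For instance (a minimal sketch; the file name and the filter value are illustrative, and contains() is described in the functions section):

```
for $my-line in text-file("/absolute/directory/file.txt", 10)
where contains($my-line, "some value")
return $my-line
```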
(Also see examples for json-file for host and port, sets of files and working directory).
There is also a function local-text-file() that reads locally, without parallelism. RumbleDB can stream through the file efficiently.
RumbleDB supports also the W3C-standard functions unparsed-text and unparsed-text-lines. The output of the latter is automatically parallelized as a potentially large sequence of strings.
```
count(
let $text := unparsed-text("file:///home/me/file.txt")
for $my-string in tokenize($text, "\n")
for $token in tokenize($my-string, ";")
where $token eq "some value"
return $token
)
```
### Parquet
Parquet files can be opened with the function parquet-file().
```
for $my-object in parquet-file("file.parquet")
where $my-object.property eq "some value"
return $my-object
```
```
for $my-object in parquet-file("*.parquet")
where $my-object.property eq "some value"
return $my-object
```
### CSV
CSV files can be opened with the function csv-file().
Options can be given in the form of a JSON object. All available options can be found in the Spark documentation
```
for $i in csv-file("file.csv", {"header": true, "inferSchema": true})
where $i.key eq "some value"
return $i
```
### AVRO
Avro files can be opened with the function avro-file().
Parses one or more avro files and returns a sequence of objects. This is similar to Spark's
```
spark.read().format("avro").load()
```
Options can be given in the form of a JSON object. All available options relevant for reading in avro data can be found in the Spark documentation
```
for $i in avro-file("file.avro", {"ignoreExtension": true, "avroSchema": "/path/to/schema.avsc"})
where $i._col1 eq "some value"
return $i
```
### libSVM
libSVM files can be opened with the function libsvm-file().
Parses one or more libsvm files and returns a sequence of objects. This is similar to Spark's
```
spark.read().format("libsvm").load()
```
```
for $i in libsvm-file("file.txt")
where $i._col1 eq "some value"
return $i
```
### ROOT
ROOT files can be opened with the function root-file(). The second parameter specifies the path within the ROOT file (a ROOT file is like a mini file system of its own). It is often `Events` or `tree`.
```
for $i in root-file("events.root", "Events")
where $i._c0 eq "some value"
return $i
```
## Creating your own big sequence
The function parallelize() can be used to create, on the fly, a big sequence of items in such a way that RumbleDB can spread its querying across cores and machines.
This function behaves like the Spark parallelize() you are familiar with and sends a large sequence to the cluster. The rest of the FLWOR expression is then evaluated with Spark transformations on the cluster.
```
for $i in parallelize(1 to 1000000)
where $i mod 1000 eq 0
return $i
```
There is also a second, optional parameter that specifies the minimum number of partitions.
```
for $i in parallelize(1 to 1000000, 100)
where $i mod 1000 eq 0
return $i
```
## Supported file systems
As a general rule of thumb, RumbleDB can read from any file system that Spark can read from. The file system is inferred from the scheme used in the path used in any of the functions described above.
Note that the scheme is optional, in which case the default file system as configured in Hadoop and Spark is used. A relative path can also be provided, in which case the working directory (including its file system) as configured is used.
### Local file system
The scheme for the local file system is `file://` . Pay attention to the fact that for reading an absolute path, a third slash will follow the scheme.
Example:
```
file:///home/user/file.json
```
Warning! If you try to open a file from the local file system on a cluster of several machines, this might fail as the file is only on the machine that you are connected to. You need to pass additional parameters to `spark-submit` to make sure that any files read locally will be copied over to all machines. If you use `spark-submit` locally, however, this will work out of the box, but we recommend specifying a number of partitions to avoid reading the file as a single partition.
For Windows, you need to use forward slashes, and if the local file system is set up as the default and you omit the file scheme, you still need a forward slash in front of the drive letter to not confuse it with a URI scheme:
```
file:///C:/Users/hadoop/file.json
file:/C:/Users/hadoop/file.json
/C:/Users/hadoop/file.json
```
In particular, the following will not work:
```
file://C:/Users/hadoop/file.json
C:/Users/hadoop/file.json
C:\Users\hadoop\file.json
file://C:\Users\hadoop\file.json
```
### HDFS
The scheme for the Hadoop Distributed File System is `hdfs://` . A host and port should also be specified, as this is required by Hadoop.
Example:
```
hdfs://www.example.com:8021/user/hadoop/file.json
```
If HDFS is already set up as the default file system as is often the case in managed Spark clusters, an absolute path suffices:
```
/user/hadoop/file.json
```
The following will not work:
```
hdfs:///user/hadoop/file.json
hdfs://user/hadoop/file.json
hdfs:/user/hadoop/file.json
```
### S3
There are three schemes for reading from S3: `s3://` , `s3n://` and `s3a://` .
Examples:
```
s3://my-bucket/directory/file.json
s3n://my-bucket/directory/file.json
s3a://my-bucket/directory/file.json
```
If you are on an Amazon EMR cluster, `s3://` is straightforward to use and will automatically authenticate. For more details on how to set up your environment to read from S3 and which scheme is most appropriate, we refer to the Amazon S3 documentation.
### Azure blob storage
The scheme for Azure blob storage is `wasb://` .
Example:
```
wasb://mycontainer@<EMAIL>.blob.core.windows.net/directory/file.json
```
We list here the most important functions supported by RumbleDB, and introduce them by means of examples. Highly detailed specifications can be found in the underlying W3C standard, unless the function is marked as specific to JSON or RumbleDB, in which case it can be found here. JSONiq and RumbleDB intentionally do not support builtin functions on XML nodes, NOTATION or QNames. RumbleDB supports almost all other W3C-standardized functions, please contact us if you are still missing one.
For the sake of ease of use, all W3C standard builtin functions and JSONiq builtin functions are in the RumbleDB namespace, which is the default function namespace and does not require any prefix in front of function names.
It is recommended that user-defined functions are put in the local namespace, i.e., their name should have the local: prefix (which is predefined). Otherwise, there is the risk that your code becomes incompatible with subsequent releases if new (unprefixed) builtin functions are introduced.
## Functions and operators on numerics
### Functions on numeric values
### abs
Fully implemented
`abs(-2)`
returns 2.0
### ceiling
Fully implemented
`ceiling(2.3)`
returns 3.0
### floor
Fully implemented
`floor(2.3)`
returns 2.0
### round
Fully implemented
`round(2.3)`
returns 2.0
`round(2.2345, 2)`
returns 2.23
### round-half-to-even
```
round-half-to-even(2.2345, 2), round-half-to-even(2.2345)
```
### Parsing numbers
### number
Fully implemented
`number("15")`
returns 15 as a double
`number("foo")`
returns NaN as a double
`number(15)`
returns 15 as a double
### Formatting integers
### format-integer
## Formatting numbers
### format-number
## Trigonometric and exponential functions
### pi
Fully implemented
`pi()`
returns 3.141592653589793
### exp
Fully implemented
`exp(10)`
### exp10
Fully implemented
`exp10(10)`
### log
Fully implemented
`log(100)`
### log10
Fully implemented
`log10(100)`
### pow
Fully implemented
`pow(10, 2)`
### sqrt
Fully implemented
`sqrt(4)`
returns 2
### sin
Fully implemented
`sin(pi())`
### cos
Fully implemented
`cos(pi())`
### cosh
JSONiq-specific. Fully implemented
`cosh(pi())`
### sinh
JSONiq-specific. Fully implemented
`sinh(pi())`
### tan
Fully implemented
`tan(pi())`
### asin
Fully implemented
`asin(1)`
### acos
Fully implemented
`acos(1)`
### atan
Fully implemented
`atan(1)`
### atan2
Fully implemented
`atan2(1, 1)`
### Random numbers
### random-number-generator
## Functions on strings
### Functions to assemble and disassemble strings
### string-to-codepoints
```
string-to-codepoints("Thérèse")
```
returns (84, 104, 233, 114, 232, 115, 101)
```
string-to-codepoints("")
```
### codepoints-to-string
```
codepoints-to-string((2309, 2358, 2378, 2325))
```
returns "अशॊक"
```
codepoints-to-string(())
```
### Comparison of strings
### compare
### codepoint-equal
```
codepoint-equal("abcd", "abcd")
```
```
codepoint-equal("", ())
```
### collation-key
### contains-token
### Functions on string values
### concat
```
concat("foo", "bar", "foobar")
```
### string-join
```
string-join(("foo", "bar", "foobar"))
```
```
string-join(("foo", "bar", "foobar"), "-")
```
returns "foo-bar-foobar"
### substring
```
substring("foobar", 4)
```
```
substring("foobar", 4, 2)
```
returns "ba"
### string-length
Returns the length of the supplied string, or 0 if the empty sequence is supplied.
`string-length("foo")`
returns 3.
`string-length(())`
returns 0.
### normalize-space
Normalization of spaces in a string.
```
normalize-space(" The wealthy curled darlings of our nation. "),
```
returns "The wealthy curled darlings of our nation."
### normalize-unicode
Returns the value of the input after applying Unicode normalization.
```
normalize-unicode("hello world", "NFC")
```
returns the Unicode-normalized version of the input string. Normalization forms NFC, NFD, NFKC, and NFKD are supported. The form "FULLY-NORMALIZED" is also supported but should be used with caution, as the only composition exclusion characters supported are those that are uncommented in the corresponding composition-exclusions file.
### upper-case
Fully implemented
`upper-case("abCd0")`
returns "ABCD0"
### lower-case
Fully implemented
`lower-case("ABc!D")`
returns "abc!d"
### translate
```
translate("bar","abc","ABC")
```
returns "BAr"
```
translate("--aaa--","abc-","ABC")
```
returns "AAA"
### Functions based on substring matching
### contains
```
contains("foobar", "ob")
```
### starts-with
```
starts-with("foobar", "foo")
```
### ends-with
```
ends-with("foobar", "bar")
```
### substring-before
```
substring-before("foobar", "bar")
```
returns "foo"
```
substring-before("foobar", "o")
```
returns "f"
### substring-after
```
substring-after("foobar", "foo")
```
```
substring-after("foobar", "r")
```
### String functions that use regular expressions
### matches
Regular expression matching. The semantics of regular expressions are those of Java's Pattern class.
```
matches("foobar", "o+")
```
```
matches("foobar", "^fo+.*")
```
### replace
Arity 3 implemented, arity 4 is not.
Regular expression matching and replacing. The semantics of regular expressions are those of Java's Pattern class.
```
replace("abracadabra", "bra", "*")
```
returns "a*cada*"
```
replace("abracadabra", "a(.)", "a$1$1")
```
returns "abbraccaddabbra"
### tokenize
```
tokenize("aa bb cc dd")
```
```
tokenize("aa;bb;cc;dd", ";")
```
### analyze-string
## Functions that manipulate URIs
### resolve-uri
```
string(resolve-uri("examples","http://www.examples.com/"))
```
returns http://www.examples.com/examples
### encode-for-uri
```
encode-for-uri("100% organic")
```
returns 100%25%20organic
### iri-to-uri
### escape-html-uri
## Functions and operators on Boolean values
### Boolean constant functions
### true
Fully implemented
`fn:true()`
returns true
### false
Fully implemented
`fn:false()`
returns false
### boolean
Fully implemented
`boolean(9)`
returns true
`boolean("")`
returns false
### not
Fully implemented
`not(9)`
returns false
`not("")`
returns true
## Functions and operators on durations
### Component extraction functions on durations
### years-from-duration
```
years-from-duration(duration("P2021Y6M"))
```
### months-from-duration
```
months-from-duration(duration("P2021Y6M"))
```
### days-from-duration
```
days-from-duration(duration("P2021Y6M17D"))
```
returns 17.
### hours-from-duration
```
hours-from-duration(duration("P2021Y6M17DT12H35M30S"))
```
### minutes-from-duration
returns 35.
### seconds-from-duration
returns 30.
## Functions and operators on dates and times
### Constructing a DateTime
### dateTime
```
dateTime("2004-04-12T13:20:00+14:00")
```
returns 2004-04-12T13:20:00+14:00
### Component extraction functions on dates and times
### year-from-dateTime
```
year-from-dateTime(dateTime("2021-04-12T13:20:32.123+02:00"))
```
### month-from-dateTime
```
month-from-dateTime(dateTime("2021-04-12T13:20:32.123+02:00"))
```
returns 04.
### day-from-dateTime
```
day-from-dateTime(dateTime("2021-04-12T13:20:32.123+02:00"))
```
### hours-from-dateTime
```
hours-from-dateTime(dateTime("2021-04-12T13:20:32.123+02:00"))
```
### minutes-from-dateTime
```
minutes-from-dateTime(dateTime("2021-04-12T13:20:32.123+02:00"))
```
### seconds-from-dateTime
```
seconds-from-dateTime(dateTime("2021-04-12T13:20:32.123+02:00"))
```
returns 32.
### timezone-from-dateTime
```
timezone-from-dateTime(dateTime("2021-04-12T13:20:32.123+02:00"))
```
### year-from-date
```
year-from-date(date("2021-06-04"))
```
### month-from-date
```
month-from-date(date("2021-06-04"))
```
### day-from-date
```
day-from-date(date("2021-06-04"))
```
### timezone-from-date
```
timezone-from-date(date("2021-06-04-14:00"))
```
returns -PT14H.
### hours-from-time
```
hours-from-time(time("13:20:32.123+02:00"))
```
### minutes-from-time
```
minutes-from-time(time("13:20:32.123+02:00"))
```
### seconds-from-time
```
seconds-from-time(time("13:20:32.123+02:00"))
```
returns 32.123.
### timezone-from-time
```
timezone-from-time(time("13:20:32.123+02:00"))
```
### Timezone adjustment functions on dates and time values
### adjust-dateTime-to-timezone
```
adjust-dateTime-to-timezone(dateTime("2004-04-12T13:20:15+14:00"), dayTimeDuration("PT4H5M"))
```
returns 2004-04-12T03:25:15+04:05.
### adjust-date-to-timezone
```
adjust-date-to-timezone(date("2014-03-12"), dayTimeDuration("PT4H"))
```
returns 2014-03-12+04:00.
### adjust-time-to-timezone
```
adjust-time-to-timezone(time("13:20:00-05:00"), dayTimeDuration("-PT14H"))
```
returns 04:20:00-14:00.
### Formatting dates and times functions
The functions in this section accept a simplified version of the picture string, in which a variable marker accepts only:
* One of the following component specifiers: Y, M, d, D, F, H, m, s, P
* A first presentation modifier, for which the value can be:
* Nn, for all supported component specifiers, besides P
* N, if the component specifier is P
* a format token that indicates a numbering sequence of the following form: '0001'
* A second presentation modifier, for which the value can be t or c, which are also the default values
* A width modifier, both minimum and maximum values
### format-dateTime
```
format-dateTime(dateTime("2004-04-12T13:20:00"), "[m]-[H]-[D]-[M]-[Y]")
```
returns 20-13-12-4-2004
### format-date
```
format-date(date("2004-04-12"), "[D]-[M]-[Y]")
```
returns 12-4-2004
### format-time
```
format-time(time("13:20:00"), "[H]-[m]-[s]")
```
returns 13-20-0
## Functions related to QNames
## Functions and operators on sequences
### General functions and operators on sequences
### empty
Returns a boolean whether the input sequence is empty or not.
`empty(1 to 10)`
returns false.
### exists
Returns a boolean whether the input sequence has at least one item or not.
`exists(1 to 10)`
returns true.
`exists(())`
returns false.
```
exists(json-file("file.json"))
```
### head
Returns the first item of a sequence, or the empty sequence if it is empty.
`head(1 to 10)`
returns 1.
`head(())`
returns ().
```
head(json-file("file.json"))
```
### tail
Returns all but the first item of a sequence, or the empty sequence if it is empty.
`tail(1 to 5)`
returns (2, 3, 4, 5).
`tail(())`
returns ().
```
tail(json-file("file.json"))
```
### insert-before
```
insert-before((3, 4, 5), 0, (1, 2))
```
returns (1, 2, 3, 4, 5).
### remove
Fully implemented
`remove((1,2, 10), 3)`
returns (1, 2).
### reverse
Fully implemented
`reverse((1, 2, 3))`
returns (3, 2, 1).
### subsequence
```
subsequence((1, 2, 3), 2, 5)
```
returns (2, 3).
### unordered
Fully implemented
`unordered((1, 2, 3))`
returns (1, 2, 3).
### Functions that compare values in sequences
### distinct-values
Eliminates duplicates from a sequence of atomic items.
```
distinct-values((1, 1, 4, 3, 1, 1, "foo", 4, "foo", true, 3, 1, true, 5, 3, 1, 1))
```
returns (1, 4, 3, "foo", true, 5).
```
distinct-values(json-file("file.json").foo)
```
```
distinct-values(text-file("file.txt"))
```
### index-of
```
index-of((10, 20, 30, 40), 30)
```
returns 3.
```
index-of((10, 20, 30, 40), 35)
```
returns the empty sequence ().
### deep-equal
```
deep-equal((10, 20, "a"), (10, 20, "a"))
```
```
deep-equal(("b", "0"), ("b", 0))
```
returns false.
### Functions that test the cardinality of sequences
### zero-or-one
Fully implemented
`zero-or-one(("a"))`
returns "a".
```
zero-or-one(("a", "b"))
```
### one-or-more
Fully implemented
`one-or-more(("a"))`
returns "a".
`one-or-more(())`
returns an error.
### exactly-one
Fully implemented
`exactly-one(("a"))`
returns "a".
```
exactly-one(("a", "b"))
```
### Aggregate functions
### count
```
count(json-file("file.json"))
```
```
count(
for $i in json-file("file.json")
where $i.foo eq "bar"
return $i
)
```
### avg
```
let $x := (1, 2, 3, 4)
return avg($x)
```
returns 2.5.
```
avg(json-file("file.json").foo)
```
### max
```
for $i in 1 to 3
return max($i)
```
```
max(json-file("file.json").foo)
```
### min
returns 1.
```
for $i in 1 to 3
return min($i)
```
```
min(json-file("file.json").foo)
```
### sum
```
let $x := (1, 2, 3, 4)
return sum($x)
```
returns 10.
```
sum(json-file("file.json").foo)
```
### Functions giving access to external information
### collection
### Parsing and serializing
### serialize
Serializes the supplied input sequence, returning the serialized representation of the sequence as a string
```
serialize({hello: "world"})
```
returns { "hello" : "world" }
## Context Functions
### position
```
(1 to 10)[position() eq 5]
```
returns 5
### last
```
(1 to 10)[position() eq last()]
```
returns 10
`(1 to 10)[last()]`
returns 10
### current-dateTime
Fully implemented
`current-dateTime()`
returns 2020-02-26T11:22:48.423+01:00
### current-date
Fully implemented
`current-date()`
returns 2020-02-26Europe/Zurich
### current-time
Fully implemented
`current-time()`
returns 11:24:10.064+01:00
### implicit-timezone
Fully implemented
`implicit-timezone()`
returns PT1H.
### default-collation
Fully implemented
`default-collation()`
returns http://www.w3.org/2005/xpath-functions/collation/codepoint.
## High order functions
### Functions on functions
### function-lookup
### function-name
### function-arity
### Basic higher-order functions
### for-each
### filter
### fold-left
### fold-right
### for-each-pair
## JSONiq functions
### keys
```
keys({"foo" : "bar", "bar" : "foobar"})
```
returns ("foo", "bar"). Also works on an input sequence, eliminating duplicates
```
keys(({"foo" : "bar", "bar" : "foobar"}, {"foo": "bar2"}))
```
```
keys(json-file("file.json"))
```
### members
Fully implemented
`members([1 to 100])`
This function returns the members of an array as a sequence, but not recursively, i.e., nested arrays are not unboxed.
Returns the first 100 integers as a sequence. Also works on an input sequence, in a distributive way.
```
members(([1 to 100], [ 300 to 1000 ]))
```
### null
Fully implemented
`null()`
Returns a JSON null (also available as the literal null).
### parse-json
### size
Fully implemented
`size([1 to 100])`
returns 100. Also works if the empty sequence is supplied, in which case it returns the empty sequence.
`size(())`
### accumulate
```
accumulate(({ "b" : 2 }, { "c" : 3 }, { "b" : [1, "abc"] }, {"c" : {"d" : 0.17}}))
```
```
{ "b" : [ 2, [ 1, "abc" ] ], "c" : [ 3, { "d" : 0.17 } ] }
```
### descendant-arrays
### descendant-objects
### descendant-pairs
```
descendant-pairs(({ "a" : [1, {"b" : 2}], "d" : {"c" : 3} }))
```
```
{ "a" : [ 1, { "b" : 2 } ] }
{ "b" : 2 }
{ "d" : { "c" : 3 } }
{ "c" : 3 }
```
### flatten
```
flatten(([1, 2], [[3, 4], [5, 6]], [7, [8, 9]]))
```
Unboxes arrays recursively, stopping the recursion when any other item is reached (object or atomic). Also works on an input sequence, in a distributive way.
Returns (1, 2, 3, 4, 5, 6, 7, 8, 9).
### intersect
```
intersect(({"a" : "abc", "b" : 2, "c" : [1, 2], "d" : "0"}, { "a" : 2, "b" : "ab", "c" : "foo" }))
```
```
{ "a" : [ "abc", 2 ], "b" : [ 2, "ab" ], "c" : [ [ 1, 2 ], "foo" ] }
```
### project
```
project({"foo" : "bar", "bar" : "foobar", "foobar" : "foo" }, ("foo", "bar"))
```
returns the object {"foo" : "bar", "bar" : "foobar"}. Also works on an input sequence, in a distributive way.
### remove-keys
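For example (a call reconstructed to mirror the project() example above):

```
remove-keys({"foo" : "bar", "bar" : "foobar", "foobar" : "foo" }, ("foo", "bar"))
```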
returns the object {"foobar" : "foo"}. Also works on an input sequence, in a distributive way.
### values
```
values({"foo" : "bar", "bar" : "foobar"})
```
returns ("bar", "foobar"). Also works on an input sequence, in a distributive way.
```
values(({"foo" : "bar", "bar" : "foobar"}, {"foo" : "bar2"}))
```
```
values(json-file("file.json"))
```
### encode-for-roundtrip
### decode-from-roundtrip
### json-doc
```
json-doc("/Users/sheldon/object.json")
```
returns the (unique) JSON value parsed from a local JSON (but not necessarily JSON Lines) file where this value may be spread over multiple lines.
The parameters that can be used on the command line as well as on the planned HTTP server are shown below.
RumbleDB runs in three modes. You can select the mode by passing a verb as the first parameter. For example:
```
spark-submit rumbledb.jar run file.jq -o output-dir -P 1
spark-submit rumbledb.jar run -q '1+1'
spark-submit rumbledb.jar serve -p 8001
spark-submit rumbledb.jar repl -c 10
```
Previous parameters (--shell, --query-path, --server) work in a backward compatible fashion, however we do recommend to start using the new verb-based format.
Shell parameter | Shortcut | HTTP parameter | example values | Semantics |
| --- | --- | --- | --- | --- |
--shell | repl | N/A | yes, no | yes runs the interactive shell. No executes a query specified with --query-path |
--shell-filter | N/A | N/A | jq . | Post-processes the output of JSONiq queries on the shell with the specified command (reading the RumbleDB output via stdin) |
--query | -q | query | 1+1 | A JSONiq query directly provided as a string. |
--query-path | (any text without -- or - is recognized as a query path) | query-path | file:///folder/file.jq | A JSONiq query file to read from (from any file system, even the Web!). |
--output-path | -o | output-path | file:///folder/output | Where to output to (if the output is large, it will create a sharded directory, otherwise it will create a file) |
--output-format | -f | N/A | json, csv, avro, parquet, or any other format supported by Spark | An output format to use for the output. Formats other than json can only be output if the query outputs a highly structured sequence of objects (you can nest your query in an annotate() call to specify a schema if it does not). |
--output-format-option:foo | N/A | N/A | bar | Options to further specify the output format (example: separator character for CSV, compression format...) |
--overwrite | -O (meaning --overwrite yes) | overwrite | yes, no | Whether to overwrite to --output-path. No throws an error if the output file/folder exists. |
--materialization-cap | -c | materialization-cap | 200 | A cap on the maximum number of items to materialize for large sequences within a query or for outputting on screen (used to be called --result-size). |
--number-of-output-partitions | -P | N/A | ad hoc | How many partitions to create in the output, i.e., the number of files that will be created in the output path directory. |
--log-path | N/A | log-path | file:///folder/log.txt | Where to output log information |
--print-iterator-tree | N/A | N/A | yes, no | For debugging purposes, prints out the expression tree and runtime interator tree. |
--show-error-info | -v (meaning --show-error-info yes) | show-error-info | yes, no | For debugging purposes. If you want to report a bug, you can use this to get the full exception stack. If no, then only a short message is shown in case of error. |
--static-typing | -t (meaning --static-typing yes) | static-typing | yes, no | Activates static type analysis, which annotates the expression tree with inferred types at compile time and enables more optimizations (experimental). Deactivated by default. |
--server | serve | N/A | yes, no | yes runs RumbleDB as a server on port 8001. Run queries with http://localhost:8001/jsoniq?query-path=/folder/foo.json |
--port | -p | N/A | 8001 (default) | Changes the port of the RumbleDB HTTP server to any of your liking |
--host | -h | N/A | localhost (default) | Changes the host of the RumbleDB HTTP server to any of your liking |
--variable:foo | N/A | variable:foo | bar | --variable:foo bar initialize the global variable $foo to "bar". The query must contain the corresponding global variable declaration, e.g., "declare variable $foo external;" |
--context-item | -I | context-item | bar | initializes the global context item $$ to "bar". The query must contain the corresponding global variable declaration, e.g., "declare context item external;" |
--context-item-input | -i | context-item-input | - | reads the context item value from the standard input |
--context-item-input-format | N/A | context-item-input-format | text or json | sets the input format to use for parsing the standard input (as text or as a serialized json value) |
--dates-with-timezone | N/A | dates-with-timezone | yes or no | activates timezone support for the type xs:date (deactivated by default) |
--optimize-general-comparison-to-value-comparison | N/A | optimize-general-comparison-to-value-comparison | yes or no | activates automatic conversion of general comparisons to value comparisons when applicable (activated by default) |
--function-inlining | N/A | function-inlining | yes or no | activates function inlining for non-recursive functions (activated by default) |
--parallel-execution | N/A | parallel-execution | yes or no | activates parallel execution when possible (activated by default) |
--native-execution | N/A | native-execution | yes or no | activates native (Spark SQL) execution when possible (activated by default) | |
crrSC | cran | R | Package ‘crrSC’
October 12, 2022
Title Competing Risks Regression for Stratified and Clustered Data
Version 1.1.2
Author <NAME> and <NAME>
Description Extension of 'cmprsk' to Stratified and Clustered data.
A goodness of fit test for Fine-Gray model is also provided.
Methods are detailed in the following articles: Zhou et al. (2011) <doi:10.1111/j.1541-0420.2010.01493.x>,
Zhou et al. (2012) <doi:10.1093/biostatistics/kxr032>,
Zhou et al. (2013) <doi:10.1002/sim.5815>.
Depends survival
Maintainer <NAME> <<EMAIL>>
License GPL-2
NeedsCompilation yes
Repository CRAN
Date/Publication 2022-06-10 21:20:20 UTC
R topics documented:
bce
cdata
center
crrc
crrs
crrvvs
print.crrs
psh.test
bce Breast Cancer Data
Description
Data randomly generated according to the El178 clinical trial
Usage
data(bce)
Format
A data frame with 200 observations and the following 6 variables.
trt Treatment: 0=Placebo, 1=Tamoxifen
time Event time
type Event type. 0=censored, 1=Breast Cancer recurrence, 2=Death without recurrence
nnodes Number of positive nodes
tsize Tumor size
age Age
Examples
data(bce)
cdata Clustered competing risks simulated data
Description
sample of 200 observations
Usage
data(cdata)
Format
A data frame with 200 observations and the following 4 variables. The simulation is detailed in the
paper Competing Risks Regression for Clustered Data (<NAME>, Labopin; Biostatistics, 2011, in press).
ID Id of cluster, each cluster is of size 2
ftime Event time
fstatus Event type. 0=censored, 1 , 2
z a binary covariate with P(z=1)=0.5
Examples
data(cdata)
center Multicenter Bone Marrow transplantation data
Description
Random sub sample of 400 patients
Usage
data(center)
Format
A data frame with 400 observations and the following 5 variables.
id Id of transplantation center
ftime Event time
fstatus Event type. 0=censored, 1=Acute or Chronic GvHD , 2=Death free of GvHD
cells source of stem cells: peripheral blood vs bone marrow
fm female donor to male recipient match
Examples
data(center)
crrc Competing Risks Regression for Clustered Data
Description
Regression modeling of subdistribution hazards for clustered right censored data. Failure times
within the same cluster are dependent.
Usage
crrc(ftime,fstatus,cov1,cov2,tf,cluster,
cengroup,failcode=1,
cencode=0, subset,
na.action=na.omit,
gtol=1e-6,maxiter=10,init)
Arguments
cluster Clustering covariate
ftime vector of failure/censoring times
fstatus vector with a unique code for each failure type and a separate code for censored
observations
cov1 matrix (nobs x ncovs) of fixed covariates (either cov1, cov2, or both are required)
cov2 matrix of covariates that will be multiplied by functions of time; if used, often
these covariates would also appear in cov1 to give a prop hazards effect plus a
time interaction
tf functions of time. A function that takes a vector of times as an argument and
returns a matrix whose jth column is the value of the time function correspond-
ing to the jth column of cov2 evaluated at the input time vector. At time tk, the
model includes the term cov2[,j]*tf(tk)[,j] as a covariate.
cengroup vector with different values for each group with a distinct censoring distribution
(the censoring distribution is estimated separately within these groups). All data
in one group, if missing.
failcode code of fstatus that denotes the failure type of interest
cencode code of fstatus that denotes censored observations
subset a logical vector specifying a subset of cases to include in the analysis
na.action a function specifying the action to take for any cases missing any of ftime, fsta-
tus, cov1, cov2, cengroup, or subset.
gtol iteration stops when a function of the gradient is < gtol
maxiter maximum number of iterations in Newton algorithm (0 computes scores and var
at init, but performs no iterations)
init initial values of regression parameters (default=all 0)
Details
This method extends the Fine-Gray (1999) proportional hazards model for the subdistribution to
accommodate situations where the failure times within a cluster might be correlated, since the study
subjects from the same cluster share common factors. This model directly assesses the effect of
covariates on the subdistribution of a particular type of failure in a competing risks setting.
Value
Returns a list of class crr, with components
$coef the estimated regression coefficients
$loglik log pseudo-likelihood evaluated at coef
$score derivatives of the log pseudo-likelihood evaluated at coef
$inf -second derivatives of the log pseudo-likelihood
$var estimated variance covariance matrix of coef
$res matrix of residuals
$uftime vector of unique failure times
$bfitj jumps in the Breslow-type estimate of the underlying sub-distribution cumula-
tive hazard (used by predict.crr())
$tfs the tfs matrix (output of tf(), if used)
$converged TRUE if the iterative algorithm converged
$call The call to crr
$n The number of observations used in fitting the model
$n.missing The number of observations removed from the input data due to missing values
$loglik.null The value of the log pseudo-likelihood when all the coefficients are 0
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, <NAME>, <NAME>, Labopin M.(2012). Competing Risks Regression for Clustered data.
Biostatistics. 13 (3): 371-383.
See Also
cmprsk
Examples
#library(cmprsk)
#crr(ftime=cdata$ftime, fstatus=cdata$fstatus, cov1=cdata$z)
# Simulated clustered data set
data(cdata)
crrc(ftime=cdata[,1],fstatus=cdata[,2],
cov1=cdata[,3],
cluster=cdata[,4])
crrs Competing Risks Regression for Stratified Data
Description
Regression modeling of subdistribution hazards for stratified right censored data
Two types of stratification are addressed. Regularly stratified: a small number of large groups (strata)
of subjects. Highly stratified: a large number of small groups (strata) of subjects.
Usage
crrs(ftime, fstatus, cov1, cov2, strata,
tf, failcode=1, cencode=0,
ctype=1,
subsets, na.action=na.omit,
gtol=1e-6, maxiter=10,init)
Arguments
strata stratification covariate
ctype 1 if estimating the censoring distribution within strata (regular stratification), 2 if estimating
the censoring distribution across strata (high stratification)
ftime vector of failure/censoring times
fstatus vector with a unique code for each failure type and a separate code for censored
observations
cov1 matrix (nobs x ncovs) of fixed covariates (either cov1, cov2, or both are required)
cov2 matrix of covariates that will be multiplied by functions of time; if used, often
these covariates would also appear in cov1 to give a prop hazards effect plus a
time interaction
tf functions of time. A function that takes a vector of times as an argument and
returns a matrix whose jth column is the value of the time function correspond-
ing to the jth column of cov2 evaluated at the input time vector. At time tk, the
model includes the term cov2[,j]*tf(tk)[,j] as a covariate.
failcode code of fstatus that denotes the failure type of interest
cencode code of fstatus that denotes censored observations
subsets a logical vector specifying a subset of cases to include in the analysis
na.action a function specifying the action to take for any cases missing any of ftime, fsta-
tus, cov1, cov2, cengroup, or subset.
gtol iteration stops when a function of the gradient is < gtol
maxiter maximum number of iterations in Newton algorithm (0 computes scores and var
at init, but performs no iterations)
init initial values of regression parameters (default=all 0)
Details
Fits the stratified extension of the Fine and Gray model (2011). This model directly assesses the
effect of covariates on the subdistribution of a particular type of failure in a competing risks setting.
Value
Returns a list of class crr, with components (see crr for details)
$coef the estimated regression coefficients
$loglik log pseudo-likelihood evaluated at coef
$score derivatives of the log pseudo-likelihood evaluated at coef
$inf -second derivatives of the log pseudo-likelihood
$var estimated variance covariance matrix of coef
$res matrix of residuals
$uftime vector of unique failure times
$bfitj jumps in the Breslow-type estimate of the underlying sub-distribution cumula-
tive hazard (used by predict.crr())
$tfs the tfs matrix (output of tf(), if used)
$converged TRUE if the iterative algorithm converged
$call The call to crr
$n The number of observations used in fitting the model
$n.missing The number of observations removed from the input data due to missing values
$loglik.null The value of the log pseudo-likelihood when all the coefficients are 0
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, <NAME>, <NAME>, <NAME>. (2011). Competing Risks Regression for Stratified Data.
Biometrics. 67(2):661-70.
See Also
cmprsk
Examples
##
#using fine and gray model
#crr(ftime=center$ftime, fstatus=center$fstatus,
#cov1=cbind(center$fm,center$cells))
#
# High Stratification: ctype=2
# Random sub-sample
data(center)
cov.test<-cbind(center$fm,center$cells)
crrs(ftime=center[,1],fstatus=center[,2],
cov1=cov.test,
strata=center$id,ctype=2)
crrvvs For internal use
Description
for internal use
Author(s)
<NAME>
print.crrs Print method for crrs output
Description
Prints call for crrs object
Usage
## S3 method for class 'crrs'
print(x, ...)
Arguments
x crr object (output from crrs())
... additional arguments to print()
Author(s)
<NAME>
psh.test Goodness-of-fit test for proportional subdistribution hazards model
Description
This goodness-of-fit test uses modified weighted Schoenfeld residuals to test the proportionality
of subdistribution hazards in the Fine and Gray model.
Usage
psh.test(time, fstatus, z, D=c(1,1), tf=function(x) cbind(x,x^2), init)
Arguments
time vector of failure times
fstatus failure status =0 if censored
z covariates
D components of z that are tested for time-varying effect
tf functions of t for z being tested on the same location
init initial values of regression parameters (default=all 0)
Details
The proposed score test employs Schoenfeld residuals adapted to competing risks data. The form of
the test is established assuming that the non-proportionality arises via time-dependent coefficients
in the Fine-Gray model, similar to the test of Grambsch and Therneau.
Value
Returns a data.frame with the percentage of censoring, the percentage of cause 1 events, the test statistic, the degrees of freedom (d.f.), and the p-value.
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, <NAME>, <NAME>. (2013). Goodness-of-fit test for proportional subdistribution hazards
model. Statistics in Medicine. In Press.
Examples
data(bce)
attach(bce)
lognodes <- log(nnodes)
Z1 <- cbind(lognodes, tsize/10, age, trt)
# trt = 0 if placebo, = 1 if treatment
# testing for linear time varying effect of trt
psh.test(time=time, fstatus=type, z=Z1, D=c(0,0,0,1), tf=function(x) x)
# testing for quadratic time varying effect of trt
psh.test(time=time, fstatus=type, z=Z1, D=c(0,0,0,1), tf=function(x) x^2)
# testing for log time varying effect of trt
psh.test(time=time, fstatus=type, z=Z1, D=c(0,0,0,1),
tf=function(x) log(x))
# testing for both linear and quadratic time varying effect of trt
psh.test(time=time, fstatus=type, z=Z1,
D=matrix(c(0,0,0,1,0,0,0,1), 4,2), tf=function(x) cbind(x,x^2)) |
sherpa | readthedoc | Matlab | Sherpa 4.12.1+0.g30171f2c documentation
Welcome to Sherpa’s documentation[¶](#welcome-to-sherpa-s-documentation)
===
Welcome to the Sherpa documentation.
[Sherpa](http://cxc.harvard.edu/contrib/sherpa/)
is a Python package for modeling and fitting data. It was originally developed by the
[Chandra X-ray Center](http://cxc.harvard.edu/) for use in
[analysing X-ray data (both spectral and imaging)](http://cxc.harvard.edu/sherpa)
from the Chandra X-ray telescope, but it is designed to be a general-purpose package, which can be enhanced with domain-specific tasks (such as X-ray Astronomy).
Sherpa contains an expressive and powerful modeling language, coupled with a range of statistics and robust optimisers.
Sherpa is released under the
[GNU General Public License v3.0](https://github.com/sherpa/sherpa/blob/master/LICENSE),
and is compatible with Python versions 3.5, 3.6, and 3.7.
It is expected that it will work with Python 3.8 but testing has been limited.
Information on recent releases and citation information for Sherpa is available using the Digital Object Identifier (DOI)
[10.5281/zenodo.593753](https://doi.org/10.5281/zenodo.593753).
The last version of Sherpa compatible with Python 2.7 was the
[4.11.1 release](https://doi.org/10.5281/zenodo.3358134).
Installation[¶](#installation)
---
### Quick overview[¶](#quick-overview)
For those users who have already read this page, and need a quick refresher (or prefer to act first, and read documentation later),
the following commands can be used to install Sherpa, depending on your environment and set up.
```
conda install -c sherpa sherpa
```
```
pip install sherpa
```
```
python setup.py install
```
### Requirements[¶](#requirements)
Sherpa has the following requirements:
* Python 3.5, 3.6, or 3.7 (there has been limited testing with Python 3.8)
* NumPy (the exact lower limit has not been determined,
but it is likely to be 1.7.0 or later)
* Linux or OS-X (patches to add Windows support are welcome)
Sherpa can take advantage of the following Python packages if installed:
* [astropy](index.html#term-astropy): for reading and writing files in
[FITS](index.html#term-fits) format. The minimum required version of astropy is version 1.3, although only versions 2 and higher are used in testing
(version 3.2 is known to cause problems, but version 3.2.1 is okay).
* [matplotlib](index.html#term-matplotlib): for visualisation of one-dimensional data or models, one- or two- dimensional error analysis, and the results of Monte-Carlo Markov Chain runs. There are no known incompatibilities with matplotlib, but there has only been limited testing. Please
[report any problems](https://github.com/sherpa/sherpa/issues/)
you find.
The Sherpa build can be configured to create the
[`sherpa.astro.xspec`](index.html#module-sherpa.astro.xspec) module, which provides the models and utility functions from the [XSPEC](index.html#term-xspec).
The supported versions of XSPEC are 12.10.1 (patch level a or later),
12.10.0, 12.9.1, and 12.9.0.
Interactive display and manipulation of two-dimensional images is available if the [DS9](index.html#term-ds9) image viewer and the [XPA](index.html#term-xpa)
commands are installed. It is expected that any recent version of DS9 can be used.
### Releases and version numbers[¶](#releases-and-version-numbers)
The Sherpa release policy has a major release at the start of the year, corresponding to the code that is released in the previous December as part of the
[CIAO release](http://cxc.harvard.edu/ciao/), followed by several smaller releases throughout the year.
Information on the Sherpa releases is available from the Zenodo page for Sherpa, using the Digital Object Identifier
(DOI) [10.5281/zenodo.593753](https://doi.org/10.5281/zenodo.593753).
#### What version of Sherpa is installed?[¶](#what-version-of-sherpa-is-installed)
The version number and git commit id of Sherpa can be retrieved from the `sherpa._version` module using the following command:
```
% python -c 'import sherpa._version; print(sherpa._version.get_versions())'
{'version': '4.10.0', 'full': 'c7732043124b08d5e949b9a95c2eb6833e009421'}
```
#### Citing Sherpa[¶](#citing-sherpa)
Information on citing Sherpa can be found from the
[CITATION document](https://github.com/sherpa/sherpa/blob/master/CITATION)
in the Sherpa repository, or from the
[Sherpa Zenodo page](https://doi.org/10.5281/zenodo.593753).
### Installing a pre-compiled version of Sherpa[¶](#installing-a-pre-compiled-version-of-sherpa)
Additional useful Python packages include `astropy`, `matplotlib`,
and `ipython-notebook`.
#### Using the Anaconda python distribution[¶](#using-the-anaconda-python-distribution)
The Chandra X-ray Center provides releases of Sherpa that can be installed using
[Anaconda](https://www.continuum.io/anaconda-overview)
from the `sherpa` channel. First check to see what the latest available version is by using:
```
conda install -c sherpa sherpa --dry-run
```
and then, if there is a version available and there are no significant upgrades to the dependencies, Sherpa can be installed using:
```
conda install -c sherpa sherpa
```
It is **strongly** suggested that Sherpa is installed into a named
[conda environment](http://conda.pydata.org/docs/using/envs.html)
(i.e. not the default environment).
#### Using pip[¶](#using-pip)
Sherpa is also available from PyPI at
<https://pypi.python.org/pypi/sherpa> and can be installed with the command:
```
pip install sherpa
```
The NumPy package must already have been installed for this to work.
### Building from source[¶](#building-from-source)
#### Prerequisites[¶](#prerequisites)
The prerequisites for building from source are:
* Python versions: 3.5, 3.6, 3.7
* Python packages: `setuptools`, `numpy`
* System: `gcc`, `g++`, `make`, `flex`,
`bison` (the aim is to support recent versions of these tools; please report problems to the
[Sherpa issue tracker](https://github.com/sherpa/sherpa/issues/)).
It is *highly* recommended that matplotlib and astropy be installed before building Sherpa, to avoid skipping a number of tests in the test suite.
The full Sherpa test suite requires pytest and pytest-xvfb. These packages should be installed automatically for you by the test suite if they do not already exist.
Note
As of the Sherpa 4.10.1 release, a Fortran compiler is no longer required to build Sherpa.
#### Obtaining the source package[¶](#obtaining-the-source-package)
The source code can be obtained as a release package from Zenodo - e.g.
[the Sherpa 4.10.0 release](https://zenodo.org/record/1245678) -
or from
[the Sherpa repository on GitHub](https://github.com/sherpa/sherpa),
either a release version,
such as the
[4.10.0](https://github.com/sherpa/sherpa/tree/4.10.0) tag,
or the `master` branch (which is not guaranteed to be stable).
For example:
```
git clone git://github.com/sherpa/sherpa.git
cd sherpa
git checkout 4.10.0
```
will use the `4.10.0` tag (although we strongly suggest using a newer release now!).
#### Configuring the build[¶](#configuring-the-build)
The Sherpa build is controlled by the `setup.cfg` file in the root of the Sherpa source tree. These configuration options include:
##### FFTW[¶](#fftw)
Sherpa ships with the [fftw library](http://www.fftw.org/) source code and builds it by default. To use a different version, change the `fftw` options in the `sherpa_config` section of the
`setup.cfg` file. The options to change are:
```
fftw=local
fftw-include_dirs=/usr/local/include
fftw-lib-dirs=/usr/local/lib
fftw-libraries=fftw3
```
The `fftw` option must be set to `local` and then the remaining options changed to match the location of the local installation.
##### XSPEC[¶](#xspec)
Note
The version number of XSPEC **must** be specified using the
`xspec_version` configuration option, as described below. This is a change from previous releases of Sherpa, but is required in order to support changes made in XSPEC 12.10.0.
Sherpa can be built to use the Astronomy models provided by
[XSPEC](index.html#term-xspec) versions 12.10.1 (patch level a or later), 12.10.0,
12.9.1, and 12.9.0. To enable XSPEC support, several changes must be made to the `xspec_config` section of the `setup.cfg` file. The available options (with default values) are:
```
with-xspec = False
xspec_version = 12.9.0
xspec_lib_dirs = None
xspec_include_dirs = None
xspec_libraries = XSFunctions XSModel XSUtil XS
cfitsio_lib_dirs = None
cfitsio_libraries = cfitsio
ccfits_lib_dirs = None
ccfits_libraries = CCfits
wcslib_lib_dirs = None
wcslib_libraries = wcs
gfortran_lib_dirs = None
gfortran_libraries = gfortran
```
To build the [`sherpa.astro.xspec`](index.html#module-sherpa.astro.xspec) module, the
`with-xspec` option must be set to `True` **and** the
`xspec_version` option set to the correct version string (the XSPEC patch level must not be included), and then the remaining options depend on the version of XSPEC and whether the XSPEC model library or the full XSPEC system has been installed.
In the examples below, the `$HEADAS` value **must be replaced**
by the actual path to the HEADAS installation, and the versions of the libraries - such as `CCfits_2.5` - may need to be changed to match the contents of the XSPEC installation.
1. If the full XSPEC 12.10.1 system has been built then use:
```
with-xspec = True
xspec_version = 12.10.1
xspec_lib_dirs = $HEADAS/lib
xspec_include_dirs = $HEADAS/include
xspec_libraries = XSFunctions XSUtil XS hdsp_6.26
ccfits_libraries = CCfits_2.5
wcslib_libraries = wcs-5.19.1
```
where the version numbers were taken from version 6.26.1 of HEASOFT and may need updating with a newer release.
2. If the full XSPEC 12.10.0 system has been built then use:
```
with-xspec = True
xspec_version = 12.10.0
xspec_lib_dirs = $HEADAS/lib
xspec_include_dirs = $HEADAS/include
xspec_libraries = XSFunctions XSModel XSUtil XS hdsp_3.0
ccfits_libraries = CCfits_2.5
wcslib_libraries = wcs-5.16
```
3. If the full XSPEC 12.9.x system has been built then use:
```
with-xspec = True
xspec_version = 12.9.1
xspec_lib_dirs = $HEADAS/lib
xspec_include_dirs = $HEADAS/include
xspec_libraries = XSFunctions XSModel XSUtil XS
ccfits_libraries = CCfits_2.5
wcslib_libraries = wcs-5.16
```
changing `12.9.1` to `12.9.0` as appropriate.
4. If the model-only build of XSPEC has been installed, then the configuration is similar, but the library names may not need version numbers and locations, depending on how the
`cfitsio`, `CCfits`, and `wcs` libraries were installed.
Note that XSPEC 12.10.0 introduces a new `--enable-xs-models-only`
flag when building HEASOFT which simplifies the installation of these extra libraries, but can cause problems for the Sherpa build.
A common problem is to set one or both of the `xspec_lib_dirs`
and `xspec_include_dirs` options to the value of `$HEADAS` instead of
`$HEADAS/lib` and `$HEADAS/include` (after expanding out the environment variable). Doing so will cause the build to fail with errors about being unable to find various XSPEC libraries such as
`XSFunctions` and `XSModel`.
The `gfortran` options should be adjusted if there are problems using the XSPEC module.
In order for the XSPEC module to be used from Python, the
`HEADAS` environment variable **must** be set before the
[`sherpa.astro.xspec`](index.html#module-sherpa.astro.xspec) module is imported.
The Sherpa test suite includes an extensive set of tests of this module, but a quick check of an installed version can be made with the following command:
```
% python -c 'from sherpa.astro import xspec; print(xspec.get_xsversion())'
12.10.1n
```
Warning
The `--enable-xs-models-only` flag with XSPEC 12.10.0 is known to cause problems for Sherpa. It is **strongly recommended** that either that the full XSPEC distribution is built, or that the XSPEC installation from CIAO 4.11 is used.
##### Other options[¶](#other-options)
The remaining options in the `setup.cfg` file allow Sherpa to be built in specific environments, such as when it is built as part of the [CIAO analysis system](http://cxc.harvard.edu/ciao/). Please see the comments in the `setup.cfg` file for more information on these options.
#### Building and Installing[¶](#building-and-installing)
It is highly recommended that some form of virtual environment,
such as a
[conda environment](http://conda.pydata.org/docs/using/envs.html)
or that provided by
[Virtualenv](https://virtualenv.pypa.io/en/stable/),
be used when building and installing Sherpa.
Warning
When building Sherpa on macOS within a conda environment, the following environment variable must be set otherwise importing Sherpa will crash Python:
```
export PYTHON_LDFLAGS=' '
```
That is, the variable is set to a space, not the empty string.
##### A standard installation[¶](#a-standard-installation)
From the root of the Sherpa source tree, Sherpa can be built by saying:
```
python setup.py build
```
and installed with one of:
```
python setup.py install
python setup.py install --user
```
##### A development build[¶](#a-development-build)
The `develop` option should be used when developing Sherpa (such as adding new functionality or fixing a bug):
```
python setup.py develop
```
Tests can then be run with the `test` option:
```
python setup.py test
```
The `test` command is a wrapper that calls `pytest` under the hood,
and includes the `develop` command.
You can pass additional arguments to `pytest` with the `-a` or
`--pytest-args` arguments. As examples, the following two commands run all the tests in `test_data.py` and then a single named test in this file:
```
python setup.py test -a sherpa/tests/test_data.py
python setup.py test -a sherpa/tests/test_data.py::test_data_eval_model
```
The full set of options, including those added by the Sherpa test suite - which are listed at the end of the `custom options`
section - can be found with:
```
python setup.py test -a "--pyargs sherpa --help"
```
and to pass an argument to the Sherpa test suite (there are currently two options, namely `--test-data` and `--runslow`):
```
python setup.py test -a "--pyargs sherpa --runslow"
```
Note
If you run both `install` and `develop` or `test` in the same Python environment you end up with two competing installations of Sherpa which result in unexpected behavior. If this happens, simply run `pip uninstall sherpa` as many times as necessary, until you get an error message that no more Sherpa installations are available. At this point you can re-install Sherpa.
The same issue may occur if you install a Sherpa binary release and then try to build Sherpa from source in the same environment.
The
[Sherpa test data suite](https://github.com/sherpa/sherpa-test-data)
can be installed to reduce the number of tests that are skipped with the following (this is only for those builds which used `git` to access the source code):
```
git submodule init
git submodule update
```
When both the [DS9 image viewer](http://ds9.si.edu/site/Home.html) and
[XPA toolset](http://hea-www.harvard.edu/RD/xpa/) are installed, the test suite will include tests that check that DS9 can be used from Sherpa. This causes several copies of the DS9 viewer to be created,
which can be distracting, as it can cause loss of mouse focus (depending on how X-windows is set up). This can be avoided by installing the
[X virtual-frame buffer (Xvfb)](https://en.wikipedia.org/wiki/Xvfb).
Note
Although the standard Python setuptools approach is used to build Sherpa, there may be issues when using some of the other build targets, such as `build_ext`. Please report these to the
[Sherpa issues page](https://github.com/sherpa/sherpa/issues/).
#### Building the documentation[¶](#building-the-documentation)
Building the documentation requires the Sherpa source code and several additional packages:
* [Sphinx](http://sphinx.pocoo.org/), version 1.3 or later
* The `sphinx_rtd_theme`
* NumPy and [sphinx_astropy](https://github.com/astropy/sphinx-astropy/)
(the latter can be installed with `pip`).
* [Graphviz](https://www.graphviz.org/) (for the inheritance diagrams)
With these installed, the documentation can be built with the
`build_sphinx` target:
```
python setup.py build_sphinx
```
This can be done **without** building Sherpa (either an installation or development version), since Mock objects are used to represent compiled and optional components.
The documentation should be placed in `build/sphinx/html/index.html`,
although this may depend on what version of Sphinx is used.
It is also possible to build the documentation from within the `docs/`
directory:
```
cd docs
make html
```
This places the documentation in `_build/html/index.html`.
### Testing the Sherpa installation[¶](#testing-the-sherpa-installation)
A very-brief “smoke” test can be run from the command-line with the `sherpa_smoke` executable:
```
sherpa_smoke
WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available
---
Ran 7 tests in 0.456s
OK (skipped=5)
```
or from the Python prompt:
```
>>> import sherpa
>>> sherpa.smoke()
WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available
---
Ran 7 tests in 0.447s
OK (skipped=5)
```
This provides basic validation that Sherpa has been installed correctly, but does not run many functional tests. The screen output will include additional warning messages if the `astropy` or
`matplotlib` packages are not installed, or Sherpa was built without support for the XSPEC model library.
The Sherpa installation also includes the `sherpa_test` command-line tool which will run through the Sherpa test suite (the number of tests depends on what optional packages are available and how Sherpa was configured when built):
```
sherpa_test
```
The `sherpa` Anaconda channel contains the `sherpatest` package, which provides a number of data files in ASCII and [FITS](index.html#term-fits) formats. This is only useful when developing Sherpa, since the package is large. It will automatically be picked up by the `sherpa_test` script once it is installed.
#### Testing the documentation with Travis[¶](#testing-the-documentation-with-travis)
There is a documentation build included as part of the Travis-CI test suite,
but it is not set up to do much validation. That is, you need to do something quite severe to break this build. Please see
[issue 491](https://github.com/sherpa/sherpa/issues/491)
for more information.
A quick guide to modeling and fitting in Sherpa[¶](#a-quick-guide-to-modeling-and-fitting-in-sherpa)
---
Here are some examples of using Sherpa to model and fit data.
It is based on some of the examples used in the [astropy.modelling documentation](http://docs.astropy.org/en/stable/modeling/).
### Getting started[¶](#getting-started)
The following modules are assumed to have been imported:
```
>>> import numpy as np
>>> import matplotlib.pyplot as plt
```
The basic process, which will be followed below, is:
* [create a data object](#quick-gauss1d-data)
* [define the model](#quick-gauss1d-model)
* [select the statistic](#quick-gauss1d-statistic)
* [select the optimisation routine](#quick-gauss1d-optimiser)
* [fit the data](#quick-gauss1d-fit)
* [extract the parameter values](#quick-gauss1d-extract)
* [Calculating error values](#quick-gauss1d-errors)
Although presented as a list, it is not necessarily a linear process,
in that the order can be different to that above, and various steps can be repeated. The above list also does not include any visualization steps needed to inform and validate any choices.
### Fitting a one-dimensional data set[¶](#fitting-a-one-dimensional-data-set)
The following data - where `x` is the independent axis and
`y` the dependent one - is used in this example:
```
>>> np.random.seed(0)
>>> x = np.linspace(-5., 5., 200)
>>> ampl_true = 3
>>> pos_true = 1.3
>>> sigma_true = 0.8
>>> err_true = 0.2
>>> y = ampl_true * np.exp(-0.5 * (x - pos_true)**2 / sigma_true**2)
>>> y += np.random.normal(0., err_true, x.shape)
>>> plt.plot(x, y, 'ko');
```
The aim is to fit a one-dimensional gaussian to this data and to recover estimates of the true parameters of the model, namely the position
(`pos_true`), amplitude (`ampl_true`), and width (`sigma_true`).
The `err_true` term adds in random noise (using a
[Normal distribution](https://en.wikipedia.org/wiki/Normal_distribution))
to ensure the data is not perfectly-described by the model.
#### Creating a data object[¶](#creating-a-data-object)
Rather than pass around the arrays to be fit, Sherpa has the concept of a “data object”, which stores the independent and dependent axes, as well as any related metadata. For this example, the class to use is [`Data1D`](index.html#sherpa.data.Data1D), which requires a string label (used to identify the data), the independent axis, and then dependent axis:
```
>>> from sherpa.data import Data1D
>>> d = Data1D('example', x, y)
>>> print(d)
name = example
x = Float64[200]
y = Float64[200]
staterror = None
syserror = None
```
At this point no errors are being used in the fit, so the `staterror`
and `syserror` fields are empty. They can be set either when the object is created or at a later time.
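As a minimal sketch of both options (the `d_witherr` name is purely illustrative; this guide itself only attaches errors later, in the "Including errors" section below):

```
>>> staterr = np.full(x.size, err_true)                          # constant per-bin error
>>> d_witherr = Data1D('example-err', x, y, staterror=staterr)   # set at creation time
>>> d_witherr.staterror = staterr                                # or assigned afterwards
```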
#### Plotting the data[¶](#plotting-the-data)
The [`sherpa.plot`](index.html#module-sherpa.plot) module provides a number of classes that create pre-canned plots. For example, the
[`sherpa.plot.DataPlot`](index.html#sherpa.plot.DataPlot) class can be used to display the data.
The steps taken are normally:
1. create the object;
2. call the [`prepare()`](index.html#sherpa.plot.DataPlot.prepare)
method with the appropriate arguments,
in this case the data object;
3. and then call the [`plot()`](index.html#sherpa.plot.DataPlot.plot) method.
Sherpa has one plotting backend, [matplotlib](index.html#term-matplotlib), which is used to display plots. There is limited support for customizing these plots - such as always drawing the Y axis with a logarithmic scale - but extensive changes will require calling the plotting back-end directly.
As an example of the [`DataPlot`](index.html#sherpa.plot.DataPlot) output:
```
>>> from sherpa.plot import DataPlot
>>> dplot = DataPlot()
>>> dplot.prepare(d)
>>> dplot.plot()
```
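There is some limited control over the appearance via the `plot_prefs` dictionary of the plot objects; the available keys depend on the plotting backend, so treat the following as a sketch (it assumes the matplotlib backend accepts the `ylog` preference):

```
>>> dplot.plot_prefs['ylog'] = True   # request a logarithmic Y axis
>>> dplot.plot()
>>> dplot.plot_prefs['ylog'] = False  # back to the default linear scale
```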
It is not required to use these classes and in the following, plots will be created either via these classes or directly via matplotlib.
#### Define the model[¶](#define-the-model)
In this example a single model is used - a one-dimensional gaussian provided by the [`Gauss1D`](index.html#sherpa.models.basic.Gauss1D)
class - but more complex examples are possible: these include [multiple components](index.html#model-combine),
sharing models between data sets, and
[adding user-defined models](index.html#usermodel).
A full description of the model language and capabilities is provided in
[Creating model instances](index.html#document-models/index):
```
>>> from sherpa.models.basic import Gauss1D
>>> g = Gauss1D()
>>> print(g)
gauss1d
Param Type Value Min Max Units
--- --- --- --- --- ---
gauss1d.fwhm thawed 10 1.17549e-38 3.40282e+38
gauss1d.pos thawed 0 -3.40282e+38 3.40282e+38
gauss1d.ampl thawed 1 -3.40282e+38 3.40282e+38
```
It is also possible to
[restrict the range of a parameter](index.html#params-limits),
[toggle parameters so that they are fixed or fitted](index.html#params-freeze),
and [link parameters together](index.html#params-link).
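As a brief sketch of these operations (the `gtmp` instance is introduced here purely for illustration, so that the `g` model fitted below is left untouched):

```
>>> gtmp = Gauss1D('scratch')
>>> gtmp.fwhm.min = 0.1        # restrict the allowed range of a parameter
>>> gtmp.fwhm.max = 10
>>> gtmp.pos.freeze()          # fix the position (thaw() makes it fittable again)
>>> gtmp.pos.thaw()
>>> gtmp.ampl = g.ampl         # link gtmp.ampl so it always matches g.ampl
```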
The [`sherpa.plot.ModelPlot`](index.html#sherpa.plot.ModelPlot) class can be used to visualize the model. The [`prepare()`](index.html#sherpa.plot.ModelPlot.prepare) method takes both a data object and the model to plot:
```
>>> from sherpa.plot import ModelPlot
>>> mplot = ModelPlot()
>>> mplot.prepare(d, g)
>>> mplot.plot()
```
There is also a [`sherpa.plot.FitPlot`](index.html#sherpa.plot.FitPlot) class which will
[combine the two plot results](#quick-fitplot),
but it is often just-as-easy to combine them directly:
```
>>> dplot.plot()
>>> mplot.overplot()
```
The model parameters can be changed - either manually or automatically - to try and start the fit off closer to the best-fit location, but for this example we shall leave the initial parameters as they are.
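If a manual adjustment were wanted, it would just be a matter of assigning to the parameters before the fit (the values below are illustrative only, and are left commented out so that the defaults are kept):

```
>>> # g.pos = 1.0
>>> # g.fwhm = 2.0
>>> # g.ampl = 2.5
```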
#### Select the statistics[¶](#select-the-statistics)
In order to optimise a model - that is, to change the model parameters until the best-fit location is found - a statistic is needed. The statistic calculates a numerical value for a given set of model parameters; this is a measure of how well the model matches the data, and can include knowledge of errors on the dependent axis values. The
[optimiser (chosen below)](#quick-gauss1d-optimiser)
attempts to find the set of parameters which minimises this statistic value.
For this example, since the dependent axis (`y`)
has no error estimate, we shall pick the least-square statistic
([`LeastSq`](index.html#sherpa.stats.LeastSq)), which calculates the numerical difference of the model to the data for each point:
```
>>> from sherpa.stats import LeastSq
>>> stat = LeastSq()
```
#### Select the optimisation routine[¶](#select-the-optimisation-routine)
The optimiser is the part that determines how to minimise the statistic value (i.e. how to vary the parameter values of the model to find a local minimum). The main optimisers provided by Sherpa are
[`NelderMead`](index.html#sherpa.optmethods.NelderMead)
(also known as Simplex) and
[`LevMar`](index.html#sherpa.optmethods.LevMar)
(Levenberg-Marquardt). The latter is often quicker, but less robust,
so we start with it (the optimiser can be changed and the data re-fit):
```
>>> from sherpa.optmethods import LevMar
>>> opt = LevMar()
>>> print(opt)
name = levmar
ftol = 1.19209289551e-07
xtol = 1.19209289551e-07
gtol = 1.19209289551e-07
maxfev = None
epsfcn = 1.19209289551e-07
factor = 100.0
verbose = 0
```
#### Fit the data[¶](#fit-the-data)
The [`Fit`](index.html#sherpa.fit.Fit) class is used to bundle up the data, model, statistic, and optimiser choices. The
[`fit()`](index.html#sherpa.fit.Fit.fit) method runs the optimiser, and returns a
[`FitResults`](index.html#sherpa.fit.FitResults) instance, which contains information on how the fit performed. This information includes the
[`succeeded`](index.html#sherpa.fit.FitResults.succeeded)
attribute, to determine whether the fit converged, as well as information on the fit (such as the start and end statistic values) and best-fit parameter values. Note that the model expression can also be queried for the new parameter values.
```
>>> from sherpa.fit import Fit
>>> gfit = Fit(d, g, stat=stat, method=opt)
>>> print(gfit)
data = example
model = gauss1d
stat = LeastSq
method = LevMar
estmethod = Covariance
```
To actually fit the data, use the
[`fit()`](index.html#sherpa.fit.Fit.fit) method, which - depending on the data, model, or statistic being used - can take some time:
```
>>> gres = gfit.fit()
>>> print(gres.succeeded)
True
```
One useful method for interactive analysis is
[`format()`](index.html#sherpa.fit.FitResults.format), which returns a string representation of the fit results, as shown below:
```
>>> print(gres.format())
Method = levmar
Statistic = leastsq
Initial fit statistic = 180.71
Final fit statistic = 8.06975 at function evaluation 30
Data points = 200
Degrees of freedom = 197
Change in statistic = 172.641
gauss1d.fwhm 1.91572 +/- 0.165982
gauss1d.pos 1.2743 +/- 0.0704859
gauss1d.ampl 3.04706 +/- 0.228618
```
Note
The [`LevMar`](index.html#sherpa.optmethods.LevMar) optimiser calculates the covariance matrix at the best-fit location, and the errors from this are reported in the output from the call to the
[`fit()`](index.html#sherpa.fit.Fit.fit) method. In this particular case -
which uses the [`LeastSq`](index.html#sherpa.stats.LeastSq) statistic -
the error estimates do not have much meaning. As discussed below, Sherpa can [make use of error estimates on the data](#quick-gauss1d-errors)
to calculate meaningful parameter errors.
The [`sherpa.plot.FitPlot`](index.html#sherpa.plot.FitPlot) class will display the data and model. The [`prepare()`](index.html#sherpa.plot.FitPlot.prepare) method requires data and model plot objects; in this case the previous versions can be re-used, although the model plot needs to be updated to reflect the changes to the model parameters:
```
>>> from sherpa.plot import FitPlot
>>> fplot = FitPlot()
>>> mplot.prepare(d, g)
>>> fplot.prepare(dplot, mplot)
>>> fplot.plot()
```
As the model can be
[evaluated directly](index.html#document-evaluation/index),
this plot can also be created manually:
```
>>> plt.plot(d.x, d.y, 'ko', label='Data')
>>> plt.plot(d.x, g(d.x), linewidth=2, label='Gaussian')
>>> plt.legend(loc=2);
```
#### Extract the parameter values[¶](#extract-the-parameter-values)
The fit results include a large number of attributes, many of which are not relevant here (as the fit was done with no error values).
The following relation is used to convert from the full-width half-maximum value, used by the [`Gauss1D`](index.html#sherpa.models.basic.Gauss1D)
model, to the Gaussian sigma value used to create the data:
\(\rm{FWHM} = 2 \sqrt{2 \ln(2)}\, \sigma\):
```
>>> print(gres)
datasets = None
itermethodname = none
methodname = levmar
statname = leastsq
succeeded = True
parnames = ('gauss1d.fwhm', 'gauss1d.pos', 'gauss1d.ampl')
parvals = (1.915724111406394, 1.2743015983545247, 3.0470560360944017)
statval = 8.069746329529591
istatval = 180.71034547759984
dstatval = 172.64059914807027
numpoints = 200
dof = 197
qval = None
rstat = None
message = successful termination
nfev = 30
>>> conv = 2 * np.sqrt(2 * np.log(2))
>>> ans = dict(zip(gres.parnames, gres.parvals))
>>> print("Position = {:.2f} truth= {:.2f}".format(ans['gauss1d.pos'], pos_true))
Position = 1.27 truth= 1.30
>>> print("Amplitude= {:.2f} truth= {:.2f}".format(ans['gauss1d.ampl'], ampl_true))
Amplitude= 3.05 truth= 3.00
>>> print("Sigma = {:.2f} truth= {:.2f}".format(ans['gauss1d.fwhm']/conv, sigma_true))
Sigma = 0.81 truth= 0.80
```
The model, and its parameter values, can also be queried directly, as they have been changed by the fit:
```
>>> print(g)
gauss1d
Param Type Value Min Max Units
--- --- --- --- --- ---
gauss1d.fwhm thawed 1.91572 1.17549e-38 3.40282e+38
gauss1d.pos thawed 1.2743 -3.40282e+38 3.40282e+38
gauss1d.ampl thawed 3.04706 -3.40282e+38 3.40282e+38
>>> print(g.pos)
val = 1.2743015983545247
min = -3.4028234663852886e+38
max = 3.4028234663852886e+38
units =
frozen = False
link = None
default_val = 0.0
default_min = -3.4028234663852886e+38
default_max = 3.4028234663852886e+38
```
### Including errors[¶](#including-errors)
For this example, the error on each bin is assumed to be the same, and equal to the true error:
```
>>> dy = np.ones(x.size) * err_true
>>> de = Data1D('with-errors', x, y, staterror=dy)
>>> print(de)
name = with-errors
x = Float64[200]
y = Float64[200]
staterror = Float64[200]
syserror = None
```
The statistic is changed from least squares to chi-square ([`Chi2`](index.html#sherpa.stats.Chi2)), to take advantage of this extra knowledge (i.e. the Chi-square statistic includes the error value per bin when calculating the statistic value):
```
>>> from sherpa.stats import Chi2
>>> ustat = Chi2()
>>> ge = Gauss1D('gerr')
>>> gefit = Fit(de, ge, stat=ustat, method=opt)
>>> geres = gefit.fit()
>>> print(geres.format())
Method = levmar
Statistic = chi2
Initial fit statistic = 4517.76
Final fit statistic = 201.744 at function evaluation 30
Data points = 200
Degrees of freedom = 197
Probability [Q-value] = 0.393342
Reduced statistic = 1.02408
Change in statistic = 4316.01
gerr.fwhm 1.91572 +/- 0.0331963
gerr.pos 1.2743 +/- 0.0140972
gerr.ampl 3.04706 +/- 0.0457235
>>> if not geres.succeeded: print(geres.message)
```
Since the error value is independent of bin, the fit results should be the same
```
>>> print(g)
gauss1d
Param Type Value Min Max Units
--- --- --- --- --- ---
gauss1d.fwhm thawed 1.91572 1.17549e-38 3.40282e+38
gauss1d.pos thawed 1.2743 -3.40282e+38 3.40282e+38
gauss1d.ampl thawed 3.04706 -3.40282e+38 3.40282e+38
>>> print(ge)
gerr
Param Type Value Min Max Units
--- --- --- --- --- ---
gerr.fwhm thawed 1.91572 1.17549e-38 3.40282e+38
gerr.pos thawed 1.2743 -3.40282e+38 3.40282e+38
gerr.ampl thawed 3.04706 -3.40282e+38 3.40282e+38
```
The difference is that more of the fields in the result structure are populated: in particular the
[`rstat`](index.html#sherpa.fit.FitResults.rstat) and
[`qval`](index.html#sherpa.fit.FitResults.qval) fields, which give the reduced statistic and the probability of obtaining this statistic value, respectively:
```
>>> print(geres)
datasets = None
itermethodname = none
methodname = levmar
statname = chi2
succeeded = True
parnames = ('gerr.fwhm', 'gerr.pos', 'gerr.ampl')
parvals = (1.9157241114064163, 1.2743015983545292, 3.047056036094392)
statval = 201.74365823823976
istatval = 4517.758636940002
dstatval = 4316.014978701763
numpoints = 200
dof = 197
qval = 0.3933424667915623
rstat = 1.0240794834428415
message = successful termination
nfev = 30
```
#### Error analysis[¶](#error-analysis)
The default error estimation routine is
[`Covariance`](index.html#sherpa.estmethods.Covariance), which will be replaced by
[`Confidence`](index.html#sherpa.estmethods.Confidence) for this example:
```
>>> from sherpa.estmethods import Confidence
>>> gefit.estmethod = Confidence()
>>> print(gefit.estmethod)
name = confidence
sigma = 1
eps = 0.01
maxiters = 200
soft_limits = False
remin = 0.01
fast = False
parallel = True
numcores = 4
maxfits = 5
max_rstat = 3
tol = 0.2
verbose = False
openinterval = False
```
Running the error analysis can take time, for particularly complex models. The default behavior is to use all the available CPU cores on the machine, but this can be changed with the `numcores`
attribute. Note that a message is displayed to the screen when each bound is calculated, to indicate progress:
```
>>> errors = gefit.est_errors()
gerr.fwhm lower bound: -0.0326327
gerr.fwhm upper bound: 0.0332578
gerr.pos lower bound: -0.0140981
gerr.pos upper bound: 0.0140981
gerr.ampl lower bound: -0.0456119
gerr.ampl upper bound: 0.0456119
```
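The number of cores used can be changed before re-running the analysis; a short sketch (setting it to 1 simply switches off the parallel evaluation):

```
>>> gefit.estmethod.numcores = 1
>>> errors = gefit.est_errors()
```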
The results can be displayed:
```
>>> print(errors.format())
Confidence Method = confidence
Iterative Fit Method = None
Fitting Method = levmar
Statistic = chi2
confidence 1-sigma (68.2689%) bounds:
Param Best-Fit Lower Bound Upper Bound
--- --- --- ---
gerr.fwhm 1.91572 -0.0326327 0.0332578
gerr.pos 1.2743 -0.0140981 0.0140981
gerr.ampl 3.04706 -0.0456119 0.0456119
```
The [`ErrorEstResults`](index.html#sherpa.fit.ErrorEstResults) instance returned by
[`est_errors()`](index.html#sherpa.fit.Fit.est_errors) contains the parameter values and limits:
```
>>> print(errors)
datasets = None
methodname = confidence
iterfitname = none
fitname = levmar
statname = chi2
sigma = 1
percent = 68.26894921370858
parnames = ('gerr.fwhm', 'gerr.pos', 'gerr.ampl')
parvals = (1.9157241114064163, 1.2743015983545292, 3.047056036094392)
parmins = (-0.0326327431233302, -0.014098074065578947, -0.045611913713536456)
parmaxes = (0.033257800216357714, 0.014098074065578947, 0.045611913713536456)
nfits = 29
```
The data can be accessed, e.g. to create a dictionary where the keys are the parameter names and the values represent the parameter ranges:
```
>>> dvals = zip(errors.parnames, errors.parvals, errors.parmins,
... errors.parmaxes)
>>> pvals = {d[0]: {'val': d[1], 'min': d[2], 'max': d[3]}
...          for d in dvals}
>>> pvals['gerr.pos']
{'min': -0.014098074065578947, 'max': 0.014098074065578947, 'val': 1.2743015983545292}
```
#### Screen output[¶](#screen-output)
The default behavior - when *not* using the default
[`Covariance`](index.html#sherpa.estmethods.Covariance) method - is for
[`est_errors()`](index.html#sherpa.fit.Fit.est_errors) to print out the parameter bounds as it finds them, which can be useful in an interactive session since the error analysis can be slow. This can be controlled using the Sherpa logging interface.
#### A single parameter[¶](#a-single-parameter)
It is possible to investigate the error surface of a single parameter using the
[`IntervalProjection`](index.html#sherpa.plot.IntervalProjection) class. The following shows how the error surface changes with the position of the gaussian. The
[`prepare()`](index.html#sherpa.plot.IntervalProjection.prepare) method is given the range over which to vary the parameter (the range is chosen to be close to the three-sigma limit from the confidence analysis above,
and the dotted line is added to indicate the three-sigma limit above the best-fit for a single parameter):
```
>>> from sherpa.plot import IntervalProjection
>>> iproj = IntervalProjection()
>>> iproj.prepare(min=1.23, max=1.32, nloop=41)
>>> iproj.calc(gefit, ge.pos)
```
This can take some time, depending on the complexity of the model and number of steps requested. The resulting data looks like:
```
>>> iproj.plot()
>>> plt.axhline(geres.statval + 9, linestyle='dotted');
```
The curve is stored in the
[`IntervalProjection`](index.html#sherpa.plot.IntervalProjection) object (in fact, these values are created by the call to
[`calc()`](index.html#sherpa.plot.IntervalProjection.calc) and so can be accessed without needing to create the plot):
```
>>> print(iproj)
x = [ 1.23 , 1.2323, 1.2345, 1.2368, 1.239 , 1.2412, 1.2435, 1.2457, 1.248 ,
1.2503, 1.2525, 1.2548, 1.257 , 1.2592, 1.2615, 1.2637, 1.266 , 1.2683,
1.2705, 1.2728, 1.275 , 1.2772, 1.2795, 1.2817, 1.284 , 1.2863, 1.2885,
1.2908, 1.293 , 1.2953, 1.2975, 1.2997, 1.302 , 1.3043, 1.3065, 1.3088,
1.311 , 1.3133, 1.3155, 1.3177, 1.32 ]
y = [ 211.597 , 210.6231, 209.6997, 208.8267, 208.0044, 207.2325, 206.5113,
205.8408, 205.2209, 204.6518, 204.1334, 203.6658, 203.249 , 202.883 ,
202.5679, 202.3037, 202.0903, 201.9279, 201.8164, 201.7558, 201.7461,
201.7874, 201.8796, 202.0228, 202.2169, 202.462 , 202.758 , 203.105 ,
203.5028, 203.9516, 204.4513, 205.0018, 205.6032, 206.2555, 206.9585,
207.7124, 208.5169, 209.3723, 210.2783, 211.235 , 212.2423]
min = 1.23
max = 1.32
nloop = 41
delv = None
fac = 1
log = False
```
#### A contour plot of two parameters[¶](#a-contour-plot-of-two-parameters)
The [`RegionProjection`](index.html#sherpa.plot.RegionProjection) class supports the comparison of two parameters. The contours indicate the one,
two, and three sigma levels.
```
>>> from sherpa.plot import RegionProjection
>>> rproj = RegionProjection()
>>> rproj.prepare(min=[2.8, 1.75], max=[3.3, 2.1], nloop=[21, 21])
>>> rproj.calc(gefit, ge.ampl, ge.fwhm)
```
As with the [interval projection](#quick-errors-intproj),
this step can take time.
```
>>> rproj.contour()
```
As with the single-parameter case, the statistic values for the grid are stored in the [`RegionProjection`](index.html#sherpa.plot.RegionProjection) object by the
[`calc()`](index.html#sherpa.plot.RegionProjection.calc) call,
and so can be accessed without needing to create the contour plot. Useful fields include `x0` and `x1` (the two parameter values),
`y` (the statistic value), and `levels` (the values used for the contours):
```
>>> lvls = rproj.levels
>>> print(lvls)
[ 204.03940717 207.92373254 213.57281632]
>>> nx, ny = rproj.nloop
>>> x0, x1, y = rproj.x0, rproj.x1, rproj.y
>>> x0.resize(ny, nx)
>>> x1.resize(ny, nx)
>>> y.resize(ny, nx)
>>> plt.imshow(y, origin='lower', cmap='viridis_r', aspect='auto',
... extent=(x0.min(), x0.max(), x1.min(), x1.max()))
>>> plt.colorbar()
>>> plt.xlabel(rproj.xlabel)
>>> plt.ylabel(rproj.ylabel)
>>> cs = plt.contour(x0, x1, y, levels=lvls)
>>> lbls = [(v, r"${}\sigma$".format(i+1)) for i, v in enumerate(lvls)]
>>> plt.clabel(cs, lvls, fmt=dict(lbls));
```
### Fitting two-dimensional data[¶](#fitting-two-dimensional-data)
Sherpa has support for two-dimensional data - that is data defined on the independent axes `x0` and `x1`. In the example below a contiguous grid is used, that is the pixel size is constant, but there is no requirement that this is the case.
```
>>> np.random.seed(0)
>>> x1, x0 = np.mgrid[:128, :128]
>>> y = 2 * x0**2 - 0.5 * x1**2 + 1.5 * x0 * x1 - 1
>>> y += np.random.normal(0, 0.1, y.shape) * 50000
```
#### Creating a data object[¶](#id2)
To support irregularly-gridded data, the multi-dimensional data classes require that the coordinate arrays and data values are one-dimensional.
For example, the following code creates a
[`Data2D`](index.html#sherpa.data.Data2D) object:
```
>>> from sherpa.data import Data2D
>>> x0axis = x0.ravel()
>>> x1axis = x1.ravel()
>>> yaxis = y.ravel()
>>> d2 = Data2D('img', x0axis, x1axis, yaxis, shape=(128, 128))
>>> print(d2)
name = img
x0 = Int64[16384]
x1 = Int64[16384]
y = Float64[16384]
shape = (128, 128)
staterror = None
syserror = None
```
#### Define the model[¶](#id3)
Creating the model is the same as the one-dimensional case; in this case the [`Polynom2D`](index.html#sherpa.models.basic.Polynom2D) class is used to create a low-order polynomial:
```
>>> from sherpa.models.basic import Polynom2D
>>> p2 = Polynom2D('p2')
>>> print(p2)
p2
Param Type Value Min Max Units
--- --- --- --- --- ---
p2.c thawed 1 -3.40282e+38 3.40282e+38
p2.cy1 thawed 0 -3.40282e+38 3.40282e+38
p2.cy2 thawed 0 -3.40282e+38 3.40282e+38
p2.cx1 thawed 0 -3.40282e+38 3.40282e+38
p2.cx1y1 thawed 0 -3.40282e+38 3.40282e+38
p2.cx1y2 thawed 0 -3.40282e+38 3.40282e+38
p2.cx2 thawed 0 -3.40282e+38 3.40282e+38
p2.cx2y1 thawed 0 -3.40282e+38 3.40282e+38
p2.cx2y2 thawed 0 -3.40282e+38 3.40282e+38
```
#### Control the parameters being fit[¶](#control-the-parameters-being-fit)
To reduce the number of parameters being fit, the `frozen` attribute can be set:
```
>>> for n in ['cx1', 'cy1', 'cx2y1', 'cx1y2', 'cx2y2']:
...     getattr(p2, n).frozen = True
...
>>> print(p2)
p2
Param Type Value Min Max Units
--- --- --- --- --- ---
p2.c thawed 1 -3.40282e+38 3.40282e+38
p2.cy1 frozen 0 -3.40282e+38 3.40282e+38
p2.cy2 thawed 0 -3.40282e+38 3.40282e+38
p2.cx1 frozen 0 -3.40282e+38 3.40282e+38
p2.cx1y1 thawed 0 -3.40282e+38 3.40282e+38
p2.cx1y2 frozen 0 -3.40282e+38 3.40282e+38
p2.cx2 thawed 0 -3.40282e+38 3.40282e+38
p2.cx2y1 frozen 0 -3.40282e+38 3.40282e+38
p2.cx2y2 frozen 0 -3.40282e+38 3.40282e+38
```
#### Fit the data[¶](#id4)
Fitting is no different (the same statistic and optimisation objects used earlier could have been re-used here):
```
>>> f2 = Fit(d2, p2, stat=LeastSq(), method=LevMar())
>>> res2 = f2.fit()
>>> if not res2.succeeded: print(res2.message)
>>> print(res2)
datasets = None
itermethodname = none
methodname = levmar
statname = leastsq
succeeded = True
parnames = ('p2.c', 'p2.cy2', 'p2.cx1y1', 'p2.cx2')
parvals = (-80.28947555488139, -0.48174521913599017, 1.5022711710872119, 1.9894112623568638)
statval = 400658883390.66907
istatval = 6571471882611.967
dstatval = 6170812999221.298
numpoints = 16384
dof = 16380
qval = None
rstat = None
message = successful termination
nfev = 45
>>> print(p2)
p2
Param Type Value Min Max Units
--- --- --- --- --- ---
p2.c thawed -80.2895 -3.40282e+38 3.40282e+38
p2.cy1 frozen 0 -3.40282e+38 3.40282e+38
p2.cy2 thawed -0.481745 -3.40282e+38 3.40282e+38
p2.cx1 frozen 0 -3.40282e+38 3.40282e+38
p2.cx1y1 thawed 1.50227 -3.40282e+38 3.40282e+38
p2.cx1y2 frozen 0 -3.40282e+38 3.40282e+38
p2.cx2 thawed 1.98941 -3.40282e+38 3.40282e+38
p2.cx2y1 frozen 0 -3.40282e+38 3.40282e+38
p2.cx2y2 frozen 0 -3.40282e+38 3.40282e+38
```
#### Display the model[¶](#display-the-model)
The model can be visualized by evaluating it over a grid of points and then displaying it:
```
>>> m2 = p2(x0axis, x1axis).reshape(128, 128)
>>> def pimg(d, title):
... plt.imshow(d, origin='lower', interpolation='nearest',
... vmin=-1e4, vmax=5e4, cmap='viridis')
... plt.axis('off')
... plt.colorbar(orientation='horizontal',
... ticks=[0, 20000, 40000])
... plt.title(title)
...
>>> plt.figure(figsize=(8, 3))
>>> plt.subplot(1, 3, 1);
>>> pimg(y, "Data")
>>> plt.subplot(1, 3, 2)
>>> pimg(m2, "Model")
>>> plt.subplot(1, 3, 3)
>>> pimg(y - m2, "Residual")
```
Note
The `sherpa.image` model provides support for *interactive*
image visualization, but this only works if the
[DS9](http://ds9.si.edu/site/Home.html) image viewer is installed.
For the examples in this document, matplotlib plots will be created to view the data directly.
### Simultaneous fits[¶](#simultaneous-fits)
Sherpa allows multiple data sets to be fit at the same time, although there is only really a benefit if there is some model component or value that is shared between the data sets. In this example we have a dataset containing a lorentzian signal with a background component,
and another with just the background component. Fitting both together can improve the constraints on the parameter values.
First we start by simulating the data, where the
[`Polynom1D`](index.html#sherpa.models.basic.Polynom1D)
class is used to model the background as a straight line, and
[`Lorentz1D`](index.html#sherpa.astro.models.Lorentz1D)
for the signal:
```
>>> from sherpa.models import Polynom1D
>>> from sherpa.astro.models import Lorentz1D
>>> tpoly = Polynom1D()
>>> tlor = Lorentz1D()
>>> tpoly.c0 = 50
>>> tpoly.c1 = 1e-2
>>> tlor.pos = 4400
>>> tlor.fwhm = 200
>>> tlor.ampl = 1e4
>>> x1 = np.linspace(4200, 4600, 21)
>>> y1 = tlor(x1) + tpoly(x1) + np.random.normal(scale=5, size=x1.size)
>>> x2 = np.linspace(4100, 4900, 11)
>>> y2 = tpoly(x2) + np.random.normal(scale=5, size=x2.size)
>>> print("x1 size {} x2 size {}".format(x1.size, x2.size))
x1 size 21 x2 size 11
```
There is **no** requirement that the data sets have a common grid,
as can be seen in a raw view of the data:
```
>>> plt.plot(x1, y1)
>>> plt.plot(x2, y2)
```
The fits are set up as before; a data object is needed for each data set, and model instances are created:
```
>>> d1 = Data1D('a', x1, y1)
>>> d2 = Data1D('b', x2, y2)
>>> fpoly, flor = Polynom1D(), Lorentz1D()
>>> fpoly.c1.thaw()
>>> flor.pos = 4500
```
To help the fit, we use a simple algorithm to estimate the starting point for the source amplitude, by evaluating the model on the data grid and calculating the change in the amplitude needed to make it match the data:
```
>>> flor.ampl = y1.sum() / flor(x1).sum()
```
For simultaneous fits the same optimisation method and statistic need to be used for each fit (this is an area we are looking to improve):
```
>>> from sherpa.optmethods import NelderMead
>>> stat, opt = LeastSq(), NelderMead()
```
Set up the fits to the individual data sets:
```
>>> f1 = Fit(d1, fpoly + flor, stat, opt)
>>> f2 = Fit(d2, fpoly, stat, opt)
```
and a simultaneous (i.e. to both data sets) fit:
```
>>> from sherpa.data import DataSimulFit
>>> from sherpa.models import SimulFitModel
>>> sdata = DataSimulFit('all', (d1, d2))
>>> smodel = SimulFitModel('all', (fpoly + flor, fpoly))
>>> sfit = Fit(sdata, smodel, stat, opt)
```
Note that there is a [`simulfit()`](index.html#sherpa.fit.Fit.simulfit) method that can be used to fit using multiple [`sherpa.fit.Fit`](index.html#sherpa.fit.Fit) objects,
which wraps the above (using individual fit objects allows some of the data to be fit first, which may help reduce the parameter space needed to be searched):
```
>>> res = sfit.fit()
>>> print(res)
datasets       = None
itermethodname = none
methodname     = neldermead
statname       = leastsq
succeeded      = True
parnames       = ('polynom1d.c0', 'polynom1d.c1', 'lorentz1d.fwhm', 'lorentz1d.pos', 'lorentz1d.ampl')
parvals        = (36.829217311393585, 0.012540257025027028, 249.55651534213359, 4402.7031194359088, 12793.559398547319)
statval        = 329.6525419378109
istatval       = 3813284.1822045334
dstatval       = 3812954.52966
numpoints      = 32
dof            = 27
qval           = None
rstat          = None
message        = Optimization terminated successfully
nfev           = 1152
```
The values of the `numpoints` and `dof` fields show that both datasets have been used in the fit.
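As noted above, the same fit could also have been driven from the individual fit objects via `simulfit()`; a minimal sketch (the `res_both` name is illustrative, and calling it here simply re-fits from the current parameter values):
```
>>> res_both = f1.simulfit(f2)
>>> print(res_both.numpoints)
32
```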
The data can then be viewed (in this case a separate grid is used, but the
[data objects could be used to define the grid](index.html#evaluation-data)):
```
>>> plt.plot(x1, y1, label='Data 1')
>>> plt.plot(x2, y2, label='Data 2')
>>> x = np.arange(4000, 5000, 10)
>>> plt.plot(x, (fpoly + flor)(x), linestyle='dotted', label='Fit 1')
>>> plt.plot(x, fpoly(x), linestyle='dotted', label='Fit 2')
>>> plt.legend();
```
To perform error analysis we can call `sfit.est_errors()`, but that will fail with the current statistic (`LeastSq`), so the statistic needs to be changed. The error is 5 per bin, which has to be set up first:
```
>>> print(sfit.calc_stat_info())
name      =
ids       = None
bkg_ids   = None
statname  = leastsq
statval   = 329.6525419378109
numpoints = 32
dof       = 27
qval      = None
rstat     = None
>>> d1.staterror = np.ones(x1.size) * 5
>>> d2.staterror = np.ones(x2.size) * 5
>>> sfit.stat = Chi2()
>>> check = sfit.fit()
```
How much did the fit change?:
```
>>> check.dstatval
0.0
```
Note that since the error on each bin is the same value, the best-fit value is not going to be different to the LeastSq result (so `dstatval`
should be 0):
```
>>> print(sfit.calc_stat_info())
name      =
ids       = None
bkg_ids   = None
statname  = chi2
statval   = 13.186101677512438
numpoints = 32
dof       = 27
qval      = 0.988009259609
rstat     = 0.48837413620416437
>>> sres = sfit.est_errors()
>>> print(sres)
datasets    = None
methodname  = covariance
iterfitname = none
fitname     = neldermead
statname    = chi2
sigma       = 1
percent     = 68.2689492137
parnames    = ('polynom1d.c0', 'polynom1d.c1', 'lorentz1d.fwhm', 'lorentz1d.pos', 'lorentz1d.ampl')
parvals     = (36.829217311393585, 0.012540257025027028, 249.55651534213359, 4402.7031194359088, 12793.559398547319)
parmins     = (-4.9568824809960628, -0.0011007470586726147, -6.6079122387075824, -2.0094070026087474, -337.50275154547768)
parmaxes    = (4.9568824809960628, 0.0011007470586726147, 6.6079122387075824, 2.0094070026087474, 337.50275154547768)
nfits = 0
```
Error estimates on a single parameter are
[as above](#quick-errors-intproj):
```
>>> iproj = IntervalProjection()
>>> iproj.prepare(min=6000, max=18000, nloop=101)
>>> iproj.calc(sfit, flor.ampl)
>>> iproj.plot()
```
Sherpa and CIAO[¶](#sherpa-and-ciao)
---
The Sherpa package was developed by the Chandra X-ray Center ([CXC](index.html#term-cxc))
as a general purpose fitting and modeling tool, with specializations for handling X-ray Astronomy data. It is provided as part of the
[CIAO](index.html#term-ciao) analysis package,
where the code is the same as that available from the
[Sherpa GitHub page](https://github.com/sherpa/sherpa),
with the following modifications:
* the I/O backend uses the CIAO library [Crates](index.html#term-crates) rather than
[astropy](index.html#term-astropy);
* a set of customized IPython routines is provided as part of CIAO; these automatically load Sherpa and adjust the appearance of IPython (mainly changes to the prompt);
* and the CIAO version of Sherpa includes the optional XSPEC model library ([`sherpa.astro.xspec`](index.html#module-sherpa.astro.xspec)).
The online documentation provided for Sherpa as part of CIAO,
namely <http://cxc.harvard.edu/sherpa/>, can be used with the standalone version of Sherpa, but note that the focus of this documentation is the
[session-based API](index.html#document-ui/index)
provided by the
[`sherpa.astro.ui`](index.html#module-sherpa.astro.ui) and [`sherpa.ui`](index.html#module-sherpa.ui) modules.
These are wrappers around the object-oriented interface described in this document, together with data-management and utility routines.
What data is to be fit?[¶](#what-data-is-to-be-fit)
---
The Sherpa [`Data`](index.html#sherpa.data.Data) class is used to carry around the data to be fit: this includes the independent axis (or axes), the dependent axis (the data), and any necessary metadata. Although the design of Sherpa supports multiple-dimensional data sets, the current classes only support one- and two-dimensional data sets.
### Overview[¶](#overview)
The following modules are assumed to have been imported:
```
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from sherpa.stats import LeastSq
>>> from sherpa.optmethods import LevMar
>>> from sherpa import data
```
#### Names[¶](#names)
The first argument to any of the Data classes is the name of the data set. This is used for display purposes only,
and can be useful to identify which data set is in use.
It is stored in the `name` attribute of the object, and can be changed at any time.
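A minimal sketch (the data values and the `dtmp` name are arbitrary):
```
>>> dtmp = data.Data1D('run 23', [1, 2, 3], [5, 7, 6])
>>> dtmp.name
'run 23'
>>> dtmp.name = 'run 23 (cleaned)'
```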
#### The independent axis[¶](#the-independent-axis)
The independent axis - or axes - of a data set define the grid over which the model is to be evaluated. It is referred to as `x`, `x0`, `x1`, … depending on the dimensionality of the data (for
[binned datasets](#data-binned) there are `lo`
and `hi` variants).
Although dense multi-dimensional data sets can be stored as arrays with dimensionality greater than one, the internal representation used by Sherpa is often a flattened - i.e.
one-dimensional - version.
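As a minimal sketch of this flattened layout (the `dtmp2` name and the values are illustrative), a two-dimensional data set stores each independent axis as a one-dimensional array:
```
>>> x1g, x0g = np.mgrid[0:2, 0:3]
>>> dtmp2 = data.Data2D('tmp', x0g.flatten(), x1g.flatten(),
...                     np.zeros(6), shape=(2, 3))
>>> print(dtmp2.x0.size, dtmp2.x1.size, dtmp2.y.size)
6 6 6
```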
#### The dependent axis[¶](#the-dependent-axis)
This refers to the data being fit, and is referred to as `y`.
##### Unbinned data[¶](#unbinned-data)
Unbinned data sets - defined by classes which do not end in the name `Int` - represent point values; that is, the data value is the value at the coordinates given by the independent axis.
Examples of unbinned data classes are
[`Data1D`](index.html#sherpa.data.Data1D) and [`Data2D`](index.html#sherpa.data.Data2D).
```
>>> np.random.seed(0)
>>> x = np.arange(20, 40, 0.5)
>>> y = x**2 + np.random.normal(0, 10, size=x.size)
>>> d1 = data.Data1D('test', x, y)
>>> print(d1)
name = test x = Float64[40]
y = Float64[40]
staterror = None syserror = None
```
##### Binned data[¶](#binned-data)
Binned data sets represent values that are defined over a range,
such as a histogram.
The integrated model classes end in `Int`: examples are
[`Data1DInt`](index.html#sherpa.data.Data1DInt)
and [`Data2DInt`](index.html#sherpa.data.Data2DInt).
It can be a useful optimisation to treat a binned data set as an unbinned one, since it avoids having to estimate the integral of the model over each bin. It depends in part on how the bin size compares to the scale over which the model changes.
```
>>> z = np.random.gamma(20, scale=0.5, size=1000)
>>> (y, edges) = np.histogram(z)
>>> d2 = data.Data1DInt('gamma', edges[:-1], edges[1:], y)
>>> print(d2)
name = gamma xlo = Float64[10]
xhi = Float64[10]
y = Int64[10]
staterror = None syserror = None
>>> plt.bar(d2.xlo, d2.y, d2.xhi - d2.xlo);
```
#### Errors[¶](#errors)
There is support for both statistical and systematic errors by either using the `staterror` and `syserror`
parameters when creating the data object, or by changing the
`staterror` and
`syserror` attributes of the object.
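A minimal sketch, reusing the `x` and `y` arrays created above (the `derr` name and the error values are illustrative):
```
>>> derr = data.Data1D('with errors', x, y, staterror=np.full(x.size, 2.0))
>>> derr.syserror = 0.03 * y
```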
#### Filtering data[¶](#filtering-data)
Sherpa supports filtering data sets; that is, temporarily removing parts of the data (perhaps because there are problems, or to help restrict parameter values).
The `mask` attribute indicates whether a filter has been applied: if it returns `True` then no filter is set, otherwise it is a bool array where `False` values indicate those elements that are to be ignored. The `ignore()` and
`notice()` methods are used to define the ranges to exclude or include. For example, the following hides those values where the independent axis values are between 21.2 and 22.8:
```
>>> d1.ignore(21.2, 22.8)
>>> d1.x[np.invert(d1.mask)]
array([ 21.5, 22. , 22.5])
```
After this, a fit to the data will ignore these values, as shown below, where the number of degrees of freedom of the first fit,
which uses the filtered data, is three less than the fit to the full data set (the call to
`notice()` removes the filter since no arguments were given):
```
>>> from sherpa.models import Polynom1D
>>> from sherpa.fit import Fit
>>> mdl = Polynom1D()
>>> mdl.c2.thaw()
>>> fit = Fit(d1, mdl, stat=LeastSq(), method=LevMar())
>>> res1 = fit.fit()
>>> d1.notice()
>>> res2 = fit.fit()
>>> print("Degrees of freedom: {} vs {}".format(res1.dof, res2.dof))
Degrees of freedom: 35 vs 38
```
#### Visualizing a data set[¶](#visualizing-a-data-set)
The data objects contain several methods which can be used to visualize the data, but do not provide any direct plotting or display capabilities. There are low-level routines which provide access to the data - these include the
`to_plot()` and
`to_contour()` methods - but the preferred approach is to use the classes defined in the
[`sherpa.plot`](index.html#module-sherpa.plot) module, which are described in the
[visualization section](index.html#document-plots/index):
```
>>> from sherpa.plot import DataPlot
>>> pdata = DataPlot()
>>> pdata.prepare(d2)
>>> pdata.plot()
```
Although the data represented by `d2` is a histogram, the values are displayed at the center of the bin.
The plot objects automatically handle any
[filters](#data-filter)
applied to the data, as shown below.
```
>>> d1.ignore(25, 30)
>>> d1.notice(26, 27)
>>> pdata.prepare(d1)
>>> pdata.plot()
```
Note
The plot object stores the data given in the
[`prepare()`](index.html#sherpa.plot.DataPlot.prepare) call,
so that changes to the underlying objects will not be reflected in future calls to
[`plot()`](index.html#sherpa.plot.DataPlot.plot)
unless a new call to
[`prepare()`](index.html#sherpa.plot.DataPlot.prepare) is made.
```
>>> d1.notice()
```
At this point, a call to `pdata.plot()` would re-create the previous plot, even though the filter has been removed from the underlying data object.
#### Evaluating a model[¶](#evaluating-a-model)
The [`eval_model()`](index.html#sherpa.data.Data.eval_model) and
[`eval_model_to_fit()`](index.html#sherpa.data.Data.eval_model_to_fit)
methods can be used to evaluate a model on the grid defined by the data set. The first version uses the full grid, whereas the second respects any [filtering](#data-filter) applied to the data.
```
>>> d1.notice(22, 25)
>>> y1 = d1.eval_model(mdl)
>>> y2 = d1.eval_model_to_fit(mdl)
>>> x2 = d1.x[d1.mask]
>>> plt.plot(d1.x, d1.y, 'ko', label='Data')
>>> plt.plot(d1.x, y1, '--', label='Model (all points)')
>>> plt.plot(x2, y2, linewidth=2, label='Model (filtered)')
>>> plt.legend(loc=2)
```
### Reference/API[¶](#reference-api)
#### The sherpa.data module[¶](#module-sherpa.data)
Tools for creating, storing, inspecting, and manipulating data sets
Classes
| [`Data`](index.html#sherpa.data.Data)(name, indep, y[, staterror, syserror]) | Data class for generic, N-Dimensional data sets, where N depends on the number of independent axes passed during initialization. |
| [`Data1D`](index.html#sherpa.data.Data1D)(name, x, y[, staterror, syserror]) | |
| [`Data1DAsymmetricErrs`](index.html#sherpa.data.Data1DAsymmetricErrs)(name, x, y, elo, ehi[, …]) | 1-D data set with asymmetric errors Note: elo and ehi shall be stored as delta values from y |
| [`Data1DInt`](index.html#sherpa.data.Data1DInt)(name, xlo, xhi, y[, staterror, …]) | 1-D integrated data set |
| [`Data2D`](index.html#sherpa.data.Data2D)(name, x0, x1, y[, shape, staterror, …]) | |
| [`Data2DInt`](index.html#sherpa.data.Data2DInt)(name, x0lo, x1lo, x0hi, x1hi, y[, …]) | 2-D integrated data set |
| [`DataSimulFit`](index.html#sherpa.data.DataSimulFit)(name, datasets) | Store multiple data sets. |
| [`BaseData`](index.html#sherpa.data.BaseData) | Base class for all data classes. |
| [`DataSpace1D`](index.html#sherpa.data.DataSpace1D)(filter, x) | Class for representing 1-D Data Space. |
| [`DataSpace2D`](index.html#sherpa.data.DataSpace2D)(filter, x0, x1) | Class for representing 2-D Data Spaces. |
| [`DataSpaceND`](index.html#sherpa.data.DataSpaceND)(filter, indep) | Class for representing arbitrary N-Dimensional data domains |
| [`Filter`](index.html#sherpa.data.Filter)() | A class for representing filters of N-Dimensional datasets. |
| [`IntegratedDataSpace1D`](index.html#sherpa.data.IntegratedDataSpace1D)(filter, xlo, xhi) | Same as DataSpace1D, but for supporting integrated data sets. |
| [`IntegratedDataSpace2D`](index.html#sherpa.data.IntegratedDataSpace2D)(filter, x0lo, x1lo, …) | Same as DataSpace2D, but for supporting integrated data sets. |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of BaseData, Data, Data1D, Data1DAsymmetricErrs, Data1DInt, Data2D, Data2DInt, DataSimulFit
Inheritance diagram of DataSpace1D, DataSpace2D, DataSpaceND, IntegratedDataSpace1D, IntegratedDataSpace2D
#### The sherpa.astro.data module[¶](#module-sherpa.astro.data)
Classes for storing, inspecting, and manipulating astronomical data sets
Classes
| [`DataPHA`](index.html#sherpa.astro.data.DataPHA)(name, channel, counts[, staterror, …]) | PHA data set, including any associated instrument and background data. |
| [`DataARF`](index.html#sherpa.astro.data.DataARF)(name, energ_lo, energ_hi, specresp) | ARF data set. |
| [`DataRMF`](index.html#sherpa.astro.data.DataRMF)(name, detchans, energ_lo, energ_hi, …) | RMF data set. |
| [`DataIMG`](index.html#sherpa.astro.data.DataIMG)(name, x0, x1, y[, shape, staterror, …]) | Image data set, including functions for coordinate transformations |
| [`DataIMGInt`](index.html#sherpa.astro.data.DataIMGInt)(name, x0lo, x1lo, x0hi, x1hi, y) | |
| [`DataRosatRMF`](index.html#sherpa.astro.data.DataRosatRMF)(name, detchans, energ_lo, …) | |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of DataPHA, DataARF, DataRMF, DataIMG, DataIMGInt, DataRosatRMF
Creating model instances[¶](#creating-model-instances)
---
The `sherpa.models` and [`sherpa.astro.models`](index.html#module-sherpa.astro.models)
namespaces provide a collection of one- and two-dimensional models as a convenience; the actual definition of each particular model depends on its type.
The following modules are assumed to have been imported for this section:
```
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from sherpa import models
```
### Creating a model instance[¶](#creating-a-model-instance)
Models must be created before their parameter values can be set. In this case a one-dimensional gaussian is created using the
[`Gauss1D`](index.html#sherpa.models.basic.Gauss1D) class:
```
>>> g = models.Gauss1D()
>>> print(g)
gauss1d
Param Type Value Min Max Units
--- --- --- --- --- ---
gauss1d.fwhm thawed 10 1.17549e-38 3.40282e+38
gauss1d.pos thawed 0 -3.40282e+38 3.40282e+38
gauss1d.ampl thawed 1 -3.40282e+38 3.40282e+38
```
A description of the model is provided by `help(g)`.
The parameter values have a current value, a valid range
(as given by the minimum and maximum columns in the table above),
and a units field. The units field is a string, describing the expected units for the parameter; there is currently *no support* for using [astropy.units](http://docs.astropy.org/en/stable/units/index.html) to set a parameter value. The “Type” column refers to whether the parameter is fixed (`frozen`) or can be varied during a fit (`thawed`),
as described below, in the [Freezing and Thawing parameters](#params-freeze) section.
Models can be given a name, to help distinguish multiple versions of the same model type. The default value is the lower-case version of the class name.
```
>>> g.name
'gauss1d'
>>> h = models.Gauss1D('other')
>>> print(h)
other
Param Type Value Min Max Units
--- --- --- --- --- ---
other.fwhm thawed 10 1.17549e-38 3.40282e+38
other.pos thawed 0 -3.40282e+38 3.40282e+38
other.ampl thawed 1 -3.40282e+38 3.40282e+38
>>> h.name
'other'
```
The model classes are expected to derive from the
[`ArithmeticModel`](index.html#sherpa.models.model.ArithmeticModel) class.
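This can be checked directly; a minimal sketch using the `g` instance created above:
```
>>> from sherpa.models.model import ArithmeticModel
>>> isinstance(g, ArithmeticModel)
True
```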
### Combining models[¶](#combining-models)
Models can be combined and shared by using the standard Python numerical operators. For instance, a one-dimensional gaussian plus a flat background - using the
[`Const1D`](index.html#sherpa.models.basic.Const1D) class - would be represented by the following model:
```
>>> src1 = models.Gauss1D('src1')
>>> back = models.Const1D('back')
>>> mdl1 = src1 + back
>>> print(mdl1)
(src1 + back)
Param Type Value Min Max Units
--- --- --- --- --- ---
src1.fwhm thawed 10 1.17549e-38 3.40282e+38
src1.pos thawed 0 -3.40282e+38 3.40282e+38
src1.ampl thawed 1 -3.40282e+38 3.40282e+38
back.c0 thawed 1 -3.40282e+38 3.40282e+38
```
Now consider fitting a second dataset where it is known that the background is two times higher than the first:
```
>>> src2 = models.Gauss1D('src2')
>>> mdl2 = src2 + 2 * back
>>> print(mdl2)
(src2 + (2 * back))
Param Type Value Min Max Units
--- --- --- --- --- ---
src2.fwhm thawed 10 1.17549e-38 3.40282e+38
src2.pos thawed 0 -3.40282e+38 3.40282e+38
src2.ampl thawed 1 -3.40282e+38 3.40282e+38
back.c0 thawed 1 -3.40282e+38 3.40282e+38
```
The two models can then be fit separately or simultaneously. In this example the two source models (the Gaussian component) were completely separate, but they could have been identical - in which case
`mdl2 = src1 + 2 * back` would have been used instead - or
[parameter linking](#params-link) could be used to constrain the models. An example of the use of linking would be to force the two FWHM (full-width half-maximum)
parameters to be the same but to let the position and amplitude values vary independently.
More information is available in the
[combining models](index.html#document-evaluation/combine) documentation.
### Changing a parameter[¶](#changing-a-parameter)
The parameters of a model - those numeric variables that control the shape of the model, and that can be varied during a fit -
can be accessed as attributes, both to read and to change the current settings. The
[`val`](index.html#sherpa.models.parameter.Parameter.val) attribute contains the current value:
```
>>> print(h.fwhm)
val         = 10.0
min         = 1.17549435082e-38
max         = 3.40282346639e+38
units       =
frozen      = False
link        = None
default_val = 10.0
default_min = 1.17549435082e-38
default_max = 3.40282346639e+38
>>> h.fwhm.val
10.0
>>> h.fwhm.min
1.1754943508222875e-38
>>> h.fwhm.val = 15
>>> print(h.fwhm)
val         = 15.0
min         = 1.17549435082e-38
max         = 3.40282346639e+38
units       =
frozen      = False
link        = None
default_val = 15.0
default_min = 1.17549435082e-38
default_max = 3.40282346639e+38
```
Assigning a value to a parameter directly (i.e. without using the
`val` attribute) also works:
```
>>> h.fwhm = 12
>>> print(h.fwhm)
val         = 12.0
min         = 1.17549435082e-38
max         = 3.40282346639e+38
units       =
frozen      = False
link        = None
default_val = 12.0
default_min = 1.17549435082e-38
default_max = 3.40282346639e+38
```
### The soft and hard limits of a parameter[¶](#the-soft-and-hard-limits-of-a-parameter)
Each parameter has two sets of limits, which are referred to as
“soft” and “hard”. The soft limits are shown when the model is displayed, and refer to the
[`min`](index.html#sherpa.models.parameter.Parameter.min)
and
[`max`](index.html#sherpa.models.parameter.Parameter.max)
attributes for the parameter, whereas the hard limits are given by the
[`hard_min`](index.html#sherpa.models.parameter.Parameter.hard_min)
and
[`hard_max`](index.html#sherpa.models.parameter.Parameter.hard_max)
(which are not displayed, and can not be changed).
```
>>> print(h)
other
Param Type Value Min Max Units
--- --- --- --- --- ---
other.fwhm thawed 12 1.17549e-38 3.40282e+38
other.pos thawed 0 -3.40282e+38 3.40282e+38
other.ampl thawed 1 -3.40282e+38 3.40282e+38
>>> print(h.fwhm)
val         = 12.0
min         = 1.17549435082e-38
max         = 3.40282346639e+38
units       =
frozen      = False
link        = None
default_val = 12.0
default_min = 1.17549435082e-38
default_max = 3.40282346639e+38
```
These limits act to bound the acceptable parameter range; this is often because certain values are physically impossible, such as having a negative value for the full-width-half-maxium value of a Gaussian, but can also be used to ensure that the fit is restricted to a meaningful part of the search space. The hard limits are set by the model class, and represent the full valid range of the parameter, whereas the soft limits can be changed by the user, although they often default to the same values as the hard limits.
Setting a parameter to a value outside its soft limits will raise a [`ParameterErr`](index.html#sherpa.utils.err.ParameterErr) exception.
During a fit the parameter values are bound by the soft limits,
and a screen message will be displayed if an attempt to move outside this range was made. During error analysis the parameter values are allowed outside the soft limits, as long as they remain inside the hard limits.
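A minimal sketch of this behavior, using the `h` instance from above (the exact wording of the error message may differ between versions):
```
>>> from sherpa.utils.err import ParameterErr
>>> try:
...     h.fwhm = -5
... except ParameterErr as exc:
...     print(exc)
parameter other.fwhm has a minimum of 1.17549e-38
```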
### Guessing a parameter’s value from the data[¶](#guessing-a-parameter-s-value-from-the-data)
Sherpa models have a
[`guess()`](index.html#sherpa.models.model.Model.guess)
method which is used to seed the parameters (or parameter) with values and
[soft-limit ranges](#params-limits)
which match the data.
The idea is to move the parameters to values appropriate for the data, which can avoid un-needed computation by the optimiser.
The existing `guess` routines are very basic - such as picking the index of the largest value in the data for the peak location - and do not always account for the full complexity of the model expression, so care should be taken when using this functionality.
The arguments depend on the model type, since both the independent and dependent axes may be used, but the
[`to_guess()`](index.html#sherpa.data.Data.to_guess) method of a data object will return the correct data (assuming the dimensionality and type match):
```
>>> mdl.guess(*data.to_guess())
```
Note that the soft limits can be changed, as in this example which ensures the position of the gaussian falls within the grid of points (since this is the common situation; if the source is meant to lie outside the data range then the limits will need to be increased manually):
```
>>> from sherpa.data import Data2D
>>> from sherpa.models.basic import Gauss2D
>>> yg, xg = np.mgrid[4000:4050:10, 3000:3070:10]
>>> r2 = (xg - 3024.2)**2 + (yg - 4011.7)**2
>>> zg = 2400 * np.exp(-r2 / 1978.2)
>>> d2d = Data2D('example', xg.flatten(), yg.flatten(), zg.flatten(),
...              shape=zg.shape)
>>> mdl = Gauss2D('mdl')
>>> print(mdl)
mdl
Param Type Value Min Max Units
--- --- --- --- --- ---
mdl.fwhm thawed 10 1.17549e-38 3.40282e+38
mdl.xpos thawed 0 -3.40282e+38 3.40282e+38
mdl.ypos thawed 0 -3.40282e+38 3.40282e+38
mdl.ellip frozen 0 0 0.999
mdl.theta frozen 0 -6.28319 6.28319 radians
mdl.ampl thawed 1 -3.40282e+38 3.40282e+38
>>> mdl.guess(*d2d.to_guess())
>>> print(mdl)
mdl
Param Type Value Min Max Units
--- --- --- --- --- ---
mdl.fwhm thawed 10 1.17549e-38 3.40282e+38
mdl.xpos thawed 3020 3000 3060
mdl.ypos thawed 4010 4000 4040
mdl.ellip frozen 0 0 0.999
mdl.theta frozen 0 -6.28319 6.28319 radians
mdl.ampl thawed 2375.22 2.37522 2.37522e+06
```
### Freezing and Thawing parameters[¶](#freezing-and-thawing-parameters)
Not all model parameters should be varied during a fit: perhaps the data quality is not sufficient to constrain all the parameters,
it is already known, the parameter is highly correlated with another, or perhaps the parameter value controls a behavior of the model that should not vary during a fit (such as the interpolation scheme to use). The [`frozen`](index.html#sherpa.models.parameter.Parameter.frozen)
attribute controls whether a fit should vary that parameter or not; it can be changed directly,
as shown below:
```
>>> h.fwhm.frozen
False
>>> h.fwhm.frozen = True
```
or via the [`freeze()`](index.html#sherpa.models.parameter.Parameter.freeze)
and [`thaw()`](index.html#sherpa.models.parameter.Parameter.thaw)
methods for the parameter.
```
>>> h.fwhm.thaw()
>>> h.fwhm.frozen
False
```
There are times when a model parameter should *never* be varied during a fit. In this case the
[`alwaysfrozen`](index.html#sherpa.models.parameter.Parameter.alwaysfrozen)
attribute will be set to `True` (this particular parameter is read-only).
### Linking parameters[¶](#linking-parameters)
There are times when it is useful for one parameter to be related to another: this can be equality, such as saying that the widths of two model components are the same, or a functional form, such as saying that the position of one component is a certain distance away from another component. This concept is referred to as linking parameter values. The second case includes the first - where the functional relationship is equality -
but it is treated separately here as it is a common operation.
Linking parameters also reduces the number of free parameters in a fit.
The following examples use the same two model components:
```
>>> g1 = models.Gauss1D('g1')
>>> g2 = models.Gauss1D('g2')
```
Linking parameter values requires referring to the parameter, rather than via the [`val`](index.html#sherpa.models.parameter.Parameter.val) attribute.
The [`link`](index.html#sherpa.models.parameter.Parameter.link) attribute is set to the link value (and is `None` for parameters that are not linked).
#### Equality[¶](#equality)
After the following, the two gaussian components have the same width:
```
>>> g2.fwhm.val
10.0
>>> g2.fwhm = g1.fwhm
>>> g1.fwhm = 1024
>>> g2.fwhm.val
1024.0
>>> g1.fwhm.link is None
True
>>> g2.fwhm.link
<Parameter 'fwhm' of model 'g1'>
```
When displaying the model, the value and link expression are included:
```
>>> print(g2)
g2
Param Type Value Min Max Units
--- --- --- --- --- ---
g2.fwhm linked 1024 expr: g1.fwhm
g2.pos thawed 0 -3.40282e+38 3.40282e+38
g2.ampl thawed 1 -3.40282e+38 3.40282e+38
```
#### Functional relationship[¶](#functional-relationship)
The link can accept anything that evaluates to a value,
such as adding a constant.
```
>>> g2.pos = g1.pos + 8234
>>> g1.pos = 1200
>>> g2.pos.val
9434.0
```
The [`CompositeParameter`](index.html#sherpa.models.parameter.CompositeParameter) class controls how parameters are combined. In this case the result is a [`BinaryOpParameter`](index.html#sherpa.models.parameter.BinaryOpParameter) object.
#### Including another parameter[¶](#including-another-parameter)
It is possible to include other parameters in a link expression,
which can lead to further constraints on the fit. For instance,
rather than using a fixed separation, a range can be used. One way to do this is to use a [`Const1D`](index.html#sherpa.models.basic.Const1D)
model, restricting the range over which its single parameter can vary.
```
>>> sep = models.Const1D('sep')
>>> print(sep)
sep
Param Type Value Min Max Units
--- --- --- --- --- ---
sep.c0 thawed 1 -3.40282e+38 3.40282e+38
>>> g2.fwhm = g1.fwhm + sep.c0
>>> sep.c0 = 1200
>>> sep.c0.min = 800
>>> sep.c0.max = 1600
```
In this example, the difference between the two `fwhm` values is restricted to lie in the range 800 to 1600.
In order for the optimiser to recognize that it needs to vary the new parameter (`sep.c0`), the component *must* be included in the model expression. As it does not contribute to the model output directly, it should be multiplied by zero. So, for this example the model to be fit would be given by an expression like:
```
>>> mdl = g1 + g2 + 0 * sep
```
### Resetting parameter values[¶](#resetting-parameter-values)
The
[`reset()`](index.html#sherpa.models.parameter.Parameter.reset)
method of a parameter will change the parameter settings (which includes the status of the thawed flag and allowed ranges,
as well as the value) to the values they had the last time the parameter was *explicitly* set. That is, it does not restore the initial values used when the model was created, but the last values the user set.
The model class has its own
[`reset()`](index.html#sherpa.models.model.Model.reset)
method which calls reset on the thawed parameters. This can be used to
[change the starting point of a fit](index.html#change-fit-starting-point)
to see how robust the optimiser is, as sketched in the example after this list, by:
* explicitly setting parameter values (or using the default values)
* fit the data
* call reset
* change one or more parameters
* refit
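A minimal sketch of these steps, assuming a `Fit` object called `f` has already been created for the data and the `mdl` expression above (the `f` name is hypothetical here):
```
>>> g1.fwhm = 0.3          # explicitly set the starting values
>>> g1.pos = 0.2
>>> res_first = f.fit()    # fit the data
>>> mdl.reset()            # restore the explicitly-set values
>>> g1.pos = -0.2          # change one or more parameters
>>> res_second = f.fit()   # refit from the new starting point
```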
### Inspecting models and parameters[¶](#inspecting-models-and-parameters)
Models, whether a single component or composite, contain a
`pars` attribute which is a tuple of all the parameters for that model. This can be used to programatically query or change the parameter values.
There are several attributes that return arrays of values for the thawed parameters of the model expression: the most useful is [`thawedpars`](index.html#sherpa.models.model.Model.thawedpars),
which gives the current values.
Composite models can be queried to find the individual components using the `parts` attribute, which contains a tuple of the components (these components can themselves be composite objects).
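A minimal sketch using the `mdl1` model (`src1 + back`) created earlier (the displayed values assume the parameters still have their default settings):
```
>>> [p.fullname for p in mdl1.pars]
['src1.fwhm', 'src1.pos', 'src1.ampl', 'back.c0']
>>> mdl1.thawedpars
[10.0, 0.0, 1.0, 1.0]
>>> [cpt.name for cpt in mdl1.parts]
['src1', 'back']
```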
Evaluating a model[¶](#evaluating-a-model)
---
### Binned and Unbinned grids[¶](#binned-and-unbinned-grids)
Sherpa supports models for both
[unbinned](index.html#data-unbinned) and
[binned](index.html#data-binned) data sets. The output of a model depends on how it is called
(is it sent just the grid points or the bin edges), how the `integrate` flag of the model component is set, and whether the model supports both or just one case.
The [`Const1D`](index.html#sherpa.models.basic.Const1D) model represents a constant value, which means that for an unbinned dataset the model evaluates to a single value (the
[`c0`](index.html#sherpa.models.basic.Const1D.c0) parameter):
```
>>> from sherpa.models.basic import Const1D
>>> mdl = Const1D()
>>> mdl.c0 = 0.1
>>> mdl([1, 2, 3])
array([ 0.1, 0.1, 0.1])
>>> mdl([-4000, 12000])
array([ 0.1, 0.1])
```
The default value for its
`integrate` flag is
`True`:
```
>>> mdl.integrate
True
```
which means that this value is multiplied by the bin width when given a binned grid (i.e. when sent in the low and high edges of each bin):
```
>>> mdl([10, 20, 30], [20, 30, 50])
array([ 1., 1., 2.])
```
When the `integrate` flag is unset, the model no longer multiplies by the bin width, and so acts similarly to the unbinned case:
```
>>> mdl.integrate = False
>>> mdl([10, 20, 30], [20, 30, 50])
array([ 0.1, 0.1, 0.1])
```
The behavior in each of these three cases depends on the model - for instance some models may raise an exception, ignore the high-edge values in the binned case, or use the mid-point - and so the model documentation should be reviewed.
The following example uses the
[`Polynom1D`](index.html#sherpa.models.basic.Polynom1D) class to model the linear relation
\(y = mx + c\) with the origin at \(x = 1400\),
an offset of 2, and a gradient of 1:
```
>>> from sherpa.models.basic import Polynom1D
>>> poly = Polynom1D()
>>> poly.offset = 1400
>>> poly.c0 = 2
>>> poly.c1 = 1
>>> x = [1391, 1396, 1401, 1406, 1411]
>>> poly(x)
array([ -7., -2., 3., 8., 13.])
```
As the integrate flag is set, the model is integrated across each bin:
```
>>> poly.integrate
True
>>> xlo, xhi = x[:-1], x[1:]
>>> y = poly(xlo, xhi)
>>> y
array([-22.5,   2.5,  27.5,  52.5])
```
Thanks to the simple functional form chosen for this example,
it is easy to confirm that these are the values of the integrated model:
```
>>> (poly(x)[:-1] + poly(x)[1:]) * 5 / 2.0
array([-22.5,   2.5,  27.5,  52.5])
```
Turning off the `integrate` flag for this model shows that it uses the low-edge of the bin when evaluating the model:
```
>>> poly.integrate = False
>>> poly(xlo, xhi)
array([-7., -2., 3., 8.])
```
### Combining models and parameters[¶](#combining-models-and-parameters)
Most of the examples shown so far have used a single model component,
such as a one-dimensional polynomial or a two-dimensional gaussian,
but individual components can be combined together by addition,
multiplication, subtraction, or even division. Components can also be combined with scalar values or - with *great* care - NumPy vectors.
Parameter values can be “combined” by
[linking them together](index.html#params-link) using mathematical expressions. The case of one model requiring the results of another model is discussed in
[the convolution section](index.html#document-evaluation/convolution).
Note
There is currently *no restriction* on combining models of different types. This means that there is no exception raised when combining a one-dimensional model with a two-dimensional one. It is only when the model is evaluated that an error *may* be raised.
#### Model Expressions[¶](#model-expressions)
A model, whether it is required to create a
[`sherpa.fit.Fit`](index.html#sherpa.fit.Fit) object or the argument to the [`sherpa.ui.set_source()`](index.html#sherpa.ui.set_source) call, is expected to behave like an instance of the
[`sherpa.models.model.ArithmeticModel`](index.html#sherpa.models.model.ArithmeticModel) class.
Instances can be combined as
[numeric types](https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types)
since the class defines methods for addition, subtraction,
multiplication, division, modulus, and exponentiation.
This means that Sherpa model instances can be combined with other Python terms, such as the weighted combination of model components `cpt1`, `cpt2`, and `cpt3`:
```
cpt1 * (cpt2 + 0.8 * cpt3)
```
Since the models are evaluated on a grid, it is possible to include a NumPy vector in the expression, but this is only possible in restricted situations, when the grid size is known (i.e. the model expression is not going to be used in a general setting).
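A minimal sketch of such a restricted use, assuming the model is only ever evaluated on a grid of 200 points (the component and array names are illustrative):
```
>>> from sherpa.models.basic import Const1D, Gauss1D
>>> sig = Gauss1D('sig')
>>> bg = Const1D('bg')
>>> xgrid = np.linspace(-1, 1, 200)
>>> weights = np.where(np.abs(xgrid) < 0.5, 1.0, 0.5)
>>> wmdl = (sig + bg) * weights   # only valid for grids with 200 elements
>>> ywt = wmdl(xgrid)
```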
#### Example[¶](#example)
The following example fits two one-dimensional gaussians to a simulated dataset.
It is based on the [AstroPy modelling documentation](http://docs.astropy.org/en/stable/modeling/#compound-models),
but has [linked the positions of the two gaussians](index.html#params-link)
during the fit.
```
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from sherpa import data, models, stats, fit, plot
```
Since the example uses many different parts of the Sherpa API, the various modules are imported directly, rather than their contents,
to make it easier to work out what each symbol refers to.
Note
Some Sherpa modules re-export symbols from other modules, which means that a symbol can be found in several modules. An example is [`sherpa.models.basic.Gauss1D`](index.html#sherpa.models.basic.Gauss1D), which can also be imported as `sherpa.models.Gauss1D`.
##### Creating the simulated data[¶](#creating-the-simulated-data)
To provide a repeatable example, the NumPy random number generator is set to a fixed value:
```
>>> np.random.seed(42)
```
The two components used to create the simulated dataset are called
`sim1` and `sim2`:
```
>>> s1 = models.Gauss1D('sim1')
>>> s2 = models.Gauss1D('sim2')
```
The individual components can be displayed, as the `__str__`
method of the model class creates a display which includes the model expression and then a list of the parameters:
```
>>> print(s1)
sim1
Param Type Value Min Max Units
--- --- --- --- --- ---
sim1.fwhm thawed 10 1.17549e-38 3.40282e+38
sim1.pos thawed 0 -3.40282e+38 3.40282e+38
sim1.ampl thawed 1 -3.40282e+38 3.40282e+38
```
The [`pars`](index.html#sherpa.models.model.Model.pars) attribute contains a tuple of all the parameters in a model instance. This can be queried to find the attributes of the parameters (each element of the tuple is a [`Parameter`](index.html#sherpa.models.parameter.Parameter)
object):
```
>>> [p.name for p in s1.pars]
['fwhm', 'pos', 'ampl']
```
These components can be combined using standard mathematical operations; for example addition:
```
>>> sim_model = s1 + s2
```
The `sim_model` object represents the sum of two gaussians, and contains both the input models (using different names when creating model components - so here `sim1` and `sim2` - can make it easier to follow the logic of more-complicated model combinations):
```
>>> print(sim_model)
(sim1 + sim2)
Param Type Value Min Max Units
--- --- --- --- --- ---
sim1.fwhm thawed 10 1.17549e-38 3.40282e+38
sim1.pos thawed 0 -3.40282e+38 3.40282e+38
sim1.ampl thawed 1 -3.40282e+38 3.40282e+38
sim2.fwhm thawed 10 1.17549e-38 3.40282e+38
sim2.pos thawed 0 -3.40282e+38 3.40282e+38
sim2.ampl thawed 1 -3.40282e+38 3.40282e+38
```
The `pars` attribute now includes parameters from both components,
and so the `fullname`
attribute is used to discriminate between the two components:
```
>>> [p.fullname for p in sim_model.pars]
['sim1.fwhm', 'sim1.pos', 'sim1.ampl', 'sim2.fwhm', 'sim2.pos', 'sim2.ampl']
```
Since the original models are still accessible, they can be used to change the parameters of the combined model. The following sets the first component (`sim1`) to be centered at `x = 0` and the second one at `x = 0.5`:
```
>>> s1.ampl = 1.0
>>> s1.pos = 0.0
>>> s1.fwhm = 0.5
>>> s2.ampl = 2.5
>>> s2.pos = 0.5
>>> s2.fwhm = 0.25
```
The model is evaluated on the grid, and “noise” added to it
(using a normal distribution centered on 0 with a standard deviation of 0.2):
```
>>> x = np.linspace(-1, 1, 200)
>>> y = sim_model(x) + np.random.normal(0., 0.2, x.shape)
```
These arrays are placed into a Sherpa data object, using the
[`Data1D`](index.html#sherpa.data.Data1D) class, since it will be fit below, and then a plot created to show the simulated data:
```
>>> d = data.Data1D('multiple', x, y)
>>> dplot = plot.DataPlot()
>>> dplot.prepare(d)
>>> dplot.plot()
```
##### What is the composite model?[¶](#what-is-the-composite-model)
The result of the combination is a
[`BinaryOpModel`](index.html#sherpa.models.model.BinaryOpModel), which has
`op`,
`lhs`,
and `rhs`
attributes which describe the structure of the combination:
```
>>> sim_model
<BinaryOpModel model instance '(sim1 + sim2)'>
>>> sim_model.op
<ufunc 'add'>
>>> sim_model.lhs
<Gauss1D model instance 'sim1'>
>>> sim_model.rhs
<Gauss1D model instance 'sim2'>
```
There is also a
`parts` attribute which contains all the elements of the model (in this case the combination of the `lhs` and `rhs` attributes):
```
>>> sim_model.parts
(<Gauss1D model instance 'sim1'>, <Gauss1D model instance 'sim2'>)
>>> for cpt in sim_model.parts:
...: print(cpt)
sim1
Param Type Value Min Max Units
--- --- --- --- --- ---
sim1.fwhm thawed 0.5 1.17549e-38 3.40282e+38
sim1.pos thawed 0 -3.40282e+38 3.40282e+38
sim1.ampl thawed 1 -3.40282e+38 3.40282e+38
sim2
Param Type Value Min Max Units
--- --- --- --- --- ---
sim2.fwhm thawed 0.25 1.17549e-38 3.40282e+38
sim2.pos thawed 0.5 -3.40282e+38 3.40282e+38
sim2.ampl thawed 2.5 -3.40282e+38 3.40282e+38
```
As the `BinaryOpModel` class is a subclass of the
[`ArithmeticModel`](index.html#sherpa.models.model.ArithmeticModel) class, the combined model can be treated as a single model instance; for instance it can be evaluated on a grid by passing in an array of values:
```
>>> sim_model([-1.0, 0, 1])
array([ 1.52587891e-05, 1.00003815e+00, 5.34057617e-05])
```
##### Setting up the model[¶](#setting-up-the-model)
Rather than use the model components used to simulate the data,
new instances are created and combined to create the model:
```
>>> g1 = models.Gauss1D('g1')
>>> g2 = models.Gauss1D('g2')
>>> mdl = g1 + g2
```
In this particular fit, the separation of the two models is going to be assumed to be known, so the two `pos` parameters can be [linked together](index.html#params-link), which means that there is one less free parameter in the fit:
```
>>> g2.pos = g1.pos + 0.5
```
The FWHM parameters are changed as the default value of 10 is not appropriate for this data (since the independent axis ranges from -1 to 1):
```
>>> g1.fwhm = 0.1
>>> g2.fwhm = 0.1
```
The display of the combined model shows that the `g2.pos`
parameter is now linked to the `g1.pos` value:
```
>>> print(mdl)
(g1 + g2)
Param Type Value Min Max Units
--- --- --- --- --- ---
g1.fwhm thawed 0.1 1.17549e-38 3.40282e+38
g1.pos thawed 0 -3.40282e+38 3.40282e+38
g1.ampl thawed 1 -3.40282e+38 3.40282e+38
g2.fwhm thawed 0.1 1.17549e-38 3.40282e+38
g2.pos linked 0.5 expr: (g1.pos + 0.5)
g2.ampl thawed 1 -3.40282e+38 3.40282e+38
```
Note
It is a good idea to check the parameter ranges - that is
[their minimum and maximum values](index.html#params-limits) - to make sure they are appropriate for the data.
The model is evaluated with its initial parameter values so that it can be compared to the best-fit location later:
```
>>> ystart = mdl(x)
```
##### Fitting the model[¶](#fitting-the-model)
The initial model can be added to the data plot either directly,
with matplotlib commands, or using the
[`ModelPlot`](index.html#sherpa.plot.ModelPlot) class to overlay onto the
[`DataPlot`](index.html#sherpa.plot.DataPlot) display:
```
>>> mplot = plot.ModelPlot()
>>> mplot.prepare(d, mdl)
>>> dplot.plot()
>>> mplot.plot(overplot=True)
```
As can be seen, the initial values for the gaussian positions are close to optimal. This is unlikely to happen in real-world situations!
As there are no errors for the data set, the least-square statistic
([`LeastSq`](index.html#sherpa.stats.LeastSq)) is used (so that the fit attempts to minimise the separation between the model and data with no weighting), along with the default optimiser:
```
>>> f = fit.Fit(d, mdl, stats.LeastSq())
>>> res = f.fit()
>>> res.succeeded
True
```
When displaying the results, the [`FitPlot`](index.html#sherpa.plot.FitPlot)
class is used since it combines both data and model plots (after updating the `mplot` object to include the new model parameter values):
```
>>> fplot = plot.FitPlot()
>>> mplot.prepare(d, mdl)
>>> fplot.prepare(dplot, mplot)
>>> fplot.plot()
>>> plt.plot(x, ystart, label='Start')
>>> plt.legend(loc=2)
```
As can be seen below, the position of the `g2` gaussian remains linked to that of `g1`:
```
>>> print(mdl)
(g1 + g2)
Param Type Value Min Max Units
--- --- --- --- --- ---
g1.fwhm thawed 0.515565 1.17549e-38 3.40282e+38
g1.pos thawed 0.00431538 -3.40282e+38 3.40282e+38
g1.ampl thawed 0.985078 -3.40282e+38 3.40282e+38
g2.fwhm thawed 0.250698 1.17549e-38 3.40282e+38
g2.pos linked 0.504315 expr: (g1.pos + 0.5)
g2.ampl thawed 2.48416 -3.40282e+38 3.40282e+38
```
##### Accessing the linked parameter[¶](#accessing-the-linked-parameter)
The `pars` attribute of a model instance provides access to the individual [`Parameter`](index.html#sherpa.models.parameter.Parameter) objects.
These can be used to query - as shown below - or change the model values:
```
>>> for p in mdl.pars:
...: if p.link is None:
...: print("{:10s} -> {:.3f}".format(p.fullname, p.val))
...: else:
...: print("{:10s} -> link to {}".format(p.fullname, p.link.name))
g1.fwhm    -> 0.516
g1.pos     -> 0.004
g1.ampl    -> 0.985
g2.fwhm    -> 0.251
g2.pos     -> link to (g1.pos + 0.5)
g2.ampl    -> 2.484
```
The linked parameter is actually an instance of the
[`CompositeParameter`](index.html#sherpa.models.parameter.CompositeParameter)
class, which allows parameters to be combined in a similar manner to models:
```
>>> g2.pos
<Parameter 'pos' of model 'g2'>
>>> print(g2.pos)
val         = 0.504315379302
min         = -3.40282346639e+38
max         = 3.40282346639e+38
units       =
frozen      = True
link        = (g1.pos + 0.5)
default_val = 0.504315379302
default_min = -3.40282346639e+38
default_max = 3.40282346639e+38
>>> g2.pos.link
<BinaryOpParameter '(g1.pos + 0.5)'>
>>> print(g2.pos.link)
val         = 0.504315379302
min         = -3.40282346639e+38
max         = 3.40282346639e+38
units       =
frozen      = False
link        = None
default_val = 0.504315379302
default_min = -3.40282346639e+38
default_max = 3.40282346639e+38
```
### Convolution[¶](#convolution)
A convolution model requires both the evaluation grid *and* the data to convolve. Examples include using a point-spread function
([PSF](index.html#term-psf)) model to modify a two-dimensional model to account for blurring due to the instrument (or other sources, such as the atmosphere for ground-based Astronomical data sets),
or the redistribution of the counts as a function of energy as modelled by the [RMF](index.html#term-rmf) when analyzing astronomical X-ray spectra.
#### Two-dimensional convolution with a PSF[¶](#two-dimensional-convolution-with-a-psf)
The [`sherpa.astro.instrument.PSFModel`](index.html#sherpa.astro.instrument.PSFModel) class augments the behavior of
[`sherpa.instrument.PSFModel`](index.html#sherpa.instrument.PSFModel) by supporting images with a World Coordinate System ([WCS](index.html#term-wcs)).
##### Including a PSF in a model expression[¶](#including-a-psf-in-a-model-expression)
There are two steps to including a PSF:
> 1. create an instance
> 2. apply the instance to the model components
The “kernel” of the PSF is the actual data used to represent the blurring, and can be given as a numeric array or as a Sherpa model.
In the following example a simple 3 by 3 array is used to represent the PSF, but it first has to be converted into a
[`Data`](index.html#sherpa.data.Data) object:
```
>>> from sherpa.data import Data2D
>>> from sherpa.instrument import PSFModel
>>> k = np.asarray([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
>>> yg, xg = np.mgrid[:3, :3]
>>> kernel = Data2D('kdata', xg.flatten(), yg.flatten(), k.flatten(),
...                 shape=k.shape)
>>> psf = PSFModel(kernel=kernel)
>>> print(psf)
psfmodel
Param Type Value Min Max Units
--- --- --- --- --- ---
psfmodel.kernel frozen kdata
psfmodel.radial frozen 0 0 1
psfmodel.norm frozen 1 0 1
```
As [shown below](#convolution-psf2d-normalize), the data in the PSF is renormalized so that its sum matches the `norm` parameter,
which here is set to 1.
The following example sets up a simple model expression which represents the sum of a single pixel and a line of pixels, using
[`Box2D`](index.html#sherpa.models.basic.Box2D) for both.
```
>>> from sherpa.models.basic import Box2D
>>> pt = Box2D('pt')
>>> pt.xlow, pt.xhi = 1.5, 2.5
>>> pt.ylow, pt.yhi = 2.5, 3.5
>>> pt.ampl = 10
>>> box = Box2D('box')
>>> box.xlow, box.xhi = 4, 10
>>> box.ylow, box.yhi = 6.5, 7.5
>>> box.ampl = 10
>>> unconvolved_mdl = pt + box
>>> print(unconvolved_mdl)
(pt + box)
Param Type Value Min Max Units
--- --- --- --- --- ---
pt.xlow thawed 1.5 -3.40282e+38 3.40282e+38
pt.xhi thawed 2.5 -3.40282e+38 3.40282e+38
pt.ylow thawed 2.5 -3.40282e+38 3.40282e+38
pt.yhi thawed 3.5 -3.40282e+38 3.40282e+38
pt.ampl thawed 10 -3.40282e+38 3.40282e+38
box.xlow thawed 4 -3.40282e+38 3.40282e+38
box.xhi thawed 10 -3.40282e+38 3.40282e+38
box.ylow thawed 6.5 -3.40282e+38 3.40282e+38
box.yhi thawed 7.5 -3.40282e+38 3.40282e+38
box.ampl thawed 10 -3.40282e+38 3.40282e+38
```
Note
Although Sherpa provides the [`Delta2D`](index.html#sherpa.models.basic.Delta2D)
class, it is suggested that alternatives such as
[`Box2D`](index.html#sherpa.models.basic.Box2D) be used instead, since a delta function is **very** sensitive to the location at which it is evaluated. However, including a `Box2D` component in a fit can still be problematic since the output of the model does not vary smoothly as any of the bin edges change, which is a challenge for the
[optimisers provided with Sherpa](index.html#document-optimisers/index).
Rather than being another term in the model expression - that is,
an item that is added, subtracted, multiplied, or divided into an existing expression - the PSF model “wraps” the model it is to convolve.
This can be a single model or - as in this case - a composite one:
```
>>> convolved_mdl = psf(unconvolved_mdl)
>>> print(convolved_mdl)
psfmodel((pt + box))
Param Type Value Min Max Units
--- --- --- --- --- ---
pt.xlow thawed 1.5 -3.40282e+38 3.40282e+38
pt.xhi thawed 2.5 -3.40282e+38 3.40282e+38
pt.ylow thawed 2.5 -3.40282e+38 3.40282e+38
pt.yhi thawed 3.5 -3.40282e+38 3.40282e+38
pt.ampl thawed 10 -3.40282e+38 3.40282e+38
box.xlow thawed 4 -3.40282e+38 3.40282e+38
box.xhi thawed 10 -3.40282e+38 3.40282e+38
box.ylow thawed 6.5 -3.40282e+38 3.40282e+38
box.yhi thawed 7.5 -3.40282e+38 3.40282e+38
box.ampl thawed 10 -3.40282e+38 3.40282e+38
```
This new expression can be treated as any other Sherpa model, which means that we can apply extra terms to it, such as adding a background component that is not affected by the PSF:
```
>>> from sherpa.models.basic import Const2D
>>> bgnd = Const2D('bgnd')
>>> bgnd.c0 = 0.25
>>> print(convolved_mdl + bgnd)
(psfmodel((pt + box)) + bgnd)
Param Type Value Min Max Units
--- --- --- --- --- ---
pt.xlow thawed 1.5 -3.40282e+38 3.40282e+38
pt.xhi thawed 2.5 -3.40282e+38 3.40282e+38
pt.ylow thawed 2.5 -3.40282e+38 3.40282e+38
pt.yhi thawed 3.5 -3.40282e+38 3.40282e+38
pt.ampl thawed 10 -3.40282e+38 3.40282e+38
box.xlow thawed 4 -3.40282e+38 3.40282e+38
box.xhi thawed 10 -3.40282e+38 3.40282e+38
box.ylow thawed 6.5 -3.40282e+38 3.40282e+38
box.yhi thawed 7.5 -3.40282e+38 3.40282e+38
box.ampl thawed 10 -3.40282e+38 3.40282e+38
bgnd.c0 thawed 0.25 -3.40282e+38 3.40282e+38
```
In the following this extra term (`bgnd`) is not included to simplify the comparison between the unconvolved and convolved versions.
##### Evaluating a model including a PSF[¶](#evaluating-a-model-including-a-psf)
The PSF-convolved model can be evaluated - in *most cases* - just as is done for ordinary models. That is by supplying it with the grid coordinates to use. However, the need to convolve the data with a fixed grid does limit this somewhat.
For this example, a grid covering the points 0 to 9 inclusive is used for each axis (with a unit pixel size), which means that the unconvolved model can be evaluated with the following:
```
>>> yg, xg = np.mgrid[:10, :10]
>>> xg1d, yg1d = xg.flatten(), yg.flatten()
>>> m1 = unconvolved_mdl(xg1d, yg1d).reshape(xg.shape)
```
An easier alternative, once the PSF is included, is to create an empty dataset with the given grid (that is, a dataset for which we do not care about the dependent axis), and use the
`eval_model()` method to evaluate the model (the result for `m1` is the same whichever approach is used):
```
>>> blank = Data2D('blank', xg1d, yg1d, np.ones(xg1d.shape), xg.shape)
>>> m1 = blank.eval_model(unconvolved_mdl).reshape(xg.shape)
```
The “point source” is located at `x = 2, y = 3` and the line starts at `x=5` and extends to the end of the grid (at `y=7`).
Note
In this example the image coordinates were chosen to be the same as those drawn by matplotlib. The `extent` parameter of the
`imshow` call can be used when this correspondence does not hold.
The PSF model includes a
[`fold()`](index.html#sherpa.instrument.PSFModel.fold) method, which is used to pre-calculate terms needed for the convolution (which is done using a fourier transform), and so needs the grid over which it is to be applied. This is done by passing in a Sherpa dataset, such as the
`blank` example we have just created:
```
>>> psf.fold(blank)
>>> m2 = blank.eval_model(convolved_mdl).reshape(xg.shape)
```
The kernel used redistributes flux from the central pixel to its four immediate neighbors equally, which is what has happened to the point source at `(2, 3)`. The result for the line is to blur the line slightly, but note that the convolution has “wrapped around”, so that the flux that should have been placed into the pixel at `(10, 7)`,
which is off the grid, has been moved to `(0, 7)`.
Note
If the fold method is not called then evaluating the model will raise the following exception:
```
PSFErr: PSF model has not been folded
```
Care must be taken to ensure that fold is called whenever the grid has changed. This suggests that the same PSF model should not be used in simultaneous fits, unless it is known that the grid is the same in the multiple datasets.
##### The PSF Normalization[¶](#the-psf-normalization)
Since the `norm` parameter of the PSF model was set to 1, the PSF convolution is flux preserving, even at the edges thanks to the wrap-around behavior of the fourier transform. This can be seen by comparing the signal in the unconvolved and convolved images, which are (to numerical precision) the same:
```
>>> m1.sum()
60.0
>>> m2.sum()
60.0
```
The use of a fourier transform means that low-level signal will be found in many pixels which would be expected to be 0. For example,
looking at the row of pixels at `y = 7` gives:
```
>>> m2[7]
array([2.50000000e+00, 1.73472348e-16, 5.20417043e-16, 4.33680869e-16,
2.50000000e+00, 2.50000000e+00, 5.00000000e+00, 5.00000000e+00,
5.00000000e+00, 2.50000000e+00])
```
### Evaluating the model on a different grid[¶](#evaluating-the-model-on-a-different-grid)
Sherpa now provides *experimental* support for evaluating a model on a different grid to the independent axis. This can be used to better model the underlying distribution (by use of a finer grid),
or to include features that lie outside the data range but, due to the use of a convolution model, can affect the values within the data range.
Existing Sherpa models take advantage of this support by inheriting from the
[`RegriddableModel1D`](index.html#sherpa.models.model.RegriddableModel1D)
or
[`RegriddableModel2D`](index.html#sherpa.models.model.RegriddableModel2D) classes.
At present the only documentation is provided in the
[low-level API module](index.html#document-model_classes/regrid).
### Simulating data[¶](#simulating-data)
Simulating a data set normally involves:
> 1. evaluate the model
> 2. add in noise
This may need to be repeated several times for complex models, such as when different components have different noise models or the noise needs to be added before evaluation by a component.
The model evaluation would be performed using the techniques described in this section, and then the noise term can be handled with [`sherpa.utils.poisson_noise()`](index.html#sherpa.utils.poisson_noise) or routines from NumPy or SciPy to evaluate noise, such as `numpy.random.standard_normal`.
```
>>> import numpy as np
>>> from sherpa.models.basic import Polynom1D
>>> np.random.seed(235)
>>> x = np.arange(10, 100, 12)
>>> mdl = Polynom1D('mdl')
>>> mdl.offset = 35
>>> mdl.c1 = 0.5
>>> mdl.c2 = 0.12
>>> ymdl = mdl(x)
>>> from sherpa.utils import poisson_noise
>>> ypoisson = poisson_noise(ymdl)
>>> from numpy.random import standard_normal, normal
>>> yconst = ymdl + standard_normal(ymdl.shape) * 10
>>> ydata = ymdl + normal(scale=np.sqrt(ymdl))
```
### Caching model evaluations[¶](#caching-model-evaluations)
Sherpa contains a rudimentary system for caching the results of a model evaluation. It is related to the
`modelCacher1d()`
function decorator.
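Many of Sherpa's one-dimensional models apply this decorator to their `calc()` method, so repeated evaluations on the same grid with the same parameter values can avoid recomputation. A minimal sketch (the names are illustrative; whether the cache is actually used depends on the model and the Sherpa version):
```
>>> from sherpa.models.basic import Gauss1D
>>> gcached = Gauss1D()
>>> xc = np.linspace(0, 10, 1001)
>>> y_first = gcached(xc)   # evaluates the model and can fill the cache
>>> y_again = gcached(xc)   # same grid and parameters, so the cached result may be reused
>>> bool(np.all(y_first == y_again))
True
```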
### Examples[¶](#examples)
The following examples show the different ways that a model can be evaluated, for a range of situations. The
[direct method](#model-evaluate-example-oned-direct) is often sufficient, but for more complex cases it can be useful to
[ask a data object to evaluate the model](#model-evaluate-example-twod-via-data), particularly if you want to include instrumental responses,
[such as a RMF and ARF](#model-evaluate-example-pha-via-data).
#### Evaluating a one-dimensional model directly[¶](#evaluating-a-one-dimensional-model-directly)
In the following example a one-dimensional gaussian is evaluated on a grid of 5 points by
[using the model object directly](index.html#evaluation-direct).
The first approach just calls the model with the evaluation grid (here the array `x`),
which uses the parameter values as defined in the model itself:
```
>>> from sherpa.models.basic import Gauss1D
>>> gmdl = Gauss1D()
>>> gmdl.fwhm = 100
>>> gmdl.pos = 5050
>>> gmdl.ampl = 50
>>> x = [4800, 4900, 5000, 5100, 5200]
>>> y1 = gmdl(x)
```
The second uses the [`calc()`](index.html#sherpa.models.model.Model.calc)
method, where the parameter values must be specified in the call along with the grid on which to evaluate the model.
The order matches that of the parameters in the model, which can be found from the
[`pars`](index.html#sherpa.models.model.Model.pars) attribute of the model:
```
>>> [p.name for p in gmdl.pars]
['fwhm', 'pos', 'ampl']
>>> y2 = gmdl.calc([100, 5050, 100], x)
>>> y2 / y1
array([ 2., 2., 2., 2., 2.])
```
Since in this case the amplitude (the last parameter value) is twice that used to create `y1` the ratio is 2 for each bin.
#### Evaluating a 2D model to match a Data2D object[¶](#evaluating-a-2d-model-to-match-a-data2d-object)
In the following example the model is evaluated on a grid specified by a dataset, in this case a set of two-dimensional points stored in a [`Data2D`](index.html#sherpa.data.Data2D) object.
First the data is set up (there are only four points in this example to make things easy to follow).
```
>>> from sherpa.data import Data2D
>>> x0 = [1.0, 1.9, 2.4, 1.2]
>>> x1 = [-5.0, -7.0, 2.3, 1.2]
>>> y = [12.1, 3.4, 4.8, 5.2]
>>> twod = Data2D('data', x0, x1, y)
```
For demonstration purposes, the [`Box2D`](index.html#sherpa.models.basic.Box2D)
model is used, which represents a rectangle (any points within the
[`xlow`](index.html#sherpa.models.basic.Box2D.xlow)
to
[`xhi`](index.html#sherpa.models.basic.Box2D.xhi)
and
[`ylow`](index.html#sherpa.models.basic.Box2D.ylow)
to
[`yhi`](index.html#sherpa.models.basic.Box2D.yhi)
limits are set to the
[`ampl`](index.html#sherpa.models.basic.Box2D.ampl)
value, those outside are zero).
```
>>> from sherpa.models.basic import Box2D
>>> mdl = Box2D('mdl')
>>> mdl.xlow = 1.5
>>> mdl.xhi = 2.5
>>> mdl.ylow = -9.0
>>> mdl.yhi = 5.0
>>> mdl.ampl = 10.0
```
The limits have been set so that some of the points lie within the “box”, and so are set to the amplitude value when the model is evaluated.
```
>>> twod.eval_model(mdl)
array([ 0., 10., 10., 0.])
```
The [`eval_model()`](index.html#sherpa.data.Data.eval_model) method evaluates the model on the grid defined by the data set, so it is the same as calling the model directly with these values:
```
>>> twod.eval_model(mdl) == mdl(x0, x1)
array([ True, True, True, True], dtype=bool)
```
The [`eval_model_to_fit()`](index.html#sherpa.data.Data.eval_model_to_fit) method will apply any filter associated with the data before evaluating the model. At this time there is no filter so it returns the same as above.
```
>>> twod.eval_model_to_fit(mdl)
array([ 0., 10., 10., 0.])
```
Adding a simple spatial filter - that excludes one of the points within the box - with
[`ignore()`](index.html#sherpa.data.Data2D.ignore) now results in a difference in the outputs of
[`eval_model()`](index.html#sherpa.data.Data.eval_model)
and
[`eval_model_to_fit()`](index.html#sherpa.data.Data.eval_model_to_fit),
as shown below. The call to
[`get_indep()`](index.html#sherpa.data.Data.get_indep)
is used to show the grid used by
[`eval_model_to_fit()`](index.html#sherpa.data.Data.eval_model_to_fit).
```
>>> twod.ignore(x0lo=2, x0hi=3, x1lo=0, x1hi=10)
>>> twod.eval_model(mdl)
array([ 0., 10., 10., 0.])
>>> twod.get_indep(filter=True)
(array([ 1. , 1.9, 1.2]), array([-5. , -7. , 1.2]))
>>> twod.eval_model_to_fit(mdl)
array([ 0., 10., 0.])
```
#### Evaluating a model using a DataPHA object[¶](#evaluating-a-model-using-a-datapha-object)
This example is similar to the
[two-dimensional case above](#model-evaluate-example-twod-via-data),
in that it again shows the differences between the
[`eval_model()`](index.html#sherpa.astro.data.DataPHA.eval_model)
and
[`eval_model_to_fit()`](index.html#sherpa.astro.data.DataPHA.eval_model_to_fit)
methods. The added complication in this case is that the response information provided with a PHA file is used to convert between the “native” axis of the PHA file (channels) and that of the model (energy or wavelength). This conversion is handled automatically by the two methods (the
[following example](#model-evaluate-example-pha-directly)
shows how this can be done manually).
To start with, the data is loaded from a file, which also loads in the associated [ARF](index.html#term-arf) and [RMF](index.html#term-rmf) files:
```
>>> from sherpa.astro.io import read_pha
>>> pha = read_pha('3c273.pi')
WARNING: systematic errors were not found in file '3c273.pi'
statistical errors were found in file '3c273.pi'
but not used; to use them, re-read with use_errors=True
read ARF file 3c273.arf
read RMF file 3c273.rmf
WARNING: systematic errors were not found in file '3c273_bg.pi'
statistical errors were found in file '3c273_bg.pi'
but not used; to use them, re-read with use_errors=True
read background file 3c273_bg.pi
>>> pha
<DataPHA data set instance '3c273.pi'>
>>> pha.get_arf()
<DataARF data set instance '3c273.arf'>
>>> pha.get_rmf()
<DataRMF data set instance '3c273.rmf'>
```
The returned object - here `pha` - is an instance of the
[`sherpa.astro.data.DataPHA`](index.html#sherpa.astro.data.DataPHA) class - which has a number of attributes and methods specialized to handling PHA data.
This particular file has grouping information in it - that is, it contains
`GROUPING` and `QUALITY` columns - so Sherpa applies them: the number of bins over which the data is analysed is smaller than the number of channels in the file because each bin can consist of multiple channels. For this file,
there are 46 bins after grouping (the `filter` argument to the
[`get_dep()`](index.html#sherpa.astro.data.DataPHA.get_dep) call applies both filtering and grouping steps, but so far no filter has been applied):
```
>>> pha.channel.size
1024
>>> pha.get_dep().size
1024
>>> pha.grouped
True
>>> pha.get_dep(filter=True).size
46
```
A filter - in this case to restrict to only bins that cover the energy range 0.5 to 7.0 keV - is applied with the
[`notice()`](index.html#sherpa.astro.data.DataPHA.notice) call, which removes four bins for this particular data set:
```
>>> pha.set_analysis('energy')
>>> pha.notice(0.5, 7.0)
>>> pha.get_dep(filter=True).size
42
```
A power-law model ([`PowLaw1D`](index.html#sherpa.models.basic.PowLaw1D)) is created and evaluated by the data object:
```
>>> from sherpa.models.basic import PowLaw1D
>>> mdl = PowLaw1D()
>>> y1 = pha.eval_model(mdl)
>>> y2 = pha.eval_model_to_fit(mdl)
>>> y1.size
1024
>>> y2.size
42
```
The [`eval_model()`](index.html#sherpa.astro.data.DataPHA.eval_model) call evaluates the model over the full dataset and *does not*
apply any grouping, so it returns a vector with 1024 elements.
In contrast, [`eval_model_to_fit()`](index.html#sherpa.astro.data.DataPHA.eval_model_to_fit)
applies *both* filtering and grouping, and returns a vector that matches the data (i.e. it has 42 elements).
The filtering and grouping information is *dynamic*, in that it can be changed without having to re-load the data set. The
[`ungroup()`](index.html#sherpa.astro.data.DataPHA.ungroup) call removes the grouping, but leaves the 0.5 to 7.0 keV energy filter:
```
>>> pha.ungroup()
>>> y3 = pha.eval_model_to_fit(mdl)
>>> y3.size
644
```
#### Evaluating a model using PHA responses[¶](#evaluating-a-model-using-pha-responses)
The [`sherpa.astro.data.DataPHA`](index.html#sherpa.astro.data.DataPHA) class handles the response information automatically, but it is possible to directly apply the response information to a model using the [`sherpa.astro.instrument`](index.html#module-sherpa.astro.instrument) module. In the following example the
[`RSPModelNoPHA`](index.html#sherpa.astro.instrument.RSPModelNoPHA)
and
[`RSPModelPHA`](index.html#sherpa.astro.instrument.RSPModelPHA)
classes are used to wrap a power-law model
([`PowLaw1D`](index.html#sherpa.models.basic.PowLaw1D))
so that the instrument responses - the [ARF](index.html#term-arf) and [RMF](index.html#term-rmf) -
are included in the model evaluation.
```
>>> from sherpa.astro.io import read_arf, read_rmf
>>> arf = read_arf('3c273.arf')
>>> rmf = read_rmf('3c273.rmf')
>>> rmf.detchans
1024
```
The number of channels in the RMF - that is, the number of bins over which the RMF is defined - is 1024.
```
>>> from sherpa.models.basic import PowLaw1D
>>> mdl = PowLaw1D()
```
The [`RSPModelNoPHA`](index.html#sherpa.astro.instrument.RSPModelNoPHA) class models the inclusion of both the ARF and RMF:
```
>>> from sherpa.astro.instrument import RSPModelNoPHA
>>> inst = RSPModelNoPHA(arf, rmf, mdl)
>>> inst
<RSPModelNoPHA model instance 'apply_rmf(apply_arf(powlaw1d))'>
>>> print(inst)
apply_rmf(apply_arf(powlaw1d))
Param Type Value Min Max Units
--- --- --- --- --- ---
powlaw1d.gamma thawed 1 -10 10
powlaw1d.ref frozen 1 -3.40282e+38 3.40282e+38
powlaw1d.ampl thawed 1 0 3.40282e+38
```
Note
The RMF and ARF are represented as models that “enclose” the spectrum - that is, they are written `apply_rmf(model)` and
`apply_arf(model)` rather than `rmf * model` - since they may perform a convolution or rebinning (ARF) of the model output.
The return value (`inst`) behaves as a normal Sherpa model, for example:
```
>>> from sherpa.models.model import ArithmeticModel
>>> isinstance(inst, ArithmeticModel)
True
>>> inst.pars
(<Parameter 'gamma' of model 'powlaw1d'>,
<Parameter 'ref' of model 'powlaw1d'>,
<Parameter 'ampl' of model 'powlaw1d'>)
```
The model can therefore be evaluated by calling it with a grid (as used in the [first example above](#model-evaluate-example-oned-direct)), except that the input grid is ignored and the “native” grid of the response information is used. In this case, no matter the size of the one-dimensional array passed to `inst`, the output has 1024 elements (matching the number of channels in the RMF):
```
>>> inst(np.arange(1, 1025))
array([ 0., 0., 0., ..., 0., 0., 0.])
>>> inst([0.1, 0.2, 0.3])
array([ 0., 0., 0., ..., 0., 0., 0.])
>>> inst([0.1, 0.2, 0.3]).size
1024
>>> inst([10, 20]) == inst([])
array([ True, True, True, ..., True, True, True], dtype=bool)
```
The output of this call represents the number of counts expected in each bin:
```
>>> chans = np.arange(rmf.offset, rmf.offset + rmf.detchans)
>>> ydet = inst(chans)
>>> plt.plot(chans, ydet)
>>> plt.xlabel('Channel')
>>> plt.ylabel('Count / s')
```
Note
The interpretation of the model output as being in units of “counts”
(or a rate)
depends on the normalisation (or amplitude) of the model components,
and whether any term representing the exposure time has been included.
XSPEC additive models - such as [`XSapec`](index.html#sherpa.astro.xspec.XSapec) -
return values that have units of photon/cm^2/s (that is, the spectrum is integrated across each bin), which when passed through the ARF and RMF results in count/s (the ARF has units of cm^2 and the RMF can be thought of as converting photons to counts).
The Sherpa models, such as [`PowLaw1D`](index.html#sherpa.models.basic.PowLaw1D),
do not in general have units (so that the models can be applied to different data sets). This means that the interpretation of the normalization or amplitude term depends on how the model is being used.
The data in the `EBOUNDS` extension of the RMF - which provides an **approximate** mapping from channel to energy for visualization purposes only - is available as the
`e_min`
and
`e_max`
attributes of the
[`DataRMF`](index.html#sherpa.astro.data.DataRMF) object returned by
[`read_rmf()`](index.html#sherpa.astro.io.read_rmf).
The ARF object may contain an exposure time, in its
`exposure`
attribute:
```
>>> print(rmf)
name     = 3c273.rmf
detchans = 1024
energ_lo = Float64[1090]
energ_hi = Float64[1090]
n_grp    = UInt64[1090]
f_chan   = UInt64[2002]
n_chan   = UInt64[2002]
matrix   = Float64[61834]
offset   = 1
e_min    = Float64[1024]
e_max    = Float64[1024]
ethresh  = 1e-10
>>> print(arf)
name     = 3c273.arf
energ_lo = Float64[1090]
energ_hi = Float64[1090]
specresp = Float64[1090]
bin_lo   = None
bin_hi   = None
exposure = 38564.141454905
ethresh  = 1e-10
```
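These values can also be read directly from the objects (the numbers follow from the output above):
```
>>> arf.exposure
38564.141454905
>>> rmf.e_min.size, rmf.e_max.size
(1024, 1024)
```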
These can be used to create a plot of energy versus counts per energy bin:
```
>>> # intersperse the low and high edges of each bin
>>> x = np.vstack((rmf.e_min, rmf.e_max)).T.flatten()
>>> # normalize each bin by its width and include the exposure time
>>> y = arf.exposure * ydet / (rmf.e_max - rmf.e_min)
>>> # Repeat for the low and high edges of each bin
>>> y = y.repeat(2)
>>> plt.plot(x, y, '-')
>>> plt.yscale('log')
>>> plt.ylim(1e3, 1e7)
>>> plt.xlim(0, 10)
>>> plt.xlabel('Energy (keV)')
>>> plt.ylabel('Count / keV')
```
Note
The bin widths are small enough that it is hard to make out each bin on this plot.
The
[`RSPModelPHA`](index.html#sherpa.astro.instrument.RSPModelPHA)
class adds in a
[`DataPHA`](index.html#sherpa.astro.data.DataPHA) object, which lets the evaluation grid be determined by any filter applied to the data object. In the following, the
[`read_pha()`](index.html#sherpa.astro.io.read_pha) call reads in a PHA file, along with its associated ARF and RMF (because the
`ANCRFILE` and `RESPFILE` keywords are set in the header of the PHA file), which means that there is no need to call
[`read_arf()`](index.html#sherpa.astro.io.read_arf)
and
[`read_rmf()`](index.html#sherpa.astro.io.read_rmf)
to create the `RSPModelPHA` instance.
```
>>> from sherpa.astro.io import read_pha
>>> from sherpa.astro.instrument import RSPModelPHA
>>> pha2 = read_pha('3c273.pi')
WARNING: systematic errors were not found in file '3c273.pi'
statistical errors were found in file '3c273.pi'
but not used; to use them, re-read with use_errors=True
read ARF file 3c273.arf
read RMF file 3c273.rmf
WARNING: systematic errors were not found in file '3c273_bg.pi'
statistical errors were found in file '3c273_bg.pi'
but not used; to use them, re-read with use_errors=True
read background file 3c273_bg.pi
>>> arf2 = pha2.get_arf()
>>> rmf2 = pha2.get_rmf()
>>> mdl2 = PowLaw1D('mdl2')
>>> inst2 = RSPModelPHA(arf2, rmf2, pha2, mdl2)
>>> print(inst2)
apply_rmf(apply_arf(mdl2))
Param Type Value Min Max Units
--- --- --- --- --- ---
mdl2.gamma thawed 1 -10 10
mdl2.ref frozen 1 -3.40282e+38 3.40282e+38
mdl2.ampl thawed 1 0 3.40282e+38
```
The model again is evaluated on the channel grid defined by the RMF:
```
>>> inst2([]).size
1024
```
The [`DataPHA`](index.html#sherpa.astro.data.DataPHA) object can be adjusted to select a subset of data. The default is to use the full channel range:
```
>>> pha2.set_analysis('energy')
>>> pha2.get_filter()
'0.124829999695:12.410000324249'
>>> pha2.get_filter_expr()
'0.1248-12.4100 Energy (keV)'
```
This can be changed with the
[`notice()`](index.html#sherpa.astro.data.DataPHA.notice)
and
[`ignore()`](index.html#sherpa.astro.data.DataPHA.ignore)
methods:
```
>>> pha2.notice(0.5, 7.0)
>>> pha2.get_filter()
'0.518300011754:8.219800233841'
>>> pha2.get_filter_expr()
'0.5183-8.2198 Energy (keV)'
```
Note
Since the channels have a finite width, the method of filtering
(in other words, is it `notice` or `ignore`)
determines whether a channel that includes a boundary (in this case 0.5 and 7.0 keV) is included or excluded from the final range. The dataset used in this example includes grouping information, which is automatically applied;
this is why the upper limit of the included range is at 8 rather than 7 keV:
```
>>> pha2.grouped
True
```
Ignore a range within the previous range to make the plot more interesting.
```
>>> pha2.ignore(2.0, 3.0)
>>> pha2.get_filter_expr()
'0.5183-1.9199,3.2339-8.2198 Energy (keV)'
```
When evaluated, the model covers the whole 1-1024 channel range, but it can take advantage of the filter if the evaluation happens within a pair of calls to
[`startup()`](index.html#sherpa.models.model.Model.startup)
and
[`teardown()`](index.html#sherpa.models.model.Model.teardown)
(this is performed automatically by certain routines, such as within a fit):
```
>>> y1 = inst2([])
>>> inst2.startup()
>>> y2 = inst2([])
>>> inst2.teardown()
>>> y1.size, y2.size
(1024, 1024)
>>> np.all(y1 == y2)
False
```
```
>>> plt.plot(pha2.channel, y1, label='all')
>>> plt.plot(pha2.channel, y2, label='filtered')
>>> plt.xscale('log')
>>> plt.yscale('log')
>>> plt.ylim(0.001, 1)
>>> plt.xlim(5, 1000)
>>> plt.legend(loc='center')
```
Why is the exposure time not being included?
#### Or maybe this?[¶](#or-maybe-this)
This could come first, although a separate section on how to use astro.instruments may be needed (since this is getting quite long now).
```
>>> from sherpa.astro.io import read_pha
>>> from sherpa.models.basic import PowLaw1D
>>> pha = read_pha('3c273.pi')
>>> pl = PowLaw1D()
```
```
>>> from sherpa.astro.instrument import Response1D, RSPModelPHA
>>> rsp = Response1D(pha)
>>> mdl = rsp(pl)
>>> isinstance(mdl, RSPModelPHA)
True
>>> print(mdl)
apply_rmf(apply_arf((38564.608926889 * powlaw1d)))
Param Type Value Min Max Units
--- --- --- --- --- ---
powlaw1d.gamma thawed 1 -10 10
powlaw1d.ref frozen 1 -3.40282e+38 3.40282e+38
powlaw1d.ampl thawed 1 0 3.40282e+38
```
Note that the exposure time - taken from the PHA or the ARF - is included so that the normalization is correct.
### Direct evaluation of the model[¶](#direct-evaluation-of-the-model)
Normally Sherpa will handle model evaluation automatically, such as during a fit or when displaying the model results. However, the models can be evaluated directly by passing in the grid
([the independent axis](index.html#independent-axis))
directly. If `mdl` is an instance of a Sherpa model - that is, it is derived from the
[`Model`](index.html#sherpa.models.model.Model)
class - then there are two standard ways to perform this evaluation:
1. Call the model with the grid directly - e.g. for a one-dimensional grid use one of:
```
mdl(x)
mdl(xlo, xhi)
```
2. Use the [`calc()`](index.html#sherpa.models.model.Model.calc) method, which requires a sequence of parameter values and then the grid; for the one-dimensional case this would be:
```
mdl.calc(pars, x)
mdl.calc(pars, xlo, xhi)
```
In this case the parameter values do *not* need to match the values stored in the model itself. This can be useful when a model is to be embedded within another one, as shown in the
[two-dimensional user model](index.html#example-usermodel-2d)
example.
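As a small illustrative sketch of the difference between the two approaches (using the `Const1D` model purely as an example):
```
>>> from sherpa.models.basic import Const1D
>>> mdl = Const1D()
>>> mdl.c0 = 2
>>> mdl([1, 2, 3])
array([ 2., 2., 2.])
>>> mdl.calc([10], [1, 2, 3])
array([ 10., 10., 10.])
```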
### Evaluating a model with a data object[¶](#evaluating-a-model-with-a-data-object)
It is also possible to pass a model to a [data object](index.html#document-data/index)
and evaluate the model on a grid appropriate for the data,
using the
[`eval_model()`](index.html#sherpa.data.Data.eval_model) and
[`eval_model_to_fit()`](index.html#sherpa.data.Data.eval_model_to_fit) methods.
This can be useful when working in an environment where the mapping between the “native” grids used to represent data and models is not a simple one-to-one relation, such as when analyzing astronomical X-ray spectral data with an associated response
(i.e. a [RMF](index.html#term-rmf) file), or
[when using a PSF](index.html#convolution-psf2d-evaluate).
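A minimal sketch for a one-dimensional dataset (the data values and model settings here are arbitrary):
```
>>> from sherpa.data import Data1D
>>> from sherpa.models.basic import Polynom1D
>>> d = Data1D('example', [1, 2, 3], [3.1, 5.2, 6.8])
>>> mdl = Polynom1D()
>>> mdl.c0 = 1
>>> mdl.c1 = 2
>>> d.eval_model(mdl)
array([ 3., 5., 7.])
```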
Available Models[¶](#available-models)
---
Note
The models in [`sherpa.astro.xspec`](index.html#module-sherpa.astro.xspec) are only available if Sherpa was built with support for the
[XSPEC model library](https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XSappendixExternal.html).
This section describes the classes that implement models used to describe and fit data, while the
[Reference/API](#models-reference-api)
section below describes the classes used to create these models.
### Writing your own model[¶](#writing-your-own-model)
A model class can be created to fit any function, or interface with external code.
#### A one-dimensional model[¶](#a-one-dimensional-model)
An example is the
[AstroPy trapezoidal model](http://docs.astropy.org/en/stable/api/astropy.modeling.functional_models.Trapezoid1D.html),
which has four parameters: the amplitude of the central region, the center and width of this region, and the slope. The following model class,
which was not written for efficiency or robustness, implements this interface:
```
import numpy as np
from sherpa.models import model
__all__ = ('Trap1D', )
def _trap1d(pars, x):
"""Evaluate the Trapezoid.
Parameters
----------
pars: sequence of 4 numbers
The order is amplitude, center, width, and slope.
These numbers are assumed to be valid (e.g. width
is 0 or greater).
x: sequence of numbers
The grid on which to evaluate the model. It is expected
to be a floating-point type.
Returns
-------
y: sequence of numbers
The model evaluated on the input grid.
Notes
-----
This is based on the interface described at
http://docs.astropy.org/en/stable/api/astropy.modeling.functional_models.Trapezoid1D.html
but implemented without looking at the code, so any errors
are not due to AstroPy.
"""
(amplitude, center, width, slope) = pars
# There are five segments:
# xlo = center - width/2
# xhi = center + width/2
# x0 = xlo - amplitude/slope
# x1 = xhi + amplitude/slope
#
# flat xlo <= x < xhi
# slope x0 <= x < xlo
# xhi <= x < x1
# zero x < x0
# x >= x1
#
hwidth = width / 2.0
dx = amplitude / slope
xlo = center - hwidth
xhi = center + hwidth
x0 = xlo - dx
x1 = xhi + dx
out = np.zeros(x.size)
out[(x >= xlo) & (x < xhi)] = amplitude
idx = np.where((x >= x0) & (x < xlo))
out[idx] = slope * x[idx] - slope * x0
idx = np.where((x >= xhi) & (x < x1))
out[idx] = - slope * x[idx] + slope * x1
return out
class Trap1D(model.RegriddableModel1D):
"""A one-dimensional trapezoid.
The model parameters are:
ampl
The amplitude of the central (flat) segment (zero or greater).
center
The center of the central segment.
width
The width of the central segment (zero or greater).
slope
The gradient of the slopes (zero or greater).
"""
def __init__(self, name='trap1d'):
self.ampl = model.Parameter(name, 'ampl', 1, min=0, hard_min=0)
self.center = model.Parameter(name, 'center', 1)
self.width = model.Parameter(name, 'width', 1, min=0, hard_min=0)
self.slope = model.Parameter(name, 'slope', 1, min=0, hard_min=0)
model.RegriddableModel1D.__init__(self, name,
(self.ampl, self.center, self.width,
self.slope))
def calc(self, pars, x, *args, **kwargs):
"""Evaluate the model"""
# If given an integrated data set, use the center of the bin
if len(args) == 1:
x = (x + args[0]) / 2
return _trap1d(pars, x)
```
This can be used in the same manner as the
[`Gauss1D`](index.html#sherpa.models.basic.Gauss1D) model in the [quick guide to Sherpa](index.html#quick-gauss1d).
First, create the data to fit:
```
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> np.random.seed(0)
>>> x = np.linspace(-5., 5., 200)
>>> ampl_true = 3
>>> pos_true = 1.3
>>> sigma_true = 0.8
>>> err_true = 0.2
>>> y = ampl_true * np.exp(-0.5 * (x - pos_true)**2 / sigma_true**2)
>>> y += np.random.normal(0., err_true, x.shape)
```
Now create a Sherpa data object:
```
>>> from sherpa.data import Data1D
>>> d = Data1D('example', x, y)
```
Set up the user model:
```
>>> from trap import Trap1D
>>> t = Trap1D()
>>> print(t)
trap1d
Param Type Value Min Max Units
--- --- --- --- --- ---
trap1d.ampl thawed 1 0 3.40282e+38
trap1d.center thawed 1 -3.40282e+38 3.40282e+38
trap1d.width thawed 1 0 3.40282e+38
trap1d.slope thawed 1 0 3.40282e+38
```
Finally, perform the fit:
```
>>> from sherpa.fit import Fit
>>> from sherpa.stats import LeastSq
>>> from sherpa.optmethods import LevMar
>>> tfit = Fit(d, t, stat=LeastSq(), method=LevMar())
>>> tres = tfit.fit()
>>> if not tres.succeeded: print(tres.message)
```
Rather than use a [`ModelPlot`](index.html#sherpa.plot.ModelPlot) object,
the `overplot` argument can be set to allow multiple values in the same plot:
```
>>> from sherpa import plot
>>> dplot = plot.DataPlot()
>>> dplot.prepare(d)
>>> dplot.plot()
>>> mplot = plot.ModelPlot()
>>> mplot.prepare(d, t)
>>> mplot.plot(overplot=True)
```
#### A two-dimensional model[¶](#a-two-dimensional-model)
The two-dimensional case is similar to the one-dimensional case,
with the major difference being the number of independent axes to deal with. In the following example the model is assumed to only be applied to non-integrated data sets, as it simplifies the implementation of the `calc` method.
It also shows one way of embedding models from a different system,
in this case the
[two-dimensional polynomial model](http://docs.astropy.org/en/stable/api/astropy.modeling.polynomial.Polynomial2D.html)
from the AstroPy package.
```
from sherpa.models import model
from astropy.modeling.polynomial import Polynomial2D
__all__ = ('WrapPoly2D', )
class WrapPoly2D(model.RegriddableModel2D):
"""A two-dimensional polynomial from AstroPy, restricted to degree=2.
The model parameters (with the same meaning as the underlying
AstroPy model) are:
c0_0
c1_0
c2_0
c0_1
c0_2
c1_1
"""
def __init__(self, name='wrappoly2d'):
self._actual = Polynomial2D(degree=2)
self.c0_0 = model.Parameter(name, 'c0_0', 0)
self.c1_0 = model.Parameter(name, 'c1_0', 0)
self.c2_0 = model.Parameter(name, 'c2_0', 0)
self.c0_1 = model.Parameter(name, 'c0_1', 0)
self.c0_2 = model.Parameter(name, 'c0_2', 0)
self.c1_1 = model.Parameter(name, 'c1_1', 0)
model.RegriddableModel2D.__init__(self, name,
(self.c0_0, self.c1_0, self.c2_0,
self.c0_1, self.c0_2, self.c1_1))
def calc(self, pars, x0, x1, *args, **kwargs):
"""Evaluate the model"""
# This does not support 2D integrated data sets
mdl = self._actual
for n in ['c0_0', 'c1_0', 'c2_0', 'c0_1', 'c0_2', 'c1_1']:
pval = getattr(self, n).val
getattr(mdl, n).value = pval
return mdl(x0, x1)
```
Repeating the 2D fit by first setting up the data to fit:
```
>>> np.random.seed(0)
>>> y2, x2 = np.mgrid[:128, :128]
>>> z = 2. * x2 ** 2 - 0.5 * y2 ** 2 + 1.5 * x2 * y2 - 1.
>>> z += np.random.normal(0., 0.1, z.shape) * 50000.
```
Put this data into a Sherpa data object:
```
>>> from sherpa.data import Data2D
>>> x0axis = x2.ravel()
>>> x1axis = y2.ravel()
>>> d2 = Data2D('img', x0axis, x1axis, z.ravel(), shape=(128,128))
```
Create an instance of the user model:
```
>>> from poly import WrapPoly2D
>>> wp2 = WrapPoly2D('wp2')
>>> wp2.c1_0.frozen = True
>>> wp2.c0_1.frozen = True
```
Finally, perform the fit:
```
>>> f2 = Fit(d2, wp2, stat=LeastSq(), method=LevMar())
>>> res2 = f2.fit()
>>> if not res2.succeeded: print(res2.message)
>>> print(res2)
datasets       = None
itermethodname = none
methodname     = levmar
statname       = leastsq
succeeded      = True
parnames       = ('wp2.c0_0', 'wp2.c2_0', 'wp2.c0_2', 'wp2.c1_1')
parvals        = (-80.289475553599914, 1.9894112623565667, -0.4817452191363118, 1.5022711710873158)
statval        = 400658883390.6685
istatval       = 6571934382318.328
dstatval       = 6.17127549893e+12
numpoints      = 16384
dof            = 16380
qval           = None
rstat          = None
message        = successful termination
nfev           = 80
>>> print(wp2)
wp2
Param Type Value Min Max Units
--- --- --- --- --- ---
wp2.c0_0 thawed -80.2895 -3.40282e+38 3.40282e+38
wp2.c1_0 frozen 0 -3.40282e+38 3.40282e+38
wp2.c2_0 thawed 1.98941 -3.40282e+38 3.40282e+38
wp2.c0_1 frozen 0 -3.40282e+38 3.40282e+38
wp2.c0_2 thawed -0.481745 -3.40282e+38 3.40282e+38
wp2.c1_1 thawed 1.50227 -3.40282e+38 3.40282e+38
```
### The sherpa.models.basic module[¶](#module-sherpa.models.basic)
Classes
| [`Box1D`](index.html#sherpa.models.basic.Box1D)([name]) | One-dimensional box function. |
| [`Const1D`](index.html#sherpa.models.basic.Const1D)([name]) | A constant model for one-dimensional data. |
| [`Cos`](index.html#sherpa.models.basic.Cos)([name]) | One-dimensional cosine function. |
| [`Delta1D`](index.html#sherpa.models.basic.Delta1D)([name]) | One-dimensional delta function. |
| [`Erf`](index.html#sherpa.models.basic.Erf)([name]) | One-dimensional error function. |
| [`Erfc`](index.html#sherpa.models.basic.Erfc)([name]) | One-dimensional complementary error function. |
| [`Exp`](index.html#sherpa.models.basic.Exp)([name]) | One-dimensional exponential function. |
| [`Exp10`](index.html#sherpa.models.basic.Exp10)([name]) | One-dimensional exponential function, base 10. |
| [`Gauss1D`](index.html#sherpa.models.basic.Gauss1D)([name]) | One-dimensional gaussian function. |
| [`Integrate1D`](index.html#sherpa.models.basic.Integrate1D)([name]) | |
| [`Log`](index.html#sherpa.models.basic.Log)([name]) | One-dimensional natural logarithm function. |
| [`Log10`](index.html#sherpa.models.basic.Log10)([name]) | One-dimensional logarithm function, base 10. |
| [`LogParabola`](index.html#sherpa.models.basic.LogParabola)([name]) | One-dimensional log-parabolic function. |
| [`NormGauss1D`](index.html#sherpa.models.basic.NormGauss1D)([name]) | One-dimensional normalised gaussian function. |
| [`Poisson`](index.html#sherpa.models.basic.Poisson)([name]) | One-dimensional Poisson function. |
| [`Polynom1D`](index.html#sherpa.models.basic.Polynom1D)([name]) | One-dimensional polynomial function of order 8. |
| [`PowLaw1D`](index.html#sherpa.models.basic.PowLaw1D)([name]) | One-dimensional power-law function. |
| [`Scale1D`](index.html#sherpa.models.basic.Scale1D)([name]) | A constant model for one-dimensional data. |
| [`Sin`](index.html#sherpa.models.basic.Sin)([name]) | One-dimensional sine function. |
| [`Sqrt`](index.html#sherpa.models.basic.Sqrt)([name]) | One-dimensional square root function. |
| [`StepHi1D`](index.html#sherpa.models.basic.StepHi1D)([name]) | One-dimensional step function. |
| [`StepLo1D`](index.html#sherpa.models.basic.StepLo1D)([name]) | One-dimensional step function. |
| [`TableModel`](index.html#sherpa.models.basic.TableModel)([name]) | |
| [`Tan`](index.html#sherpa.models.basic.Tan)([name]) | One-dimensional tan function. |
| [`UserModel`](index.html#sherpa.models.basic.UserModel)([name, pars]) | Support for user-supplied models. |
| [`Box2D`](index.html#sherpa.models.basic.Box2D)([name]) | Two-dimensional box function. |
| [`Const2D`](index.html#sherpa.models.basic.Const2D)([name]) | A constant model for two-dimensional data. |
| [`Delta2D`](index.html#sherpa.models.basic.Delta2D)([name]) | Two-dimensional delta function. |
| [`Gauss2D`](index.html#sherpa.models.basic.Gauss2D)([name]) | Two-dimensional gaussian function. |
| [`Scale2D`](index.html#sherpa.models.basic.Scale2D)([name]) | A constant model for two-dimensional data. |
| [`SigmaGauss2D`](index.html#sherpa.models.basic.SigmaGauss2D)([name]) | Two-dimensional gaussian function (varying sigma). |
| [`NormGauss2D`](index.html#sherpa.models.basic.NormGauss2D)([name]) | Two-dimensional normalised gaussian function. |
| [`Polynom2D`](index.html#sherpa.models.basic.Polynom2D)([name]) | Two-dimensional polynomial function. |
#### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of Box1D, Const1D, Scale1D, Cos, Delta1D, Erf, Erfc, Exp, Exp10, Gauss1D, Integrate1D, Log, Log10, LogParabola, NormGauss1D, Poisson, Polynom1D, PowLaw1D, Sin, Sqrt, StepHi1D, StepLo1D, TableModel, Tan, UserModel, Box2D, Const2D, Scale2D, Delta2D, Gauss2D, SigmaGauss2D, NormGauss2D, Polynom2D
### The sherpa.astro.models module[¶](#module-sherpa.astro.models)
Classes
| [`Atten`](index.html#sherpa.astro.models.Atten)([name]) | Model the attenuation by the Inter-Stellar Medium (ISM). |
| [`BBody`](index.html#sherpa.astro.models.BBody)([name]) | A one-dimensional Blackbody model. |
| [`BBodyFreq`](index.html#sherpa.astro.models.BBodyFreq)([name]) | A one-dimensional Blackbody model (frequency). |
| [`BPL1D`](index.html#sherpa.astro.models.BPL1D)([name]) | One-dimensional broken power-law function. |
| [`Beta1D`](index.html#sherpa.astro.models.Beta1D)([name]) | One-dimensional beta model function. |
| [`Beta2D`](index.html#sherpa.astro.models.Beta2D)([name]) | Two-dimensional beta model function. |
| [`DeVaucouleurs2D`](index.html#sherpa.astro.models.DeVaucouleurs2D)([name]) | Two-dimensional de Vaucouleurs model. |
| [`Dered`](index.html#sherpa.astro.models.Dered)([name]) | A de-reddening model. |
| [`Disk2D`](index.html#sherpa.astro.models.Disk2D)([name]) | Two-dimensional uniform disk model. |
| [`Edge`](index.html#sherpa.astro.models.Edge)([name]) | Photoabsorption edge model. |
| [`HubbleReynolds`](index.html#sherpa.astro.models.HubbleReynolds)([name]) | Two-dimensional Hubble-Reynolds model. |
| [`JDPileup`](index.html#sherpa.astro.models.JDPileup)([name]) | A CCD pileup model for the ACIS detectors on Chandra. |
| [`LineBroad`](index.html#sherpa.astro.models.LineBroad)([name]) | A one-dimensional line-broadening profile. |
| [`Lorentz1D`](index.html#sherpa.astro.models.Lorentz1D)([name]) | One-dimensional normalized Lorentz model function. |
| [`Lorentz2D`](index.html#sherpa.astro.models.Lorentz2D)([name]) | Two-dimensional un-normalised Lorentz function. |
| [`MultiResponseSumModel`](index.html#sherpa.astro.models.MultiResponseSumModel)(name[, pars]) | |
| [`NormBeta1D`](index.html#sherpa.astro.models.NormBeta1D)([name]) | One-dimensional normalized beta model function. |
| [`Schechter`](index.html#sherpa.astro.models.Schechter)([name]) | One-dimensional Schechter model function. |
| [`Sersic2D`](index.html#sherpa.astro.models.Sersic2D)([name]) | Two-dimensional Sersic model. |
| [`Shell2D`](index.html#sherpa.astro.models.Shell2D)([name]) | A homogeneous spherical 3D shell projected onto 2D. |
#### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of Atten, BBody, BBodyFreq, BPL1D, Beta1D, Beta2D, DeVaucouleurs2D, Dered, Disk2D, Edge, HubbleReynolds, JDPileup, LineBroad, Lorentz1D, Lorentz2D, MultiResponseSumModel, NormBeta1D, Schechter, Sersic2D, Shell2D
### The sherpa.astro.optical module[¶](#module-sherpa.astro.optical)
Optical models intended for SED Analysis
The models match those used by the SpecView application [[1]](#id3),
and are intended for un-binned one-dimensional data sets defined on a wavelength grid, with units of Angstroms. When used with a binned data set the lower-edge of each bin is used to evaluate the model. This module does not contain all the spectral components from SpecView ([[2]](#id4)).
References
| [[1]](#id1) | <http://www.stsci.edu/institute/software_hardware/specview/> |
| [[2]](#id2) | <http://specview.stsci.edu/javahelp/Components.html> |
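As a short sketch of the intended use (the wavelength grid is arbitrary), these models are evaluated on a one-dimensional grid in Angstroms:
```
>>> import numpy as np
>>> from sherpa.astro.optical import BlackBody
>>> mdl = BlackBody()
>>> wave = np.linspace(3000, 8000, 501)
>>> flux = mdl(wave)
```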
Classes
| [`AbsorptionEdge`](index.html#sherpa.astro.optical.AbsorptionEdge)([name]) | Optical model of an absorption edge. |
| [`AbsorptionGaussian`](index.html#sherpa.astro.optical.AbsorptionGaussian)([name]) | Gaussian function for modeling absorption (equivalent width). |
| [`AbsorptionLorentz`](index.html#sherpa.astro.optical.AbsorptionLorentz)([name]) | Lorentz function for modeling absorption (equivalent width). |
| [`AbsorptionVoigt`](index.html#sherpa.astro.optical.AbsorptionVoigt)([name]) | Voigt function for modeling absorption (equivalent width). |
| [`AccretionDisk`](index.html#sherpa.astro.optical.AccretionDisk)([name]) | A model of emission due to an accretion disk. |
| [`BlackBody`](index.html#sherpa.astro.optical.BlackBody)([name]) | Emission from a black body as a function of wavelength. |
| [`Bremsstrahlung`](index.html#sherpa.astro.optical.Bremsstrahlung)([name]) | Bremsstrahlung emission. |
| [`BrokenPowerlaw`](index.html#sherpa.astro.optical.BrokenPowerlaw)([name]) | Broken power-law model. |
| [`CCM`](index.html#sherpa.astro.optical.CCM)([name]) | Galactic extinction: the Cardelli, Clayton, and Mathis model. |
| [`EmissionGaussian`](index.html#sherpa.astro.optical.EmissionGaussian)([name]) | Gaussian function for modeling emission. |
| [`EmissionLorentz`](index.html#sherpa.astro.optical.EmissionLorentz)([name]) | Lorentz function for modeling emission. |
| [`EmissionVoigt`](index.html#sherpa.astro.optical.EmissionVoigt)([name]) | Voigt function for modeling emission. |
| [`FM`](index.html#sherpa.astro.optical.FM)([name]) | UV extinction curve: Fitzpatrick and Massa 1988. |
| [`LMC`](index.html#sherpa.astro.optical.LMC)([name]) | LMC extinction: the Howarth model. |
| [`LogAbsorption`](index.html#sherpa.astro.optical.LogAbsorption)([name]) | Gaussian function for modeling absorption (log of fwhm). |
| [`LogEmission`](index.html#sherpa.astro.optical.LogEmission)([name]) | Gaussian function for modeling emission (log of fwhm). |
| [`OpticalGaussian`](index.html#sherpa.astro.optical.OpticalGaussian)([name]) | Gaussian function for modeling absorption (optical depth). |
| [`Polynomial`](index.html#sherpa.astro.optical.Polynomial)([name]) | Polynomial model of order 5. |
| [`Powerlaw`](index.html#sherpa.astro.optical.Powerlaw)([name]) | Power-law model. |
| [`Recombination`](index.html#sherpa.astro.optical.Recombination)([name]) | Optically-thin recombination continuum model. |
| [`SM`](index.html#sherpa.astro.optical.SM)([name]) | Galactic extinction: the Savage & Mathis model. |
| [`SMC`](index.html#sherpa.astro.optical.SMC)([name]) | SMC extinction: the Prevot et al. |
| [`Seaton`](index.html#sherpa.astro.optical.Seaton)([name]) | Galactic extinction: the Seaton model from Synphot. |
| [`XGal`](index.html#sherpa.astro.optical.XGal)([name]) | Extragalactic extinction: Calzetti, Kinney and Storchi-Bergmann |
#### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of AbsorptionEdge, AbsorptionGaussian, AbsorptionLorentz, AbsorptionVoigt, AccretionDisk, BlackBody, Bremsstrahlung, BrokenPowerlaw, CCM, EmissionGaussian, EmissionLorentz, EmissionVoigt, FM, LMC, LogAbsorption, LogEmission, OpticalGaussian, Polynomial, Powerlaw, Recombination, SM, SMC, Seaton, XGal
### The sherpa.astro.xspec module[¶](#module-sherpa.astro.xspec)
Support for XSPEC models.
Sherpa supports versions 12.10.1, 12.10.0, 12.9.1, and 12.9.0 of XSPEC [[1]](#id2),
and can be built against the model library or the full application. There is no guarantee of support for older or newer versions of XSPEC.
To be able to use most routines from this module, the HEADAS environment variable must be set. The get_xsversion function can be used to return the XSPEC version - including patch level - the module is using:
```
>>> from sherpa.astro import xspec
>>> xspec.get_xsversion()
'12.10.1b'
```
#### Initializing XSPEC[¶](#initializing-xspec)
The XSPEC model library is initialized so that the cosmology parameters are set to H_0=70, q_0=0.0, and lambda_0=0.73 (they can be changed with set_xscosmo).
The other settings - for example for the abundance and cross-section tables - follow the standard rules for XSPEC. For XSPEC versions prior to 12.10.1, this means that the abundance table uses the `angr`
setting and the cross sections the `bcmc` setting (see set_xsabund and set_xsxsect for full details). As of XSPEC 12.10.1, the values are now taken from the user’s XSPEC configuration file - either
`~/.xspec/Xspec.init` or `$HEADAS/../spectral/manager/Xspec.init` -
for these settings. The default value for the photo-ionization table in this case is now `vern` rather than `bcmc`.
References
| [[1]](#id1) | <https://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/index.html> |
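A short, hedged sketch of inspecting and changing these settings (it requires Sherpa to have been built with XSPEC support; the values shown assume the pre-12.10.1 defaults):
```
>>> from sherpa.astro import xspec
>>> xspec.get_xsabund()
'angr'
>>> xspec.get_xsxsect()
'bcmc'
>>> xspec.set_xscosmo(70, 0, 0.73)
>>> xspec.get_xscosmo()
(70.0, 0.0, 0.73)
```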
Classes
| [`XSagauss`](index.html#sherpa.astro.xspec.XSagauss)([name]) | The XSPEC agauss model: gaussian line profile in wavelength space. |
| [`XSapec`](index.html#sherpa.astro.xspec.XSapec)([name]) | The XSPEC apec model: APEC emission spectrum. |
| [`XSbapec`](index.html#sherpa.astro.xspec.XSbapec)([name]) | The XSPEC bapec model: velocity broadened APEC thermal plasma model. |
| [`XSbbody`](index.html#sherpa.astro.xspec.XSbbody)([name]) | The XSPEC bbody model: blackbody spectrum. |
| [`XSbbodyrad`](index.html#sherpa.astro.xspec.XSbbodyrad)([name]) | The XSPEC bbodyrad model: blackbody spectrum, area normalized. |
| [`XSbexrav`](index.html#sherpa.astro.xspec.XSbexrav)([name]) | The XSPEC bexrav model: reflected e-folded broken power law, neutral medium. |
| [`XSbexriv`](index.html#sherpa.astro.xspec.XSbexriv)([name]) | The XSPEC bexriv model: reflected e-folded broken power law, ionized medium. |
| [`XSbkn2pow`](index.html#sherpa.astro.xspec.XSbkn2pow)([name]) | The XSPEC bkn2pow model: broken power law with two breaks. |
| [`XSbknpower`](index.html#sherpa.astro.xspec.XSbknpower)([name]) | The XSPEC bknpower model: broken power law. |
| [`XSbmc`](index.html#sherpa.astro.xspec.XSbmc)([name]) | The XSPEC bmc model: Comptonization by relativistic matter. |
| [`XSbremss`](index.html#sherpa.astro.xspec.XSbremss)([name]) | The XSPEC bremss model: thermal bremsstrahlung. |
| [`XSbtapec`](index.html#sherpa.astro.xspec.XSbtapec)([name]) | The XSPEC btapec model: velocity broadened APEC emission spectrum with separate continuum and line temperatures. |
| [`XSbvapec`](index.html#sherpa.astro.xspec.XSbvapec)([name]) | The XSPEC bvapec model: velocity broadened APEC thermal plasma model. |
| [`XSbvtapec`](index.html#sherpa.astro.xspec.XSbvtapec)([name]) | The XSPEC bvtapec model: velocity broadened APEC emission spectrum with separate continuum and line temperatures. |
| [`XSbvvapec`](index.html#sherpa.astro.xspec.XSbvvapec)([name]) | The XSPEC bvvapec model: velocity broadened APEC thermal plasma model. |
| [`XSbvvtapec`](index.html#sherpa.astro.xspec.XSbvvtapec)([name]) | The XSPEC bvvtapec model: velocity broadened APEC emission spectrum with separate continuum and line temperatures. |
| [`XSc6mekl`](index.html#sherpa.astro.xspec.XSc6mekl)([name]) | The XSPEC c6mekl model: differential emission measure using Chebyshev representations with multi-temperature mekal. |
| [`XSc6pmekl`](index.html#sherpa.astro.xspec.XSc6pmekl)([name]) | The XSPEC c6pmekl model: differential emission measure using Chebyshev representations with multi-temperature mekal. |
| [`XSc6pvmkl`](index.html#sherpa.astro.xspec.XSc6pvmkl)([name]) | The XSPEC c6pvmkl model: differential emission measure using Chebyshev representations with multi-temperature mekal. |
| [`XSc6vmekl`](index.html#sherpa.astro.xspec.XSc6vmekl)([name]) | The XSPEC c6vmekl model: differential emission measure using Chebyshev representations with multi-temperature mekal. |
| [`XScarbatm`](index.html#sherpa.astro.xspec.XScarbatm)([name]) | The XSPEC carbatm model: Nonmagnetic carbon atmosphere of a neutron star. |
| [`XScemekl`](index.html#sherpa.astro.xspec.XScemekl)([name]) | The XSPEC cemekl model: plasma emission, multi-temperature using mekal. |
| [`XScevmkl`](index.html#sherpa.astro.xspec.XScevmkl)([name]) | The XSPEC cevmkl model: plasma emission, multi-temperature using mekal. |
| [`XScflow`](index.html#sherpa.astro.xspec.XScflow)([name]) | The XSPEC cflow model: cooling flow. |
| [`XScompLS`](index.html#sherpa.astro.xspec.XScompLS)([name]) | The XSPEC compLS model: Comptonization, Lamb & Sanford. |
| [`XScompPS`](index.html#sherpa.astro.xspec.XScompPS)([name]) | The XSPEC compPS model: Comptonization, Poutanen & Svenson. |
| [`XScompST`](index.html#sherpa.astro.xspec.XScompST)([name]) | The XSPEC compST model: Comptonization, Sunyaev & Titarchuk. |
| [`XScompTT`](index.html#sherpa.astro.xspec.XScompTT)([name]) | The XSPEC compTT model: Comptonization, Titarchuk. |
| [`XScompbb`](index.html#sherpa.astro.xspec.XScompbb)([name]) | The XSPEC compbb model: Comptonization, black body. |
| [`XScompmag`](index.html#sherpa.astro.xspec.XScompmag)([name]) | The XSPEC compmag model: Thermal and bulk Comptonization for cylindrical accretion onto the polar cap of a magnetized neutron star. |
| [`XScomptb`](index.html#sherpa.astro.xspec.XScomptb)([name]) | The XSPEC comptb model: Thermal and bulk Comptonization of a seed blackbody-like spectrum. |
| [`XScompth`](index.html#sherpa.astro.xspec.XScompth)([name]) | The XSPEC compth model: Paolo Coppi’s hybrid (thermal/non-thermal) hot plasma emission models. |
| [`XScplinear`](index.html#sherpa.astro.xspec.XScplinear)([name]) | The XSPEC cplinear model: a non-physical piecewise-linear model for low count background spectra. |
| [`XScutoffpl`](index.html#sherpa.astro.xspec.XScutoffpl)([name]) | The XSPEC cutoffpl model: power law, high energy exponential cutoff. |
| [`XSdisk`](index.html#sherpa.astro.xspec.XSdisk)([name]) | The XSPEC disk model: accretion disk, black body. |
| [`XSdiskbb`](index.html#sherpa.astro.xspec.XSdiskbb)([name]) | The XSPEC diskbb model: accretion disk, multi-black body components. |
| [`XSdiskir`](index.html#sherpa.astro.xspec.XSdiskir)([name]) | The XSPEC diskir model: Irradiated inner and outer disk. |
| [`XSdiskline`](index.html#sherpa.astro.xspec.XSdiskline)([name]) | The XSPEC diskline model: accretion disk line emission, relativistic. |
| [`XSdiskm`](index.html#sherpa.astro.xspec.XSdiskm)([name]) | The XSPEC diskm model: accretion disk with gas pressure viscosity. |
| [`XSdisko`](index.html#sherpa.astro.xspec.XSdisko)([name]) | The XSPEC disko model: accretion disk, inner, radiation pressure viscosity. |
| [`XSdiskpbb`](index.html#sherpa.astro.xspec.XSdiskpbb)([name]) | The XSPEC diskpbb model: accretion disk, power-law dependence for T(r). |
| [`XSdiskpn`](index.html#sherpa.astro.xspec.XSdiskpn)([name]) | The XSPEC diskpn model: accretion disk, black hole, black body. |
| [`XSeplogpar`](index.html#sherpa.astro.xspec.XSeplogpar)([name]) | The XSPEC eplogpar model: log-parabolic blazar model with nu-Fnu normalization. |
| [`XSeqpair`](index.html#sherpa.astro.xspec.XSeqpair)([name]) | The XSPEC eqpair model: Paolo Coppi’s hybrid (thermal/non-thermal) hot plasma emission models. |
| [`XSeqtherm`](index.html#sherpa.astro.xspec.XSeqtherm)([name]) | The XSPEC eqtherm model: Paolo Coppi’s hybrid (thermal/non-thermal) hot plasma emission models. |
| [`XSequil`](index.html#sherpa.astro.xspec.XSequil)([name]) | The XSPEC equil model: collisional plasma, ionization equilibrium. |
| [`XSexpdec`](index.html#sherpa.astro.xspec.XSexpdec)([name]) | The XSPEC expdec model: exponential decay. |
| [`XSezdiskbb`](index.html#sherpa.astro.xspec.XSezdiskbb)([name]) | The XSPEC ezdiskbb model: multiple blackbody disk model with zero-torque inner boundary. |
| [`XSgadem`](index.html#sherpa.astro.xspec.XSgadem)([name]) | The XSPEC gadem model: plasma emission, multi-temperature with gaussian distribution of emission measure. |
| [`XSgaussian`](index.html#sherpa.astro.xspec.XSgaussian)([name]) | The XSPEC gaussian model: gaussian line profile. |
| [`XSgnei`](index.html#sherpa.astro.xspec.XSgnei)([name]) | The XSPEC gnei model: collisional plasma, non-equilibrium, temperature evolution. |
| [`XSgrad`](index.html#sherpa.astro.xspec.XSgrad)([name]) | The XSPEC grad model: accretion disk, Schwarzschild black hole. |
| [`XSgrbm`](index.html#sherpa.astro.xspec.XSgrbm)([name]) | The XSPEC grbm model: gamma-ray burst continuum. |
| [`XShatm`](index.html#sherpa.astro.xspec.XShatm)([name]) | The XSPEC hatm model: Nonmagnetic hydrogen atmosphere of a neutron star. |
| [`XSkerrbb`](index.html#sherpa.astro.xspec.XSkerrbb)([name]) | The XSPEC kerrbb model: multi-temperature blackbody model for thin accretion disk around a Kerr black hole. |
| [`XSkerrd`](index.html#sherpa.astro.xspec.XSkerrd)([name]) | The XSPEC kerrd model: optically thick accretion disk around a Kerr black hole. |
| [`XSkerrdisk`](index.html#sherpa.astro.xspec.XSkerrdisk)([name]) | The XSPEC kerrdisk model: accretion disk line emission with BH spin as free parameter. |
| [`XSlaor`](index.html#sherpa.astro.xspec.XSlaor)([name]) | The XSPEC laor model: accretion disk, black hole emission line. |
| [`XSlaor2`](index.html#sherpa.astro.xspec.XSlaor2)([name]) | The XSPEC laor2 model: accretion disk with broken-power law emissivity profile, black hole emission line. |
| [`XSlogpar`](index.html#sherpa.astro.xspec.XSlogpar)([name]) | The XSPEC logpar model: log-parabolic blazar model. |
| [`XSlorentz`](index.html#sherpa.astro.xspec.XSlorentz)([name]) | The XSPEC lorentz model: lorentz line profile. |
| [`XSmeka`](index.html#sherpa.astro.xspec.XSmeka)([name]) | The XSPEC meka model: emission, hot diffuse gas (Mewe-Gronenschild). |
| [`XSmekal`](index.html#sherpa.astro.xspec.XSmekal)([name]) | The XSPEC mekal model: emission, hot diffuse gas (Mewe-Kaastra-Liedahl). |
| [`XSmkcflow`](index.html#sherpa.astro.xspec.XSmkcflow)([name]) | The XSPEC mkcflow model: cooling flow, mekal. |
| [`XSnei`](index.html#sherpa.astro.xspec.XSnei)([name]) | The XSPEC nei model: collisional plasma, non-equilibrium, constant temperature. |
| [`XSnlapec`](index.html#sherpa.astro.xspec.XSnlapec)([name]) | The XSPEC nlapec model: continuum-only APEC emission spectrum. |
| [`XSnpshock`](index.html#sherpa.astro.xspec.XSnpshock)([name]) | The XSPEC npshock model: shocked plasma, plane parallel, separate ion, electron temperatures. |
| [`XSnsa`](index.html#sherpa.astro.xspec.XSnsa)([name]) | The XSPEC nsa model: neutron star atmosphere. |
| [`XSnsagrav`](index.html#sherpa.astro.xspec.XSnsagrav)([name]) | The XSPEC nsagrav model: NS H atmosphere model for different g. |
| [`XSnsatmos`](index.html#sherpa.astro.xspec.XSnsatmos)([name]) | The XSPEC nsatmos model: NS Hydrogen Atmosphere model with electron conduction and self-irradiation. |
| [`XSnsmax`](index.html#sherpa.astro.xspec.XSnsmax)([name]) | The XSPEC nsmax model: Neutron Star Magnetic Atmosphere. |
| [`XSnsmaxg`](index.html#sherpa.astro.xspec.XSnsmaxg)([name]) | The XSPEC nsmaxg model: neutron star with a magnetic atmosphere. |
| [`XSnsx`](index.html#sherpa.astro.xspec.XSnsx)([name]) | The XSPEC nsx model: neutron star with a non-magnetic atmosphere. |
| [`XSnteea`](index.html#sherpa.astro.xspec.XSnteea)([name]) | The XSPEC nteea model: non-thermal pair plasma. |
| [`XSnthComp`](index.html#sherpa.astro.xspec.XSnthComp)([name]) | The XSPEC nthComp model: Thermally comptonized continuum. |
| [`XSoptxagn`](index.html#sherpa.astro.xspec.XSoptxagn)([name]) | The XSPEC optxagn model: Colour temperature corrected disc and energetically coupled Comptonisation model for AGN. |
| [`XSoptxagnf`](index.html#sherpa.astro.xspec.XSoptxagnf)([name]) | The XSPEC optxagnf model: Colour temperature corrected disc and energetically coupled Comptonisation model for AGN. |
| [`XSpegpwrlw`](index.html#sherpa.astro.xspec.XSpegpwrlw)([name]) | The XSPEC pegpwrlw model: power law, pegged normalization. |
| [`XSpexmon`](index.html#sherpa.astro.xspec.XSpexmon)([name]) | The XSPEC pexmon model: neutral Compton reflection with self-consistent Fe and Ni lines. |
| [`XSpexrav`](index.html#sherpa.astro.xspec.XSpexrav)([name]) | The XSPEC pexrav model: reflected powerlaw, neutral medium. |
| [`XSpexriv`](index.html#sherpa.astro.xspec.XSpexriv)([name]) | The XSPEC pexriv model: reflected powerlaw, ionized medium. |
| [`XSplcabs`](index.html#sherpa.astro.xspec.XSplcabs)([name]) | The XSPEC plcabs model: powerlaw observed through dense, cold matter. |
| [`XSposm`](index.html#sherpa.astro.xspec.XSposm)([name]) | The XSPEC posm model: positronium continuum. |
| [`XSpowerlaw`](index.html#sherpa.astro.xspec.XSpowerlaw)([name]) | The XSPEC powerlaw model: power law photon spectrum. |
| [`XSpshock`](index.html#sherpa.astro.xspec.XSpshock)([name]) | The XSPEC pshock model: plane-parallel shocked plasma, constant temperature. |
| [`XSraymond`](index.html#sherpa.astro.xspec.XSraymond)([name]) | The XSPEC raymond model: emission, hot diffuse gas, Raymond-Smith. |
| [`XSredge`](index.html#sherpa.astro.xspec.XSredge)([name]) | The XSPEC redge model: emission, recombination edge. |
| [`XSrefsch`](index.html#sherpa.astro.xspec.XSrefsch)([name]) | The XSPEC refsch model: reflected power law from ionized accretion disk. |
| [`XSrnei`](index.html#sherpa.astro.xspec.XSrnei)([name]) | The XSPEC rnei model: non-equilibrium recombining collisional plasma. |
| [`XSsedov`](index.html#sherpa.astro.xspec.XSsedov)([name]) | The XSPEC sedov model: sedov model, separate ion/electron temperature. |
| [`XSsirf`](index.html#sherpa.astro.xspec.XSsirf)([name]) | The XSPEC sirf model: self-irradiated funnel. |
| [`XSslimbh`](index.html#sherpa.astro.xspec.XSslimbh)([name]) | The XSPEC slimbh model: Stationary slim accretion disk. |
| [`XSsnapec`](index.html#sherpa.astro.xspec.XSsnapec)([name]) | The XSPEC snapec model: galaxy cluster spectrum using SN yields. |
| [`XSsrcut`](index.html#sherpa.astro.xspec.XSsrcut)([name]) | The XSPEC srcut model: synchrotron spectrum, cutoff power law. |
| [`XSsresc`](index.html#sherpa.astro.xspec.XSsresc)([name]) | The XSPEC sresc model: synchrotron spectrum, cut off by particle escape. |
| [`XSstep`](index.html#sherpa.astro.xspec.XSstep)([name]) | The XSPEC step model: step function convolved with gaussian. |
| [`XStapec`](index.html#sherpa.astro.xspec.XStapec)([name]) | The XSPEC tapec model: APEC emission spectrum with separate continuum and line temperatures. |
| [`XSvapec`](index.html#sherpa.astro.xspec.XSvapec)([name]) | The XSPEC vapec model: APEC emission spectrum. |
| [`XSvbremss`](index.html#sherpa.astro.xspec.XSvbremss)([name]) | The XSPEC vbremss model: thermal bremsstrahlung. |
| [`XSvequil`](index.html#sherpa.astro.xspec.XSvequil)([name]) | The XSPEC vequil model: collisional plasma, ionization equilibrium. |
| [`XSvgadem`](index.html#sherpa.astro.xspec.XSvgadem)([name]) | The XSPEC vgadem model: plasma emission, multi-temperature with gaussian distribution of emission measure. |
| [`XSvgnei`](index.html#sherpa.astro.xspec.XSvgnei)([name]) | The XSPEC vgnei model: collisional plasma, non-equilibrium, temperature evolution. |
| [`XSvmcflow`](index.html#sherpa.astro.xspec.XSvmcflow)([name]) | The XSPEC vmcflow model: cooling flow, mekal. |
| [`XSvmeka`](index.html#sherpa.astro.xspec.XSvmeka)([name]) | The XSPEC vmeka model: emission, hot diffuse gas (Mewe-Gronenschild). |
| [`XSvmekal`](index.html#sherpa.astro.xspec.XSvmekal)([name]) | The XSPEC vmekal model: emission, hot diffuse gas (Mewe-Kaastra-Liedahl). |
| [`XSvnei`](index.html#sherpa.astro.xspec.XSvnei)([name]) | The XSPEC vnei model: collisional plasma, non-equilibrium, constant temperature. |
| [`XSvnpshock`](index.html#sherpa.astro.xspec.XSvnpshock)([name]) | The XSPEC vnpshock model: shocked plasma, plane parallel, separate ion, electron temperatures. |
| [`XSvoigt`](index.html#sherpa.astro.xspec.XSvoigt)([name]) | The XSPEC voigt model: Voigt line profile. |
| [`XSvpshock`](index.html#sherpa.astro.xspec.XSvpshock)([name]) | The XSPEC vpshock model: plane-parallel shocked plasma, constant temperature. |
| [`XSvraymond`](index.html#sherpa.astro.xspec.XSvraymond)([name]) | The XSPEC vraymond model: emission, hot diffuse gas, Raymond-Smith. |
| [`XSvrnei`](index.html#sherpa.astro.xspec.XSvrnei)([name]) | The XSPEC vrnei model: non-equilibrium recombining collisional plasma. |
| [`XSvsedov`](index.html#sherpa.astro.xspec.XSvsedov)([name]) | The XSPEC vsedov model: sedov model, separate ion/electron temperature. |
| [`XSvtapec`](index.html#sherpa.astro.xspec.XSvtapec)([name]) | The XSPEC vtapec model: APEC emission spectrum with separate continuum and line temperatures. |
| [`XSvvapec`](index.html#sherpa.astro.xspec.XSvvapec)([name]) | The XSPEC vvapec model: APEC emission spectrum. |
| [`XSvvgnei`](index.html#sherpa.astro.xspec.XSvvgnei)([name]) | The XSPEC vvgnei model: collisional plasma, non-equilibrium, temperature evolution. |
| [`XSvvnei`](index.html#sherpa.astro.xspec.XSvvnei)([name]) | The XSPEC vvnei model: collisional plasma, non-equilibrium, constant temperature. |
| [`XSvvnpshock`](index.html#sherpa.astro.xspec.XSvvnpshock)([name]) | The XSPEC vvnpshock model: shocked plasma, plane parallel, separate ion, electron temperatures. |
| [`XSvvpshock`](index.html#sherpa.astro.xspec.XSvvpshock)([name]) | The XSPEC vvpshock model: plane-parallel shocked plasma, constant temperature. |
| [`XSvvrnei`](index.html#sherpa.astro.xspec.XSvvrnei)([name]) | The XSPEC vvrnei model: non-equilibrium recombining collisional plasma. |
| [`XSvvsedov`](index.html#sherpa.astro.xspec.XSvvsedov)([name]) | The XSPEC vvsedov model: sedov model, separate ion/electron temperature. |
| [`XSvvtapec`](index.html#sherpa.astro.xspec.XSvvtapec)([name]) | The XSPEC vvtapec model: APEC emission spectrum with separate continuum and line temperatures. |
| [`XSzagauss`](index.html#sherpa.astro.xspec.XSzagauss)([name]) | The XSPEC zagauss model: gaussian line profile in wavelength space. |
| [`XSzbbody`](index.html#sherpa.astro.xspec.XSzbbody)([name]) | The XSPEC zbbody model: blackbody spectrum. |
| [`XSzbremss`](index.html#sherpa.astro.xspec.XSzbremss)([name]) | The XSPEC zbremss model: thermal bremsstrahlung. |
| [`XSzgauss`](index.html#sherpa.astro.xspec.XSzgauss)([name]) | The XSPEC zgauss model: gaussian line profile. |
| [`XSzpowerlw`](index.html#sherpa.astro.xspec.XSzpowerlw)([name]) | The XSPEC zpowerlw model: redshifted power law photon spectrum. |
| [`XSSSS_ice`](index.html#sherpa.astro.xspec.XSSSS_ice)([name]) | The XSPEC sss_ice model: Einstein SSS ice absorption. |
| [`XSTBabs`](index.html#sherpa.astro.xspec.XSTBabs)([name]) | The XSPEC TBabs model: ISM grain absorption. |
| [`XSTBfeo`](index.html#sherpa.astro.xspec.XSTBfeo)([name]) | The XSPEC TBfeo model: ISM grain absorption. |
| [`XSTBgas`](index.html#sherpa.astro.xspec.XSTBgas)([name]) | The XSPEC TBgas model: ISM grain absorption. |
| [`XSTBgrain`](index.html#sherpa.astro.xspec.XSTBgrain)([name]) | The XSPEC TBgrain model: ISM grain absorption. |
| [`XSTBpcf`](index.html#sherpa.astro.xspec.XSTBpcf)([name]) | The XSPEC TBpcf model: ISM grain absorption. |
| [`XSTBrel`](index.html#sherpa.astro.xspec.XSTBrel)([name]) | The XSPEC TBrel model: ISM grain absorption. |
| [`XSTBvarabs`](index.html#sherpa.astro.xspec.XSTBvarabs)([name]) | The XSPEC TBvarabs model: ISM grain absorption. |
| [`XSabsori`](index.html#sherpa.astro.xspec.XSabsori)([name]) | The XSPEC absori model: ionized absorber. |
| [`XSacisabs`](index.html#sherpa.astro.xspec.XSacisabs)([name]) | The XSPEC acisabs model: Chandra ACIS q.e. |
| [`XScabs`](index.html#sherpa.astro.xspec.XScabs)([name]) | The XSPEC cabs model: Optically-thin Compton scattering. |
| [`XSconstant`](index.html#sherpa.astro.xspec.XSconstant)([name]) | The XSPEC constant model: energy-independent factor. |
| [`XScyclabs`](index.html#sherpa.astro.xspec.XScyclabs)([name]) | The XSPEC cyclabs model: absorption line, cyclotron. |
| [`XSdust`](index.html#sherpa.astro.xspec.XSdust)([name]) | The XSPEC dust model: dust scattering. |
| [`XSedge`](index.html#sherpa.astro.xspec.XSedge)([name]) | The XSPEC edge model: absorption edge. |
| [`XSexpabs`](index.html#sherpa.astro.xspec.XSexpabs)([name]) | The XSPEC expabs model: exponential roll-off at low E. |
| [`XSexpfac`](index.html#sherpa.astro.xspec.XSexpfac)([name]) | The XSPEC expfac model: exponential modification. |
| [`XSgabs`](index.html#sherpa.astro.xspec.XSgabs)([name]) | The XSPEC gabs model: gaussian absorption line. |
| [`XSheilin`](index.html#sherpa.astro.xspec.XSheilin)([name]) | The XSPEC heilin model: Voigt absorption profiles for He I series. |
| [`XShighecut`](index.html#sherpa.astro.xspec.XShighecut)([name]) | The XSPEC highecut model: high-energy cutoff. |
| [`XShrefl`](index.html#sherpa.astro.xspec.XShrefl)([name]) | The XSPEC hrefl model: reflection model. |
| [`XSismabs`](index.html#sherpa.astro.xspec.XSismabs)([name]) | The XSPEC ismabs model: A high resolution ISM absorption model with variable columns for individual ions. |
| [`XSlyman`](index.html#sherpa.astro.xspec.XSlyman)([name]) | The XSPEC lyman model: Voigt absorption profiles for H I or He II Lyman series. |
| [`XSnotch`](index.html#sherpa.astro.xspec.XSnotch)([name]) | The XSPEC notch model: absorption line, notch. |
| [`XSpcfabs`](index.html#sherpa.astro.xspec.XSpcfabs)([name]) | The XSPEC pcfabs model: partial covering fraction absorption. |
| [`XSphabs`](index.html#sherpa.astro.xspec.XSphabs)([name]) | The XSPEC phabs model: photoelectric absorption. |
| [`XSplabs`](index.html#sherpa.astro.xspec.XSplabs)([name]) | The XSPEC plabs model: power law absorption. |
| [`XSpwab`](index.html#sherpa.astro.xspec.XSpwab)([name]) | The XSPEC pwab model: power-law distribution of neutral absorbers. |
| [`XSredden`](index.html#sherpa.astro.xspec.XSredden)([name]) | The XSPEC redden model: interstellar extinction. |
| [`XSsmedge`](index.html#sherpa.astro.xspec.XSsmedge)([name]) | The XSPEC smedge model: smeared edge. |
| [`XSspexpcut`](index.html#sherpa.astro.xspec.XSspexpcut)([name]) | The XSPEC spexpcut model: super-exponential cutoff absorption. |
| [`XSspline`](index.html#sherpa.astro.xspec.XSspline)([name]) | The XSPEC spline model: spline modification. |
| [`XSswind1`](index.html#sherpa.astro.xspec.XSswind1)([name]) | The XSPEC swind1 model: absorption by partially ionized material with large velocity shear. |
| [`XSuvred`](index.html#sherpa.astro.xspec.XSuvred)([name]) | The XSPEC uvred model: interstellar extinction, Seaton Law. |
| [`XSvarabs`](index.html#sherpa.astro.xspec.XSvarabs)([name]) | The XSPEC varabs model: photoelectric absorption. |
| [`XSvphabs`](index.html#sherpa.astro.xspec.XSvphabs)([name]) | The XSPEC vphabs model: photoelectric absorption. |
| [`XSwabs`](index.html#sherpa.astro.xspec.XSwabs)([name]) | The XSPEC wabs model: photoelectric absorption, Wisconsin cross-sections. |
| [`XSwndabs`](index.html#sherpa.astro.xspec.XSwndabs)([name]) | The XSPEC wndabs model: photo-electric absorption, warm absorber. |
| [`XSxion`](index.html#sherpa.astro.xspec.XSxion)([name]) | The XSPEC xion model: reflected spectrum of photo-ionized accretion disk/ring. |
| [`XSxscat`](index.html#sherpa.astro.xspec.XSxscat)([name]) | The XSPEC xscat model: dust scattering. |
| [`XSzTBabs`](index.html#sherpa.astro.xspec.XSzTBabs)([name]) | The XSPEC zTBabs model: ISM grain absorption. |
| [`XSzbabs`](index.html#sherpa.astro.xspec.XSzbabs)([name]) | The XSPEC zbabs model: EUV ISM attenuation. |
| [`XSzdust`](index.html#sherpa.astro.xspec.XSzdust)([name]) | The XSPEC zdust model: extinction by dust grains. |
| [`XSzedge`](index.html#sherpa.astro.xspec.XSzedge)([name]) | The XSPEC zedge model: absorption edge. |
| [`XSzhighect`](index.html#sherpa.astro.xspec.XSzhighect)([name]) | The XSPEC zhighect model: high-energy cutoff. |
| [`XSzigm`](index.html#sherpa.astro.xspec.XSzigm)([name]) | The XSPEC zigm model: UV/Optical attenuation by the intergalactic medium. |
| [`XSzpcfabs`](index.html#sherpa.astro.xspec.XSzpcfabs)([name]) | The XSPEC zpcfabs model: partial covering fraction absorption. |
| [`XSzphabs`](index.html#sherpa.astro.xspec.XSzphabs)([name]) | The XSPEC zphabs model: photoelectric absorption. |
| [`XSzredden`](index.html#sherpa.astro.xspec.XSzredden)([name]) | The XSPEC zredden model: redshifted version of redden. |
| [`XSzsmdust`](index.html#sherpa.astro.xspec.XSzsmdust)([name]) | The XSPEC zsmdust model: extinction by dust grains in starburst galaxies. |
| [`XSzvarabs`](index.html#sherpa.astro.xspec.XSzvarabs)([name]) | The XSPEC zvarabs model: photoelectric absorption. |
| [`XSzvfeabs`](index.html#sherpa.astro.xspec.XSzvfeabs)([name]) | The XSPEC zvfeabs model: photoelectric absorption with free Fe edge energy. |
| [`XSzvphabs`](index.html#sherpa.astro.xspec.XSzvphabs)([name]) | The XSPEC zvphabs model: photoelectric absorption. |
| [`XSzwabs`](index.html#sherpa.astro.xspec.XSzwabs)([name]) | The XSPEC zwabs model: photoelectric absorption, Wisconsin cross-sections. |
| [`XSzwndabs`](index.html#sherpa.astro.xspec.XSzwndabs)([name]) | The XSPEC zwndabs model: photo-electric absorption, warm absorber. |
| [`XSzxipcf`](index.html#sherpa.astro.xspec.XSzxipcf)([name]) | The XSPEC zxipcf model: partial covering absorption by partially ionized material. |
#### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of XSagauss, XSapec, XSbapec, XSbbody, XSbbodyrad, XSbexrav, XSbexriv, XSbkn2pow, XSbknpower, XSbmc, XSbremss, XSbtapec, XSbvapec, XSbvtapec, XSbvvapec, XSbvvtapec, XSc6mekl, XSc6pmekl, XSc6pvmkl, XSc6vmekl, XScarbatm, XScemekl, XScevmkl, XScflow, XScompLS, XScompPS, XScompST, XScompTT, XScompbb, XScompmag, XScomptb, XScompth, XScplinear, XScutoffpl, XSdisk, XSdiskbb, XSdiskir, XSdiskline, XSdiskm, XSdisko, XSdiskpbb, XSdiskpn, XSeplogpar, XSeqpair, XSeqtherm, XSequil, XSexpdec, XSezdiskbb, XSgadem, XSgaussian, XSgnei, XSgrad, XSgrbm, XShatm, XSkerrbb, XSkerrd, XSkerrdisk, XSlaor, XSlaor2, XSlogpar, XSlorentz, XSmeka, XSmekal, XSmkcflow, XSnei, XSnlapec, XSnpshock, XSnsa, XSnsagrav, XSnsatmos, XSnsmax, XSnsmaxg, XSnsx, XSnteea, XSnthComp, XSoptxagn, XSoptxagnf, XSpegpwrlw, XSpexmon, XSpexrav, XSpexriv, XSplcabs, XSposm, XSpowerlaw, XSpshock, XSraymond, XSredge, XSrefsch, XSrnei, XSsedov, XSsirf, XSslimbh, XSsnapec, XSsrcut, XSsresc, XSstep, XStapec, XSvapec, XSvbremss, XSvequil, XSvgadem, XSvgnei, XSvmcflow, XSvmeka, XSvmekal, XSvnei, XSvnpshock, XSvoigt, XSvpshock, XSvraymond, XSvrnei, XSvsedov, XSvtapec, XSvvapec, XSvvgnei, XSvvnei, XSvvnpshock, XSvvpshock, XSvvrnei, XSvvsedov, XSvvtapec, XSzagauss, XSzbbody, XSzbremss, XSzgauss, XSzpowerlw
Inheritance diagram of XSSSS_ice, XSTBabs, XSTBfeo, XSTBgas, XSTBgrain, XSTBpcf, XSTBrel, XSTBvarabs, XSabsori, XSacisabs, XScabs, XSconstant, XScyclabs, XSdust, XSedge, XSexpabs, XSexpfac, XSgabs, XSheilin, XShighecut, XShrefl, XSismabs, XSlyman, XSnotch, XSpcfabs, XSphabs, XSplabs, XSpwab, XSredden, XSsmedge, XSspexpcut, XSspline, XSswind1, XSuvred, XSvarabs, XSvphabs, XSwabs, XSwndabs, XSxion, XSxscat, XSzTBabs, XSzbabs, XSzdust, XSzedge, XSzhighect, XSzigm, XSzpcfabs, XSzphabs, XSzredden, XSzsmdust, XSzvarabs, XSzvfeabs, XSzvphabs, XSzwabs, XSzwndabs, XSzxipcf
### Reference/API[¶](#reference-api)
This section describes the classes used to create models and the [Available Models](#available-models) section above contains the classes that implement various models.
#### The sherpa.models.model module[¶](#module-sherpa.models.model)
Classes
| [`Model`](index.html#sherpa.models.model.Model)(name[, pars]) | The base class for Sherpa models. |
| [`ArithmeticConstantModel`](index.html#sherpa.models.model.ArithmeticConstantModel)(val[, name]) | |
| [`ArithmeticFunctionModel`](index.html#sherpa.models.model.ArithmeticFunctionModel)(func) | |
| [`ArithmeticModel`](index.html#sherpa.models.model.ArithmeticModel)(name[, pars]) | |
| [`CompositeModel`](index.html#sherpa.models.model.CompositeModel)(name, parts) | |
| [`BinaryOpModel`](index.html#sherpa.models.model.BinaryOpModel)(lhs, rhs, op, opstr) | |
| [`FilterModel`](index.html#sherpa.models.model.FilterModel)(model, filter) | |
| [`MultigridSumModel`](index.html#sherpa.models.model.MultigridSumModel)(models) | |
| [`NestedModel`](index.html#sherpa.models.model.NestedModel)(outer, inner, *otherargs, …) | |
| [`RegriddableModel1D`](index.html#sherpa.models.model.RegriddableModel1D)(name[, pars]) | |
| [`RegriddableModel2D`](index.html#sherpa.models.model.RegriddableModel2D)(name[, pars]) | |
| [`SimulFitModel`](index.html#sherpa.models.model.SimulFitModel)(name, parts) | Store multiple models. |
| [`UnaryOpModel`](index.html#sherpa.models.model.UnaryOpModel)(arg, op, opstr) | |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of Model, ArithmeticConstantModel, ArithmeticFunctionModel, ArithmeticModel, CompositeModel, BinaryOpModel, FilterModel, MultigridSumModel, NestedModel, RegriddableModel1D, RegriddableModel2D, SimulFitModel, UnaryOpModel
#### The sherpa.models.parameter module[¶](#module-sherpa.models.parameter)
Support for model parameter values.
Classes
| [`Parameter`](index.html#sherpa.models.parameter.Parameter)(modelname, name, val[, min, max, …]) | Represent a model parameter. |
| [`CompositeParameter`](index.html#sherpa.models.parameter.CompositeParameter)(name, parts) | |
| [`BinaryOpParameter`](index.html#sherpa.models.parameter.BinaryOpParameter)(lhs, rhs, op, opstr) | |
| [`ConstantParameter`](index.html#sherpa.models.parameter.ConstantParameter)(value) | |
| [`UnaryOpParameter`](index.html#sherpa.models.parameter.UnaryOpParameter)(arg, op, opstr) | |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of Parameter, CompositeParameter, BinaryOpParameter, ConstantParameter, UnaryOpParameter
#### The sherpa.models.regrid module[¶](#module-sherpa.models.regrid)
Classes
| [`Axis`](index.html#sherpa.models.regrid.Axis)(lo, hi) | Class for representing N-D axes objects, for both “integrated” and “non-integrated” datasets |
| [`EvaluationSpace1D`](index.html#sherpa.models.regrid.EvaluationSpace1D)([x, xhi]) | Class for 1D Evaluation Spaces. |
| [`EvaluationSpace2D`](index.html#sherpa.models.regrid.EvaluationSpace2D)([x, y, xhi, yhi]) | Class for 2D Evaluation Spaces. |
| [`ModelDomainRegridder1D`](index.html#sherpa.models.regrid.ModelDomainRegridder1D)([evaluation_space, name]) | Allow 1D models to be evaluated on a different grid. |
| [`ModelDomainRegridder2D`](index.html#sherpa.models.regrid.ModelDomainRegridder2D)([evaluation_space, name]) | Allow 2D models to be evaluated on a different grid. |
Functions
| [`rebin_2d`](index.html#sherpa.models.regrid.rebin_2d)(y, from_space, to_space) | |
| [`rebin_int`](index.html#sherpa.models.regrid.rebin_int)(array, scale_x, scale_y) | Rebin array by an integer scale on both x and y |
| [`rebin_no_int`](index.html#sherpa.models.regrid.rebin_no_int)(array[, dimensions, scale]) | Rebin the array, conserving flux. |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of Axis, EvaluationSpace1D, EvaluationSpace2D, ModelDomainRegridder1D, ModelDomainRegridder2D
#### The sherpa.instrument module[¶](#module-sherpa.instrument)
Classes
| [`ConvolutionKernel`](index.html#sherpa.instrument.ConvolutionKernel)(kernel[, name]) | |
| [`ConvolutionModel`](index.html#sherpa.instrument.ConvolutionModel)(lhs, rhs, psf) | |
| [`Kernel`](index.html#sherpa.instrument.Kernel)(dshape, kshape[, norm, frozen, …]) | Base class for convolution kernels |
| [`PSFKernel`](index.html#sherpa.instrument.PSFKernel)(dshape, kshape[, is_model, norm, …]) | class for PSF convolution kernels |
| [`PSFModel`](index.html#sherpa.instrument.PSFModel)([name, kernel]) | |
| [`RadialProfileKernel`](index.html#sherpa.instrument.RadialProfileKernel)(dshape, kshape[, …]) | class for 1D radial profile PSF convolution kernels |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of ConvolutionKernel, ConvolutionModel, PSFModel, Kernel, PSFKernel, RadialProfileKernel
#### The sherpa.models.template module[¶](#module-sherpa.models.template)
Classes
| [`TemplateModel`](index.html#sherpa.models.template.TemplateModel)([name, pars, parvals, templates]) | |
| [`InterpolatingTemplateModel`](index.html#sherpa.models.template.InterpolatingTemplateModel)(name, template_model) | |
| [`KNNInterpolator`](index.html#sherpa.models.template.KNNInterpolator)(name, template_model[, k, order]) | |
| [`Template`](index.html#sherpa.models.template.Template)(*args, **kwargs) | |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of TemplateModel, InterpolatingTemplateModel, KNNInterpolator, Template
#### The sherpa.astro.instrument module[¶](#module-sherpa.astro.instrument)
Models of common astronomical instruments, particularly in X-rays.
The models in this module include support for instrument models that describe how X-ray photons are converted to measurable properties,
such as Pulse-Height Amplitudes (PHA) or Pulse-Invariant channels.
These ‘responses’ are assumed to follow OGIP standards, such as
[[1]](#id2).
References
| [[1]](#id1) | OGIP Calibration Memo CAL/GEN/92-002, “The Calibration Requirements for Spectral Analysis (Definition of RMF and ARF file formats)”,
<NAME>, <NAME>, <NAME>, <NAME> and <NAME>,
<https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_002/cal_gen_92_002.html> |
Classes
| [`ARF1D`](index.html#sherpa.astro.instrument.ARF1D)(arf[, pha, rmf]) | |
| [`PileupResponse1D`](index.html#sherpa.astro.instrument.PileupResponse1D)(pha, pileup_model) | |
| [`RMF1D`](index.html#sherpa.astro.instrument.RMF1D)(rmf[, pha, arf]) | |
| [`Response1D`](index.html#sherpa.astro.instrument.Response1D)(pha) | |
| [`MultipleResponse1D`](index.html#sherpa.astro.instrument.MultipleResponse1D)(pha) | |
| [`PSFModel`](index.html#sherpa.astro.instrument.PSFModel)([name, kernel]) | |
| [`ARFModel`](index.html#sherpa.astro.instrument.ARFModel)(arf, model) | Base class for expressing ARF convolution in model expressions. |
| [`ARFModelNoPHA`](index.html#sherpa.astro.instrument.ARFModelNoPHA)(arf, model) | ARF convolution model without associated PHA data set. |
| [`ARFModelPHA`](index.html#sherpa.astro.instrument.ARFModelPHA)(arf, pha, model) | ARF convolution model with associated PHA data set. |
| [`RMFModel`](index.html#sherpa.astro.instrument.RMFModel)(rmf, model) | Base class for expressing RMF convolution in model expressions. |
| [`RMFModelNoPHA`](index.html#sherpa.astro.instrument.RMFModelNoPHA)(rmf, model) | RMF convolution model without an associated PHA data set. |
| [`RMFModelPHA`](index.html#sherpa.astro.instrument.RMFModelPHA)(rmf, pha, model) | RMF convolution model with associated PHA data set. |
| [`RSPModel`](index.html#sherpa.astro.instrument.RSPModel)(arf, rmf, model) | Base class for expressing RMF + ARF convolution in model expressions |
| [`RSPModelNoPHA`](index.html#sherpa.astro.instrument.RSPModelNoPHA)(arf, rmf, model) | RMF + ARF convolution model without associated PHA data set. |
| [`RSPModelPHA`](index.html#sherpa.astro.instrument.RSPModelPHA)(arf, rmf, pha, model) | RMF + ARF convolution model with associated PHA. |
| [`MultiResponseSumModel`](index.html#sherpa.astro.instrument.MultiResponseSumModel)(source, pha) | |
| [`PileupRMFModel`](index.html#sherpa.astro.instrument.PileupRMFModel)(rmf, model[, pha]) | |
Functions
| [`create_arf`](index.html#sherpa.astro.instrument.create_arf)(elo, ehi[, specresp, exposure, …]) | Create an ARF. |
| [`create_delta_rmf`](index.html#sherpa.astro.instrument.create_delta_rmf)(rmflo, rmfhi[, offset, …]) | Create an ideal RMF. |
| [`create_non_delta_rmf`](index.html#sherpa.astro.instrument.create_non_delta_rmf)(rmflo, rmfhi, fname[, …]) | Create a RMF using a matrix from a file. |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of ARF1D, PileupResponse1D, RMF1D, Response1D, MultipleResponse1D, PSFModel, ARFModel, ARFModelNoPHA, ARFModelPHA, RMFModel, RMFModelNoPHA, RMFModelPHA, RSPModel, RSPModelNoPHA, RSPModelPHA, MultiResponseSumModel, PileupRMFModel
#### The sherpa.astro.xspec module[¶](#module-sherpa.astro.xspec)
Support for XSPEC models.
Sherpa supports versions 12.10.1, 12.10.0, 12.9.1, and 12.9.0 of XSPEC [[1]](#id2),
and can be built against the model library or the full application. There is no guarantee of support for older or newer versions of XSPEC.
To be able to use most routines from this module, the HEADAS environment variable must be set. The get_xsversion function can be used to return the XSPEC version - including patch level - the module is using:
```
>>> from sherpa.astro import xspec
>>> xspec.get_xsversion()
'12.10.1b'
```
##### Initializing XSPEC[¶](#initializing-xspec)
The XSPEC model library is initialized so that the cosmology parameters are set to H_0=70, q_0=0.0, and lambda_0=0.73 (they can be changed with set_xscosmo).
The other settings - for example for the abundance and cross-section tables - follow the standard rules for XSPEC. For XSPEC versions prior to 12.10.1, this means that the abundance table uses the `angr`
setting and the cross sections the `bcmc` setting (see set_xsabund and set_xsxsect for full details). As of XSPEC 12.10.1, the values are now taken from the user’s XSPEC configuration file - either
`~/.xspec/Xspec.init` or `$HEADAS/../spectral/manager/Xspec.init` -
for these settings. The default value for the photo-ionization table in this case is now `vern` rather than `bcmc`.
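As a quick illustration of these routines, the current settings can be queried and changed as shown below. This is a minimal sketch: the returned values depend on the XSPEC version and your configuration, so the output here is indicative only, and the set routines may also print an informational message from the XSPEC library.

```
>>> from sherpa.astro import xspec
>>> xspec.get_xsabund()
'angr'
>>> xspec.get_xsxsect()
'bcmc'
>>> xspec.get_xscosmo()
(70.0, 0.0, 0.73)
>>> xspec.set_xsabund('wilm')
```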
References
| [[1]](#id1) | <https://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/index.html> |
This document describes the base classes for XSPEC models, and the utility routines - such as querying and retrieving the abundance table information. The models provided by XSPEC are described in [The sherpa.astro.xspec module](index.html#document-model_classes/astro_xspec).
> Classes
> | [`XSModel`](index.html#sherpa.astro.xspec.XSModel)(name[, pars]) | The base class for XSPEC models. |
> | [`XSAdditiveModel`](index.html#sherpa.astro.xspec.XSAdditiveModel)(name[, pars]) | The base class for XSPEC additive models. |
> | [`XSMultiplicativeModel`](index.html#sherpa.astro.xspec.XSMultiplicativeModel)(name[, pars]) | The base class for XSPEC multiplicative models. |
> | [`XSTableModel`](index.html#sherpa.astro.xspec.XSTableModel)(filename[, name, parnames, …]) | Interface to XSPEC table models. |
> Functions
> | [`get_xsabund`](index.html#sherpa.astro.xspec.get_xsabund)([element]) | Return the X-Spec abundance setting or elemental abundance. |
> | [`get_xschatter`](index.html#sherpa.astro.xspec.get_xschatter)() | Return the chatter level used by X-Spec. |
> | [`get_xscosmo`](index.html#sherpa.astro.xspec.get_xscosmo)() | Return the X-Spec cosmology settings. |
> | [`get_xspath_manager`](index.html#sherpa.astro.xspec.get_xspath_manager)() | Return the path to the files describing the XSPEC models. |
> | [`get_xspath_model`](index.html#sherpa.astro.xspec.get_xspath_model)() | Return the path to the model data files. |
> | [`get_xsstate`](index.html#sherpa.astro.xspec.get_xsstate)() | Return the state of the XSPEC module. |
> | [`get_xsversion`](index.html#sherpa.astro.xspec.get_xsversion)() | Return the version of the X-Spec model library in use. |
> | [`get_xsxsect`](index.html#sherpa.astro.xspec.get_xsxsect)() | Return the cross sections used by X-Spec models. |
> | [`get_xsxset`](index.html#sherpa.astro.xspec.get_xsxset)(name) | Return the X-Spec model setting. |
> | [`read_xstable_model`](index.html#sherpa.astro.xspec.read_xstable_model)(modelname, filename) | Create a XSPEC table model. |
> | [`set_xsabund`](index.html#sherpa.astro.xspec.set_xsabund)(abundance) | Set the elemental abundances used by X-Spec models. |
> | [`set_xschatter`](index.html#sherpa.astro.xspec.set_xschatter)(level) | Set the chatter level used by X-Spec. |
> | [`set_xscosmo`](index.html#sherpa.astro.xspec.set_xscosmo)(h0, q0, l0) | Set the cosmological parameters used by X-Spec models. |
> | [`set_xspath_manager`](index.html#sherpa.astro.xspec.set_xspath_manager)(path) | Set the path to the files describing the XSPEC models. |
> | [`set_xsstate`](index.html#sherpa.astro.xspec.set_xsstate)(state) | Restore the state of the XSPEC module. |
> | [`set_xsxsect`](index.html#sherpa.astro.xspec.set_xsxsect)(name) | Set the cross sections used by X-Spec models. |
> | [`set_xsxset`](index.html#sherpa.astro.xspec.set_xsxset)(name, value) | Set a X-Spec model setting. |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of XSModel, XSAdditiveModel, XSMultiplicativeModel, XSTableModel
What statistic is to be used?[¶](#what-statistic-is-to-be-used)
---
The statistic object defines the numerical quantity which describes the “quality” of the fit; by convention, the smaller the statistic value, the better the model describes the data.
It is the statistic value that the
[optimiser](index.html#document-optimisers/index)
uses to determine the “best-fit” parameter settings.
A simple example is the least-squares statistic: if the data and model points are \(d_i\) and \(m_i\) respectively,
where the suffix indicates the bin number, then the overall statistic value is \(s = \sum_i (d_i - m_i)^2\). This is provided by the [`LeastSq`](index.html#sherpa.stats.LeastSq) class, and an example of its use is shown below. First we import the classes we need:
```
>>> import numpy as np
>>> from sherpa.data import Data1D
>>> from sherpa.models.basic import Gauss1D
>>> from sherpa.stats import LeastSq
```
As the data I use a one-dimensional gaussian with normally-distributed noise:
```
>>> np.random.seed(0)
>>> x = np.linspace(-5., 5., 200)
>>> gmdl = Gauss1D()
>>> gmdl.fwhm = 1.9
>>> gmdl.pos = 1.3
>>> gmdl.ampl = 3
>>> y = gmdl(x) + np.random.normal(0., 0.2, x.shape)
>>> d = Data1D('stat-example', x, y)
```
The statistic value, along with per-bin values, is returned by the [`calc_stat()`](index.html#sherpa.stats.Stat.calc_stat) method, so we can find the statistic value of this parameter set with:
```
>>> stat = LeastSq()
>>> s = stat.calc_stat(d, gmdl)
>>> print("Statistic value = {}".format(s[0]))
Statistic value = 8.38666216358492
```
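The per-bin values mentioned above can also be inspected; a minimal sketch, assuming they are returned as the second element of the tuple:

```
>>> pervals = s[1]
>>> len(pervals)
200
>>> print("Sum of per-bin values = {:.5f}".format(sum(pervals)))
Sum of per-bin values = 8.38666
```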
As the FWHM is varied about the true value we can see that the statistic increases:
```
>>> import matplotlib.pyplot as plt
>>> fwhm = np.arange(0.5, 3.0, 0.2)
>>> sval = []
>>> for f in fwhm:
... gmdl.fwhm = f
... sval.append(stat.calc_stat(d, gmdl)[0])
...
>>> plt.plot(fwhm, sval)
>>> plt.xlabel('FWHM')
>>> plt.ylabel('Statistic')
```
The statistic classes provided in Sherpa are given below, and cover a range of possibilities, such as: least-squares for when there is no knowledge about the errors on each point, a variety of chi-square statistics for when the errors are assumed to be gaussian, and a variety of maximum-likelihood estimators for Poisson-distributed data. It is also possible to add your own statistic class.
### Reference/API[¶](#reference-api)
#### The sherpa.stats module[¶](#module-sherpa.stats)
Classes
| [`Stat`](index.html#sherpa.stats.Stat)(name) | The base class for calculating a statistic given data and model. |
| [`Chi2`](index.html#sherpa.stats.Chi2)([name]) | Chi Squared statistic. |
| [`LeastSq`](index.html#sherpa.stats.LeastSq)([name]) | Least Squared Statistic. |
| [`Chi2ConstVar`](index.html#sherpa.stats.Chi2ConstVar)([name]) | Chi Squared with constant variance. |
| [`Chi2DataVar`](index.html#sherpa.stats.Chi2DataVar)([name]) | Chi Squared with data variance. |
| [`Chi2Gehrels`](index.html#sherpa.stats.Chi2Gehrels)([name]) | Chi Squared with Gehrels variance. |
| [`Chi2ModVar`](index.html#sherpa.stats.Chi2ModVar)([name]) | Chi Squared with model amplitude variance. |
| [`Chi2XspecVar`](index.html#sherpa.stats.Chi2XspecVar)([name]) | Chi Squared with data variance (XSPEC style). |
| [`Likelihood`](index.html#sherpa.stats.Likelihood)([name]) | Maximum likelihood function |
| [`Cash`](index.html#sherpa.stats.Cash)([name]) | Maximum likelihood function. |
| [`CStat`](index.html#sherpa.stats.CStat)([name]) | Maximum likelihood function (XSPEC style). |
| [`WStat`](index.html#sherpa.stats.WStat)([name]) | Maximum likelihood function including background (XSPEC style). |
| [`UserStat`](index.html#sherpa.stats.UserStat)([statfunc, errfunc, name]) | Support simple user-supplied statistic calculations. |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of Stat, Chi2, LeastSq, Chi2ConstVar, Chi2DataVar, Chi2Gehrels, Chi2ModVar, Chi2XspecVar, Likelihood, Cash, CStat, WStat, UserStat
Optimisers: How to improve the current parameter values[¶](#optimisers-how-to-improve-the-current-parameter-values)
---
The optimiser varies the model parameters in an attempt to find the solution which minimises the chosen
[statistic](index.html#document-statistics/index).
In general it is expected that the optimiser will be used by a [`Fit`](index.html#sherpa.fit.Fit) object to
[perform the fit](index.html#document-fit/index), but it can be used directly using the
[`fit()`](index.html#sherpa.optmethods.OptMethod.fit) method. The optimiser object allows configuration values to be changed which can tweak the behavior; for instance the tolerance to determine whether the fit has converged, the maximum number of iterations to use,
or how much information to display whilst optimising a model.
As an example, the default parameter values for the
[`Levenberg-Marquardt`](index.html#sherpa.optmethods.LevMar)
optimiser are:
```
>>> from sherpa.optmethods import LevMar
>>> lm = LevMar()
>>> print(lm)
name    = levmar
ftol    = 1.19209289551e-07
xtol    = 1.19209289551e-07
gtol    = 1.19209289551e-07
maxfev  = None
epsfcn  = 1.19209289551e-07
factor  = 100.0
verbose = 0
```
These settings are available both as fields of the object and via the `config` dictionary field.
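For example, a change made through an attribute is also visible through the `config` dictionary; a minimal sketch of the idea:

```
>>> lm.ftol == lm.config['ftol']
True
>>> lm.maxfev = 10000
>>> lm.config['maxfev']
10000
```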
Additional optimisers can be built by extending from the
[`sherpa.optmethods.OptMethod`](index.html#sherpa.optmethods.OptMethod) class. This can be used to provide access to external packages such as
[CERN’s MINUIT optimisation library](http://iminuit.readthedocs.io).
### Choosing an optimiser[¶](#choosing-an-optimiser)
Warning
The following may not correctly represent Sherpa’s current capabilities,
so please take care when interpreting this section.
The following information is adapted from a memo written by <NAME> (1998).
The minimization of mathematical functions is a difficult operation. A general function \(f({\bf x})\) of the vector argument \(\bf x\) may have many isolated local minima, non-isolated minimum hypersurfaces, or even more complicated topologies. No finite minimization routine can guarantee to locate the unique, global, minimum of \(f({\bf x})\)
without being fed intimate knowledge about the function by the user.
This does not mean that minimization is a hopeless task. For many problems there are techniques which will locate a local minimum which may be “close enough” to the global minimum, and there are techniques which will find the global minimum a large fraction of the time (in a probabilistic sense). However, the reader should be aware that my philosophy is that there is no “best” algorithm for finding the minimum of a general function. Instead,
Sherpa provides tools which will allow the user to look at the overall behavior of the function and find plausible local minima, which will often contain the physically-meaningful minimum in the types of problem with which Sherpa deals.
In general, the best assurance that the correct minimum has been found in a particular calculation is careful examination of the nature of the solution
(e.g., by plotting a fitted function over data), and some confidence that the full region that the minimum may lie in has been well searched by the algorithm used. This document seeks to give the reader some information about what the different choices of algorithm will mean in terms of run-time and confidence of locating a good minimum.
Some points to take away from the discussions in the rest of this document.
> 1. Never accept the result of a minimization using a single optimization run;
> always test the minimum using a different method.
> 2. Check that the result of the minimization does not have parameter values
> at the edges of the parameter space. If this happens, then the fit must be
> disregarded since the minimum lies outside the space that has been
> searched, or the minimization missed the minimum.
> 3. Get a feel for the range of values of the target function (in Sherpa this
> is the fit statistic), and the stability of the solution, by starting the
> minimization from several different parameter values.
> 4. Always check that the minimum “looks right” by visualizing the
> model and the data.
Sherpa contains two types of routine for minimizing a fit statistic. I will call them the “single-shot” routines, which start from a guessed set of parameters, and then try to improve the parameters in a continuous fashion, and the “scatter-shot” routines, which try to look at parameters over the entire permitted hypervolume to see if there are better minima than near the starting guessed set of parameters.
#### Single-shot techniques[¶](#single-shot-techniques)
As the reader might expect, the single-shot routines are relatively quick, but depend critically on the guessed initial parameter values \({\bf x}_0\)
being near (in some sense) to the minimum \({\bf x}_{\rm min}\). All the single-shot routines investigate the local behaviour of the function near
\({\bf x}_0\), and then make a guess at the best direction and distance to move to find a better minimum. After testing at the new point, they accept that point as the next guess, \({\bf x}_1\), if the fit statistic is smaller than at the first point, and modify the search procedure if it isn’t smaller. The routines continue to run until one of the following occurs:
> 1. all search directions result in an increased value of the fit statistic;
> 2. an excessive number of steps have been taken; or
> 3. something strange happens to the fit statistic (e.g., it turns out to be
> discontinuous in some horrible way).
This description indicates that for the single-shot routines, there is a considerable emphasis on the initial search position, \({\bf x}_0\), being reasonable. It may also be apparent that the values of these parameters should be moderate; neither too small (\(10^{-12}\), say), nor too large
(\(10^{12}\), say). This is because the initial choice of step size in moving from \({\bf x}_0\) towards the next improved set of parameters,
\({\bf x}_1\), is based on the change in the fit statistic, \(f({\bf x})\) as components of \({\bf x}\) are varied by amounts \({\cal O}(1)\). If \(f\) varies little as \({\bf x}\) is varied by this amount, then the calculation of the distance to move to reach the next root may be inaccurate. On the other hand, if \(f\) has a lot of structure (several maxima and minima) as \({\bf x}\) is varied by the initial step size, then these single-shot minimizers may mistakenly jump entirely over the
“interesting” region of parameter space.
These considerations suggest that the user should arrange that the search vector is scaled so that the range of parameter space to be searched is neither too large nor too small. To take a concrete example, it would not be a good idea to have \(x_7\) parameterize the Hydrogen column density
(\(N_{\rm H}\)) in a spectral fit, with an initial guess of
\(10^{20}\ {\rm cm}^{-2}\), and a search range
(in units of \({\rm cm}^{-2}\)) of
\(10^{16}\) to \(10^{24}\). The minimizers will look for variations in the fit statistic as \(N_{\rm H}\) is varied by
\(1\ {\rm cm}^{-2}\), and finding none (to the rounding accuracy likely for the code), will conclude that
\(x_7\) is close to being a null parameter and can be ignored in the fitting. It would be much better to have \(x_7 = \log_{10}(N_{\rm H})\),
with a search range of 16 to 24. Significant variations in the fit statistic will occur as \(x_7\) is varied by \(\pm 1\), and the code has a reasonable chance of finding a useful solution.
Bearing this in mind, the single-shot minimizers in Sherpa are listed below:
[`NelderMead`](index.html#sherpa.optmethods.NelderMead)
This technique - also known as Simplex - creates a polyhedral search element around the initial position, \({\bf x}_0\), and then grows or shrinks in particular directions while crawling around parameter space, to try to place a minimum within the final search polyhedron. This technique has some hilarious ways of getting stuck in high-dimension parameter spaces (where the polyhedron can become a strange shape), but is very good at finding minima in regions where the fit statistic has a moderately well-defined topology. Since it works in a different way than Levenberg-Marquardt minimization, a good strategy is to combine both minimization to test whether an apparent minimum found by one technique is stable when searched by the other. I regard NelderMead searching as good in smooth and simple parameter spaces,
particularly when looking at regions where the fit statistic depends on a parameter in a linear or parabolic fashion, and bad where surfaces of equal value of the fit statistic are complicated. In either case, it is essential that the initial size of the polyhedron (with sides of length 1 unit) is a smallish fraction of the search space.
[`Levenberg-Marquardt`](index.html#sherpa.optmethods.LevMar)
This can be considered to be a censored maximum-gradients technique which,
starting from a first guess, moves towards a minimum by finding a good direction in which to move, and calculating a sensible distance to go.
Its principal drawback is that to calculate the distance to move it has to make some assumptions about how large a step size to take, and hence there is an implicit assumption that the search space is reasonably well scaled (to
\(\pm 10\) units in each of the search directions, say). It is also important that in finding these gradients, the steps do not miss a lot of important structure; i.e. there should not be too many subsidiary minima.
The search directions and distances to move are based on the shape of the target function near the initial guessed minimum, \({\bf x}_0\),
with progressive movement towards the dominant local minimum. Since this technique uses information about the local curvature of the fit statistic as well as its local gradients, the approach tends to stabilize the result in some cases. I regard the techniques implemented in Sherpa as being good minimum-refiners for simple local topologies, since more assumptions about topology are made than in the NelderMead approach, but bad at finding global minima for target functions with complicated topologies.
#### Scatter-shot techniques[¶](#scatter-shot-techniques)
Although a bit ad hoc, these techniques attempt to locate a decent minimum over the entire range of the search parameter space. Because they involve searching a lot of the parameter space, they involve many function evaluations, and are somewhere between quite slow and incredibly-tediously slow.
The routines are listed below:
[`GridSearch`](index.html#sherpa.optmethods.GridSearch)
This routine simply searches a grid in each of the search parameters,
where the spacing is uniform between the minimum and maximum value of each parameter. There is an option to refine the fit at each point, by setting the
[`method`](index.html#sherpa.optmethods.GridSearch.method) attribute to one of the single-shot optimisers, but this is not set by default, as it can significantly increase the time required to fit the data.
The coarseness of the grid sets how precise a root will be found,
and if the fit statistic has significant structure on a smaller scale, then the grid-searcher will miss it completely. This is a good technique for finding an approximation to the minimum for a slowly-varying function. It is a bad technique for getting accurate estimates of the location of a minimum, or for examining a fit statistic with lots of subsidiary maxima and minima within the search space. It is intended for use with
[`template models`](index.html#sherpa.models.template.TemplateModel).
[`Monte Carlo`](index.html#sherpa.optmethods.MonCar)
This is a simple population based, stochastic function minimizer. At each iteration it combines population vectors - each containing a set of parameter values - using a weighted difference. This optimiser can be used to find solutions to complex search spaces but is not guaranteed to find a global minimum. It is over-kill for relatively simple problems.
### Summary and best-buy strategies[¶](#summary-and-best-buy-strategies)
Overall, the single-shot methods are best regarded as ways of refining minima located in other ways: from good starting guesses, or from the scatter-shot techniques. Using intelligence to come up with a good first-guess solution is the best approach, when the single-shot refiners can be used to get accurate values for the parameters at the minimum. However, I would certainly recommend running at least a second single-shot minimizer after the first, to get some indication that one set of assumptions about the shape of the minimum is not compromising the solution. It is probably best if the code rescales the parameter range between minimizations, so that a completely different sampling of the function near the trial minimum is being made.
| Optimiser | Type | Speed | Commentary |
| --- | --- | --- | --- |
| NelderMead | single-shot | fast | OK for refining minima |
| Levenberg-Marquardt | single-shot | fast | OK for refining minima |
| GridSearch | scatter-shot | slow | OK for smooth functions |
| Monte Carlo | scatter-shot | very slow | Good in many cases |
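As an illustration of the strategy described above, the following sketch runs a second, different, single-shot optimiser after the first to check that the minimum is stable. It assumes a data object `d` and a model `mdl` have already been set up, as in the fitting examples later in this document:

```
>>> from sherpa.fit import Fit
>>> from sherpa.optmethods import LevMar, NelderMead
>>> fit = Fit(d, mdl, method=LevMar())
>>> res_first = fit.fit()
>>> fit.method = NelderMead()
>>> res_second = fit.fit()
>>> print(res_first.statval, res_second.statval)  # similar values suggest a stable minimum
```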
### Reference/API[¶](#reference-api)
#### The sherpa.optmethods module[¶](#module-sherpa.optmethods)
Classes
| [`OptMethod`](index.html#sherpa.optmethods.OptMethod)(name, optfunc) | Base class for the optimisers. |
| [`LevMar`](index.html#sherpa.optmethods.LevMar)([name]) | Levenberg-Marquardt optimization method. |
| [`NelderMead`](index.html#sherpa.optmethods.NelderMead)([name]) | Nelder-Mead Simplex optimization method. |
| [`MonCar`](index.html#sherpa.optmethods.MonCar)([name]) | Monte Carlo optimization method. |
| [`GridSearch`](index.html#sherpa.optmethods.GridSearch)([name]) | Grid Search optimization method. |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of OptMethod, LevMar, NelderMead, MonCar, GridSearch
Fitting the data[¶](#fitting-the-data)
---
The [`Fit`](index.html#sherpa.fit.Fit) class takes the
[data](index.html#document-data/index) and [model expression](index.html#document-models/index)
to be fit, and uses the
[optimiser](index.html#document-optimisers/index) to minimise the
[chosen statistic](index.html#document-statistics/index). The basic approach is to:
> * create a [`Fit`](index.html#sherpa.fit.Fit) object;
> * call its [`fit()`](index.html#sherpa.fit.Fit.fit) method one or more times,
> potentially varying the `method` attribute to change the
> optimiser;
> * inspect the [`FitResults`](index.html#sherpa.fit.FitResults) object returned
> by `fit()` to extract information about the fit;
> * review the fit quality, perhaps re-fitting with a different set
> of parameters or using a different optimiser;
> * once the “best-fit” solution has been found, calculate error estimates by
> calling the
> [`est_errors()`](index.html#sherpa.fit.Fit.est_errors) method, which returns
> a [`ErrorEstResults`](index.html#sherpa.fit.ErrorEstResults) object detailing
> the results;
> * visualize the parameter errors with classes such as
> [`IntervalProjection`](index.html#sherpa.plot.IntervalProjection) and
> [`RegionProjection`](index.html#sherpa.plot.RegionProjection).
The following discussion uses a one-dimensional data set with gaussian errors (it was
[simulated with gaussian noise](index.html#document-evaluation/simulate)
with \(\sigma = 5\)):
```
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from sherpa.data import Data1D
>>> d = Data1D('fit example', [-13, -5, -3, 2, 7, 12],
... [102.3, 16.7, -0.6, -6.7, -9.9, 33.2],
... np.ones(6) * 5)
```
It is going to be fit with the expression:
\[y = c_0 + c_1 x + c_2 x^2\]
which is represented by the [`Polynom1D`](index.html#sherpa.models.basic.Polynom1D)
model:
```
>>> from sherpa.models.basic import Polynom1D
>>> mdl = Polynom1D()
```
To start with, just the \(c_0\) and \(c_2\) terms are used in the fit:
```
>>> mdl.c2.thaw()
>>> print(mdl)
polynom1d
Param Type Value Min Max Units
--- --- --- --- --- ---
polynom1d.c0 thawed 1 -3.40282e+38 3.40282e+38
polynom1d.c1 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c2 thawed 0 -3.40282e+38 3.40282e+38
polynom1d.c3 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c4 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c5 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c6 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c7 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c8 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.offset frozen 0 -3.40282e+38 3.40282e+38
```
### Creating a fit object[¶](#creating-a-fit-object)
A [`Fit`](index.html#sherpa.fit.Fit) object requires both a
[data set](index.html#document-data/index) and a
[model](index.html#document-models/index) object, and will use the
[`Chi2Gehrels`](index.html#sherpa.stats.Chi2Gehrels)
[statistic](index.html#document-statistics/index) with the
[`LevMar`](index.html#sherpa.optmethods.LevMar)
[optimiser](index.html#document-optimisers/index)
unless explicitly over-riden with the `stat` and
`method` parameters (when creating the object) or the
`stat` and
`method` attributes
(once the object has been created).
```
>>> from sherpa.fit import Fit
>>> f = Fit(d, mdl)
>>> print(f)
data      = fit example
model     = polynom1d
stat      = Chi2Gehrels
method    = LevMar
estmethod = Covariance
>>> print(f.data)
name      = fit example
x         = [-13, -5, -3, 2, 7, 12]
y         = [102.3, 16.7, -0.6, -6.7, -9.9, 33.2]
staterror = Float64[6]
syserror  = None
>>> print(f.model)
polynom1d
Param Type Value Min Max Units
--- --- --- --- --- ---
polynom1d.c0 thawed 1 -3.40282e+38 3.40282e+38
polynom1d.c1 thawed 0 -3.40282e+38 3.40282e+38
polynom1d.c2 thawed 0 -3.40282e+38 3.40282e+38
polynom1d.c3 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c4 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c5 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c6 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c7 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c8 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.offset frozen 0 -3.40282e+38 3.40282e+38
```
The fit object stores references to objects, such as the model, which means that the fit object reflects the current state, and not just the values when it was created or used. For example, in the following the model is changed directly, and the value stored in the fit object is also changed:
```
>>> f.model.c2.val
0.0
>>> mdl.c2 = 1
>>> f.model.c2.val
1.0
```
### Using the optimiser and statistic[¶](#using-the-optimiser-and-statistic)
With a Fit object you can calculate the statistic value for the current data and model
([`calc_stat()`](index.html#sherpa.fit.Fit.calc_stat)),
summarise how well the current model represents the data ([`calc_stat_info()`](index.html#sherpa.fit.Fit.calc_stat_info)),
calculate the per-bin chi-squared value (for chi-square statistics; `calc_stat_chisqr()`),
fit the model to the data
([`fit()`](index.html#sherpa.fit.Fit.fit) and
[`simulfit()`](index.html#sherpa.fit.Fit.simulfit)), and
[calculate the parameter errors](#estimating-errors)
([`est_errors()`](index.html#sherpa.fit.Fit.est_errors)).
```
>>> print("Starting statistic: {:.3f}".format(f.calc_stat()))
Starting statistic: 840.251
>>> sinfo1 = f.calc_stat_info()
>>> print(sinfo1)
name      =
ids       = None
bkg_ids   = None
statname  = chi2
statval   = 840.2511999999999
numpoints = 6
dof       = 4
qval      = 1.4661616529226985e-180
rstat     = 210.06279999999998
```
The fields in the [`StatInfoResults`](index.html#sherpa.fit.StatInfoResults) depend on the choice of statistic; in particular,
[`rstat`](index.html#sherpa.fit.StatInfoResults.rstat) and
[`qval`](index.html#sherpa.fit.StatInfoResults.qval) are set to `None`
if the statistic is not based on chi-square. The current data set has a reduced statistic of \(\sim 200\) which indicates that the fit is not very good (if the error bars in the dataset are correct then a good fit is indicated by a reduced statistic \(\simeq 1\) for chi-square based statistics).
As with the model and statistic, if the data object is changed then the results of calculations made on the fit object can be changed.
As shown below, when one data point is
[removed](index.html#data-filter), the calculated statistics are changed (such as the
[`numpoints`](index.html#sherpa.fit.StatInfoResults.numpoints) value).
```
>>> d.ignore(0, 5)
>>> sinfo2 = f.calc_stat_info()
>>> d.notice()
>>> sinfo1.numpoints
6
>>> sinfo2.numpoints
5
```
Note
The objects returned by the fit methods, such as
[`StatInfoResults`](index.html#sherpa.fit.StatInfoResults) and
[`FitResults`](index.html#sherpa.fit.FitResults), do not contain references to any of the input objects. This means that the values stored in these objects are not changed when the input values change.
The [`fit()`](index.html#sherpa.fit.Fit.fit) method uses the optimiser to vary the
[thawed parameter values](index.html#params-freeze)
in the model in an attempt to minimize the statistic value.
The method returns a
[`FitResults`](index.html#sherpa.fit.FitResults) object which contains information on the fit, such as whether it succeeded (found a minimum,
[`succeeded`](index.html#sherpa.fit.FitResults.succeeded)),
the start and end statistic value
([`istatval`](index.html#sherpa.fit.FitResults.istatval) and
[`statval`](index.html#sherpa.fit.FitResults.statval)),
and the best-fit parameter values
([`parnames`](index.html#sherpa.fit.FitResults.parnames) and
[`parvals`](index.html#sherpa.fit.FitResults.parvals)).
```
>>> res = f.fit()
>>> if res.succeeded: print("Fit succeeded")
Fit succeeded
```
The return value has a [`format()`](index.html#sherpa.fit.FitResults.format) method which provides a summary of the fit:
```
>>> print(res.format())
Method                = levmar
Statistic             = chi2
Initial fit statistic = 840.251
Final fit statistic   = 96.8191 at function evaluation 6
Data points           = 6
Degrees of freedom    = 4
Probability [Q-value] = 4.67533e-20
Reduced statistic     = 24.2048
Change in statistic   = 743.432
polynom1d.c0 -11.0742 +/- 2.91223
polynom1d.c2 0.503612 +/- 0.0311568
```
The information is also available directly as fields of the
[`FitResults`](index.html#sherpa.fit.FitResults) object:
```
>>> print(res)
datasets       = None
itermethodname = none
methodname     = levmar
statname       = chi2
succeeded      = True
parnames       = ('polynom1d.c0', 'polynom1d.c2')
parvals        = (-11.074165156709268, 0.5036124773506225)
statval        = 96.8190901009578
istatval       = 840.2511999999999
dstatval       = 743.4321098990422
numpoints      = 6
dof            = 4
qval           = 4.675333207707564e-20
rstat          = 24.20477252523945
message        = successful termination
nfev           = 6
```
The reduced chi-square value is now lower, \(\sim 25\), but still not close to 1.
#### Visualizing the fit[¶](#visualizing-the-fit)
The [`DataPlot`](index.html#sherpa.plot.DataPlot), [`ModelPlot`](index.html#sherpa.plot.ModelPlot),
and [`FitPlot`](index.html#sherpa.plot.FitPlot) classes can be used to
[see how well the model represents the data](index.html#document-plots/index).
```
>>> from sherpa.plot import DataPlot, ModelPlot
>>> dplot = DataPlot()
>>> dplot.prepare(f.data)
>>> mplot = ModelPlot()
>>> mplot.prepare(f.data, f.model)
>>> dplot.plot()
>>> mplot.overplot()
```
As can be seen, the model isn’t able to fully describe the data. One check to make here is to change the optimiser in case the fit is stuck in a local minimum. The default optimiser is
[`LevMar`](index.html#sherpa.optmethods.LevMar):
```
>>> f.method.name
'levmar'
>>> original_method = f.method
```
This can be changed to [`NelderMead`](index.html#sherpa.optmethods.NelderMead)
and the data re-fit.
```
>>> from sherpa.optmethods import NelderMead
>>> f.method = NelderMead()
>>> resn = f.fit()
```
In this case the statistic value has not changed, as shown by
[`dstatval`](index.html#sherpa.fit.FitResults.dstatval) being zero:
```
>>> print("Change in statistic: {}".format(resn.dstatval))
Change in statistic: 0.0
```
Note
An alternative approach is to create a new Fit object, with the new method, and use that instead of changing the
`method` attribute. For instance:
```
>>> fit2 = Fit(d, mdl, method=NelderMead())
>>> fit2.fit()
```
#### Adjusting the model[¶](#adjusting-the-model)
This suggests that the problem is not that a local minimum has been found, but that the model is not expressive enough to represent the data. One possible approach is to change the set of points used for the comparison, either by removing data points - perhaps because they are not well calibrated or there are known to be issues - or adding extra ones (by removing a previously-applied filter). The approach taken here is to change the model being fit;
in this case by allowing the linear term (\(c_1\)) of the polynomial to be fit, but a new model could have been tried.
```
>>> mdl.c1.thaw()
```
For the remainder of the analysis the original (Levenberg-Marquardt)
optimiser will be used:
```
>>> f.method = original_method
```
With \(c_1\) allowed to vary, the fit finds a much better solution, with a reduced chi square value of \(\simeq 1.3\):
```
>>> res2 = f.fit()
>>> print(res2.format())
Method                = levmar
Statistic             = chi2
Initial fit statistic = 96.8191
Final fit statistic   = 4.01682 at function evaluation 8
Data points           = 6
Degrees of freedom    = 3
Probability [Q-value] = 0.259653
Reduced statistic     = 1.33894
Change in statistic   = 92.8023
polynom1d.c0 -9.38489 +/- 2.91751
polynom1d.c1 -2.41692 +/- 0.250889
polynom1d.c2 0.478273 +/- 0.0312677
```
The previous plot objects can be used, but the model plot has to be updated to reflect the new model values. Three new plot styles are used:
[`FitPlot`](index.html#sherpa.plot.FitPlot) shows both the data and model values,
[`DelchiPlot`](index.html#sherpa.plot.DelchiPlot) to show the residuals, and
[`SplitPlot`](index.html#sherpa.plot.SplitPlot) to control the layout of the plots:
```
>>> from sherpa.plot import DelchiPlot, FitPlot, SplitPlot
>>> fplot = FitPlot()
>>> rplot = DelchiPlot()
>>> splot = SplitPlot()
>>> mplot.prepare(f.data, f.model)
>>> fplot.prepare(dplot, mplot)
>>> splot.addplot(fplot)
>>> rplot.prepare(f.data, f.model, f.stat)
>>> splot.addplot(rplot)
```
The residuals plot shows, on the ordinate, \(\sigma = (d - m) / e\) where
\(d\), \(m\), and \(e\) are the data, model, and error value for each bin respectively.
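These values can also be computed directly from the data and model as a quick cross-check of the residual plot; a minimal sketch using the objects from this example:

```
>>> resid = (d.y - mdl(d.x)) / d.staterror
>>> plt.plot(d.x, resid, 'o')
>>> plt.axhline(0)
```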
The use of this style of plot - where there’s the data and fit in the top and a related plot (normally some form of residual about the fit) in the bottom - is common enough that Sherpa provides a specialization of
[`SplitPlot`](index.html#sherpa.plot.SplitPlot) called
[`JointPlot`](index.html#sherpa.plot.JointPlot) for this case. In the following example the plots from above are re-used, as no settings have changed, so there is no need to call the
`prepare` method of the component plots:
```
>>> from sherpa.plot import JointPlot
>>> jplot = JointPlot()
>>> jplot.plottop(fplot)
>>> jplot.plotbot(rplot)
```
The two major changes to the `SplitPlot` output are that the top plot is now taller, and the gap between the plots has been reduced by removing the axis labelling from the first plot and the title of the second plot (the X axes of the two plots are also tied together, but that’s not obvious from this example).
#### Refit from a different location[¶](#refit-from-a-different-location)
It may be useful to repeat the fit starting the model off at different parameter locations to check that the fit solution is robust. This can be done manually, or the
[`reset()`](index.html#sherpa.models.model.Model.reset) method used to change the parameters back to
[the last values they were explicitly set to](index.html#parameter-reset):
```
>>> mdl.reset()
```
Rather than print out all the components, most of which are fixed at 0, the first three can be looped over using the model’s
[`pars`](index.html#sherpa.models.model.Model.pars) attribute:
```
>>> [(p.name, p.val, p.frozen) for p in mdl.pars[:3]]
[('c0', 1.0, False), ('c1', 0.0, False), ('c2', 1.0, False)]
```
There are many ways to choose the starting location; in this case the value of 10 is picked, as it is “far away” from the original values, but hopefully not so far away that the optimiser will not be able to find the best-fit location.
```
>>> for p in mdl.pars[:3]:
... p.val = 10
...
```
Note
Since the parameter object - an instance of the
[`Parameter`](index.html#sherpa.models.parameter.Parameter) class - is being accessed directly here, the value is changed via the
[`val`](index.html#sherpa.models.parameter.Parameter.val) attribute,
since it does not contain the same overloaded accessor functionality that allows the setting of the parameter directly from the model.
The fields of the parameter object are:
```
>>> print(mdl.pars[1])
val         = 10.0
min         = -3.4028234663852886e+38
max         = 3.4028234663852886e+38
units       =
frozen      = False
link        = None
default_val = 10.0
default_min = -3.4028234663852886e+38
default_max = 3.4028234663852886e+38
```
#### How did the optimiser vary the parameters?[¶](#how-did-the-optimiser-vary-the-parameters)
It can be instructive to see the “search path” taken by the optimiser; that is, how the parameter values are changed per iteration. The [`fit()`](index.html#sherpa.fit.Fit.fit) method will write these results to a file if the `outfile` parameter is set
(`clobber` is set here to make sure that any existing file is overwritten):
```
>>> res3 = f.fit(outfile='fitpath.txt', clobber=True)
>>> print(res3.format())
Method                = levmar
Statistic             = chi2
Initial fit statistic = 196017
Final fit statistic   = 4.01682 at function evaluation 8
Data points           = 6
Degrees of freedom    = 3
Probability [Q-value] = 0.259653
Reduced statistic     = 1.33894
Change in statistic   = 196013
polynom1d.c0 -9.38489 +/- 2.91751
polynom1d.c1 -2.41692 +/- 0.250889
polynom1d.c2 0.478273 +/- 0.0312677
```
The output file is a simple ASCII file where the columns give the function evaluation number, the statistic, and the parameter values:
```
# nfev statistic polynom1d.c0 polynom1d.c1 polynom1d.c2
0.000000e+00 1.960165e+05 1.000000e+01 1.000000e+01 1.000000e+01
1.000000e+00 1.960165e+05 1.000000e+01 1.000000e+01 1.000000e+01
2.000000e+00 1.960176e+05 1.000345e+01 1.000000e+01 1.000000e+01
3.000000e+00 1.960172e+05 1.000000e+01 1.000345e+01 1.000000e+01
4.000000e+00 1.961557e+05 1.000000e+01 1.000000e+01 1.000345e+01
5.000000e+00 4.016822e+00 -9.384890e+00 -2.416915e+00 4.782733e-01
6.000000e+00 4.016824e+00 -9.381649e+00 -2.416915e+00 4.782733e-01
7.000000e+00 4.016833e+00 -9.384890e+00 -2.416081e+00 4.782733e-01
8.000000e+00 4.016879e+00 -9.384890e+00 -2.416915e+00 4.784385e-01
9.000000e+00 4.016822e+00 -9.384890e+00 -2.416915e+00 4.782733e-01
```
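The file can be read back in for further analysis; a minimal sketch using NumPy and Matplotlib (already imported above) to plot how the statistic changed with each function evaluation:

```
>>> nfev, statvals, c0, c1, c2 = np.loadtxt('fitpath.txt', unpack=True)
>>> plt.plot(nfev, statvals, marker='o')
>>> plt.xlabel('Function evaluation')
>>> plt.ylabel('Statistic')
```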
As can be seen, a solution is found quickly in this situation. Is it the same as
[the previous attempt](#fit-c0-c1-c2)? Although the final statistic values are not the same, they are very close:
```
>>> res3.statval == res2.statval
False
>>> res3.statval - res2.statval
1.7763568394002505e-15
```
as are the parameter values:
```
>>> res2.parvals
(-9.384889507344186, -2.416915493735619, 0.4782733426100644)
>>> res3.parvals
(-9.384889507268973, -2.4169154937357664, 0.47827334260997567)
>>> for p2, p3 in zip(res2.parvals, res3.parvals):
... print("{:+.2e}".format(p3 - p2))
...
+7.52e-11
-1.47e-13
-8.87e-14
```
The fact that the parameter values are very similar is good evidence for having found the “best fit” location, although this data set was designed to be easy to fit.
Real-world examples often require more in-depth analysis.
#### Comparing models[¶](#comparing-models)
Sherpa contains the [`calc_mlr()`](index.html#sherpa.utils.calc_mlr) and
[`calc_ftest()`](index.html#sherpa.utils.calc_ftest) routines which can be used to compare the model fits (using the change in the number of degrees of freedom and chi-square statistic) to see if freeing up \(c_1\) is statistically meaningful.
```
>>> from sherpa.utils import calc_mlr
>>> calc_mlr(res.dof - res2.dof, res.statval - res2.statval)
5.778887632582094e-22
```
This provides evidence that the three-term model (with \(c_1\)
free) is a statistically better fit, but the results of these routines should be reviewed carefully to be sure that they are valid; a suggested reference is
[Protassov et al. 2002, Astrophysical Journal, 571, 545](https://ui.adsabs.harvard.edu/#abs/2002ApJ...571..545P).
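The [`calc_ftest()`](index.html#sherpa.utils.calc_ftest) routine can be used in a similar way; it takes the degrees of freedom and statistic values of the two fits directly. A minimal sketch, re-using the `res` and `res2` results from above:
```
>>> from sherpa.utils import calc_ftest
>>> calc_ftest(res.dof, res.statval, res2.dof, res2.statval)
```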
### Estimating errors[¶](#estimating-errors)
Note
Should I add a paragraph mentioning it can be a good idea to rerun `f.fit()` to make sure any changes haven’t messed up the location?
There are two methods available to estimate errors from the fit object:
[`Covariance`](index.html#sherpa.estmethods.Covariance) and
[`Confidence`](index.html#sherpa.estmethods.Confidence). The method to use can be set when creating the fit object - using the `estmethod` parameter - or after the object has been created, by changing the
`estmethod` attribute. The default method is covariance
```
>>> print(f.estmethod.name)
covariance
```
which can be significantly faster, but less robust, than the confidence technique
[shown below](#fit-confidence-output).
The [`est_errors()`](index.html#sherpa.fit.Fit.est_errors) method is used to calculate the errors on the parameters and returns a
[`ErrorEstResults`](index.html#sherpa.fit.ErrorEstResults) object:
```
>>> coverrs = f.est_errors()
>>> print(coverrs.format())
Confidence Method     = covariance
Iterative Fit Method  = None
Fitting Method        = levmar
Statistic             = chi2gehrels
covariance 1-sigma (68.2689%) bounds:
Param Best-Fit Lower Bound Upper Bound
--- --- --- ---
polynom1d.c0 -9.38489 -2.91751 2.91751
polynom1d.c1 -2.41692 -0.250889 0.250889
polynom1d.c2 0.478273 -0.0312677 0.0312677
```
The `ErrorEstResults` object can also be displayed directly
(in a similar manner to the `FitResults` object returned by `fit`):
```
>>> print(coverrs)
datasets    = None
methodname  = covariance
iterfitname = none
fitname     = levmar
statname    = chi2gehrels
sigma       = 1
percent     = 68.2689492137
parnames    = ('polynom1d.c0', 'polynom1d.c1', 'polynom1d.c2')
parvals = (-9.384889507268973, -2.4169154937357664, 0.47827334260997567)
parmins = (-2.917507940156572, -0.2508893171295504, -0.031267664298717336)
parmaxes = (2.917507940156572, 0.2508893171295504, 0.031267664298717336)
nfits = 0
```
The default is to calculate the one-sigma (68.3%) limits for each thawed parameter. The
[error range can be changed](#fit-change-sigma)
with the
`sigma` attribute of the error estimator, and the
[set of parameters used](#fit-error-subset) in the analysis can be changed with the `parlist`
attribute of the
[`est_errors()`](index.html#sherpa.fit.Fit.est_errors)
call.
Warning
The error estimate should only be run when at a “good” fit. The assumption is that the search statistic is chi-square distributed, so the change in statistic as a parameter varies can be related to a confidence bound. One requirement - for chi-square statistics - is that the reduced statistic value is small enough. See the `max_rstat` field of the
[`EstMethod`](index.html#sherpa.estmethods.EstMethod) object.
Give the error if this does not happen (e.g. if c1 has not been fit)?
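The threshold can be inspected - and, if needed, changed - in the same way as the other estimator settings (a minimal sketch; the attribute access mirrors the `sigma` example below):
```
>>> print(f.estmethod.max_rstat)
```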
#### Changing the error bounds[¶](#changing-the-error-bounds)
The default is to calculate one-sigma errors, since:
```
>>> f.estmethod.sigma
1
```
This can be changed, e.g. to calculate 90% errors which are approximately \(\sigma = 1.6\):
```
>>> f.estmethod.sigma = 1.6
>>> coverrs90 = f.est_errors()
>>> print(coverrs90.format())
Confidence Method     = covariance
Iterative Fit Method  = None
Fitting Method        = levmar
Statistic             = chi2gehrels
covariance 1.6-sigma (89.0401%) bounds:
Param Best-Fit Lower Bound Upper Bound
--- --- --- ---
polynom1d.c0 -9.38489 -4.66801 4.66801
polynom1d.c1 -2.41692 -0.401423 0.401423
polynom1d.c2 0.478273 -0.0500283 0.0500283
```
Note
As can be seen, 1.6 sigma isn’t quite 90%
```
>>> coverrs90.percent
89.0401416600884
```
#### Accessing the covariance matrix[¶](#accessing-the-covariance-matrix)
The error results created by [`Covariance`](index.html#sherpa.estmethods.Covariance) provide access to the covariance matrix via the
[`extra_output`](index.html#sherpa.fit.ErrorEstResults.extra_output) attribute:
```
>>> print(coverrs.extra_output)
[[ 8.51185258e+00 -4.39950074e-02 -6.51777887e-02]
[ -4.39950074e-02 6.29454494e-02 6.59925111e-04]
[ -6.51777887e-02 6.59925111e-04 9.77666831e-04]]
```
The order of these rows and columns matches that of the
[`parnames`](index.html#sherpa.fit.ErrorEstResults.parnames) field:
```
>>> print([p.split('.')[1] for p in coverrs.parnames])
['c0', 'c1', 'c2']
```
#### Changing the estimator[¶](#changing-the-estimator)
The [`Confidence`](index.html#sherpa.estmethods.Confidence) method takes each thawed parameter and varies it until it finds the point where the statistic has increased enough (the value is determined by the
`sigma` parameter and assumes the statistic is chi-squared distributed). This is repeated separately for the upper and lower bounds for each parameter. Since this can take time for complicated fits, the default behavior is to display a message when each limit is found (the
[order these messages are displayed can change](#fit-multi-core)
from run to run):
```
>>> from sherpa.estmethods import Confidence
>>> f.estmethod = Confidence()
>>> conferrs = f.est_errors()
polynom1d.c0 lower bound:  -2.91751
polynom1d.c1 lower bound:  -0.250889
polynom1d.c0 upper bound:  2.91751
polynom1d.c2 lower bound:  -0.0312677
polynom1d.c1 upper bound:  0.250889
polynom1d.c2 upper bound:  0.0312677
```
As with the [covariance run](#fit-covariance-output), the return value is a [`ErrorEstResults`](index.html#sherpa.fit.ErrorEstResults) object:
```
>>> print(conferrs.format())
Confidence Method     = confidence
Iterative Fit Method  = None
Fitting Method        = levmar
Statistic             = chi2gehrels
confidence 1-sigma (68.2689%) bounds:
Param Best-Fit Lower Bound Upper Bound
--- --- --- ---
polynom1d.c0 -9.38489 -2.91751 2.91751
polynom1d.c1 -2.41692 -0.250889 0.250889
polynom1d.c2 0.478273 -0.0312677 0.0312677
```
Unlike the covariance errors, these are
[not guaranteed to be symmetrical](index.html#simple-user-model-confidence-bounds)
(although in this example they are).
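The lower and upper bounds can be compared directly using the fields of the [`ErrorEstResults`](index.html#sherpa.fit.ErrorEstResults) object; a minimal sketch:
```
>>> for name, lo, hi in zip(conferrs.parnames, conferrs.parmins, conferrs.parmaxes):
... print("{} {:+.6g} {:+.6g}".format(name, lo, hi))
...
polynom1d.c0 -2.91751 +2.91751
polynom1d.c1 -0.250889 +0.250889
polynom1d.c2 -0.0312677 +0.0312677
```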
#### Using multiple cores[¶](#using-multiple-cores)
The example data and model used here do not require many iterations to fit and calculate errors. However, for real-world situations the error analysis can often take significantly longer than the fit. When using the
[`Confidence`](index.html#sherpa.estmethods.Confidence) technique, the default is to use all available CPU cores on the machine (the error range for each parameter can be thought of as a separate task, and the Python multiprocessing module is used to evaluate these tasks).
This is why the order of the
[screen output](#fit-confidence-call) from the
[`est_errors()`](index.html#sherpa.fit.Fit.est_errors) call can vary.
The
`parallel`
attribute of the error estimator controls whether the calculation should be run in parallel (`True`) or not, and the
`numcores` attribute determines how many CPU cores are used (the default is to use all available cores).
```
>>> f.estmethod.name
'confidence'
>>> f.estmethod.parallel
True
```
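These settings can be adjusted before calling [`est_errors()`](index.html#sherpa.fit.Fit.est_errors); a minimal sketch that forces a serial run, or restricts the calculation to two cores:
```
>>> f.estmethod.parallel = False   # evaluate the limits serially
>>> f.estmethod.numcores = 2       # or restrict the number of cores used
```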
The [`Covariance`](index.html#sherpa.estmethods.Covariance) technique does not provide a parallel option.
#### Using a subset of parameters[¶](#using-a-subset-of-parameters)
To save time, the error calculation can be restricted to a subset of parameters by setting the `parlist` parameter of the
[`est_errors()`](index.html#sherpa.fit.Fit.est_errors) call. As an example, the errors on just \(c_1\) can be evaluated with the following:
```
>>> c1errs = f.est_errors(parlist=(mdl.c1, ))
polynom1d.c1 lower bound:  -0.250889
polynom1d.c1 upper bound:  0.250889
>>> print(c1errs)
datasets    = None
methodname  = confidence
iterfitname = none
fitname     = levmar
statname    = chi2gehrels
sigma       = 1
percent     = 68.26894921370858
parnames    = ('polynom1d.c1',)
parvals = (-2.4169154937357664,)
parmins = (-0.25088931712955054,)
parmaxes = (0.25088931712955054,)
nfits = 2
```
#### Visualizing the errors[¶](#visualizing-the-errors)
The [`IntervalProjection`](index.html#sherpa.plot.IntervalProjection) class is used to show how the statistic varies with a single parameter. It steps through a range of values for a single parameter, fitting the other thawed parameters, and displays the statistic value
(this gives an indication if the assumptions used to
[calculate the errors](#fit-confidence-call)
are valid):
```
>>> from sherpa.plot import IntervalProjection
>>> iproj = IntervalProjection()
>>> iproj.calc(f, mdl.c1)
>>> iproj.plot()
```
The default options display around one sigma from the best-fit location (the range is estimated from the covariance error estimate).
The [`prepare()`](index.html#sherpa.plot.IntervalProjection.prepare) method is used to change these defaults - e.g. by explicitly setting the `min`
and `max` values to use - or, as shown below, by scaling the covariance error estimate using the `fac` argument:
```
>>> iproj.prepare(fac=5, nloop=51)
```
The number of points has also been increased (the `nloop` argument)
to keep the curve smooth. Before re-plotting, the
[`calc()`](index.html#sherpa.plot.IntervalProjection.calc)
method needs to be re-run to calculate the new data.
The one-sigma range is also added as vertical dotted lines using
[`vline()`](index.html#sherpa.plot.IntervalProjection.vline):
```
>>> iproj.calc(f, mdl.c1)
>>> iproj.plot()
>>> pmin = c1errs.parvals[0] + c1errs.parmins[0]
>>> pmax = c1errs.parvals[0] + c1errs.parmaxes[0]
>>> iproj.vline(pmin, overplot=True, linestyle='dot')
>>> iproj.vline(pmax, overplot=True, linestyle='dot')
```
Note
The [`vline()`](index.html#sherpa.plot.IntervalProjection.vline)
method is a simple wrapper routine. Direct calls to matplotlib can also be used to [annotate the visualization](#fit-rproj-manual).
The [`RegionProjection`](index.html#sherpa.plot.RegionProjection) class allows the correlation between two parameters to be shown as a contour plot.
It follows a similar approach as
[`IntervalProjection`](index.html#sherpa.plot.IntervalProjection), although the final visualization is created with a call to
[`contour()`](index.html#sherpa.plot.RegionProjection.contour) rather than plot:
```
>>> from sherpa.plot import RegionProjection
>>> rproj = RegionProjection()
>>> rproj.calc(f, mdl.c0, mdl.c2)
>>> rproj.contour()
```
The limits can be changed, either using the
`fac` parameter (as shown in the
[interval-projection case](#fit-iproj-manual)), or with the `min` and `max` parameters:
```
>>> rproj.prepare(min=(-22, 0.35), max=(3, 0.6), nloop=(41, 41))
>>> rproj.calc(f, mdl.c0, mdl.c2)
>>> rproj.contour()
>>> xlo, xhi = plt.xlim()
>>> ylo, yhi = plt.ylim()
>>> def get_limits(i):
... return conferrs.parvals[i] + \
... np.asarray([conferrs.parmins[i],
... conferrs.parmaxes[i]])
...
>>> plt.vlines(get_limits(0), ymin=ylo, ymax=yhi)
>>> plt.hlines(get_limits(2), xmin=xlo, xmax=xhi)
```
The vertical and horizontal lines are added to indicate the one-sigma errors
[calculated by the confidence method earlier](#fit-confidence-call).
The [`calc`](index.html#sherpa.plot.RegionProjection.calc) call sets the fields of the
[`RegionProjection`](index.html#sherpa.plot.RegionProjection) object with the data needed to create the plot. In the following example the data is used to show the search statistic as an image, with the contours overplotted. First, the axes of the image are calculated (the `extent` argument to matplotlib’s
`imshow` command requires the pixel edges, not the centers):
```
>>> xmin, xmax = rproj.x0.min(), rproj.x0.max()
>>> ymin, ymax = rproj.x1.min(), rproj.x1.max()
>>> nx, ny = rproj.nloop
>>> hx = 0.5 * (xmax - xmin) / (nx - 1)
>>> hy = 0.5 * (ymax - ymin) / (ny - 1)
>>> extent = (xmin - hx, xmax + hx, ymin - hy, ymax + hy)
```
The statistic data is stored as a one-dimensional array, and needs to be reshaped before it can be displayed:
```
>>> y = rproj.y.reshape((ny, nx))
```
```
>>> plt.clf()
>>> plt.imshow(y, origin='lower', extent=extent, aspect='auto', cmap='viridis_r')
>>> plt.colorbar()
>>> plt.xlabel(rproj.xlabel)
>>> plt.ylabel(rproj.ylabel)
>>> rproj.contour(overplot=True)
```
### Simultaneous fits[¶](#simultaneous-fits)
Sherpa uses the
[`DataSimulFit`](index.html#sherpa.data.DataSimulFit) and
[`SimulFitModel`](index.html#sherpa.models.model.SimulFitModel)
classes to fit multiple data sets and models simultaneously.
This can be done explicitly, by combining individual data sets and models into instances of these classes, or implicitly with the [`simulfit()`](index.html#sherpa.fit.Fit.simulfit) method.
It only makes sense to perform a simultaneous fit if there is some “link” between the various data sets or models, such as sharing one or more model components or
[linking model parameters](index.html#params-link).
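As a minimal sketch of the explicit approach - assuming two data sets `d1` and `d2` that are to be fit by the single model `mdl` - the individual objects are combined and then fit as usual:
```
>>> from sherpa.data import DataSimulFit
>>> from sherpa.models.model import SimulFitModel
>>> from sherpa.fit import Fit
>>> dboth = DataSimulFit('both', (d1, d2))
>>> mboth = SimulFitModel('both', (mdl, mdl))
>>> fboth = Fit(dboth, mboth)
>>> resboth = fboth.fit()
```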
### Poisson stats[¶](#poisson-stats)
Should there be something about using Poisson stats?
### Reference/API[¶](#reference-api)
#### The sherpa.fit module[¶](#module-sherpa.fit)
Classes
| [`Fit`](index.html#sherpa.fit.Fit)(data, model[, stat, method, estmethod, …]) | Fit a model to a data set. |
| [`IterFit`](index.html#sherpa.fit.IterFit)(data, model, stat, method[, …]) | |
| [`FitResults`](index.html#sherpa.fit.FitResults)(fit, results, init_stat, …) | The results of a fit. |
| [`StatInfoResults`](index.html#sherpa.fit.StatInfoResults)(statname, statval, …[, …]) | A summary of the current statistic value for one or more data sets. |
| [`ErrorEstResults`](index.html#sherpa.fit.ErrorEstResults)(fit, results[, parlist]) | The results of an error estimation run. |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of Fit, IterFit, FitResults, StatInfoResults, ErrorEstResults
#### The sherpa.estmethods module[¶](#module-sherpa.estmethods)
Classes
| [`EstMethod`](index.html#sherpa.estmethods.EstMethod)(name, estfunc) | |
| [`Confidence`](index.html#sherpa.estmethods.Confidence)([name]) | |
| [`Covariance`](index.html#sherpa.estmethods.Covariance)([name]) | |
| [`Projection`](index.html#sherpa.estmethods.Projection)([name]) | |
| [`EstNewMin`](index.html#sherpa.estmethods.EstNewMin) | Reached a new minimum fit statistic |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of EstMethod, Confidence, Covariance, Projection, EstNewMin
Visualisation[¶](#visualisation)
---
### Overview[¶](#overview)
Sherpa has support for different plot backends, although at present there is only one, which uses the
[matplotlib](index.html#term-matplotlib) package.
Interactive visualization of images is provided by
[DS9](index.html#term-ds9) - an astronomical image viewer - if it is installed, whilst there is limited support for visualizing two-dimensional data sets with matplotlib. The classes described in this document do not need to be used, since the data can be plotted directly, but they do provide some conveniences.
The basic approach to creating a visualization using these classes is:
> * create an instance of the relevant class (e.g.
> [`DataPlot`](index.html#sherpa.plot.DataPlot));
> * send it the necessary data with the `prepare()` method (optional);
> * perform any necessary calculation with the `calc()` method (optional);
> * and plot the data with the
> [`plot()`](index.html#sherpa.plot.Plot.plot) or
> [`contour()`](index.html#sherpa.plot.Contour.contour)
> methods (or the
> [`overplot()`](index.html#sherpa.plot.Plot.overplot),
> and
> [`overcontour()`](index.html#sherpa.plot.Contour.overcontour) variants).
Note
The sherpa.plot module also includes error-estimation routines, such as the IntervalProjection class. This is mixing analysis with visualization, which may not be ideal.
#### Image Display[¶](#image-display)
There are also routines for image display, using the
[DS9](index.html#term-ds9) image viewer for interactive display. How are these used from the object API?
### Example[¶](#example)
Here is the data we wish to display: a set of consecutive bins, defining the edges of the bins, and the counts in each bin:
```
>>> import numpy as np
>>> edges = np.asarray([-10, -5, 5, 12, 17, 20, 30, 56, 60])
>>> y = np.asarray([28, 62, 17, 4, 2, 4, 125, 55])
```
As this is a one-dimensional integrated data set (i.e. a histogram),
we shall use the [`Data1DInt`](index.html#sherpa.data.Data1DInt) class to represent it:
```
>>> from sherpa.data import Data1DInt
>>> d = Data1DInt('example histogram', edges[:-1], edges[1:], y)
```
#### Displaying the data[¶](#displaying-the-data)
The [`DataPlot`](index.html#sherpa.plot.DataPlot) class can then be used to display the data, using the [`prepare()`](index.html#sherpa.plot.DataPlot.prepare)
method to set up the data to plot - in this case the
[`Data1DInt`](index.html#sherpa.data.Data1DInt) object - and then the
[`plot()`](index.html#sherpa.plot.DataPlot.plot) method to actually plot the data:
```
>>> from sherpa.plot import DataPlot
>>> dplot = DataPlot()
>>> dplot.prepare(d)
>>> dplot.plot()
```
The appearance of the plot will depend on the chosen backend (although as of the Sherpa 4.12.0 release there is only one, using the
[matplotlib](index.html#term-matplotlib) package).
#### Plotting data directly[¶](#plotting-data-directly)
Most of the Sherpa plot objects expect Sherpa objects to be sent to their `prepare` methods - normally data and model objects,
but plot objects themselves can be passed around for “composite”
plots - but there are several classes that accept the values to display directly:
[`Plot`](index.html#sherpa.plot.Plot),
[`Histogram`](index.html#sherpa.plot.Histogram),
[`Point`](index.html#sherpa.plot.Point),
and
[`Contour`](index.html#sherpa.plot.Contour). Here we use the Histogram class to display the data directly, on top of the existing plot:
```
>>> from sherpa.plot import Histogram
>>> hplot = Histogram()
>>> hplot.overplot(d.xlo, d.xhi, d.y)
```
#### Creating a model[¶](#creating-a-model)
For the following we need a
[model to display](index.html#document-models/index), so how about a constant minus a gaussian, using the
[`Const1D`](index.html#sherpa.models.basic.Const1D)
and
[`Gauss1D`](index.html#sherpa.models.basic.Gauss1D)
classes:
```
>>> from sherpa.models.basic import Const1D, Gauss1D
>>> mdl = Const1D('base') - Gauss1D('line')
>>> mdl.pars[0].val = 10
>>> mdl.pars[1].val = 25
>>> mdl.pars[2].val = 22
>>> mdl.pars[3].val = 10
>>> print(mdl)
(base - line)
Param Type Value Min Max Units
--- --- --- --- --- ---
base.c0 thawed 10 -3.40282e+38 3.40282e+38
line.fwhm thawed 25 1.17549e-38 3.40282e+38
line.pos thawed 22 -3.40282e+38 3.40282e+38
line.ampl thawed 10 -3.40282e+38 3.40282e+38
```
#### Displaying the model[¶](#displaying-the-model)
With a Sherpa model, we can now use the
[`ModelPlot`](index.html#sherpa.plot.ModelPlot) to display it. Note that unlike the data plot, the
[`prepare()`](index.html#sherpa.plot.ModelPlot.prepare) method requires the data *and* the model:
```
>>> from sherpa.plot import ModelPlot
>>> mplot = ModelPlot()
>>> mplot.prepare(d, mdl)
>>> mplot.plot()
>>> dplot.overplot()
```
The data was drawn on top of the model using the
[`overplot()`](index.html#sherpa.plot.DataPlot.overplot) method
([`plot()`](index.html#sherpa.plot.DataPlot.plot)
could also have been used as long as the
`overplot` argument was set to `True`).
#### Combining the data and model plots[¶](#combining-the-data-and-model-plots)
The above plot is very similar to that created by the
[`FitPlot`](index.html#sherpa.plot.FitPlot) class:
```
>>> from sherpa.plot import FitPlot
>>> fplot = FitPlot()
>>> fplot.prepare(dplot, mplot)
>>> fplot.plot()
```
The major difference is that here the data is drawn first, and then the model - unlike the previous example - so the colors used for the line and points have swapped. The plot title is also different.
#### Changing the plot appearance[¶](#changing-the-plot-appearance)
There is limited support for changing the appearance of plots,
and this can be done either by
> * changing the preference settings of the plot object
> (which will change any plot created by the object)
> * over-riding the setting when plotting the data (this
> capability is new to Sherpa 4.12.0).
There are several settings which are provided for all backends,
such as whether to draw an axis with a logarithmic scale - the
`xlog` and `ylog` settings - as well as others that are specific to a backend - such as the `marker` preference provided by the Matplotlib backend. The name of the preference setting depends on the plot object, for the DataPlot it is
[`plot_prefs`](index.html#sherpa.plot.DataPlot.plot_prefs):
```
>>> print(dplot.plot_prefs)
{'xerrorbars': False, 'yerrorbars': True, 'ecolor': None, 'capsize': None, 'barsabove': False, 'xlog': False, 'ylog': False, 'linestyle': 'None', 'linecolor': None, 'color': None, 'marker': '.', 'markerfacecolor': None, 'markersize': None, 'xaxis': False, 'ratioline': False}
```
Here we set the y scale of the data plot to be drawn with a log scale - by changing the preference setting - and then override the `marker` and `linestyle` elements when creating the plot:
```
>>> dplot.plot_prefs['ylog'] = True
>>> dplot.plot(marker='s', linestyle='dashed')
```
If called without any arguments, the marker and line-style changes are no longer applied, but the y-axis is still drawn with a log scale:
```
>>> dplot.plot()
```
Let’s remove the y scaling so that the remaining plots use a linear scale:
```
>>> dplot.plot_prefs['ylog'] = False
```
#### Updating a plot[¶](#updating-a-plot)
Note that for the FitPlot class the
[`prepare()`](index.html#sherpa.plot.FitPlot.prepare) method accepts plot objects rather than data and model objects. Here the data is re-fit using a different statistic and optimisation method:
```
>>> from sherpa.optmethods import NelderMead
>>> from sherpa.stats import Cash
>>> from sherpa.fit import Fit
>>> f = Fit(d, mdl, stat=Cash(), method=NelderMead())
>>> f.fit()
```
The model plot needs to be updated to reflect the new parameter values before we can replot the fit:
```
>>> mplot.prepare(d, mdl)
>>> fplot.plot()
```
#### Looking at confidence ranges[¶](#looking-at-confidence-ranges)
The variation in best-fit statistic to a parameter can be investigated with the [`IntervalProjection`](index.html#sherpa.plot.IntervalProjection)
class (there is also an [`IntervalUncertainty`](index.html#sherpa.plot.IntervalUncertainty)
but it is not as robust). Here we use the default options for determining the parameter range over which to vary the gaussian line position (which corresponds to mdl.pars[2]):
```
>>> from sherpa.plot import IntervalProjection
>>> iproj = IntervalProjection()
>>> iproj.calc(f, mdl.pars[2])
WARNING: hard minimum hit for parameter base.c0
WARNING: hard maximum hit for parameter base.c0
WARNING: hard minimum hit for parameter line.fwhm
WARNING: hard maximum hit for parameter line.fwhm
WARNING: hard minimum hit for parameter line.ampl
WARNING: hard maximum hit for parameter line.ampl
>>> iproj.plot()
```
### Reference/API[¶](#reference-api)
#### The sherpa.plot module[¶](#module-sherpa.plot)
A visualization interface to Sherpa
Classes
| [`Confidence1D`](index.html#sherpa.plot.Confidence1D)() | |
| [`IntervalProjection`](index.html#sherpa.plot.IntervalProjection)() | |
| [`IntervalUncertainty`](index.html#sherpa.plot.IntervalUncertainty)() | |
| [`Confidence2D`](index.html#sherpa.plot.Confidence2D)() | |
| [`RegionProjection`](index.html#sherpa.plot.RegionProjection)() | |
| [`RegionUncertainty`](index.html#sherpa.plot.RegionUncertainty)() | |
| [`Contour`](index.html#sherpa.plot.Contour)() | Base class for contour plots |
| [`DataContour`](index.html#sherpa.plot.DataContour)() | Create contours of 2D data. |
| [`PSFContour`](index.html#sherpa.plot.PSFContour)() | Derived class for creating 2D PSF contours |
| [`FitContour`](index.html#sherpa.plot.FitContour)() | Derived class for creating 2D combination data and model contours |
| [`ModelContour`](index.html#sherpa.plot.ModelContour)() | Derived class for creating 2D model contours |
| [`RatioContour`](index.html#sherpa.plot.RatioContour)() | Derived class for creating 2D ratio contours (data divided by model) |
| [`ResidContour`](index.html#sherpa.plot.ResidContour)() | Derived class for creating 2D residual contours (data-model) |
| [`SourceContour`](index.html#sherpa.plot.SourceContour)() | Derived class for creating 2D model contours |
| [`Histogram`](index.html#sherpa.plot.Histogram)() | Base class for histogram plots |
| [`Plot`](index.html#sherpa.plot.Plot)() | Base class for line plots |
| [`DataPlot`](index.html#sherpa.plot.DataPlot)() | Create 1D plots of data values. |
| [`PSFPlot`](index.html#sherpa.plot.PSFPlot)() | Derived class for creating 1D PSF kernel data plots |
| [`FitPlot`](index.html#sherpa.plot.FitPlot)() | Combine data and model plots for 1D data. |
| [`ModelPlot`](index.html#sherpa.plot.ModelPlot)() | Create 1D plots of model values. |
| [`ChisqrPlot`](index.html#sherpa.plot.ChisqrPlot)() | Create plots of the chi-square value per point. |
| [`ComponentModelPlot`](index.html#sherpa.plot.ComponentModelPlot)() | |
| [`DelchiPlot`](index.html#sherpa.plot.DelchiPlot)() | Create plots of the delta-chi value per point. |
| [`RatioPlot`](index.html#sherpa.plot.RatioPlot)() | Create plots of the ratio of data to model per point. |
| [`ResidPlot`](index.html#sherpa.plot.ResidPlot)() | Create plots of the residuals (data - model) per point. |
| [`SourcePlot`](index.html#sherpa.plot.SourcePlot)() | Create 1D plots of unconvolved model values. |
| [`ComponentSourcePlot`](index.html#sherpa.plot.ComponentSourcePlot)() | |
| [`SplitPlot`](index.html#sherpa.plot.SplitPlot)([rows, cols]) | Create multiple plots. |
| [`JointPlot`](index.html#sherpa.plot.JointPlot)() | Multiple plots that share a common axis |
| [`Point`](index.html#sherpa.plot.Point)() | Base class for point plots |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of Confidence1D, IntervalProjection, IntervalUncertainty, Confidence2D, RegionProjection, RegionUncertainty, Contour, DataContour, PSFContour, FitContour, ModelContour, RatioContour, ResidContour, SourceContour, Histogram, Plot, DataPlot, PSFPlot, FitPlot, ModelPlot, ChisqrPlot, ComponentModelPlot, DelchiPlot, RatioPlot, ResidPlot, SourcePlot, ComponentSourcePlot, SplitPlot, JointPlot, Point
#### The sherpa.astro.plot module[¶](#module-sherpa.astro.plot)
Classes
| [`ChisqrPlot`](index.html#sherpa.astro.plot.ChisqrPlot)() | Create plots of the chi-square value per point. |
| [`BkgChisqrPlot`](index.html#sherpa.astro.plot.BkgChisqrPlot)() | Derived class for creating background plots of 1D chi**2 ((data-model)/error)**2 |
| [`DataPlot`](index.html#sherpa.astro.plot.DataPlot)() | Create 1D plots of data values. |
| [`BkgDataPlot`](index.html#sherpa.astro.plot.BkgDataPlot)() | Derived class for creating plots of background counts |
| [`DelchiPlot`](index.html#sherpa.astro.plot.DelchiPlot)() | Create plots of the delta-chi value per point. |
| [`BkgDelchiPlot`](index.html#sherpa.astro.plot.BkgDelchiPlot)() | Derived class for creating background plots of 1D delchi chi ((data-model)/error) |
| [`FitPlot`](index.html#sherpa.astro.plot.FitPlot)() | Combine data and model plots for 1D data. |
| [`BkgFitPlot`](index.html#sherpa.astro.plot.BkgFitPlot)() | Derived class for creating plots of background counts with fitted model |
| [`HistogramPlot`](index.html#sherpa.astro.plot.HistogramPlot)() | |
| [`ARFPlot`](index.html#sherpa.astro.plot.ARFPlot)() | Create plots of the ancillary response file (ARF). |
| [`ModelHistogram`](index.html#sherpa.astro.plot.ModelHistogram)() | Derived class for creating 1D PHA model histogram plots |
| [`BkgModelHistogram`](index.html#sherpa.astro.plot.BkgModelHistogram)() | Derived class for creating 1D background PHA model histogram plots |
| [`OrderPlot`](index.html#sherpa.astro.plot.OrderPlot)() | Derived class for creating plots of the convolved source model using selected multiple responses |
| [`SourcePlot`](index.html#sherpa.astro.plot.SourcePlot)() | Create 1D plots of unconvolved model values. |
| [`BkgSourcePlot`](index.html#sherpa.astro.plot.BkgSourcePlot)() | Derived class for plotting the background unconvolved source model |
| [`ModelPlot`](index.html#sherpa.astro.plot.ModelPlot)() | Create 1D plots of model values. |
| [`BkgModelPlot`](index.html#sherpa.astro.plot.BkgModelPlot)() | Derived class for creating plots of background model |
| [`RatioPlot`](index.html#sherpa.astro.plot.RatioPlot)() | Create plots of the ratio of data to model per point. |
| [`BkgRatioPlot`](index.html#sherpa.astro.plot.BkgRatioPlot)() | Derived class for creating background plots of 1D ratio (<data:model>) |
| [`ResidPlot`](index.html#sherpa.astro.plot.ResidPlot)() | Create plots of the residuals (data - model) per point. |
| [`BkgResidPlot`](index.html#sherpa.astro.plot.BkgResidPlot)() | Derived class for creating background plots of 1D residual (data-model) |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of ChisqrPlot, BkgChisqrPlot, DataPlot, BkgDataPlot, DelchiPlot, BkgDelchiPlot, FitPlot, BkgFitPlot, HistogramPlot, ARFPlot, ModelHistogram, BkgModelHistogram, OrderPlot, SourcePlot, BkgSourcePlot, ModelPlot, BkgModelPlot, RatioPlot, BkgRatioPlot, ResidPlot, BkgResidPlot
Markov Chain Monte Carlo and Poisson data[¶](#markov-chain-monte-carlo-and-poisson-data)
---
Sherpa provides a
[Markov Chain Monte Carlo (MCMC)](https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo)
method designed for Poisson-distributed data.
It was originally developed as the
[Bayesian Low-Count X-ray Spectral (BLoCXS)](http://hea-www.harvard.edu/astrostat/pyblocxs/)
package, but has since been incorporated into Sherpa.
It is developed from the work presented in
[Analysis of Energy Spectra with Low Photon Counts via Bayesian Posterior Simulation](https://ui.adsabs.harvard.edu/#abs/2001ApJ...548..224V)
by <NAME> et al.
Unlike many MCMC implementations, the idea is that we already have some knowledge of the search surface at the optimum - i.e. the covariance matrix - and can use that to explore this region.
### Example[¶](#example)
Note
This example probably needs to be simplified to reduce the run time
#### Simulate the data[¶](#simulate-the-data)
Create a simulated data set:
```
>>> np.random.seed(2)
>>> x0low, x0high = 3000, 4000
>>> x1low, x1high = 4000, 4800
>>> dx = 15
>>> x1, x0 = np.mgrid[x1low:x1high:dx, x0low:x0high:dx]
```
Convert to 1D arrays:
```
>>> shape = x0.shape
>>> x0, x1 = x0.flatten(), x1.flatten()
```
Create the model used to simulate the data:
```
>>> from sherpa.astro.models import Beta2D
>>> truth = Beta2D()
>>> truth.xpos, truth.ypos = 3512, 4418
>>> truth.r0, truth.alpha = 120, 2.1
>>> truth.ampl = 12
```
Evaluate the model to calculate the expected values:
```
>>> mexp = truth(x0, x1).reshape(shape)
```
Create the simulated data by adding in Poisson-distributed noise:
```
>>> msim = np.random.poisson(mexp)
```
#### What does the data look like?[¶](#what-does-the-data-look-like)
Use an arcsinh transform to view the data, based on the work of
[<NAME> & Szalay (1999)](https://ui.adsabs.harvard.edu/#abs/1999AJ....118.1406L).
```
>>> plt.imshow(np.arcsinh(msim), origin='lower', cmap='viridis',
... extent=(x0low, x0high, x1low, x1high),
... interpolation='nearest', aspect='auto')
>>> plt.title('Simulated image')
```
#### Find the starting point for the MCMC[¶](#find-the-starting-point-for-the-mcmc)
Set up a model and use the standard Sherpa approach to find a good starting place for the MCMC analysis:
```
>>> from sherpa import data, stats, optmethods, fit
>>> d = data.Data2D('sim', x0, x1, msim.flatten(), shape=shape)
>>> mdl = Beta2D()
>>> mdl.xpos, mdl.ypos = 3500, 4400
```
Use a Likelihood statistic and Nelder-Mead algorithm:
```
>>> f = fit.Fit(d, mdl, stats.Cash(), optmethods.NelderMead())
>>> res = f.fit()
>>> print(res.format())
Method                = neldermead
Statistic             = cash
Initial fit statistic = 20048.5
Final fit statistic   = 607.229 at function evaluation 777
Data points           = 3618
Degrees of freedom    = 3613
Change in statistic   = 19441.3
beta2d.r0 121.945
beta2d.xpos 3511.99
beta2d.ypos 4419.72
beta2d.ampl 12.0598
beta2d.alpha 2.13319
```
Now calculate the covariance matrix (the default error estimate):
```
>>> f.estmethod
<Covariance error-estimation method instance 'covariance'>
>>> eres = f.est_errors()
>>> print(eres.format())
Confidence Method     = covariance
Iterative Fit Method  = None
Fitting Method        = neldermead
Statistic             = cash
covariance 1-sigma (68.2689%) bounds:
Param Best-Fit Lower Bound Upper Bound
--- --- --- ---
beta2d.r0 121.945 -7.12579 7.12579
beta2d.xpos 3511.99 -2.09145 2.09145
beta2d.ypos 4419.72 -2.10775 2.10775
beta2d.ampl 12.0598 -0.610294 0.610294
beta2d.alpha 2.13319 -0.101558 0.101558
```
The covariance matrix is stored in the `extra_output` attribute:
```
>>> cmatrix = eres.extra_output
>>> pnames = [p.split('.')[1] for p in eres.parnames]
>>> plt.imshow(cmatrix, interpolation='nearest', cmap='viridis')
>>> plt.xticks(np.arange(5), pnames)
>>> plt.yticks(np.arange(5), pnames)
>>> plt.colorbar()
```
#### Run the chain[¶](#run-the-chain)
Finally, run a chain (use a small number to keep the run time low for this example):
```
>>> from sherpa.sim import MCMC
>>> mcmc = MCMC()
>>> mcmc.get_sampler_name()
>>> draws = mcmc.get_draws(f, cmatrix, niter=1000)
>>> svals, accept, pvals = draws
>>> pvals.shape
(5, 1001)
>>> accept.sum() * 1.0 / 1000
0.48499999999999999
```
#### Trace plots[¶](#trace-plots)
```
>>> plt.plot(pvals[0, :])
>>> plt.xlabel('Iteration')
>>> plt.ylabel('r0')
```
Or using the [`sherpa.plot`](index.html#module-sherpa.plot) module:
```
>>> from sherpa import plot
>>> tplot = plot.TracePlot()
>>> tplot.prepare(svals, name='Statistic')
>>> tplot.plot()
```
#### PDF of a parameter[¶](#pdf-of-a-parameter)
```
>>> pdf = plot.PDFPlot()
>>> pdf.prepare(pvals[1, :], 20, False, 'xpos', name='example')
>>> pdf.plot()
```
Add in the covariance estimate:
```
>>> xlo, xhi = eres.parmins[1] + eres.parvals[1], eres.parmaxes[1] + eres.parvals[1]
>>> plt.annotate('', (xlo, 90), (xhi, 90), arrowprops={'arrowstyle': '<->'})
>>> plt.plot([eres.parvals[1]], [90], 'ok')
```
#### CDF for a parameter[¶](#cdf-for-a-parameter)
Normalise by the actual answer to make it easier to see how well the results match reality:
```
>>> cdf = plot.CDFPlot()
>>> plt.subplot(2, 1, 1)
>>> cdf.prepare(pvals[1, :] - truth.xpos.val, r'$\Delta x$')
>>> cdf.plot(clearwindow=False)
>>> plt.title('')
>>> plt.subplot(2, 1, 2)
>>> cdf.prepare(pvals[2, :] - truth.ypos.val, r'$\Delta y$')
>>> cdf.plot(clearwindow=False)
>>> plt.title('')
```
#### Scatter plot[¶](#scatter-plot)
```
>>> plt.scatter(pvals[0, :] - truth.r0.val,
... pvals[4, :] - truth.alpha.val, alpha=0.3)
>>> plt.xlabel(r'$\Delta r_0$', size=18)
>>> plt.ylabel(r'$\Delta \alpha$', size=18)
```
This can be compared to the
[`RegionProjection`](index.html#sherpa.plot.RegionProjection) calculation:
```
>>> plt.scatter(pvals[0, :], pvals[4, :], alpha=0.3)
>>> from sherpa.plot import RegionProjection
>>> rproj = RegionProjection()
>>> rproj.prepare(min=[95, 1.8], max=[150, 2.6], nloop=[21, 21])
>>> rproj.calc(f, mdl.r0, mdl.alpha)
>>> rproj.contour(overplot=True)
>>> plt.xlabel(r'$r_0$'); plt.ylabel(r'$\alpha$')
```
### Reference/API[¶](#reference-api)
#### The sherpa.sim module[¶](#module-sherpa.sim)
Monte-Carlo Markov Chain support for low-count data (Poisson statistics).
The `sherpa.sim` module provides support for exploring the posterior probability density of parameters in a fit to low-count data, for which Poisson statistics hold, using a Bayesian algorithm and a Monte-Carlo Markov Chain (MCMC). It was originally known as the pyBLoCXS (python Bayesian Low-Count X-ray Spectral) package [[1]](#id7), but has since been incorporated into Sherpa.
The Sherpa UI modules - e.g. sherpa.ui and sherpa.astro.ui - provide many of the routines described below (e.g. `list_samplers`).
##### Acknowledgements[¶](#acknowledgements)
The original version of the code was developed by the CHASC Astro-Statistics collaboration <http://hea-www.harvard.edu/AstroStat/>,
and was called pyBLoCXS. It has since been developed by the Chandra X-ray Center and was added to Sherpa in version 4.5.1.
##### Overview[¶](#overview)
The algorithm explores parameter space at a suspected minimum -
i.e. after a standard Sherpa fit. It supports a flexible definition of priors and allows for variations in the calibration information. It can be used to compute posterior predictive p-values for the likelihood ratio test
[[2]](#id8). Future versions will allow for the incorporation of calibration uncertainty [[3]](#id9).
MCMC is a complex computational technique that requires some sophistication on the part of its users to ensure that it both converges and explores the posterior distribution properly. The pyBLoCXS code has been tested with a number of simple single-component spectral models. It should be used with great care in more complex settings. The code is based on the methods in
[[4]](#id10) but employs a different MCMC sampler than is described in that article.
A general description of the techniques employed along with their convergence diagnostics can be found in the Appendices of [[4]](#id10)
and in [[5]](#id11).
##### Jumping Rules[¶](#jumping-rules)
The jumping rule determines how each step in the Monte-Carlo Markov Chain is calculated. The setting can be changed using `set_sampler`.
The `sherpa.sim` module provides the following rules, which may be augmented by other modules:
* `MH` uses a Metropolis-Hastings jumping rule that is a multivariate t-distribution with user-specified degrees of freedom centered on the best-fit parameters, and with multivariate scale determined by the
`covar` function applied at the best-fit location.
* `MetropolisMH` mixes this Metropolis Hastings jumping rule with a Metropolis jumping rule centered at the current draw, in both cases drawing from the same t-distribution as used with `MH`. The probability of using the best-fit location as the start of the jump is given by the `p_M` parameter of the rule (use `get_sampler` or
`get_sampler_opt` to view and `set_sampler_opt` to set this value),
otherwise the jump is from the previous location in the chain.
Options for the sampler are retrieved and set by `get_sampler` or
`get_sampler_opt`, and `set_sampler_opt` respectively. The list of available samplers is given by `list_samplers`.
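As a minimal sketch using the UI layer (the option name `p_M` is the one described above for the `MetropolisMH` rule):
```
>>> from sherpa.astro import ui
>>> ui.list_samplers()                # names of the available jumping rules
>>> ui.set_sampler('MetropolisMH')    # the default rule, set explicitly here
>>> ui.get_sampler_opt('p_M')         # probability of jumping from the best-fit location
>>> ui.set_sampler_opt('p_M', 0.25)
```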
##### Choosing the parameter values[¶](#choosing-the-parameter-values)
By default, the prior on each parameter is taken to be flat, varying from the parameter minima to maxima values. This prior can be changed using the `set_prior` function, which can set the prior for a parameter to a function or Sherpa model. The list of currently set prior-parameter pairs is returned by the `list_priors` function, and the prior function associated with a particular Sherpa model parameter may be accessed with `get_prior`.
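A minimal sketch of replacing the flat prior on a parameter with a Gaussian one, using the UI layer (the `gal` component is the absorption model created in the example below, and `nh_prior` is just an illustrative name):
```
>>> from sherpa.astro import ui
>>> from sherpa.models.basic import Gauss1D
>>> nh_prior = Gauss1D('nh_prior')
>>> nh_prior.pos = 0.05
>>> nh_prior.fwhm = 0.01
>>> ui.set_prior(gal.nH, nh_prior)
>>> ui.list_priors()
>>> ui.get_prior(gal.nH)
```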
##### Running the chain[¶](#running-the-chain)
The `get_draws` function runs a pyBLoCXS chain using fit information associated with the specified data set(s), and the currently set sampler and parameter priors, for a specified number of iterations. It returns an array of statistic values, an array of acceptance Booleans, and a 2-D array of associated parameter values.
##### Analyzing the results[¶](#analyzing-the-results)
The module contains several routines to visualize the results of the chain,
including `plot_trace`, `plot_cdf`, and `plot_pdf`, along with
`sherpa.utils.get_error_estimates` for calculating the limits from a parameter chain.
References
| [[1]](#id1) | <http://hea-www.harvard.edu/AstroStat/pyBLoCXS/> |
| [[2]](#id2) | “Statistics, Handle with Care: Detecting Multiple Model Components with the Likelihood Ratio Test”, Protassov et al., 2002, ApJ, 571, 545
[http://adsabs.harvard.edu/abs/2002ApJ…571..545P](http://adsabs.harvard.edu/abs/2002ApJ...571..545P) |
| [[3]](#id3) | “Accounting for Calibration Uncertainties in X-ray Analysis:
Effective Areas in Spectral Fitting”, Lee et al., 2011, ApJ, 731, 126
[http://adsabs.harvard.edu/abs/2011ApJ…731..126L](http://adsabs.harvard.edu/abs/2011ApJ...731..126L) |
| [4] | *([1](#id4), [2](#id5))* “Analysis of Energy Spectra with Low Photon Counts via Bayesian Posterior Simulation”, van Dyk et al. 2001, ApJ, 548, 224
[http://adsabs.harvard.edu/abs/2001ApJ…548..224V](http://adsabs.harvard.edu/abs/2001ApJ...548..224V) |
| [[5]](#id6) | Chapter 11 of Gelman, Carlin, Stern, and Rubin
(Bayesian Data Analysis, 2nd Edition, 2004, Chapman & Hall/CRC). |
Examples
Analysis proceeds as normal, up to the point that a good fit has been determined, as shown below (note that a Poisson likelihood,
such as the `cash` statistic, must be used):
```
>>> from sherpa.astro import ui
>>> ui.load_pha('src.pi')
>>> ui.notice(0.5, 7)
>>> ui.set_source(ui.xsphabs.gal * ui.xspowerlaw.pl)
>>> ui.set_stat('cash')
>>> ui.set_method('simplex')
>>> ui.fit()
>>> ui.covar()
```
Once the best-fit location has been determined (which may require multiple calls to `fit`), the chain can be run. In this example the default sampler (`MetropolisMH`) and default parameter priors
(flat, varying between the minimum and maximum values) are used,
as well as the default number of iterations (1000):
```
>>> stats, accept, params = ui.get_draws()
```
The `stats` array contains the fit statistic for each iteration
(the first element of these arrays is the starting point of the chain,
so there will be 1001 elements in this case). The “trace” - i.e.
statistic versus iteration - can be plotted using:
```
>>> ui.plot_trace(stats, name='stat')
```
The `accept` array indicates whether, at each iteration, the proposed jump was accepted (`True`) or whether the previous iteration's parameter values were kept. This can be used to look at the acceptance rate for the chain (dropping the last element and a burn-in period, which here is arbitrarily taken to be 100):
```
>>> nburn = 100
>>> arate = accept[nburn:-1].sum() * 1.0 / (len(accept) - nburn - 1)
>>> print("acceptance rate = {}".format(arate))
```
The trace of the parameter values can also be displayed (in this example a burn-in period has not been removed):
```
>>> par1 = params[:, 0]
>>> par2 = params[:, 1]
>>> ui.plot_trace(par1, name='par1')
>>> ui.plot_trace(par2, name='par2')
```
The cumulative distribution can also be viewed:
```
>>> ui.plot_cdf(par1[nburn:], name='par1')
```
as well as the probability density:
```
>>> ui.plot_pdf(par2[nburn:], name='par2')
```
The traces can be used to estimate the credible interval for a parameter:
```
>>> from sherpa.utils import get_error_estimates
>>> pval, plo, phi = get_error_estimates(par1[nburn:])
```
Classes
| [`MCMC`](index.html#sherpa.sim.MCMC)() | High-level UI to pyBLoCXS that joins the loop in ‘Walk’ with the jumping rule in ‘Sampler’. |
Functions
| [`flat`](index.html#sherpa.sim.flat)(x) | The flat prior (returns 1 everywhere). |
| [`inverse`](index.html#sherpa.sim.inverse)(x) | Returns the inverse of x. |
| [`inverse2`](index.html#sherpa.sim.inverse2)(x) | Returns the inverse of x^2. |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of MCMC
#### The sherpa.sim.mh module[¶](#module-sherpa.sim.mh)
pyBLoCXS is a sophisticated Markov chain Monte Carlo (MCMC) based algorithm designed to carry out Bayesian Low-Count X-ray Spectral (BLoCXS) analysis in the Sherpa environment. The code is a Python extension to Sherpa that explores parameter space at a suspected minimum using a predefined Sherpa model to high-energy X-ray spectral data. pyBLoCXS includes a flexible definition of priors and allows for variations in the calibration information. It can be used to compute posterior predictive p-values for the likelihood ratio test (see Protassov et al., 2002, ApJ, 571, 545). Future versions will allow for the incorporation of calibration uncertainty (Lee et al., 2011, ApJ, 731, 126).
MCMC is a complex computational technique that requires some sophistication on the part of its users to ensure that it both converges and explores the posterior distribution properly. The pyBLoCXS code has been tested with a number of simple single-component spectral models. It should be used with great care in more complex settings. Readers interested in Bayesian low-count spectral analysis should consult van Dyk et al. (2001, ApJ, 548, 224). pyBLoCXS is based on the methods in van Dyk et al. (2001) but employs a different MCMC sampler than is described in that article. In particular, pyBLoCXS has two sampling modules. The first uses a Metropolis-Hastings jumping rule that is a multivariate t-distribution with user specified degrees of freedom centered on the best spectral fit and with multivariate scale determined by the Sherpa function, covar(), applied to the best fit. The second module mixes this Metropolis Hastings jumping rule with a Metropolis jumping rule centered at the current draw, also sampling according to a t-distribution with user specified degrees of freedom and multivariate scale determined by a user specified scalar multiple of covar() applied to the best fit.
A general description of the MCMC techniques we employ along with their convergence diagnostics can be found in Appendices A.2 - A.4 of van Dyk et al. (2001) and in more detail in Chapter 11 of Gelman, Carlin, Stern, and Rubin
(Bayesian Data Analysis, 2nd Edition, 2004, Chapman & Hall/CRC).
<http://hea-www.harvard.edu/AstroStat/pyBLoCXS/>
Classes
| [`LimitError`](index.html#sherpa.sim.mh.LimitError) | |
| [`MH`](index.html#sherpa.sim.mh.MH)(fcn, sigma, mu, dof, *args) | The Metropolis Hastings Sampler |
| [`MetropolisMH`](index.html#sherpa.sim.mh.MetropolisMH)(fcn, sigma, mu, dof, *args) | The Metropolis Metropolis-Hastings Sampler |
| [`Sampler`](index.html#sherpa.sim.mh.Sampler)() | |
| [`Walk`](index.html#sherpa.sim.mh.Walk)([sampler, niter]) | |
Functions
| [`dmvnorm`](index.html#sherpa.sim.mh.dmvnorm)(x, mu, sigma[, log]) | Probability Density of a multi-variate Normal distribution |
| [`dmvt`](index.html#sherpa.sim.mh.dmvt)(x, mu, sigma, dof[, log, norm]) | Probability Density of a multi-variate Student’s t distribution |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of LimitError, MetropolisMH, MH, Sampler, Walk
#### The sherpa.sim.sample module[¶](#module-sherpa.sim.sample)
Classes
| [`NormalParameterSampleFromScaleMatrix`](index.html#sherpa.sim.sample.NormalParameterSampleFromScaleMatrix)() | |
| [`NormalParameterSampleFromScaleVector`](index.html#sherpa.sim.sample.NormalParameterSampleFromScaleVector)() | |
| [`NormalSampleFromScaleMatrix`](index.html#sherpa.sim.sample.NormalSampleFromScaleMatrix)() | |
| [`NormalSampleFromScaleVector`](index.html#sherpa.sim.sample.NormalSampleFromScaleVector)() | |
| [`ParameterSampleFromScaleMatrix`](index.html#sherpa.sim.sample.ParameterSampleFromScaleMatrix)() | |
| [`ParameterSampleFromScaleVector`](index.html#sherpa.sim.sample.ParameterSampleFromScaleVector)() | |
| [`ParameterScale`](index.html#sherpa.sim.sample.ParameterScale)() | |
| [`ParameterScaleMatrix`](index.html#sherpa.sim.sample.ParameterScaleMatrix)() | |
| [`ParameterScaleVector`](index.html#sherpa.sim.sample.ParameterScaleVector)() | |
| [`StudentTParameterSampleFromScaleMatrix`](index.html#sherpa.sim.sample.StudentTParameterSampleFromScaleMatrix)() | |
| [`StudentTSampleFromScaleMatrix`](index.html#sherpa.sim.sample.StudentTSampleFromScaleMatrix)() | |
| [`UniformParameterSampleFromScaleVector`](index.html#sherpa.sim.sample.UniformParameterSampleFromScaleVector)() | |
| [`UniformSampleFromScaleVector`](index.html#sherpa.sim.sample.UniformSampleFromScaleVector)() | |
Functions
| [`multivariate_t`](index.html#sherpa.sim.sample.multivariate_t)(mean, cov, df[, size]) | Draw random deviates from a multivariate Student’s T distribution. Such a distribution is specified by its mean, covariance matrix, and degrees of freedom. |
| [`multivariate_cauchy`](index.html#sherpa.sim.sample.multivariate_cauchy)(mean, cov[, size]) | This needs to be checked too! A reference to the literature the better |
| [`normal_sample`](index.html#sherpa.sim.sample.normal_sample)(fit[, num, sigma, correlate, …]) | Sample the fit statistic by taking the parameter values from a normal distribution. |
| [`uniform_sample`](index.html#sherpa.sim.sample.uniform_sample)(fit[, num, factor, numcores]) | Sample the fit statistic by taking the parameter values from an uniform distribution. |
| [`t_sample`](index.html#sherpa.sim.sample.t_sample)(fit[, num, dof, numcores]) | Sample the fit statistic by taking the parameter values from a Student’s t-distribution. |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of ParameterScale, ParameterScaleVector, ParameterScaleMatrix, ParameterSampleFromScaleMatrix, ParameterSampleFromScaleVector, UniformParameterSampleFromScaleVector, NormalParameterSampleFromScaleVector, NormalParameterSampleFromScaleMatrix, StudentTParameterSampleFromScaleMatrix, NormalSampleFromScaleMatrix, NormalSampleFromScaleVector, UniformSampleFromScaleVector, StudentTSampleFromScaleMatrix
#### The sherpa.sim.simulate module[¶](#module-sherpa.sim.simulate)
Classes for PPP simulations
Classes
| [`LikelihoodRatioResults`](index.html#sherpa.sim.simulate.LikelihoodRatioResults)(ratios, stats, …) | The results of a likelihood ratio comparison simulation. |
| [`LikelihoodRatioTest`](index.html#sherpa.sim.simulate.LikelihoodRatioTest)() | Likelihood Ratio Test. |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of LikelihoodRatioResults, LikelihoodRatioTest
Utility routines[¶](#utility-routines)
---
There are a number of utility routines provided by Sherpa that may be useful. Should they be documented here or elsewhere?
Unfortunately it is not always obvious whether a routine is for use with the Object-Oriented API or the Session API.
### Reference/API[¶](#reference-api)
#### The sherpa.utils.err module[¶](#module-sherpa.utils.err)
Sherpa specific exceptions
Exceptions
| [`SherpaErr`](index.html#sherpa.utils.err.SherpaErr)(dict, *args) | Base class for all Sherpa exceptions |
| [`ArgumentErr`](index.html#sherpa.utils.err.ArgumentErr)(key, *args) | |
| [`ArgumentTypeErr`](index.html#sherpa.utils.err.ArgumentTypeErr)(key, *args) | |
| [`ConfidenceErr`](index.html#sherpa.utils.err.ConfidenceErr)(key, *args) | |
| [`DS9Err`](index.html#sherpa.utils.err.DS9Err)(key, *args) | |
| [`DataErr`](index.html#sherpa.utils.err.DataErr)(key, *args) | Error in creating or using a data set |
| [`EstErr`](index.html#sherpa.utils.err.EstErr)(key, *args) | |
| [`FitErr`](index.html#sherpa.utils.err.FitErr)(key, *args) | |
| [`IOErr`](index.html#sherpa.utils.err.IOErr)(key, *args) | |
| [`IdentifierErr`](index.html#sherpa.utils.err.IdentifierErr)(key, *args) | |
| [`ImportErr`](index.html#sherpa.utils.err.ImportErr)(key, *args) | |
| [`InstrumentErr`](index.html#sherpa.utils.err.InstrumentErr)(key, *args) | |
| [`ModelErr`](index.html#sherpa.utils.err.ModelErr)(key, *args) | Error in creating or using a model |
| [`NotImplementedErr`](index.html#sherpa.utils.err.NotImplementedErr)(key, *args) | |
| [`PSFErr`](index.html#sherpa.utils.err.PSFErr)(key, *args) | |
| [`ParameterErr`](index.html#sherpa.utils.err.ParameterErr)(key, *args) | Error in creating or using a model |
| [`PlotErr`](index.html#sherpa.utils.err.PlotErr)(key, *args) | Error in creating or using a plotting class |
| [`SessionErr`](index.html#sherpa.utils.err.SessionErr)(key, *args) | |
| [`StatErr`](index.html#sherpa.utils.err.StatErr)(key, *args) | |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of SherpaErr, ArgumentErr, ArgumentTypeErr, ConfidenceErr, DS9Err, DataErr, EstErr, FitErr, IOErr, IdentifierErr, ImportErr, InstrumentErr, ModelErr, NotImplementedErr, PSFErr, ParameterErr, PlotErr, SessionErr, StatErr
#### The sherpa.utils module[¶](#module-sherpa.utils)
Functions
| [`Knuth_close`](index.html#sherpa.utils.Knuth_close)(x, y, tol[, myop]) | Check whether two floating-point numbers are close together. |
| [`_guess_ampl_scale`](index.html#sherpa.utils._guess_ampl_scale) | The scaling applied to a value to create its range. |
| [`apache_muller`](index.html#sherpa.utils.apache_muller)(fcn, xa, xb[, fa, fb, args, …]) | |
| [`bisection`](index.html#sherpa.utils.bisection)(fcn, xa, xb[, fa, fb, args, …]) | |
| [`bool_cast`](index.html#sherpa.utils.bool_cast)(val) | Convert a string to a boolean. |
| [`calc_ftest`](index.html#sherpa.utils.calc_ftest)(dof1, stat1, dof2, stat2) | Compare two models using the F test. |
| [`calc_mlr`](index.html#sherpa.utils.calc_mlr)(delta_dof, delta_stat) | Compare two models using the Maximum Likelihood Ratio test. |
| [`calc_total_error`](index.html#sherpa.utils.calc_total_error)([staterror, syserror]) | Add statistical and systematic errors in quadrature. |
| [`create_expr`](index.html#sherpa.utils.create_expr)(vals[, mask, format, delim]) | collapse a list of channels into an expression using hyphens and commas to indicate filtered intervals. |
| [`dataspace1d`](index.html#sherpa.utils.dataspace1d)(start, stop[, step, numbins]) | Populates an integrated grid |
| [`dataspace2d`](index.html#sherpa.utils.dataspace2d)(dim) | Populates a blank image dataset |
| [`demuller`](index.html#sherpa.utils.demuller)(fcn, xa, xb, xc[, fa, fb, fc, …]) | A root-finding algorithm using Muller’s method. |
| [`erf`](index.html#sherpa.utils.erf)(x) | Calculate the error function. |
| [`export_method`](index.html#sherpa.utils.export_method)(meth[, name, modname]) | Given a bound instance method, return a simple function that wraps it. |
| [`extract_kernel`](index.html#sherpa.utils.extract_kernel)(kernel, dims_kern, dims_new, …) | Extract the kernel. |
| [`filter_bins`](index.html#sherpa.utils.filter_bins)(mins, maxes, axislist) | What mask represents the given set of filters? |
| [`gamma`](index.html#sherpa.utils.gamma)(z) | Calculate the Gamma function. |
| [`get_error_estimates`](index.html#sherpa.utils.get_error_estimates)(x[, sorted]) | Compute the median and (-1,+1) sigma values for the data. |
| [`get_fwhm`](index.html#sherpa.utils.get_fwhm)(y, x[, xhi]) | Estimate the width of the data. |
| [`get_keyword_defaults`](index.html#sherpa.utils.get_keyword_defaults)(func[, skip]) | Return the keyword arguments and their default values. |
| [`get_keyword_names`](index.html#sherpa.utils.get_keyword_names)(func[, skip]) | Return the names of the keyword arguments. |
| [`get_midpoint`](index.html#sherpa.utils.get_midpoint)(a) | Estimate the middle of the data. |
| [`get_num_args`](index.html#sherpa.utils.get_num_args)(func) | Return the number of arguments for a function. |
| [`get_peak`](index.html#sherpa.utils.get_peak)(y, x[, xhi]) | Estimate the peak position of the data. |
| [`get_position`](index.html#sherpa.utils.get_position)(y, x[, xhi]) | Get 1D model parameter positions pos (val, min, max) |
| [`get_valley`](index.html#sherpa.utils.get_valley)(y, x[, xhi]) | Estimate the position of the minimum of the data. |
| [`guess_amplitude`](index.html#sherpa.utils.guess_amplitude)(y, x[, xhi]) | Guess model parameter amplitude (val, min, max) |
| [`guess_amplitude2d`](index.html#sherpa.utils.guess_amplitude2d)(y, x0lo, x1lo[, x0hi, x1hi]) | Guess 2D model parameter amplitude (val, min, max) |
| [`guess_amplitude_at_ref`](index.html#sherpa.utils.guess_amplitude_at_ref)(r, y, x[, xhi]) | Guess model parameter amplitude (val, min, max) |
| [`guess_bounds`](index.html#sherpa.utils.guess_bounds)(x[, xhi]) | Guess the bounds of a parameter from the independent axis. |
| [`guess_fwhm`](index.html#sherpa.utils.guess_fwhm)(y, x[, xhi, scale]) | Estimate the value and valid range for the FWHM of the data. |
| [`guess_position`](index.html#sherpa.utils.guess_position)(y, x0lo, x1lo[, x0hi, x1hi]) | Guess 2D model parameter positions xpos, ypos ({val0, min0, max0}, |
| [`guess_radius`](index.html#sherpa.utils.guess_radius)(x0lo, x1lo[, x0hi, x1hi]) | Guess the radius parameter of a 2D model. |
| [`guess_reference`](index.html#sherpa.utils.guess_reference)(pmin, pmax, x[, xhi]) | Guess model parameter reference (val, min, max) |
| [`histogram1d`](index.html#sherpa.utils.histogram1d)(x, x_lo, x_hi) | Create a 1D histogram from a sequence of samples. |
| [`histogram2d`](index.html#sherpa.utils.histogram2d)(x, y, x_grid, y_grid) | Create 2D histogram from a sequence of samples. |
| [`igam`](index.html#sherpa.utils.igam)(a, x) | Calculate the regularized incomplete Gamma function (lower). |
| [`igamc`](index.html#sherpa.utils.igamc)(a, x) | Calculate the complement of the regularized incomplete Gamma function (upper). |
| [`incbet`](index.html#sherpa.utils.incbet)(a, b, x) | Calculate the incomplete Beta function. |
| [`interpolate`](index.html#sherpa.utils.interpolate)(xout, xin, yin[, function]) | One-dimensional interpolation. |
| [`is_binary_file`](index.html#sherpa.utils.is_binary_file)(filename) | Estimate if a file is a binary file. |
| [`lgam`](index.html#sherpa.utils.lgam)(z) | Calculate the log (base e) of the Gamma function. |
| [`linear_interp`](index.html#sherpa.utils.linear_interp)(xout, xin, yin) | Linear one-dimensional interpolation. |
| [`multinormal_pdf`](index.html#sherpa.utils.multinormal_pdf)(x, mu, sigma) | The PDF of a multivariate-normal. |
| [`multit_pdf`](index.html#sherpa.utils.multit_pdf)(x, mu, sigma, dof) | The PDF of a multivariate student-t. |
| [`nearest_interp`](index.html#sherpa.utils.nearest_interp)(xout, xin, yin) | Nearest-neighbor one-dimensional interpolation. |
| [`neville`](index.html#sherpa.utils.neville)(xout, xin, yin) | Polynomial one-dimensional interpolation using Neville’s method. |
| [`neville2d`](index.html#sherpa.utils.neville2d)(xinterp, yinterp, x, y, fval) | Polynomial two-dimensional interpolation using Neville’s method. |
| [`new_muller`](index.html#sherpa.utils.new_muller)(fcn, xa, xb[, fa, fb, args, …]) | |
| [`normalize`](index.html#sherpa.utils.normalize)(xs) | Normalize an array. |
| [`numpy_convolve`](index.html#sherpa.utils.numpy_convolve)(a, b) | Convolve two 1D arrays together using NumPy’s FFT. |
| [`pad_bounding_box`](index.html#sherpa.utils.pad_bounding_box)(kernel, mask) | Expand the kernel to match the mask. |
| [`parallel_map`](index.html#sherpa.utils.parallel_map)(function, sequence[, numcores]) | Run a function on a sequence of inputs in parallel. |
| [`param_apply_limits`](index.html#sherpa.utils.param_apply_limits)(param_limits, par[, …]) | Apply the given limits to a parameter. |
| [`parse_expr`](index.html#sherpa.utils.parse_expr)(expr) | parse a filter expression into numerical components for notice/ignore e.g. |
| [`poisson_noise`](index.html#sherpa.utils.poisson_noise)(x) | Draw samples from a Poisson distribution. |
| [`print_fields`](index.html#sherpa.utils.print_fields)(names, vals[, converters]) | Given a list of strings names and mapping vals, where names is a subset of vals.keys(), return a listing of name/value pairs printed one per line in the format ‘<name> = <value>’. |
| [`quantile`](index.html#sherpa.utils.quantile)(sorted_array, f) | Return the quantile element from sorted_array, where f is [0,1] using linear interpolation. |
| [`rebin`](index.html#sherpa.utils.rebin)(y0, x0lo, x0hi, x1lo, x1hi) | Rebin a histogram. |
| [`sao_arange`](index.html#sherpa.utils.sao_arange)(start, stop[, step]) | Create a range of values between start and stop. |
| [`sao_fcmp`](index.html#sherpa.utils.sao_fcmp)(x, y, tol) | Compare y to x, using an absolute tolerance. |
| [`set_origin`](index.html#sherpa.utils.set_origin)(dims[, maxindex]) | Return the position of the origin of the kernel. |
| [`sum_intervals`](index.html#sherpa.utils.sum_intervals)(src, indx0, indx1) | Sum up data within one or more pairs of indexes. |
| [`zeroin`](index.html#sherpa.utils.zeroin)(fcn, xa, xb[, fa, fb, args, maxfev, tol]) | Obtain a zero of a function of one variable using Brent’s root finder. |
Classes
| [`NoNewAttributesAfterInit`](index.html#sherpa.utils.NoNewAttributesAfterInit)() | Prevents attribute deletion and setting of new attributes after __init__ has been called. |
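The routines listed above are plain functions and can be called directly. As a minimal sketch (not taken from the Sherpa documentation) using the signatures shown in the table, two of them could be exercised like this:
```
>>> import numpy as np
>>> from sherpa.utils import calc_total_error, linear_interp
>>> staterr = np.asarray([0.1, 0.2, 0.3])
>>> syserr = np.asarray([0.05, 0.05, 0.05])
>>> total = calc_total_error(staterror=staterr, syserror=syserr)
>>> xin = np.asarray([1.0, 2.0, 4.0])
>>> yin = np.asarray([1.0, 4.0, 16.0])
>>> yout = linear_interp(np.asarray([1.5, 3.0]), xin, yin)
```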
#### The sherpa.utils.testing module[¶](#module-sherpa.utils.testing)
Functions
| [`requires_data`](index.html#sherpa.utils.testing.requires_data)(*args, **kwargs) | |
| [`requires_ds9`](index.html#sherpa.utils.testing.requires_ds9)(*args, **kwargs) | |
| [`requires_fits`](index.html#sherpa.utils.testing.requires_fits)(*args, **kwargs) | |
| [`requires_group`](index.html#sherpa.utils.testing.requires_group)(*args, **kwargs) | |
| [`requires_package`](index.html#sherpa.utils.testing.requires_package)(*args) | |
| [`requires_plotting`](index.html#sherpa.utils.testing.requires_plotting)(*args, **kwargs) | |
| [`requires_pylab`](index.html#sherpa.utils.testing.requires_pylab)(*args, **kwargs) | |
| [`requires_stk`](index.html#sherpa.utils.testing.requires_stk)(*args, **kwargs) | |
| [`requires_xspec`](index.html#sherpa.utils.testing.requires_xspec)(*args, **kwargs) | |
Classes
| [`SherpaTestCase`](index.html#sherpa.utils.testing.SherpaTestCase)([methodName]) | Base class for Sherpa unit tests. |
#### The sherpa.astro.io module[¶](#module-sherpa.astro.io)
Functions
| [`read_table`](index.html#sherpa.astro.io.read_table)(arg[, ncols, colkeys, dstype]) | Create a dataset from a tabular file. |
| [`read_image`](index.html#sherpa.astro.io.read_image)(arg[, coord, dstype]) | Create an image dataset from a file. |
| [`read_arf`](index.html#sherpa.astro.io.read_arf)(arg) | Create a DataARF object. |
| [`read_rmf`](index.html#sherpa.astro.io.read_rmf)(arg) | Create a DataRMF object. |
| [`read_arrays`](index.html#sherpa.astro.io.read_arrays)(*args) | Create a dataset from multiple arrays. |
| [`read_pha`](index.html#sherpa.astro.io.read_pha)(arg[, use_errors, use_background]) | Create a DataPHA object. |
| [`write_image`](index.html#sherpa.astro.io.write_image)(filename, dataset[, ascii, clobber]) | Write out an image. |
| [`write_pha`](index.html#sherpa.astro.io.write_pha)(filename, dataset[, ascii, clobber]) | Write out a PHA dataset. |
| [`write_table`](index.html#sherpa.astro.io.write_table)(filename, dataset[, ascii, clobber]) | Write out a table. |
| [`pack_table`](index.html#sherpa.astro.io.pack_table)(dataset) | Convert a Sherpa data object into an I/O item (tabular). |
| [`pack_image`](index.html#sherpa.astro.io.pack_image)(dataset) | Convert a Sherpa data object into an I/O item (image). |
| [`pack_pha`](index.html#sherpa.astro.io.pack_pha)(dataset) | Convert a Sherpa PHA data object into an I/O item (tabular). |
| [`read_table_blocks`](index.html#sherpa.astro.io.read_table_blocks)(arg[, make_copy]) | Return the HDU elements (columns and header) from a FITS table. |
Simple Interpolation[¶](#simple-interpolation)
---
### Overview[¶](#overview)
Although Sherpa allows you to fit complex models to complex data sets with complex statistics and complex optimisers, it can also be used for simple situations, such as interpolating a function. In this example a one-dimensional set of data is given - i.e.
\((x_i, y_i)\) - and a polynomial of order 2 is fit to the data. The model is then used to interpolate (and extrapolate)
values. The walk through ends with changing the fit to a linear model (i.e. polynomial of order 1) and a comparison of the two model fits.
### Setting up[¶](#setting-up)
The following sections will load in classes from Sherpa as needed, but it is assumed that the following modules have been loaded:
```
>>> import numpy as np
>>> import matplotlib.pyplot as plt
```
### Loading the data[¶](#loading-the-data)
The data is the following:
| x | y |
| --- | --- |
| 1 | 1 |
| 1.5 | 1.5 |
| 2 | 1.75 |
| 4 | 3.25 |
| 8 | 6 |
| 17 | 16 |
which can be “loaded” into Sherpa using the
[`Data1D`](index.html#sherpa.data.Data1D) class:
```
>>> from sherpa.data import Data1D
>>> x = [1, 1.5, 2, 4, 8, 17]
>>> y = [1, 1.5, 1.75, 3.25, 6, 16]
>>> d = Data1D('interpolation', x, y)
>>> print(d)
name = interpolation
x = Float64[6]
y = Float64[6]
staterror = None
syserror = None
```
Note
Creating the [`Data1D`](index.html#sherpa.data.Data1D) object will - from Sherpa 4.12.0 - warn you that it is converting the dependent axis data to NumPy. That is, the above call will cause the following message to be displayed:
```
UserWarning: Converting array [1, 1.5, 1.75, 3.25, 6, 16] to numpy array
```
This can be displayed using the [`DataPlot`](index.html#sherpa.plot.DataPlot) class:
```
>>> from sherpa.plot import DataPlot
>>> dplot = DataPlot()
>>> dplot.prepare(d)
>>> dplot.plot()
```
### Setting up the model[¶](#setting-up-the-model)
For this example, a second-order polynomial is going to be fit to the data by using the [`Polynom1D`](index.html#sherpa.models.basic.Polynom1D) class:
```
>>> from sherpa.models.basic import Polynom1D
>>> mdl = Polynom1D()
>>> print(mdl)
polynom1d
Param Type Value Min Max Units
--- --- --- --- --- ---
polynom1d.c0 thawed 1 -3.40282e+38 3.40282e+38
polynom1d.c1 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c2 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c3 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c4 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c5 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c6 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c7 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c8 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.offset frozen 0 -3.40282e+38 3.40282e+38
```
The help for Polynom1D shows that the model is defined as:
\[f(x) = \sum_{i=0}^8 c_i (x - {\rm offset})^i\]
so to get a second-order polynomial we have to
[thaw](index.html#params-freeze) the `c2`
parameter (the linear term `c1` is kept at 0 to show that the choice of parameter to fit is up to the user):
```
>>> mdl.c2.thaw()
```
This model can be compared to the data using the
[`ModelPlot`](index.html#sherpa.plot.ModelPlot) class (note that, unlike the data plot, the
[`prepare()`](index.html#sherpa.plot.ModelPlot.prepare) method takes both the data - needed to know what \(x_i\) to use - and the model):
```
>>> from sherpa.plot import ModelPlot
>>> mplot = ModelPlot()
>>> mplot.prepare(d, mdl)
>>> dplot.plot()
>>> mplot.overplot()
```
Since the default parameter values are still being used, the result is not a good description of the data. Let’s fix this!
### Fitting the model to the data[¶](#fitting-the-model-to-the-data)
Since we have no error bars, we are going to use least-squares minimisation - that is, minimise the square of the distance between the model and the data using the
[`LeastSq`](index.html#sherpa.stats.LeastSq) statisic and the
[`NelderMead`](index.html#sherpa.optmethods.NelderMead) optimiser
(for this case the [`LevMar`](index.html#sherpa.optmethods.LevMar) optimiser is likely to produce as good a result but faster, but I have chosen to select the more robust method):
```
>>> from sherpa.stats import LeastSq
>>> from sherpa.optmethods import NelderMead
>>> from sherpa.fit import Fit
>>> f = Fit(d, mdl, stat=LeastSq(), method=NelderMead())
>>> print(f)
data = interpolation
model = polynom1d
stat = LeastSq
method = NelderMead
estmethod = Covariance
```
In this case there is no need to change any of the options for the optimiser (the least-squares statistic has no options), so the objects are passed straight to the [`Fit`](index.html#sherpa.fit.Fit) object.
The [`fit()`](index.html#sherpa.fit.Fit.fit) method is used to fit the data; as it returns useful information (in a [`FitResults`](index.html#sherpa.fit.FitResults)
object) we capture this in the `res` variable, and then check that the fit was successful (i.e. it converged):
```
>>> res = f.fit()
>>> res.succeeded
True
```
For this example the time to perform the fit is very short, but for complex data sets and models the call can take a long time!
A quick summary of the fit results is available via the
[`format()`](index.html#sherpa.fit.FitResults.format) method, while printing the variable returns more details:
```
>>> print(res.format())
Method                = neldermead
Statistic             = leastsq
Initial fit statistic = 255.875
Final fit statistic   = 2.4374 at function evaluation 264
Data points           = 6
Degrees of freedom    = 4
Change in statistic   = 253.438
polynom1d.c0 1.77498
polynom1d.c2 0.0500999
>>> print(res)
datasets       = None
itermethodname = none
methodname     = neldermead
statname       = leastsq
succeeded      = True
parnames       = ('polynom1d.c0', 'polynom1d.c2')
parvals        = (1.7749826216226083, 0.050099944904353017)
statval        = 2.4374045728256455
istatval       = 255.875
dstatval       = 253.43759542717436
numpoints      = 6
dof            = 4
qval           = None
rstat          = None
message        = Optimization terminated successfully
nfev           = 264
```
The best-fit parameter values can also be retrieved from the model itself:
```
>>> print(mdl)
polynom1d
Param Type Value Min Max Units
--- --- --- --- --- ---
polynom1d.c0 thawed 1.77498 -3.40282e+38 3.40282e+38
polynom1d.c1 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c2 thawed 0.0500999 -3.40282e+38 3.40282e+38
polynom1d.c3 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c4 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c5 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c6 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c7 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c8 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.offset frozen 0 -3.40282e+38 3.40282e+38
```
as can the current fit statistic (as this is for fitting a second-order polynomial I’ve chosen to label the variable with a suffix of 2,
which will make more sense
[below](#simple-interpolation-stat-order1)):
```
>>> stat2 = f.calc_stat()
>>> print("Statistic = {:.4f}".format(stat2))
Statistic = 2.4374
```
Note
In an actual analysis session the fit would probably be repeated,
perhaps with a different optimiser, and starting from a different set of parameter values, to give more confidence that the fit has not been caught in a local minimum. This example is simple enough that this is not needed here.
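As a rough sketch of what such a check could look like (this is an addition to the walk through; the `refit` and `res_check` names are purely illustrative), the thawed parameters can be moved away from their best-fit values and a new fit made with the [`LevMar`](index.html#sherpa.optmethods.LevMar) optimiser:
```
>>> from sherpa.optmethods import LevMar
>>> mdl.c0 = 5
>>> mdl.c2 = 1
>>> refit = Fit(d, mdl, stat=LeastSq(), method=LevMar())
>>> res_check = refit.fit()
>>> print(res_check.succeeded, "{:.4f}".format(refit.calc_stat()))
```
For this data set the re-fit should end up back at essentially the same statistic value as `stat2`.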
To compare the new model to the data I am going to use a
[`FitPlot`](index.html#sherpa.plot.FitPlot) - which combines a DataPlot and ModelPlot - and a [`ResidPlot`](index.html#sherpa.plot.ResidPlot) - to look at the residuals, defined as \({\rm data}_i - {\rm model}_i\),
using the [`SplitPlot`](index.html#sherpa.plot.SplitPlot) class to orchestrate the display (note that `mplot` needs to be re-created since the model has changed since the last time its `prepare` method was called):
```
>>> from sherpa.plot import FitPlot, ResidPlot, SplitPlot
>>> fplot = FitPlot()
>>> mplot.prepare(d, mdl)
>>> fplot.prepare(dplot, mplot)
>>> splot = SplitPlot()
>>> splot.addplot(fplot)
>>> rplot = ResidPlot()
>>> rplot.prepare(d, mdl, stat=LeastSq())
WARNING: The displayed errorbars have been supplied with the data or calculated using chi2xspecvar; the errors are not used in fits with leastsq
>>> rplot.plot_prefs['yerrorbars'] = False
>>> splot.addplot(rplot)
```
The default behavior for the residual plot is to include error bars,
here calculated using the [`Chi2XspecVar`](index.html#sherpa.stats.Chi2XspecVar) class,
but they have been turned off - by setting the
`yerrorbars` option to `False` - since they are not meaningful here.
### Interpolating values[¶](#interpolating-values)
The model can be evaluated directly by supplying it with the independent-axis values; for instance for \(x\) equal to 2, 5, and 10:
```
>>> print(mdl([2, 5, 10]))
[1.9753824 3.02748124 6.78497711]
```
It can also be used to extrapolate the model outside the range of the data (as long as the model is defined for these values):
```
>>> print(mdl([-100]))
[502.77443167]
>>> print(mdl([234.56]))
[2758.19347071]
```
### Changing the fit[¶](#changing-the-fit)
Let’s see how the fit looks if we use a linear model instead. This means thawing out the `c1` parameter and clearing `c2`:
```
>>> mdl.c1.thaw()
>>> mdl.c2 = 0
>>> mdl.c2.freeze()
>>> f.fit()
<Fit results instance>
```
As this is a simple case, I am ignoring the return value from the
[`fit()`](index.html#sherpa.fit.Fit.fit) method, but in an actual analysis session it should be checked to ensure the fit converged.
The new model parameters are:
```
>>> print(mdl)
polynom1d
Param Type Value Min Max Units
--- --- --- --- --- ---
polynom1d.c0 thawed -0.248624 -3.40282e+38 3.40282e+38
polynom1d.c1 thawed 0.925127 -3.40282e+38 3.40282e+38
polynom1d.c2 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c3 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c4 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c5 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c6 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c7 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.c8 frozen 0 -3.40282e+38 3.40282e+38
polynom1d.offset frozen 0 -3.40282e+38 3.40282e+38
```
and the best-fit statistic value can be compared to the
[earlier version](#simple-interpolation-stat-order2):
```
>>> stat1 = f.calc_stat()
>>> print("Statistic: order 1 = {:.3f} order 2 = {:.3f}".format(stat1, stat2))
Statistic: order 1 = 1.898 order 2 = 2.437
```
Note
Sherpa provides several routines for comparing statistic values,
such as [`sherpa.utils.calc_ftest()`](index.html#sherpa.utils.calc_ftest) and
[`sherpa.utils.calc_mlr()`](index.html#sherpa.utils.calc_mlr), to see if one can be preferred over the other, but these are not relevant here, as the statistic being used is just the least-squared difference.
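For reference, a minimal sketch of how `calc_mlr` would be called when a likelihood-based statistic has been used; the numbers below are placeholders rather than values from this example:
```
>>> from sherpa.utils import calc_mlr
>>> delta_dof = 1     # change in the number of thawed parameters
>>> delta_stat = 4.2  # placeholder change in the (likelihood-based) statistic
>>> p = calc_mlr(delta_dof, delta_stat)
```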
The two models can be visually compared by taking advantage of the previous plot objects retaining the values from the previous fit:
```
>>> mplot2 = ModelPlot()
>>> mplot2.prepare(d, mdl)
>>> mplot.plot()
>>> mplot2.overplot()
```
An alternative would be to create the plots directly (the order=2 parameter values are restored from the res object created from the [first fit](#simple-interpolation-first-fit)
to the data), in which case we are not limited to calculating the model on the independent axis of the input data (the order is chosen to match the colors of the previous plot):
```
>>> xgrid = np.linspace(0, 20, 21)
>>> y1 = mdl(xgrid)
>>> mdl.c0 = res.parvals[0]
>>> mdl.c1 = 0
>>> mdl.c2 = res.parvals[1]
>>> y2 = mdl(xgrid)
>>> plt.clf()
>>> plt.plot(xgrid, y2, label='order=2');
>>> plt.plot(xgrid, y1, label='order=1');
>>> plt.legend();
>>> plt.title("Manual evaluation of the models");
```
Simple user model[¶](#simple-user-model)
---
This example works through a fit to a small one-dimensional dataset which includes errors. This means that, unlike the
[Simple Interpolation](index.html#document-examples/simple_interpolation) example, an analysis of the
[parameter errors](#simple-user-model-estimating-parameter-errors)
can be made. The fit begins with the use of the
[basic Sherpa models](#simple-user-model-creating-the-model),
but this turns out to be sub-optimal - since the model parameters do not match the required parameters - so a
[user model is created](#sherpa-user-model-writing-your-own-model),
which recasts the Sherpa model parameters into the desired form. It also has the advantage of simplifying the model, which avoids the need for manual intervention required with the Sherpa version.
### Introduction[¶](#introduction)
For this example, a data set from the literature was chosen,
looking at non-Astronomy papers to show that Sherpa can be used in a variety of fields. There is no attempt made here to interpret the results, and the model parameters, and their errors, derived here **should not** be assumed to have any meaning compared to the results of the paper.
The data used in this example is taken from Zhao et al. (2017), Effect of various nitrogen conditions on population growth, temporary cysts and cellular biochemical compositions of Karenia mikimotoi. PLoS ONE 12(2): e0171996.
[doi:10.1371/journal.pone.0171996](https://dx.doi.org/10.1371/journal.pone.0171996). The Supporting information section of the paper includes a spreadsheet containing the data for the figures, and this was downloaded and stored as the file `pone.0171996.s001.xlsx`.
The aim is to fit a similar model to that described in Table 5,
that is
\[y = N (1.0 + e^{a + b * t})^{-1}\]
where \(t\) and \(y\) are the abscissa (independent axis)
and ordinate (dependent axis), respectively. The idea is to see if we can get a similar result rather than to make any inferences based on the data. For this example only the “NaNO3” dataset is going to be used.
### Setting up[¶](#setting-up)
Both NumPy and Matplotlib are required:
```
>>> import numpy as np
>>> import matplotlib.pyplot as plt
```
### Reading in the data[¶](#reading-in-the-data)
The
[openpyxl](https://openpyxl.readthedocs.io/) package (version 2.5.3) is used to read in the data from the Excel spreadsheet. This is not guaranteed to be the optimal means of reading in the data (and relies on hard-coded knowledge of the column numbers):
```
>>> from openpyxl import load_workbook
>>> wb = load_workbook('pone.0171996.s001.xlsx')
>>> fig4 = wb['Fig4data']
>>> t = []; y = []; dy = []
>>> for r in list(fig4.values)[2:]:
... t.append(r[0])
... y.append(r[3])
... dy.append(r[4])
...
```
With these arrays, a [data object](index.html#document-data/index)
can be created:
```
>>> from sherpa.data import Data1D
>>> d = Data1D('NaNO_3', t, y, dy)
```
Unlike the [first worked example](index.html#document-examples/simple_interpolation),
this data set includes an error column, so the data plot created by [`DataPlot`](index.html#sherpa.plot.DataPlot) contains error bars (although not obvious for the first point,
which has an error of 0):
```
>>> from sherpa.plot import DataPlot
>>> dplot = DataPlot()
>>> dplot.prepare(d)
>>> dplot.plot()
```
The data can also be inspected directly (as there aren’t many data points):
```
>>> print(d)
name = NaNO_3
x = Int64[11]
y = Float64[11]
staterror = [0, 0.9214, 1.1273, 1.9441, 2.3363, 0.9289, 1.6615, 1.1726, 1.8066, 2.149, 1.983]
syserror = None
```
### Restricting the data[¶](#restricting-the-data)
Trying to fit the whole data set will fail because the first data point has an error of 0, so it is necessary to
[restrict, or filter out,](index.html#data-filter)
this data point. The simplest way is to select a data range to ignore using
[`ignore()`](index.html#sherpa.data.Data1D.ignore), in this case everything where \(x < 1\):
```
>>> d.get_filter()
'0.0000:20.0000'
>>> d.ignore(None, 1)
>>> d.get_filter()
'2.0000:20.0000'
```
The [`get_filter()`](index.html#sherpa.data.Data1D.get_filter) routine returns a text description of the filters applied to the data; it starts with all the data being included (0 to 20) and then after excluding all points less than 1 the filter is now 2 to 20.
The format can be changed to something more appropriate for this data set:
```
>>> d.get_filter(format='%d')
'2:20'
```
Since the data has been changed, the data plot object is updated so that the following plots reflect the new filter:
```
>>> dplot.prepare(d)
```
### Creating the model[¶](#creating-the-model)
Table 5 lists the model fit to this dataset as
\[y = 14.89 (1.0 + e^{1.941 - 0.453 t})^{-1}\]
which can be constructed from components using the
[`Const1D`](index.html#sherpa.models.basic.Const1D)
and [`Exp`](index.html#sherpa.models.basic.Exp) models, as shown below:
```
>>> from sherpa.models.basic import Const1D, Exp
>>> plateau = Const1D('plateau')
>>> rise = Exp('rise')
>>> mdl = plateau / (1 + rise)
>>> print(mdl)
(plateau / (1 + rise))
Param Type Value Min Max Units
--- --- --- --- --- ---
plateau.c0 thawed 1 -3.40282e+38 3.40282e+38
rise.offset thawed 0 -3.40282e+38 3.40282e+38
rise.coeff thawed -1 -3.40282e+38 3.40282e+38
rise.ampl thawed 1 0 3.40282e+38
```
The amplitude of the exponential is fixed at 1, but the other terms will remain free in the fit, with `plateau.c0` representing the normalization, and the `rise.offset` and `rise.coeff` terms the exponent term. The `offset` and `coeff` terms do not match the form used in the paper, namely \(a + b t\),
which has some interesting consequences for the fit, as will be discussed below in the
[user-model section](#simple-user-model-parameter-optimisation).
```
>>> rise.ampl.freeze()
>>> print(mdl)
(plateau / (1 + rise))
Param Type Value Min Max Units
--- --- --- --- --- ---
plateau.c0 thawed 1 -3.40282e+38 3.40282e+38
rise.offset thawed 0 -3.40282e+38 3.40282e+38
rise.coeff thawed -1 -3.40282e+38 3.40282e+38
rise.ampl frozen 1 0 3.40282e+38
```
The functional form of the exponential model provided by Sherpa, assuming an amplitude of unity, is
\[f(x) = e^{{\rm coeff} * (x - {\rm offset})}\]
which means that I expect the final values to be
\({\rm coeff} \simeq -0.5\) and, as
\(- {\rm coeff} * {\rm offset} \simeq 1.9\), then
\({\rm offset} \simeq 4\).
The plateau value should be close to 15.
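As a quick check of that arithmetic - a small sketch using the Table 5 values quoted above, rather than anything calculated by Sherpa:
```
>>> a, b = 1.941, -0.453
>>> coeff = b
>>> offset = -a / b
>>> print("coeff = {:.3f} offset = {:.3f}".format(coeff, offset))
coeff = -0.453 offset = 4.285
```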
The model and data can be shown together, but as the fit has not yet been made then showing on the same plot is not very instructive,
so here’s two plots one above the other, created by mixing the Sherpa and Matplotlib APIs:
```
>>> from sherpa.plot import ModelPlot
>>> mplot = ModelPlot()
>>> mplot.prepare(d, mdl)
>>> plt.subplot(2, 1, 1)
>>> mplot.plot(clearwindow=False)
>>> plt.subplot(2, 1, 2)
>>> dplot.plot(clearwindow=False)
>>> plt.title('')
```
The title of the data plot was removed since it overlapped the X axis of the model plot above it.
### Fitting the data[¶](#fitting-the-data)
The main difference to [fitting the first example](index.html#simple-interpolation-fit) is that the
[`Chi2`](index.html#sherpa.stats.Chi2) statistic is used,
since the data contains error values.
```
>>> from sherpa.stats import Chi2
>>> from sherpa.fit import Fit
>>> f = Fit(d, mdl, stat=Chi2())
>>> print(f)
data = NaNO_3
model = (plateau / (1 + rise))
stat = Chi2
method = LevMar
estmethod = Covariance
>>> print("Starting statistic: {}".format(f.calc_stat()))
Starting statistic: 633.2233812020354
```
The use of a Chi-square statistic means that the fit also calculates the reduced statistic (the final statistic value divided by the degrees of freedom), which should be \(\sim 1\) for a “good”
fit, and an estimate of the probability (Q value) that the fit is good (this is also based on the statistic and number of degrees of freedom).
```
>>> fitres = f.fit()
>>> print(fitres.format())
Method                = levmar
Statistic             = chi2
Initial fit statistic = 633.223
Final fit statistic   = 101.362 at function evaluation 17
Data points           = 10
Degrees of freedom    = 7
Probability [Q-value] = 5.64518e-19
Reduced statistic     = 14.4802
Change in statistic   = 531.862
plateau.c0 10.8792 +/- 0.428815
rise.offset 457.221 +/- 0
rise.coeff 24.3662 +/- 0
```
Changed in version 4.10.1: The implementation of the [`LevMar`](index.html#sherpa.optmethods.LevMar)
class has been changed from Fortran to C++ in the 4.10.1 release.
The results of the optimiser are expected not to change significantly, but one of the more-noticeable changes is that the covariance matrix is now returned directly from a fit,
which results in an error estimate provided as part of the fit output (the values after the +/- terms above).
The reduced chi-square value is large, as shown in the screen output above and the explicit access below, the probability value is essentially 0, and the parameters are nowhere near the expected values.
```
>>> print("Reduced chi square = {:.2f}".format(fitres.rstat))
Reduced chi square = 14.48
```
Visually comparing the model and data values highlights how poor this fit is (the data plot does not need regenerating in this case, but [`prepare()`](index.html#sherpa.plot.DataPlot.prepare) is called just to make sure that the correct data is being displayed):
```
>>> dplot.prepare(d)
>>> mplot.prepare(d, mdl)
>>> dplot.plot()
>>> mplot.overplot()
```
Either the model has got caught in a local minimum, or it is not a good description of the data. To investigate further, a useful technique is to switch the optimiser and re-fit; the hope is that the different optimiser will be able to escape the local minima in the search space. The default optimiser used by
[`Fit`](index.html#sherpa.fit.Fit) is
[`LevMar`](index.html#sherpa.optmethods.LevMar), which is often a good choice for data with errors. The other standard optimiser provided by Sherpa is
[`NelderMead`](index.html#sherpa.optmethods.NelderMead), which is often slower than `LevMar` - as it requires more model evaluations - but less-likely to get stuck:
```
>>> from sherpa.optmethods import NelderMead
>>> f.method = NelderMead()
>>> fitres2 = f.fit()
>>> print(mdl)
(plateau / (1 + rise))
Param Type Value Min Max Units
--- --- --- --- --- ---
plateau.c0 thawed 10.8792 -3.40282e+38 3.40282e+38
rise.offset thawed 457.221 -3.40282e+38 3.40282e+38
rise.coeff thawed 24.3662 -3.40282e+38 3.40282e+38
rise.ampl frozen 1 0 3.40282e+38
```
An alternative to replacing the
`method` attribute, as done above, would be to create a new [`Fit`](index.html#sherpa.fit.Fit) object - changing the method using the `method` attribute of the initializer, and use that to fit the model and data.
As can be seen, the parameter values have not changed; the
[`dstatval`](index.html#sherpa.fit.FitResults.dstatval) attribute contains the change in the statistic value, and as shown below, it has not improved:
```
>>> fitres2.dstatval
0.0
```
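A sketch of the alternative mentioned above - creating a new [`Fit`](index.html#sherpa.fit.Fit) object rather than changing the `method` attribute - could look like the following (the `altfit` name is purely illustrative and is not used elsewhere in this example):
```
>>> altfit = Fit(d, mdl, stat=Chi2(), method=NelderMead())
>>> altres = altfit.fit()
```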
The failure of this fit is actually down to the coupling of the `offset` and `coeff` parameters of the
[`Exp`](index.html#sherpa.models.basic.Exp) model, as will be discussed [below](#simple-user-model-parameter-optimisation),
but a good solution can be found by tweaking the starting parameter values.
### Restarting the fit[¶](#restarting-the-fit)
The [`reset()`](index.html#sherpa.models.model.Model.reset) will change the parameter values back to the
[last values you set them to](#simple-user-model-freeze-ampl),
which may not be the same as their
[default settings](#simple-user-model-creating-the-model)
(in this case the difference is in the state of the `rise.ampl`
parameter, which has remained frozen):
```
>>> mdl.reset()
>>> print(mdl)
(plateau / (1 + rise))
Param Type Value Min Max Units
--- --- --- --- --- ---
plateau.c0 thawed 1 -3.40282e+38 3.40282e+38
rise.offset thawed 0 -3.40282e+38 3.40282e+38
rise.coeff thawed -1 -3.40282e+38 3.40282e+38
rise.ampl frozen 1 0 3.40282e+38
```
Note
It is not always necessary to reset the parameter values when trying to get out of a local minimum, but it can be a useful strategy to avoid getting trapped in the same area.
One of the simplest changes to make here is to set the plateau term to the maximum data value, as the intention is for this term to represent the asymptote of the curve.
```
>>> plateau.c0 = np.max(d.y)
>>> mplot.prepare(d, mdl)
>>> dplot.plot()
>>> mplot.overplot()
```
A new fit object could be created, but it is also possible to re-use the existing object. This leaves the optimiser set to
[`NelderMead`](index.html#sherpa.optmethods.NelderMead), although in this case the same parameter values are found if the method attribute had been changed back to
[`LevMar`](index.html#sherpa.optmethods.LevMar):
```
>>> fitres3 = f.fit()
>>> print(fitres3.format())
Method                = neldermead
Statistic             = chi2
Initial fit statistic = 168.42
Final fit statistic   = 0.299738 at function evaluation 42
Data points           = 10
Degrees of freedom    = 7
Probability [Q-value] = 0.9999
Reduced statistic     = 0.0428198
Change in statistic   = 168.12
plateau.c0 14.9694 +/- 0.859633
rise.offset 4.17729 +/- 0.630148
rise.coeff -0.420696 +/- 0.118487
```
These results already look a lot better than the previous attempt;
the reduced statistic is much smaller, and the values are similar to the reported values. As shown in the plot below, the model also well describes the data:
```
>>> mplot.prepare(d, mdl)
>>> dplot.plot()
>>> mplot.overplot()
```
The residuals can also be displayed, in this case normalizing by the error values by using a
[`DelchiPlot`](index.html#sherpa.plot.DelchiPlot) plot:
```
>>> from sherpa.plot import DelchiPlot
>>> residplot = DelchiPlot()
>>> residplot.prepare(d, mdl, f.stat)
>>> residplot.plot()
```
Unlike the data and model plots, the
[`prepare()`](index.html#sherpa.plot.DelchiPlot.prepare) method of the residual plot requires a statistic object, so the value in the fit object (using the `stat`
attribute) is used.
Given that the reduced statistic for the fit is a lot smaller than 1 (\(\sim 0.04\)), the residuals are all close to 0:
the ordinate axis shows \((d - m) / e\) where
\(d\), \(m\), and \(e\) are data, model, and error value respectively.
### What happens at \(t = 0\)?[¶](#what-happens-at-t-0)
The [filtering applied earlier](#simple-user-model-restrict)
can be removed, to see how the model behaves at low times. Calling the [`notice()`](index.html#sherpa.data.Data1D.notice) without any arguments removes any previous filter:
```
>>> d.notice()
>>> d.get_filter(format='%d')
'0:20'
```
For this plot, the [`FitPlot`](index.html#sherpa.plot.FitPlot) class is going to be used to show both the data and model rather than doing it manually as above:
```
>>> from sherpa.plot import FitPlot
>>> fitplot = FitPlot()
>>> dplot.prepare(d)
>>> mplot.prepare(d, mdl)
>>> fitplot.prepare(dplot, mplot)
>>> fitplot.plot()
```
Note
The `prepare` method on the components of the Fit plot (in this case `dplot` and
`mplot`) must be called with their appropriate arguments to ensure that the latest changes - such as filters and parameter values - are picked up.
Warning
Trying to create a residual plot for this new data range,
will end up with a division-by-zero warning from the
`prepare` call, as the first data point has an error of 0 and the residual plot shows \((d - m) / e\).
For the rest of this example the first data point has been removed:
```
>>> d.ignore(None, 1)
```
### Estimating parameter errors[¶](#estimating-parameter-errors)
The [`calc_stat_info()`](index.html#sherpa.fit.Fit.calc_stat_info) method returns an overview of the current fit:
```
>>> statinfo = f.calc_stat_info()
>>> print(statinfo)
name      =
ids       = None
bkg_ids   = None
statname  = chi2
statval   = 0.2997382864907501
numpoints = 10
dof       = 7
qval      = 0.999900257642653
rstat     = 0.04281975521296431
```
It is another way of getting at some of the information in the
[`FitResults`](index.html#sherpa.fit.FitResults) object; for instance
```
>>> statinfo.rstat == fitres3.rstat
True
```
Note
The `FitResults` object refers to the model at the time the fit was made, whereas `calc_stat_info` is calculated based on the current values, and so the results can be different.
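A small sketch of that difference (an addition to this example): perturbing a thawed parameter changes what `calc_stat_info()` reports, while `fitres3` keeps the values recorded at fit time; the parameter is restored afterwards so the session is unchanged:
```
>>> saved = plateau.c0.val
>>> plateau.c0 = saved + 1
>>> print(f.calc_stat_info().statval, fitres3.statval)
>>> plateau.c0 = saved
```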
The [`est_errors()`](index.html#sherpa.fit.Fit.est_errors) method is used to estimate error ranges for the parameter values. It does this by
[varying the parameters around the best-fit location](index.html#estimating-errors)
until the statistic value has increased by a set amount.
The default method for estimating errors is
[`Covariance`](index.html#sherpa.estmethods.Covariance)
```
>>> f.estmethod.name
'covariance'
```
which has the benefit of being fast, but may not be as robust as other techniques.
```
>>> coverrs = f.est_errors()
>>> print(coverrs.format())
Confidence Method     = covariance
Iterative Fit Method  = None
Fitting Method        = levmar
Statistic             = chi2
covariance 1-sigma (68.2689%) bounds:
Param Best-Fit Lower Bound Upper Bound
--- --- --- ---
plateau.c0 14.9694 -0.880442 0.880442
rise.offset 4.17729 -0.646012 0.646012
rise.coeff -0.420696 -0.12247 0.12247
```
These errors are similar to those reported
[during the fit](#simple-user-model-refit).
As [shown below](#simple-user-model-compare-errors),
the error values can be extracted from the output of
[`est_errors()`](index.html#sherpa.fit.Fit.est_errors).
The default is to calculate “one sigma” error bounds
(i.e. those that cover 68.3% of the expected parameter range),
but this can be changed by altering the
`sigma` attribute of the error estimator.
```
>>> f.estmethod.sigma
1
```
Changing this value to 1.6 means that the errors are close to the 90% bounds (for a single parameter):
```
>>> f.estmethod.sigma = 1.6
>>> coverrs90 = f.est_errors()
>>> print(coverrs90.format())
Confidence Method     = covariance
Iterative Fit Method  = None
Fitting Method        = neldermead
Statistic             = chi2
covariance 1.6-sigma (89.0401%) bounds:
Param Best-Fit Lower Bound Upper Bound
--- --- --- ---
plateau.c0 14.9694 -1.42193 1.42193
rise.offset 4.17729 -1.04216 1.04216
rise.coeff -0.420696 -0.19679 0.19679
```
The covariance method uses the covariance matrix to estimate the error surface, and so the parameter errors are symmetric.
A more-robust, but often significantly-slower, approach is to use the [`Confidence`](index.html#sherpa.estmethods.Confidence) approach:
```
>>> from sherpa.estmethods import Confidence
>>> f.estmethod = Confidence()
>>> conferrs = f.est_errors()
plateau.c0 lower bound:	-0.804259
rise.offset lower bound:	-0.590258
rise.coeff lower bound:	-0.148887
rise.offset upper bound:	0.714407
plateau.c0 upper bound:	0.989664
rise.coeff upper bound:	0.103391
```
The [error estimation for the confidence technique is run in parallel](index.html#fit-multi-core) - if the machine has multiple cores usable by the Python multiprocessing module -
which can mean that the screen output above is not always in the same order. As shown below, the confidence-derived error bounds are similar to the covariance bounds, but are not symmetric.
```
>>> print(conferrs.format())
Confidence Method     = confidence
Iterative Fit Method  = None
Fitting Method        = neldermead
Statistic             = chi2
confidence 1-sigma (68.2689%) bounds:
Param Best-Fit Lower Bound Upper Bound
--- --- --- ---
plateau.c0 14.9694 -0.804259 0.989664
rise.offset 4.17729 -0.590258 0.714407
rise.coeff -0.420696 -0.148887 0.103391
```
The default is to use all
[thawed parameters](index.html#params-freeze)
in the error analysis, but the [`est_errors()`](index.html#sherpa.fit.Fit.est_errors)
method has a `parlist` attribute which can be used to restrict the parameters used, for example to just the `offset` term:
```
>>> offseterrs = f.est_errors(parlist=(mdl.pars[1], ))
rise.offset lower bound:	-0.590258
rise.offset upper bound:	0.714407
>>> print(offseterrs)
datasets    = None
methodname  = confidence
iterfitname = none
fitname     = neldermead
statname    = chi2
sigma       = 1
percent     = 68.26894921370858
parnames    = ('rise.offset',)
parvals     = (4.177287700807689,)
parmins     = (-0.5902580352584237,)
parmaxes    = (0.7144070082643514,)
nfits       = 8
```
The covariance and confidence limits can be compared by accessing the fields of the
[`ErrorEstResults`](index.html#sherpa.fit.ErrorEstResults) object:
```
>>> fmt = "{:13s} covar=±{:4.2f} conf={:+5.2f} {:+5.2f}"
>>> for i in range(len(conferrs.parnames)):
... print(fmt.format(conferrs.parnames[i], coverrs.parmaxes[i],
... conferrs.parmins[i], conferrs.parmaxes[i]))
...
plateau.c0    covar=±0.88 conf=-0.80 +0.99
rise.offset   covar=±0.65 conf=-0.59 +0.71
rise.coeff    covar=±0.12 conf=-0.15 +0.10
```
The [`est_errors()`](index.html#sherpa.fit.Fit.est_errors) method returns a range, but often it is important to visualize the error surface, which can be done using the interval projection
(for one parameter) and region projection (for two parameter)
routines. The one-dimensional version is created with the
[`IntervalProjection`](index.html#sherpa.plot.IntervalProjection)
class, as shown in the following, which shows how the statistic varies with the plateau term (the vertical dashed line indicates the best-fit location for the parameter, and the horizontal line the statistic value for the best-fit location):
```
>>> from sherpa.plot import IntervalProjection
>>> intproj = IntervalProjection()
>>> intproj.calc(f, plateau.c0)
>>> intproj.plot()
```
Unlike the previous plots, this requires calling the
[`calc()`](index.html#sherpa.plot.IntervalProjection.calc) method before [`plot()`](index.html#sherpa.plot.IntervalProjection.plot). As the [`prepare()`](index.html#sherpa.plot.IntervalProjection.prepare)
method was not called, it used the default options to calculate the plot range (i.e. the range over which
`plateau.c0` would be varied), which turns out in this case to be close to the one-sigma limits.
The range, and number of points, can also be set explicitly:
```
>>> intproj.prepare(min=12.5, max=20, nloop=51)
>>> intproj.calc(f, plateau.c0)
>>> intproj.plot()
>>> s0 = f.calc_stat()
>>> for ds in [1, 4, 9]:
... intproj.hline(s0 + ds, overplot=True, linestyle='dot', linecolor='gray')
...
```
The horizontal lines indicate the statistic value for one, two, and three sigma limits for a single parameter value (and assuming a Chi-square statistic). The plot shows how, as the parameter moves away from its best-fit location, the search space becomes less symmetric.
Following the same approach, the [`RegionProjection`](index.html#sherpa.plot.RegionProjection)
class calculates the statistic value as two parameters are varied,
displaying the results as a contour plot. It requires two parameters and the visualization is created with the `contour()`
method:
```
>>> from sherpa.plot import RegionProjection
>>> regproj = RegionProjection()
>>> regproj.calc(f, rise.offset, rise.coeff)
>>> regproj.contour()
```
The contours show the one, two, and three sigma contours, with the cross indicating the best-fit value. As with the interval-projection plot,
the [`prepare()`](index.html#sherpa.plot.RegionProjection.prepare) method can be used to define the grid of points to use; the values below are chosen to try and cover the full three-sigma range as well as improve the smoothness of the contours by increasing the number of points that are looped over:
```
>>> regproj.prepare(min=(2, -1.2), max=(8, -0.1), nloop=(21, 21))
>>> regproj.calc(f, rise.offset, rise.coeff)
>>> regproj.contour()
```
### Writing your own model[¶](#writing-your-own-model)
The approach above has provided fit results, but they do not match those of the paper and, since
\[\begin{split}a & = & - {\rm coeff} * {\rm offset} \\
b & = & \, {\rm coeff}\end{split}\]
it is hard to transform the values from above to get accurate results. An alternative approach is to
[create a model](index.html#usermodel) with the parameters in the required form, which requires a small amount of code (by using the
[`Exp`](index.html#sherpa.models.basic.Exp) class to do the actual model evaluation).
The following class (`MyExp`) creates a model that has two parameters (`a` and `b`) that represents
\(f(x) = e^{a + b x}\). The starting values for these parameters are chosen to match the default values of the
[`Exp`](index.html#sherpa.models.basic.Exp) parameters,
where \({\rm coeff} = -1\) and \({\rm offset} = 0\):
```
from sherpa.models.basic import RegriddableModel1D
from sherpa.models.parameter import Parameter

class MyExp(RegriddableModel1D):
    """A simpler form of the Exp model.

    The model is f(x) = exp(a + b * x).
    """

    def __init__(self, name='myexp'):
        self.a = Parameter(name, 'a', 0)
        self.b = Parameter(name, 'b', -1)

        # The _exp instance is used to perform the model calculation,
        # as shown in the calc method.
        self._exp = Exp('hidden')

        return RegriddableModel1D.__init__(self, name, (self.a, self.b))

    def calc(self, pars, *args, **kwargs):
        """Calculate the model"""

        # Tell the exp model to evaluate the model, after converting
        # the parameter values to the required form, and order, of:
        # offset, coeff, ampl.
        #
        coeff = pars[1]
        offset = -1 * pars[0] / coeff
        ampl = 1.0

        return self._exp.calc([offset, coeff, ampl], *args, **kwargs)
```
This can be used as any other Sherpa model:
```
>>> plateau2 = Const1D('plateau2')
>>> rise2 = MyExp('rise2')
>>> mdl2 = plateau2 / (1 + rise2)
>>> print(mdl2)
(plateau2 / (1 + rise2))
Param Type Value Min Max Units
--- --- --- --- --- ---
plateau2.c0 thawed 1 -3.40282e+38 3.40282e+38
rise2.a thawed 0 -3.40282e+38 3.40282e+38
rise2.b thawed -1 -3.40282e+38 3.40282e+38
>>> fit2 = Fit(d, mdl2, stat=Chi2())
>>> res2 = fit2.fit()
>>> print(res2.format())
Method                = levmar
Statistic             = chi2
Initial fit statistic = 633.223
Final fit statistic   = 0.299738 at function evaluation 52
Data points           = 10
Degrees of freedom    = 7
Probability [Q-value] = 0.9999
Reduced statistic     = 0.0428198
Change in statistic   = 632.924
plateau2.c0 14.9694 +/- 0.859768
rise2.a 1.75734 +/- 0.419169
rise2.b -0.420685 +/- 0.118473
>>> dplot.prepare(d)
>>> mplot2 = ModelPlot()
>>> mplot2.prepare(d, mdl2)
>>> dplot.plot()
>>> mplot2.overplot()
```
Unlike the [initial attempt](#simple-user-model-creating-the-model),
this version did not require any manual intervention to find the best-fit solution. This is because the degeneracy between the two terms of the exponential in the
[`Exp`](index.html#sherpa.models.basic.Exp) model has been broken in this version, and so the optimiser works better.
It also has the advantage that the parameters match the problem, and so the parameter limits determined below can be used directly, without having to transform them.
```
>>> fit2.estmethod = Confidence()
>>> conferrs2 = fit2.est_errors()
plateau2.c0 lower bound:	-0.804444
rise2.b lower bound:	-0.148899
rise2.a lower bound:	-0.38086
rise2.b upper bound:	0.10338
plateau2.c0 upper bound:	0.989623
rise2.a upper bound:	0.489919
>>> print(conferrs2.format())
Confidence Method     = confidence
Iterative Fit Method  = None
Fitting Method        = levmar
Statistic             = chi2
confidence 1-sigma (68.2689%) bounds:
Param Best-Fit Lower Bound Upper Bound
--- --- --- ---
plateau2.c0 14.9694 -0.804444 0.989623
rise2.a 1.75734 -0.38086 0.489919
rise2.b -0.420685 -0.148899 0.10338
```
The difference in the model parameterisation can also be seen in the various error-analysis plots, such as the region-projection contour plot (where the limits have been chosen to cover the three-sigma contour), and a marker has been added to show the result listed in Table 5 of Zhao et al:
```
>>> regproj2 = RegionProjection()
>>> regproj2.prepare(min=(0.5, -1.2), max=(5, -0.1), nloop=(21, 21))
>>> regproj2.calc(fit2, rise2.a, rise2.b)
>>> regproj2.contour()
>>> plt.plot(1.941, -0.453, 'ko', label='NaNO$_3$ Table 5')
>>> plt.legend(loc=1)
```
Using Sessions to manage models and data[¶](#using-sessions-to-manage-models-and-data)
---
So far we have discussed the object-based API of Sherpa -
where it is up to the user to manage the creation and handling of
[data](index.html#document-data/index),
[model](index.html#document-models/index),
[fit](index.html#document-fit/index) and related objects. Sherpa also provides a “Session” class that handles much of this,
and it can be used directly - via the [`sherpa.ui.utils.Session`](index.html#sherpa.ui.utils.Session) or
[`sherpa.astro.ui.utils.Session`](index.html#sherpa.astro.ui.utils.Session) classes - or indirectly using the routines in the
[`sherpa.ui`](index.html#module-sherpa.ui) and [`sherpa.astro.ui`](index.html#module-sherpa.astro.ui) modules.
The session API is intended to be used in an interactive setting, and so deals with object management. Rather than deal with objects, the API uses labels (numeric or string)
to identify data sets and model components. The Astronomy-specific version adds domain-specific functionality; in this case support for Astronomical data analysis, with a strong focus on high-energy (X-ray)
data. It is currently documented on the
<http://cxc.harvard.edu/sherpa/> web site.
The [`Session`](index.html#sherpa.ui.utils.Session) object provides methods that allow you to:
* load data
* set the model
* change the statistic and optimiser
* fit
* calculate errors
* visualize the results
These are the same stages as described in the
[getting started](index.html#getting-started) section, but the syntax is different, since the Session object handles the creation of, and passing around, the underlying Sherpa objects.
The [`sherpa.ui`](index.html#module-sherpa.ui) module provides an interface where the Session object is hidden from the user, which makes it more appropriate for an interactive analysis session.
### Examples[¶](#examples)
The following examples are *very* basic, since they are intended to highlight how the Session API is used.
The CIAO documentation for Sherpa at <http://cxc.harvard.edu/sherpa/>
provides more documentation and examples.
There are two examples which show the same process -
finding out what value best represents a small dataset -
using the
[`Session`](index.html#sherpa.ui.utils.Session) object directly and then via the
[`sherpa.ui`](index.html#module-sherpa.ui) module.
The data to be fit is the four element array:
```
>>> x = [100, 200, 300, 400]
>>> y = [10, 12, 9, 13]
```
For this example the [`Cash`](index.html#sherpa.stats.Cash) statistic will be used, along with the
[`NelderMead`](index.html#sherpa.optmethods.NelderMead) optimiser.
Note
Importing the Session object - whether directly or via the ui module - causes several checks to be run, to see what parts of the system may not be available. This can lead to warning messages such as the following to be displayed:
```
WARNING: imaging routines will not be available,
failed to import sherpa.image.ds9_backend due to
'RuntimeErr: DS9Win unusable: Could not find ds9 on your PATH'
```
Other checks are to see if the chosen I/O and plotting backends are present, and if support for the XSPEC model library is available.
#### Using the Session object[¶](#using-the-session-object)
By default the Session object has no available models associated with it. The
`_add_model_types()`
method is used to register the models from
[`sherpa.models.basic`](index.html#module-sherpa.models.basic) with the session (by default it will add any class in the module that is derived from the
[`ArithmeticModel`](index.html#sherpa.models.model.ArithmeticModel)
class):
```
>>> from sherpa.ui.utils import Session
>>> import sherpa.models.basic
>>> s = Session()
>>> s._add_model_types(sherpa.models.basic)
```
The [`load_arrays()`](index.html#sherpa.ui.utils.Session.load_arrays) is used to create a [`Data1D`](index.html#sherpa.data.Data1D) object, which is managed by the Session class and referenced by the identifier `1`
(this is in fact the default identifier, which can be manipulated by the
[`get_default_id()`](index.html#sherpa.ui.utils.Session.get_default_id)
and
[`set_default_id()`](index.html#sherpa.ui.utils.Session.set_default_id)
methods, and can be a string or an integer).
Many methods will default to using the default identifier,
but `load_arrays` requires it:
```
>>> s.load_arrays(1, x, y)
```
Note
The session object is not just limited to handling
[`Data1D`](index.html#sherpa.data.Data1D) data sets. The
`load_arrays` takes an optional argument which defines the class of the data (e.g. [`Data2D`](index.html#sherpa.data.Data2D)),
and there are several other methods which can be used to create a data object, such as
[`load_data`](index.html#sherpa.ui.utils.Session.load_data)
and
[`set_data`](index.html#sherpa.ui.utils.Session.set_data).
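As a sketch of that point (using a throwaway Session instance, `s2`, so that the session used in this example is left untouched), a data object can be passed to `set_data` directly, or a data class given as the final argument to `load_arrays`:
```
>>> from sherpa.data import Data1D
>>> s2 = Session()
>>> s2.set_data(Data1D('example', x, y))
>>> s2.load_arrays(2, x, y, Data1D)
```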
The [`list_data_ids()`](index.html#sherpa.ui.utils.Session.list_data_ids) method returns the list of available data sets (i.e. those that have been loaded into the session):
```
>>> s.list_data_ids()
[1]
```
The [`get_data()`](index.html#sherpa.ui.utils.Session.get_data) method lets a user access the underlying data object. This method uses the default identifier if not specified:
```
>>> s.get_data()
<Data1D data set instance ''>
>>> print(s.get_data())
name =
x = Int64[4]
y = Int64[4]
staterror = None
syserror = None
```
The default statistic and optimiser are set to values useful for data with Gaussian errors:
```
>>> s.get_stat_name()
'chi2gehrels'
>>> s.get_method_name()
'levmar'
```
As the data here is counts-based, and is to be fit with Poisson statistics, the
[`set_stat()`](index.html#sherpa.ui.utils.Session.set_stat)
and
[`set_method()`](index.html#sherpa.ui.utils.Session.set_method)
methods are used to change the statistic and optimiser.
Note that they take a string as an argument
(rather than an instance of a
[`Stat`](index.html#sherpa.stats.Stat)
or [`OptMethod`](index.html#sherpa.optmethods.OptMethod)
class):
```
>>> s.set_stat('cash')
>>> s.set_method('simplex')
```
The [`set_source()`](index.html#sherpa.ui.utils.Session.set_source) method is used to define the model expression that is to be fit to the data. It can be sent a model expression created using the model classes directly, as described in the
[Creating Model Instances](index.html#document-models/index) section above.
However, in this case a string is used to define the model, and references each model component using the form
`modelname.instancename`. The `modelname` defines the type of model - in this case the
[`Const1D`](index.html#sherpa.models.basic.Const1D) model - and it must have been registered with the session object using
`_add_model_types`. The
[`list_models()`](index.html#sherpa.ui.utils.Session.list_models) method can be used to find out what models are available.
The `instancename` is used as an identifier for the component, and can be used with other methods,
such as [`set_par()`](index.html#sherpa.ui.utils.Session.set_par).
```
>>> s.set_source('const1d.mdl')
```
The `instancename` value is also used to create a Python variable which provides direct access to the model component (it can also be retrieved with
[`get_model_component()`](index.html#sherpa.ui.utils.Session.get_model_component)):
```
>>> print(mdl)
const1d.mdl
Param Type Value Min Max Units
--- --- --- --- --- ---
mdl.c0 thawed 1 -3.40282e+38 3.40282e+38
```
The source model can be retrieved with
[`get_source()`](index.html#sherpa.ui.utils.Session.get_source), which in this example is just the single model component `mdl`:
```
>>> s.get_source()
<Const1D model instance 'const1d.mdl'>
```
With the data, model, statistic, and optimiser set, it is now possible to perform a fit. The
[`fit()`](index.html#sherpa.ui.utils.Session.fit) method defaults to a simultaneous fit of all the loaded data sets; in this case there is only one:
```
>>> s.fit()
Dataset               = 1
Method                = neldermead
Statistic             = cash
Initial fit statistic = 8
Final fit statistic   = -123.015 at function evaluation 90
Data points           = 4
Degrees of freedom    = 3
Change in statistic   = 131.015
   mdl.c0         11
```
The fit results are displayed to the screen, but can also be accessed with methods such as
[`calc_stat()`](index.html#sherpa.ui.utils.Session.calc_stat),
[`calc_stat_info()`](index.html#sherpa.ui.utils.Session.calc_stat_info),
and
[`get_fit_results()`](index.html#sherpa.ui.utils.Session.get_fit_results).
```
>>> r = s.get_fit_results()
>>> print(r)
datasets       = (1,)
itermethodname = none
methodname     = neldermead
statname       = cash
succeeded      = True
parnames       = ('mdl.c0',)
parvals        = (11.0,)
statval        = -123.01478400625663
istatval       = 8.0
dstatval       = 131.01478400625663
numpoints      = 4
dof            = 3
qval           = None
rstat          = None
message        = Optimization terminated successfully
nfev           = 90
```
There are also methods which allow you to plot the data, model,
fit, and residuals (amongst others):
[`plot_data()`](index.html#sherpa.ui.utils.Session.plot_data),
[`plot_model()`](index.html#sherpa.ui.utils.Session.plot_model),
[`plot_fit()`](index.html#sherpa.ui.utils.Session.plot_fit),
[`plot_resid()`](index.html#sherpa.ui.utils.Session.plot_resid).
The following hides the automatically-created error bars on the data points by changing a setting in dictionary returned by
[`get_data_plot_prefs()`](index.html#sherpa.ui.utils.Session.get_data_plot_prefs),
and then displays the data along with the model:
```
>>> s.get_data_plot_prefs()['yerrorbars'] = False
>>> s.plot_fit()
```
#### Using the UI module[¶](#using-the-ui-module)
Using the UI module is very similar to the Session object, since it automatically creates a global Session object, and registers the available models, when imported. This means that the preceding example can be replicated but without the need for the Session object.
Since the module is intended for an interactive environment, in this example the symbols are loaded into the default namespace to avoid having to qualify each function with the module name. For commentary, please refer to the preceding example:
```
>>> from sherpa.ui import *
>>> load_arrays(1, x, y)
>>> list_data_ids()
[1]
>>> get_data()
<Data1D data set instance ''>
>>> print(get_data())
name =
x = Int64[4]
y = Int64[4]
staterror = None
syserror = None
>>> get_stat_name()
'chi2gehrels'
>>> get_method_name()
'levmar'
>>> set_stat('cash')
>>> set_method('simplex')
>>> set_source('const1d.mdl')
>>> print(mdl)
const1d.mdl
Param Type Value Min Max Units
--- --- --- --- --- ---
mdl.c0 thawed 1 -3.40282e+38 3.40282e+38
>>> get_source()
<Const1D model instance 'const1d.mdl'>
>>> fit()
Dataset               = 1
Method                = neldermead
Statistic             = cash
Initial fit statistic = 8
Final fit statistic   = -123.015 at function evaluation 90
Data points           = 4
Degrees of freedom    = 3
Change in statistic   = 131.015
   mdl.c0         11
>>> r = get_fit_results()
>>> print(r)
datasets       = (1,)
itermethodname = none
methodname     = neldermead
statname       = cash
succeeded      = True
parnames       = ('mdl.c0',)
parvals        = (11.0,)
statval        = -123.01478400625663
istatval       = 8.0
dstatval       = 131.014784006
numpoints      = 4
dof            = 3
qval           = None
rstat          = None
message        = Optimization terminated successfully
nfev           = 90
>>> get_data_plot_prefs()['yerrorbars'] = False
>>> plot_fit()
```
The plot created by this function is the same as shown in the previous example.
### Reference/API[¶](#reference-api)
#### The sherpa.ui module[¶](#module-sherpa.ui)
The [`sherpa.ui`](#module-sherpa.ui) module provides a procedural interface to a single, hidden
[`sherpa.ui.utils.Session`](index.html#sherpa.ui.utils.Session)
object: each function in the module maps onto a method of that global session, so there is no need to create or pass around a session object.
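As a small illustration of this shared state (a sketch; the identifier and statistic values are arbitrary), separate calls into the module all act on the same underlying session:
```
>>> from sherpa import ui
>>> ui.set_default_id('src')   # arbitrary identifier
>>> ui.get_default_id()
'src'
>>> ui.set_stat('cash')
>>> ui.get_stat_name()
'cash'
```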
> Functions
> | [`add_model`](index.html#sherpa.ui.add_model)(modelclass[, args, kwargs]) | Create a user-defined model class. |
> | [`add_user_pars`](index.html#sherpa.ui.add_user_pars)(modelname, parnames[, …]) | Add parameter information to a user model. |
> | [`calc_chisqr`](index.html#sherpa.ui.calc_chisqr)([id]) | Calculate the per-bin chi-squared statistic. |
> | [`calc_stat`](index.html#sherpa.ui.calc_stat)([id]) | Calculate the fit statistic for a data set. |
> | [`calc_stat_info`](index.html#sherpa.ui.calc_stat_info)() | Display the statistic values for the current models. |
> | [`clean`](index.html#sherpa.ui.clean)() | Clear out the current Sherpa session. |
> | [`conf`](index.html#sherpa.ui.conf)(*args) | Estimate parameter confidence intervals using the confidence method. |
> | [`confidence`](index.html#sherpa.ui.confidence)(*args) | Estimate parameter confidence intervals using the confidence method. |
> | [`contour`](index.html#sherpa.ui.contour)(*args) | Create a contour plot for an image data set. |
> | [`contour_data`](index.html#sherpa.ui.contour_data)([id]) | Contour the values of an image data set. |
> | [`contour_fit`](index.html#sherpa.ui.contour_fit)([id]) | Contour the fit to a data set. |
> | [`contour_fit_resid`](index.html#sherpa.ui.contour_fit_resid)([id, replot, overcontour]) | Contour the fit and the residuals to a data set. |
> | [`contour_kernel`](index.html#sherpa.ui.contour_kernel)([id]) | Contour the kernel applied to the model of an image data set. |
> | [`contour_model`](index.html#sherpa.ui.contour_model)([id]) | Create a contour plot of the model. |
> | [`contour_psf`](index.html#sherpa.ui.contour_psf)([id]) | Contour the PSF applied to the model of an image data set. |
> | [`contour_ratio`](index.html#sherpa.ui.contour_ratio)([id]) | Contour the ratio of data to model. |
> | [`contour_resid`](index.html#sherpa.ui.contour_resid)([id]) | Contour the residuals of the fit. |
> | [`contour_source`](index.html#sherpa.ui.contour_source)([id]) | Create a contour plot of the unconvolved spatial model. |
> | [`copy_data`](index.html#sherpa.ui.copy_data)(fromid, toid) | Copy a data set, creating a new identifier. |
> | [`covar`](index.html#sherpa.ui.covar)(*args) | Estimate parameter confidence intervals using the covariance method. |
> | [`covariance`](index.html#sherpa.ui.covariance)(*args) | Estimate parameter confidence intervals using the covariance method. |
> | [`create_model_component`](index.html#sherpa.ui.create_model_component)([typename, name]) | Create a model component. |
> | [`dataspace1d`](index.html#sherpa.ui.dataspace1d)(start, stop[, step, numbins, …]) | Create the independent axis for a 1D data set. |
> | [`dataspace2d`](index.html#sherpa.ui.dataspace2d)(dims[, id, dstype]) | Create the independent axis for a 2D data set. |
> | [`delete_data`](index.html#sherpa.ui.delete_data)([id]) | Delete a data set by identifier. |
> | [`delete_model`](index.html#sherpa.ui.delete_model)([id]) | Delete the model expression for a data set. |
> | [`delete_model_component`](index.html#sherpa.ui.delete_model_component)(name) | Delete a model component. |
> | [`delete_psf`](index.html#sherpa.ui.delete_psf)([id]) | Delete the PSF model for a data set. |
> | [`fake`](index.html#sherpa.ui.fake)([id, method]) | Simulate a data set. |
> | [`fit`](index.html#sherpa.ui.fit)([id]) | Fit a model to one or more data sets. |
> | [`freeze`](index.html#sherpa.ui.freeze)(*args) | Fix model parameters so they are not changed by a fit. |
> | [`get_cdf_plot`](index.html#sherpa.ui.get_cdf_plot)() | Return the data used to plot the last CDF. |
> | [`get_chisqr_plot`](index.html#sherpa.ui.get_chisqr_plot)([id]) | Return the data used by plot_chisqr. |
> | [`get_conf`](index.html#sherpa.ui.get_conf)() | Return the confidence-interval estimation object. |
> | [`get_conf_opt`](index.html#sherpa.ui.get_conf_opt)([name]) | Return one or all of the options for the confidence interval method. |
> | [`get_conf_results`](index.html#sherpa.ui.get_conf_results)() | Return the results of the last conf run. |
> | [`get_confidence_results`](index.html#sherpa.ui.get_confidence_results)() | Return the results of the last conf run. |
> | [`get_covar`](index.html#sherpa.ui.get_covar)() | Return the covariance estimation object. |
> | [`get_covar_opt`](index.html#sherpa.ui.get_covar_opt)([name]) | Return one or all of the options for the covariance method. |
> | [`get_covar_results`](index.html#sherpa.ui.get_covar_results)() | Return the results of the last covar run. |
> | [`get_covariance_results`](index.html#sherpa.ui.get_covariance_results)() | Return the results of the last covar run. |
> | [`get_data`](index.html#sherpa.ui.get_data)([id]) | Return the data set by identifier. |
> | [`get_data_contour`](index.html#sherpa.ui.get_data_contour)([id]) | Return the data used by contour_data. |
> | [`get_data_contour_prefs`](index.html#sherpa.ui.get_data_contour_prefs)() | Return the preferences for contour_data. |
> | [`get_data_image`](index.html#sherpa.ui.get_data_image)([id]) | Return the data used by image_data. |
> | [`get_data_plot`](index.html#sherpa.ui.get_data_plot)([id]) | Return the data used by plot_data. |
> | [`get_data_plot_prefs`](index.html#sherpa.ui.get_data_plot_prefs)() | Return the preferences for plot_data. |
> | [`get_default_id`](index.html#sherpa.ui.get_default_id)() | Return the default data set identifier. |
> | [`get_delchi_plot`](index.html#sherpa.ui.get_delchi_plot)([id]) | Return the data used by plot_delchi. |
> | [`get_dep`](index.html#sherpa.ui.get_dep)([id, filter]) | Return the dependent axis of a data set. |
> | [`get_dims`](index.html#sherpa.ui.get_dims)([id, filter]) | Return the dimensions of the data set. |
> | [`get_draws`](index.html#sherpa.ui.get_draws)([id, otherids, niter, covar_matrix]) | Run the pyBLoCXS MCMC algorithm. |
> | [`get_error`](index.html#sherpa.ui.get_error)([id, filter]) | Return the errors on the dependent axis of a data set. |
> | [`get_filter`](index.html#sherpa.ui.get_filter)([id]) | Return the filter expression for a data set. |
> | [`get_fit_contour`](index.html#sherpa.ui.get_fit_contour)([id]) | Return the data used by contour_fit. |
> | [`get_fit_plot`](index.html#sherpa.ui.get_fit_plot)([id]) | Return the data used to create the fit plot. |
> | [`get_fit_results`](index.html#sherpa.ui.get_fit_results)() | Return the results of the last fit. |
> | [`get_functions`](index.html#sherpa.ui.get_functions)() | Return the functions provided by Sherpa. |
> | [`get_indep`](index.html#sherpa.ui.get_indep)([id]) | Return the independent axes of a data set. |
> | [`get_int_proj`](index.html#sherpa.ui.get_int_proj)([par, id, otherids, recalc, …]) | Return the interval-projection object. |
> | [`get_int_unc`](index.html#sherpa.ui.get_int_unc)([par, id, otherids, recalc, …]) | Return the interval-uncertainty object. |
> | [`get_iter_method_name`](index.html#sherpa.ui.get_iter_method_name)() | Return the name of the iterative fitting scheme. |
> | [`get_iter_method_opt`](index.html#sherpa.ui.get_iter_method_opt)([optname]) | Return one or all options for the iterative-fitting scheme. |
> | [`get_kernel_contour`](index.html#sherpa.ui.get_kernel_contour)([id]) | Return the data used by contour_kernel. |
> | [`get_kernel_image`](index.html#sherpa.ui.get_kernel_image)([id]) | Return the data used by image_kernel. |
> | [`get_kernel_plot`](index.html#sherpa.ui.get_kernel_plot)([id]) | Return the data used by plot_kernel. |
> | [`get_method`](index.html#sherpa.ui.get_method)([name]) | Return an optimization method. |
> | [`get_method_name`](index.html#sherpa.ui.get_method_name)() | Return the name of current Sherpa optimization method. |
> | [`get_method_opt`](index.html#sherpa.ui.get_method_opt)([optname]) | Return one or all of the options for the current optimization method. |
> | [`get_model`](index.html#sherpa.ui.get_model)([id]) | Return the model expression for a data set. |
> | [`get_model_autoassign_func`](index.html#sherpa.ui.get_model_autoassign_func)() | Return the method used to create model component identifiers. |
> | [`get_model_component`](index.html#sherpa.ui.get_model_component)(name) | Returns a model component given its name. |
> | [`get_model_component_image`](index.html#sherpa.ui.get_model_component_image)(id[, model]) | Return the data used by image_model_component. |
> | [`get_model_component_plot`](index.html#sherpa.ui.get_model_component_plot)(id[, model]) | Return the data used to create the model-component plot. |
> | [`get_model_contour`](index.html#sherpa.ui.get_model_contour)([id]) | Return the data used by contour_model. |
> | [`get_model_contour_prefs`](index.html#sherpa.ui.get_model_contour_prefs)() | Return the preferences for contour_model. |
> | [`get_model_image`](index.html#sherpa.ui.get_model_image)([id]) | Return the data used by image_model. |
> | [`get_model_pars`](index.html#sherpa.ui.get_model_pars)(model) | Return the names of the parameters of a model. |
> | [`get_model_plot`](index.html#sherpa.ui.get_model_plot)([id]) | Return the data used to create the model plot. |
> | [`get_model_plot_prefs`](index.html#sherpa.ui.get_model_plot_prefs)() | Return the preferences for plot_model. |
> | [`get_model_type`](index.html#sherpa.ui.get_model_type)(model) | Describe a model expression. |
> | [`get_num_par`](index.html#sherpa.ui.get_num_par)([id]) | Return the number of parameters in a model expression. |
> | [`get_num_par_frozen`](index.html#sherpa.ui.get_num_par_frozen)([id]) | Return the number of frozen parameters in a model expression. |
> | [`get_num_par_thawed`](index.html#sherpa.ui.get_num_par_thawed)([id]) | Return the number of thawed parameters in a model expression. |
> | [`get_par`](index.html#sherpa.ui.get_par)(par) | Return a parameter of a model component. |
> | [`get_pdf_plot`](index.html#sherpa.ui.get_pdf_plot)() | Return the data used to plot the last PDF. |
> | [`get_prior`](index.html#sherpa.ui.get_prior)(par) | Return the prior function for a parameter (MCMC). |
> | [`get_proj`](index.html#sherpa.ui.get_proj)() | Return the confidence-interval estimation object. |
> | [`get_proj_opt`](index.html#sherpa.ui.get_proj_opt)([name]) | Return one or all of the options for the confidence interval method. |
> | [`get_proj_results`](index.html#sherpa.ui.get_proj_results)() | Return the results of the last proj run. |
> | [`get_projection_results`](index.html#sherpa.ui.get_projection_results)() | Return the results of the last proj run. |
> | [`get_psf`](index.html#sherpa.ui.get_psf)([id]) | Return the PSF model defined for a data set. |
> | [`get_psf_contour`](index.html#sherpa.ui.get_psf_contour)([id]) | Return the data used by contour_psf. |
> | [`get_psf_image`](index.html#sherpa.ui.get_psf_image)([id]) | Return the data used by image_psf. |
> | [`get_psf_plot`](index.html#sherpa.ui.get_psf_plot)([id]) | Return the data used by plot_psf. |
> | [`get_pvalue_plot`](index.html#sherpa.ui.get_pvalue_plot)([null_model, alt_model, …]) | Return the data used by plot_pvalue. |
> | [`get_pvalue_results`](index.html#sherpa.ui.get_pvalue_results)() | Return the data calculated by the last plot_pvalue call. |
> | [`get_ratio_contour`](index.html#sherpa.ui.get_ratio_contour)([id]) | Return the data used by contour_ratio. |
> | [`get_ratio_image`](index.html#sherpa.ui.get_ratio_image)([id]) | Return the data used by image_ratio. |
> | [`get_ratio_plot`](index.html#sherpa.ui.get_ratio_plot)([id]) | Return the data used by plot_ratio. |
> | [`get_reg_proj`](index.html#sherpa.ui.get_reg_proj)([par0, par1, id, otherids, …]) | Return the region-projection object. |
> | [`get_reg_unc`](index.html#sherpa.ui.get_reg_unc)([par0, par1, id, otherids, …]) | Return the region-uncertainty object. |
> | [`get_resid_contour`](index.html#sherpa.ui.get_resid_contour)([id]) | Return the data used by contour_resid. |
> | [`get_resid_image`](index.html#sherpa.ui.get_resid_image)([id]) | Return the data used by image_resid. |
> | [`get_resid_plot`](index.html#sherpa.ui.get_resid_plot)([id]) | Return the data used by plot_resid. |
> | [`get_sampler`](index.html#sherpa.ui.get_sampler)() | Return the current MCMC sampler options. |
> | [`get_sampler_name`](index.html#sherpa.ui.get_sampler_name)() | Return the name of the current MCMC sampler. |
> | [`get_sampler_opt`](index.html#sherpa.ui.get_sampler_opt)(opt) | Return an option of the current MCMC sampler. |
> | [`get_scatter_plot`](index.html#sherpa.ui.get_scatter_plot)() | Return the data used to plot the last scatter plot. |
> | [`get_source`](index.html#sherpa.ui.get_source)([id]) | Return the source model expression for a data set. |
> | [`get_source_component_image`](index.html#sherpa.ui.get_source_component_image)(id[, model]) | Return the data used by image_source_component. |
> | [`get_source_component_plot`](index.html#sherpa.ui.get_source_component_plot)(id[, model]) | Return the data used by plot_source_component. |
> | [`get_source_contour`](index.html#sherpa.ui.get_source_contour)([id]) | Return the data used by contour_source. |
> | [`get_source_image`](index.html#sherpa.ui.get_source_image)([id]) | Return the data used by image_source. |
> | [`get_source_plot`](index.html#sherpa.ui.get_source_plot)([id]) | Return the data used to create the source plot. |
> | [`get_split_plot`](index.html#sherpa.ui.get_split_plot)() | Return the plot attributes for displays with multiple plots. |
> | [`get_stat`](index.html#sherpa.ui.get_stat)([name]) | Return the fit statistic. |
> | [`get_stat_info`](index.html#sherpa.ui.get_stat_info)() | Return the statistic values for the current models. |
> | [`get_stat_name`](index.html#sherpa.ui.get_stat_name)() | Return the name of the current fit statistic. |
> | [`get_staterror`](index.html#sherpa.ui.get_staterror)([id, filter]) | Return the statistical error on the dependent axis of a data set. |
> | [`get_syserror`](index.html#sherpa.ui.get_syserror)([id, filter]) | Return the systematic error on the dependent axis of a data set. |
> | [`get_trace_plot`](index.html#sherpa.ui.get_trace_plot)() | Return the data used to plot the last trace. |
> | [`guess`](index.html#sherpa.ui.guess)([id, model, limits, values]) | Estimate the parameter values and ranges given the loaded data. |
> | [`ignore`](index.html#sherpa.ui.ignore)([lo, hi]) | Exclude data from the fit. |
> | [`ignore_id`](index.html#sherpa.ui.ignore_id)(ids[, lo, hi]) | Exclude data from the fit for a data set. |
> | [`image_close`](index.html#sherpa.ui.image_close)() | Close the image viewer. |
> | [`image_data`](index.html#sherpa.ui.image_data)([id, newframe, tile]) | Display a data set in the image viewer. |
> | [`image_deleteframes`](index.html#sherpa.ui.image_deleteframes)() | Delete all the frames open in the image viewer. |
> | [`image_fit`](index.html#sherpa.ui.image_fit)([id, newframe, tile, deleteframes]) | Display the data, model, and residuals for a data set in the image viewer. |
> | [`image_getregion`](index.html#sherpa.ui.image_getregion)([coord]) | Return the region defined in the image viewer. |
> | [`image_kernel`](index.html#sherpa.ui.image_kernel)([id, newframe, tile]) | Display the 2D kernel for a data set in the image viewer. |
> | [`image_model`](index.html#sherpa.ui.image_model)([id, newframe, tile]) | Display the model for a data set in the image viewer. |
> | [`image_model_component`](index.html#sherpa.ui.image_model_component)(id[, model, newframe, …]) | Display a component of the model in the image viewer. |
> | [`image_open`](index.html#sherpa.ui.image_open)() | Start the image viewer. |
> | [`image_psf`](index.html#sherpa.ui.image_psf)([id, newframe, tile]) | Display the 2D PSF model for a data set in the image viewer. |
> | [`image_ratio`](index.html#sherpa.ui.image_ratio)([id, newframe, tile]) | Display the ratio (data/model) for a data set in the image viewer. |
> | [`image_resid`](index.html#sherpa.ui.image_resid)([id, newframe, tile]) | Display the residuals (data - model) for a data set in the image viewer. |
> | [`image_setregion`](index.html#sherpa.ui.image_setregion)(reg[, coord]) | Set the region to display in the image viewer. |
> | [`image_source`](index.html#sherpa.ui.image_source)([id, newframe, tile]) | Display the source expression for a data set in the image viewer. |
> | [`image_source_component`](index.html#sherpa.ui.image_source_component)(id[, model, …]) | Display a component of the source expression in the image viewer. |
> | [`image_xpaget`](index.html#sherpa.ui.image_xpaget)(arg) | Return the result of an XPA call to the image viewer. |
> | [`image_xpaset`](index.html#sherpa.ui.image_xpaset)(arg[, data]) | Return the result of an XPA call to the image viewer. |
> | [`int_proj`](index.html#sherpa.ui.int_proj)(par[, id, otherids, replot, fast, …]) | Calculate and plot the fit statistic versus fit parameter value. |
> | [`int_unc`](index.html#sherpa.ui.int_unc)(par[, id, otherids, replot, min, …]) | Calculate and plot the fit statistic versus fit parameter value. |
> | [`link`](index.html#sherpa.ui.link)(par, val) | Link a parameter to a value. |
> | [`list_data_ids`](index.html#sherpa.ui.list_data_ids)() | List the identifiers for the loaded data sets. |
> | [`list_functions`](index.html#sherpa.ui.list_functions)([outfile, clobber]) | Display the functions provided by Sherpa. |
> | [`list_iter_methods`](index.html#sherpa.ui.list_iter_methods)() | List the iterative fitting schemes. |
> | [`list_methods`](index.html#sherpa.ui.list_methods)() | List the optimization methods. |
> | [`list_model_components`](index.html#sherpa.ui.list_model_components)() | List the names of all the model components. |
> | [`list_model_ids`](index.html#sherpa.ui.list_model_ids)() | List of all the data sets with a source expression. |
> | [`list_models`](index.html#sherpa.ui.list_models)([show]) | List the available model types. |
> | [`list_priors`](index.html#sherpa.ui.list_priors)() | Return the priors set for model parameters, if any. |
> | [`list_samplers`](index.html#sherpa.ui.list_samplers)() | List the MCMC samplers. |
> | [`list_stats`](index.html#sherpa.ui.list_stats)() | List the fit statistics. |
> | [`load_arrays`](index.html#sherpa.ui.load_arrays)(id, *args) | Create a data set from array values. |
> | [`load_conv`](index.html#sherpa.ui.load_conv)(modelname, filename_or_model, …) | Load a 1D convolution model. |
> | [`load_data`](index.html#sherpa.ui.load_data)(id[, filename, ncols, colkeys, …]) | Load a data set from an ASCII file. |
> | [`load_filter`](index.html#sherpa.ui.load_filter)(id[, filename, ignore, ncols]) | Load the filter array from an ASCII file and add to a data set. |
> | [`load_psf`](index.html#sherpa.ui.load_psf)(modelname, filename_or_model, …) | Create a PSF model. |
> | [`load_staterror`](index.html#sherpa.ui.load_staterror)(id[, filename, ncols]) | Load the statistical errors from an ASCII file. |
> | [`load_syserror`](index.html#sherpa.ui.load_syserror)(id[, filename, ncols]) | Load the systematic errors from an ASCII file. |
> | [`load_table_model`](index.html#sherpa.ui.load_table_model)(modelname, filename[, …]) | Load ASCII tabular data and use it as a model component. |
> | [`load_template_interpolator`](index.html#sherpa.ui.load_template_interpolator)(name, …) | Set the template interpolation scheme. |
> | [`load_template_model`](index.html#sherpa.ui.load_template_model)(modelname, templatefile) | Load a set of templates and use it as a model component. |
> | [`load_user_model`](index.html#sherpa.ui.load_user_model)(func, modelname[, filename, …]) | Create a user-defined model. |
> | [`load_user_stat`](index.html#sherpa.ui.load_user_stat)(statname, calc_stat_func[, …]) | Create a user-defined statistic. |
> | [`normal_sample`](index.html#sherpa.ui.normal_sample)([num, sigma, correlate, id, …]) | Sample the fit statistic by taking the parameter values from a normal distribution. |
> | [`notice`](index.html#sherpa.ui.notice)([lo, hi]) | Include data in the fit. |
> | [`notice_id`](index.html#sherpa.ui.notice_id)(ids[, lo, hi]) | Include data from the fit for a data set. |
> | [`paramprompt`](index.html#sherpa.ui.paramprompt)([val]) | Should the user be asked for the parameter values when creating a model? |
> | [`plot`](index.html#sherpa.ui.plot)(*args) | Create one or more plot types. |
> | [`plot_cdf`](index.html#sherpa.ui.plot_cdf)(points[, name, xlabel, replot, …]) | Plot the cumulative density function of an array of values. |
> | [`plot_chisqr`](index.html#sherpa.ui.plot_chisqr)([id, replot, overplot, clearwindow]) | Plot the chi-squared value for each point in a data set. |
> | [`plot_data`](index.html#sherpa.ui.plot_data)([id, replot, overplot, clearwindow]) | Plot the data values. |
> | [`plot_delchi`](index.html#sherpa.ui.plot_delchi)([id, replot, overplot, clearwindow]) | Plot the ratio of residuals to error for a data set. |
> | [`plot_fit`](index.html#sherpa.ui.plot_fit)([id, replot, overplot, clearwindow]) | Plot the fit results (data, model) for a data set. |
> | [`plot_fit_delchi`](index.html#sherpa.ui.plot_fit_delchi)([id, replot, overplot, …]) | Plot the fit results, and the residuals, for a data set. |
> | [`plot_fit_ratio`](index.html#sherpa.ui.plot_fit_ratio)([id, replot, overplot, …]) | Plot the fit results, and the ratio of data to model, for a data set. |
> | [`plot_fit_resid`](index.html#sherpa.ui.plot_fit_resid)([id, replot, overplot, …]) | Plot the fit results, and the residuals, for a data set. |
> | [`plot_kernel`](index.html#sherpa.ui.plot_kernel)([id, replot, overplot, clearwindow]) | Plot the 1D kernel applied to a data set. |
> | [`plot_model`](index.html#sherpa.ui.plot_model)([id, replot, overplot, clearwindow]) | Plot the model for a data set. |
> | [`plot_model_component`](index.html#sherpa.ui.plot_model_component)(id[, model, replot, …]) | Plot a component of the model for a data set. |
> | [`plot_pdf`](index.html#sherpa.ui.plot_pdf)(points[, name, xlabel, bins, …]) | Plot the probability density function of an array of values. |
> | [`plot_psf`](index.html#sherpa.ui.plot_psf)([id, replot, overplot, clearwindow]) | Plot the 1D PSF model applied to a data set. |
> | [`plot_pvalue`](index.html#sherpa.ui.plot_pvalue)(null_model, alt_model[, …]) | Compute and plot a histogram of likelihood ratios by simulating data. |
> | [`plot_ratio`](index.html#sherpa.ui.plot_ratio)([id, replot, overplot, clearwindow]) | Plot the ratio of data to model for a data set. |
> | [`plot_resid`](index.html#sherpa.ui.plot_resid)([id, replot, overplot, clearwindow]) | Plot the residuals (data - model) for a data set. |
> | [`plot_scatter`](index.html#sherpa.ui.plot_scatter)(x, y[, name, xlabel, ylabel, …]) | Create a scatter plot. |
> | [`plot_source`](index.html#sherpa.ui.plot_source)([id, replot, overplot, clearwindow]) | Plot the source expression for a data set. |
> | [`plot_source_component`](index.html#sherpa.ui.plot_source_component)(id[, model, replot, …]) | Plot a component of the source expression for a data set. |
> | [`plot_trace`](index.html#sherpa.ui.plot_trace)(points[, name, xlabel, replot, …]) | Create a trace plot of row number versus value. |
> | [`proj`](index.html#sherpa.ui.proj)(*args) | Estimate parameter confidence intervals using the projection method. |
> | [`projection`](index.html#sherpa.ui.projection)(*args) | Estimate parameter confidence intervals using the projection method. |
> | [`reg_proj`](index.html#sherpa.ui.reg_proj)(par0, par1[, id, otherids, replot, …]) | Plot the statistic value as two parameters are varied. |
> | [`reg_unc`](index.html#sherpa.ui.reg_unc)(par0, par1[, id, otherids, replot, …]) | Plot the statistic value as two parameters are varied. |
> | [`reset`](index.html#sherpa.ui.reset)([model, id]) | Reset the model parameters to their default settings. |
> | [`restore`](index.html#sherpa.ui.restore)([filename]) | Load in a Sherpa session from a file. |
> | [`save`](index.html#sherpa.ui.save)([filename, clobber]) | Save the current Sherpa session to a file. |
> | [`save_arrays`](index.html#sherpa.ui.save_arrays)(filename, args[, fields, …]) | Write a list of arrays to an ASCII file. |
> | [`save_data`](index.html#sherpa.ui.save_data)(id[, filename, fields, sep, …]) | Save the data to a file. |
> | [`save_delchi`](index.html#sherpa.ui.save_delchi)(id[, filename, clobber, sep, …]) | Save the ratio of residuals (data-model) to error to a file. |
> | [`save_error`](index.html#sherpa.ui.save_error)(id[, filename, clobber, sep, …]) | Save the errors to a file. |
> | [`save_filter`](index.html#sherpa.ui.save_filter)(id[, filename, clobber, sep, …]) | Save the filter array to a file. |
> | [`save_model`](index.html#sherpa.ui.save_model)(id[, filename, clobber, sep, …]) | Save the model values to a file. |
> | [`save_resid`](index.html#sherpa.ui.save_resid)(id[, filename, clobber, sep, …]) | Save the residuals (data-model) to a file. |
> | [`save_source`](index.html#sherpa.ui.save_source)(id[, filename, clobber, sep, …]) | Save the model values to a file. |
> | [`save_staterror`](index.html#sherpa.ui.save_staterror)(id[, filename, clobber, sep, …]) | Save the statistical errors to a file. |
> | [`save_syserror`](index.html#sherpa.ui.save_syserror)(id[, filename, clobber, sep, …]) | Save the systematic errors to a file. |
> | [`set_conf_opt`](index.html#sherpa.ui.set_conf_opt)(name, val) | Set an option for the confidence interval method. |
> | [`set_covar_opt`](index.html#sherpa.ui.set_covar_opt)(name, val) | Set an option for the covariance method. |
> | [`set_data`](index.html#sherpa.ui.set_data)(id[, data]) | Set a data set. |
> | [`set_default_id`](index.html#sherpa.ui.set_default_id)(id) | Set the default data set identifier. |
> | [`set_dep`](index.html#sherpa.ui.set_dep)(id[, val]) | Set the dependent axis of a data set. |
> | [`set_filter`](index.html#sherpa.ui.set_filter)(id[, val, ignore]) | Set the filter array of a data set. |
> | [`set_full_model`](index.html#sherpa.ui.set_full_model)(id[, model]) | Define the convolved model expression for a data set. |
> | [`set_iter_method`](index.html#sherpa.ui.set_iter_method)(meth) | Set the iterative-fitting scheme used in the fit. |
> | [`set_iter_method_opt`](index.html#sherpa.ui.set_iter_method_opt)(optname, val) | Set an option for the iterative-fitting scheme. |
> | [`set_method`](index.html#sherpa.ui.set_method)(meth) | Set the optimization method. |
> | [`set_method_opt`](index.html#sherpa.ui.set_method_opt)(optname, val) | Set an option for the current optimization method. |
> | [`set_model`](index.html#sherpa.ui.set_model)(id[, model]) | Set the source model expression for a data set. |
> | [`set_model_autoassign_func`](index.html#sherpa.ui.set_model_autoassign_func)([func]) | Set the method used to create model component identifiers. |
> | [`set_par`](index.html#sherpa.ui.set_par)(par[, val, min, max, frozen]) | Set the value, limits, or behavior of a model parameter. |
> | [`set_prior`](index.html#sherpa.ui.set_prior)(par, prior) | Set the prior function to use with a parameter. |
> | [`set_proj_opt`](index.html#sherpa.ui.set_proj_opt)(name, val) | Set an option for the projection method. |
> | [`set_psf`](index.html#sherpa.ui.set_psf)(id[, psf]) | Add a PSF model to a data set. |
> | [`set_sampler`](index.html#sherpa.ui.set_sampler)(sampler) | Set the MCMC sampler. |
> | [`set_sampler_opt`](index.html#sherpa.ui.set_sampler_opt)(opt, value) | Set an option for the current MCMC sampler. |
> | [`set_source`](index.html#sherpa.ui.set_source)(id[, model]) | Set the source model expression for a data set. |
> | [`set_stat`](index.html#sherpa.ui.set_stat)(stat) | Set the statistical method. |
> | [`set_staterror`](index.html#sherpa.ui.set_staterror)(id[, val, fractional]) | Set the statistical errors on the dependent axis of a data set. |
> | [`set_syserror`](index.html#sherpa.ui.set_syserror)(id[, val, fractional]) | Set the systematic errors on the dependent axis of a data set. |
> | [`set_xlinear`](index.html#sherpa.ui.set_xlinear)([plottype]) | New plots will display a linear X axis. |
> | [`set_xlog`](index.html#sherpa.ui.set_xlog)([plottype]) | New plots will display a logarithmically-scaled X axis. |
> | [`set_ylinear`](index.html#sherpa.ui.set_ylinear)([plottype]) | New plots will display a linear Y axis. |
> | [`set_ylog`](index.html#sherpa.ui.set_ylog)([plottype]) | New plots will display a logarithmically-scaled Y axis. |
> | [`show_all`](index.html#sherpa.ui.show_all)([id, outfile, clobber]) | Report the current state of the Sherpa session. |
> | [`show_conf`](index.html#sherpa.ui.show_conf)([outfile, clobber]) | Display the results of the last conf evaluation. |
> | [`show_covar`](index.html#sherpa.ui.show_covar)([outfile, clobber]) | Display the results of the last covar evaluation. |
> | [`show_data`](index.html#sherpa.ui.show_data)([id, outfile, clobber]) | Summarize the available data sets. |
> | [`show_filter`](index.html#sherpa.ui.show_filter)([id, outfile, clobber]) | Show any filters applied to a data set. |
> | [`show_fit`](index.html#sherpa.ui.show_fit)([outfile, clobber]) | Summarize the fit results. |
> | [`show_kernel`](index.html#sherpa.ui.show_kernel)([id, outfile, clobber]) | Display any kernel applied to a data set. |
> | [`show_method`](index.html#sherpa.ui.show_method)([outfile, clobber]) | Display the current optimization method and options. |
> | [`show_model`](index.html#sherpa.ui.show_model)([id, outfile, clobber]) | Display the model expression used to fit a data set. |
> | [`show_proj`](index.html#sherpa.ui.show_proj)([outfile, clobber]) | Display the results of the last proj evaluation. |
> | [`show_psf`](index.html#sherpa.ui.show_psf)([id, outfile, clobber]) | Display any PSF model applied to a data set. |
> | [`show_source`](index.html#sherpa.ui.show_source)([id, outfile, clobber]) | Display the source model expression for a data set. |
> | [`show_stat`](index.html#sherpa.ui.show_stat)([outfile, clobber]) | Display the current fit statistic. |
> | [`simulfit`](index.html#sherpa.ui.simulfit)([id]) | Fit a model to one or more data sets. |
> | [`t_sample`](index.html#sherpa.ui.t_sample)([num, dof, id, otherids, numcores]) | Sample the fit statistic by taking the parameter values from a Student’s t-distribution. |
> | [`thaw`](index.html#sherpa.ui.thaw)(*args) | Allow model parameters to be varied during a fit. |
> | [`uniform_sample`](index.html#sherpa.ui.uniform_sample)([num, factor, id, otherids, …]) | Sample the fit statistic by taking the parameter values from a uniform distribution. |
> | [`unlink`](index.html#sherpa.ui.unlink)(par) | Unlink a parameter value. |
> | [`unpack_arrays`](index.html#sherpa.ui.unpack_arrays)(*args) | Create a sherpa data object from arrays of data. |
> | [`unpack_data`](index.html#sherpa.ui.unpack_data)(filename[, ncols, colkeys, …]) | Create a sherpa data object from an ASCII file. |
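As a quick illustration of combining several of the functions above (model setup, parameter control, fitting, and error estimation), the following sketch fits a one-dimensional polynomial to made-up data and then estimates parameter uncertainties with the covariance method. The data values and model choice are illustrative and the output is not reproduced here:
```
>>> from sherpa.ui import *
>>> load_arrays(1, [1, 2, 3, 4, 5], [3, 5, 7, 9, 11])  # made-up data
>>> set_source('polynom1d.poly')
>>> thaw(poly.c1)          # only c0 is thawed by default
>>> fit()
>>> covar()
>>> print(get_covariance_results())
```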
#### The sherpa.astro.ui module[¶](#module-sherpa.astro.ui)
Many of these functions are re-exports of the
[`sherpa.ui`](index.html#module-sherpa.ui) versions; they are all listed here so that the complete Astronomy-specific interface is documented in one place.
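To give a flavor of the Astronomy-specific additions, the following sketch simulates a PHA data set with a perfect (diagonal) response and fits it, using the simulation helpers listed below. The energy grid, exposure time, and source model are illustrative assumptions:
```
>>> from sherpa.astro.ui import *
>>> import numpy as np
>>> ebins = np.linspace(0.1, 10, 100)   # illustrative energy grid (keV)
>>> elo, ehi = ebins[:-1], ebins[1:]
>>> arf = create_arf(elo, ehi)
>>> rmf = create_rmf(elo, ehi, e_min=elo, e_max=ehi)
>>> set_source('powlaw1d.pl')
>>> fake_pha(1, arf, rmf, exposure=10000)  # illustrative exposure (s)
>>> fit()
```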
> Functions
> | [`add_model`](index.html#sherpa.astro.ui.add_model)(modelclass[, args, kwargs]) | Create a user-defined model class. |
> | [`add_user_pars`](index.html#sherpa.astro.ui.add_user_pars)(modelname, parnames[, …]) | Add parameter information to a user model. |
> | [`calc_chisqr`](index.html#sherpa.astro.ui.calc_chisqr)([id]) | Calculate the per-bin chi-squared statistic. |
> | [`calc_data_sum`](index.html#sherpa.astro.ui.calc_data_sum)([lo, hi, id, bkg_id]) | Sum up the data values over a pass band. |
> | [`calc_data_sum2d`](index.html#sherpa.astro.ui.calc_data_sum2d)([reg, id]) | Sum up the data values of a 2D data set. |
> | [`calc_energy_flux`](index.html#sherpa.astro.ui.calc_energy_flux)([lo, hi, id, bkg_id, model]) | Integrate the unconvolved source model over a pass band. |
> | [`calc_kcorr`](index.html#sherpa.astro.ui.calc_kcorr)(z, obslo, obshi[, restlo, …]) | Calculate the K correction for a model. |
> | [`calc_model_sum`](index.html#sherpa.astro.ui.calc_model_sum)([lo, hi, id, bkg_id]) | Sum up the fitted model over a pass band. |
> | [`calc_model_sum2d`](index.html#sherpa.astro.ui.calc_model_sum2d)([reg, id]) | Sum up the convolved model for a 2D data set. |
> | [`calc_photon_flux`](index.html#sherpa.astro.ui.calc_photon_flux)([lo, hi, id, bkg_id, model]) | Integrate the unconvolved source model over a pass band. |
> | [`calc_source_sum`](index.html#sherpa.astro.ui.calc_source_sum)([lo, hi, id, bkg_id]) | Sum up the source model over a pass band. |
> | [`calc_source_sum2d`](index.html#sherpa.astro.ui.calc_source_sum2d)([reg, id]) | Sum up the unconvolved model for a 2D data set. |
> | [`calc_stat`](index.html#sherpa.astro.ui.calc_stat)([id]) | Calculate the fit statistic for a data set. |
> | [`calc_stat_info`](index.html#sherpa.astro.ui.calc_stat_info)() | Display the statistic values for the current models. |
> | [`clean`](index.html#sherpa.astro.ui.clean)() | Clear out the current Sherpa session. |
> | [`conf`](index.html#sherpa.astro.ui.conf)(*args) | Estimate parameter confidence intervals using the confidence method. |
> | [`confidence`](index.html#sherpa.astro.ui.confidence)(*args) | Estimate parameter confidence intervals using the confidence method. |
> | [`contour`](index.html#sherpa.astro.ui.contour)(*args) | Create a contour plot for an image data set. |
> | [`contour_data`](index.html#sherpa.astro.ui.contour_data)([id]) | Contour the values of an image data set. |
> | [`contour_fit`](index.html#sherpa.astro.ui.contour_fit)([id]) | Contour the fit to a data set. |
> | [`contour_fit_resid`](index.html#sherpa.astro.ui.contour_fit_resid)([id, replot, overcontour]) | Contour the fit and the residuals to a data set. |
> | [`contour_kernel`](index.html#sherpa.astro.ui.contour_kernel)([id]) | Contour the kernel applied to the model of an image data set. |
> | [`contour_model`](index.html#sherpa.astro.ui.contour_model)([id]) | Create a contour plot of the model. |
> | [`contour_psf`](index.html#sherpa.astro.ui.contour_psf)([id]) | Contour the PSF applied to the model of an image data set. |
> | [`contour_ratio`](index.html#sherpa.astro.ui.contour_ratio)([id]) | Contour the ratio of data to model. |
> | [`contour_resid`](index.html#sherpa.astro.ui.contour_resid)([id]) | Contour the residuals of the fit. |
> | [`contour_source`](index.html#sherpa.astro.ui.contour_source)([id]) | Create a contour plot of the unconvolved spatial model. |
> | [`copy_data`](index.html#sherpa.astro.ui.copy_data)(fromid, toid) | Copy a data set, creating a new identifier. |
> | [`covar`](index.html#sherpa.astro.ui.covar)(*args) | Estimate parameter confidence intervals using the covariance method. |
> | [`covariance`](index.html#sherpa.astro.ui.covariance)(*args) | Estimate parameter confidence intervals using the covariance method. |
> | [`create_arf`](index.html#sherpa.astro.ui.create_arf)(elo, ehi[, specresp, exposure, …]) | Create an ARF. |
> | [`create_model_component`](index.html#sherpa.astro.ui.create_model_component)([typename, name]) | Create a model component. |
> | [`create_rmf`](index.html#sherpa.astro.ui.create_rmf)(rmflo, rmfhi[, startchan, e_min, …]) | Create an RMF. |
> | [`dataspace1d`](index.html#sherpa.astro.ui.dataspace1d)(start, stop[, step, numbins, …]) | Create the independent axis for a 1D data set. |
> | [`dataspace2d`](index.html#sherpa.astro.ui.dataspace2d)(dims[, id, dstype]) | Create the independent axis for a 2D data set. |
> | [`delete_bkg_model`](index.html#sherpa.astro.ui.delete_bkg_model)([id, bkg_id]) | Delete the background model expression for a data set. |
> | [`delete_data`](index.html#sherpa.astro.ui.delete_data)([id]) | Delete a data set by identifier. |
> | [`delete_model`](index.html#sherpa.astro.ui.delete_model)([id]) | Delete the model expression for a data set. |
> | [`delete_model_component`](index.html#sherpa.astro.ui.delete_model_component)(name) | Delete a model component. |
> | [`delete_psf`](index.html#sherpa.astro.ui.delete_psf)([id]) | Delete the PSF model for a data set. |
> | [`eqwidth`](index.html#sherpa.astro.ui.eqwidth)(src, combo[, id, lo, hi, bkg_id, …]) | Calculate the equivalent width of an emission or absorption line. |
> | [`fake`](index.html#sherpa.astro.ui.fake)([id, method]) | Simulate a data set. |
> | [`fake_pha`](index.html#sherpa.astro.ui.fake_pha)(id, arf, rmf, exposure[, backscal, …]) | Simulate a PHA data set from a model. |
> | [`fit`](index.html#sherpa.astro.ui.fit)([id]) | Fit a model to one or more data sets. |
> | [`fit_bkg`](index.html#sherpa.astro.ui.fit_bkg)([id]) | Fit a model to one or more background PHA data sets. |
> | [`freeze`](index.html#sherpa.astro.ui.freeze)(*args) | Fix model parameters so they are not changed by a fit. |
> | [`get_analysis`](index.html#sherpa.astro.ui.get_analysis)([id]) | Return the units used when fitting spectral data. |
> | [`get_areascal`](index.html#sherpa.astro.ui.get_areascal)([id, bkg_id]) | Return the fractional area factor of a PHA data set. |
> | [`get_arf`](index.html#sherpa.astro.ui.get_arf)([id, resp_id, bkg_id]) | Return the ARF associated with a PHA data set. |
> | [`get_arf_plot`](index.html#sherpa.astro.ui.get_arf_plot)([id, resp_id]) | Return the data used by plot_arf. |
> | [`get_axes`](index.html#sherpa.astro.ui.get_axes)([id, bkg_id]) | Return information about the independent axes of a data set. |
> | [`get_backscal`](index.html#sherpa.astro.ui.get_backscal)([id, bkg_id]) | Return the area scaling of a PHA data set. |
> | [`get_bkg`](index.html#sherpa.astro.ui.get_bkg)([id, bkg_id]) | Return the background for a PHA data set. |
> | [`get_bkg_arf`](index.html#sherpa.astro.ui.get_bkg_arf)([id]) | Return the background ARF associated with a PHA data set. |
> | [`get_bkg_chisqr_plot`](index.html#sherpa.astro.ui.get_bkg_chisqr_plot)([id, bkg_id]) | Return the data used by plot_bkg_chisqr. |
> | [`get_bkg_delchi_plot`](index.html#sherpa.astro.ui.get_bkg_delchi_plot)([id, bkg_id]) | Return the data used by plot_bkg_delchi. |
> | [`get_bkg_fit_plot`](index.html#sherpa.astro.ui.get_bkg_fit_plot)([id, bkg_id]) | Return the data used by plot_bkg_fit. |
> | [`get_bkg_model`](index.html#sherpa.astro.ui.get_bkg_model)([id, bkg_id]) | Return the model expression for the background of a PHA data set. |
> | [`get_bkg_model_plot`](index.html#sherpa.astro.ui.get_bkg_model_plot)([id, bkg_id]) | Return the data used by plot_bkg_model. |
> | [`get_bkg_plot`](index.html#sherpa.astro.ui.get_bkg_plot)([id, bkg_id]) | Return the data used by plot_bkg. |
> | [`get_bkg_ratio_plot`](index.html#sherpa.astro.ui.get_bkg_ratio_plot)([id, bkg_id]) | Return the data used by plot_bkg_ratio. |
> | [`get_bkg_resid_plot`](index.html#sherpa.astro.ui.get_bkg_resid_plot)([id, bkg_id]) | Return the data used by plot_bkg_resid. |
> | [`get_bkg_rmf`](index.html#sherpa.astro.ui.get_bkg_rmf)([id]) | Return the background RMF associated with a PHA data set. |
> | [`get_bkg_scale`](index.html#sherpa.astro.ui.get_bkg_scale)([id]) | Return the background scaling factor for a PHA data set. |
> | [`get_bkg_source`](index.html#sherpa.astro.ui.get_bkg_source)([id, bkg_id]) | Return the model expression for the background of a PHA data set. |
> | [`get_bkg_source_plot`](index.html#sherpa.astro.ui.get_bkg_source_plot)([id, lo, hi, bkg_id]) | Return the data used by plot_bkg_source. |
> | [`get_cdf_plot`](index.html#sherpa.astro.ui.get_cdf_plot)() | Return the data used to plot the last CDF. |
> | [`get_chisqr_plot`](index.html#sherpa.astro.ui.get_chisqr_plot)([id]) | Return the data used by plot_chisqr. |
> | [`get_conf`](index.html#sherpa.astro.ui.get_conf)() | Return the confidence-interval estimation object. |
> | [`get_conf_opt`](index.html#sherpa.astro.ui.get_conf_opt)([name]) | Return one or all of the options for the confidence interval method. |
> | [`get_conf_results`](index.html#sherpa.astro.ui.get_conf_results)() | Return the results of the last conf run. |
> | [`get_confidence_results`](index.html#sherpa.astro.ui.get_confidence_results)() | Return the results of the last conf run. |
> | [`get_coord`](index.html#sherpa.astro.ui.get_coord)([id]) | Get the coordinate system used for image analysis. |
> | [`get_counts`](index.html#sherpa.astro.ui.get_counts)([id, filter, bkg_id]) | Return the dependent axis of a data set. |
> | [`get_covar`](index.html#sherpa.astro.ui.get_covar)() | Return the covariance estimation object. |
> | [`get_covar_opt`](index.html#sherpa.astro.ui.get_covar_opt)([name]) | Return one or all of the options for the covariance method. |
> | [`get_covar_results`](index.html#sherpa.astro.ui.get_covar_results)() | Return the results of the last covar run. |
> | [`get_covariance_results`](index.html#sherpa.astro.ui.get_covariance_results)() | Return the results of the last covar run. |
> | [`get_data`](index.html#sherpa.astro.ui.get_data)([id]) | Return the data set by identifier. |
> | [`get_data_contour`](index.html#sherpa.astro.ui.get_data_contour)([id]) | Return the data used by contour_data. |
> | [`get_data_contour_prefs`](index.html#sherpa.astro.ui.get_data_contour_prefs)() | Return the preferences for contour_data. |
> | [`get_data_image`](index.html#sherpa.astro.ui.get_data_image)([id]) | Return the data used by image_data. |
> | [`get_data_plot`](index.html#sherpa.astro.ui.get_data_plot)([id]) | Return the data used by plot_data. |
> | [`get_data_plot_prefs`](index.html#sherpa.astro.ui.get_data_plot_prefs)() | Return the preferences for plot_data. |
> | [`get_default_id`](index.html#sherpa.astro.ui.get_default_id)() | Return the default data set identifier. |
> | [`get_delchi_plot`](index.html#sherpa.astro.ui.get_delchi_plot)([id]) | Return the data used by plot_delchi. |
> | [`get_dep`](index.html#sherpa.astro.ui.get_dep)([id, filter, bkg_id]) | Return the dependent axis of a data set. |
> | [`get_dims`](index.html#sherpa.astro.ui.get_dims)([id, filter]) | Return the dimensions of the data set. |
> | [`get_draws`](index.html#sherpa.astro.ui.get_draws)([id, otherids, niter, covar_matrix]) | Run the pyBLoCXS MCMC algorithm. |
> | [`get_energy_flux_hist`](index.html#sherpa.astro.ui.get_energy_flux_hist)([lo, hi, id, num, …]) | Return the data displayed by plot_energy_flux. |
> | [`get_error`](index.html#sherpa.astro.ui.get_error)([id, filter, bkg_id]) | Return the errors on the dependent axis of a data set. |
> | [`get_exposure`](index.html#sherpa.astro.ui.get_exposure)([id, bkg_id]) | Return the exposure time of a PHA data set. |
> | [`get_filter`](index.html#sherpa.astro.ui.get_filter)([id]) | Return the filter expression for a data set. |
> | [`get_fit_contour`](index.html#sherpa.astro.ui.get_fit_contour)([id]) | Return the data used by contour_fit. |
> | [`get_fit_plot`](index.html#sherpa.astro.ui.get_fit_plot)([id]) | Return the data used to create the fit plot. |
> | [`get_fit_results`](index.html#sherpa.astro.ui.get_fit_results)() | Return the results of the last fit. |
> | [`get_functions`](index.html#sherpa.astro.ui.get_functions)() | Return the functions provided by Sherpa. |
> | [`get_grouping`](index.html#sherpa.astro.ui.get_grouping)([id, bkg_id]) | Return the grouping array for a PHA data set. |
> | [`get_indep`](index.html#sherpa.astro.ui.get_indep)([id, filter, bkg_id]) | Return the independent axes of a data set. |
> | [`get_int_proj`](index.html#sherpa.astro.ui.get_int_proj)([par, id, otherids, recalc, …]) | Return the interval-projection object. |
> | [`get_int_unc`](index.html#sherpa.astro.ui.get_int_unc)([par, id, otherids, recalc, …]) | Return the interval-uncertainty object. |
> | [`get_iter_method_name`](index.html#sherpa.astro.ui.get_iter_method_name)() | Return the name of the iterative fitting scheme. |
> | [`get_iter_method_opt`](index.html#sherpa.astro.ui.get_iter_method_opt)([optname]) | Return one or all options for the iterative-fitting scheme. |
> | [`get_kernel_contour`](index.html#sherpa.astro.ui.get_kernel_contour)([id]) | Return the data used by contour_kernel. |
> | [`get_kernel_image`](index.html#sherpa.astro.ui.get_kernel_image)([id]) | Return the data used by image_kernel. |
> | [`get_kernel_plot`](index.html#sherpa.astro.ui.get_kernel_plot)([id]) | Return the data used by plot_kernel. |
> | [`get_method`](index.html#sherpa.astro.ui.get_method)([name]) | Return an optimization method. |
> | [`get_method_name`](index.html#sherpa.astro.ui.get_method_name)() | Return the name of current Sherpa optimization method. |
> | [`get_method_opt`](index.html#sherpa.astro.ui.get_method_opt)([optname]) | Return one or all of the options for the current optimization method. |
> | [`get_model`](index.html#sherpa.astro.ui.get_model)([id]) | Return the model expression for a data set. |
> | [`get_model_autoassign_func`](index.html#sherpa.astro.ui.get_model_autoassign_func)() | Return the method used to create model component identifiers. |
> | [`get_model_component`](index.html#sherpa.astro.ui.get_model_component)(name) | Returns a model component given its name. |
> | [`get_model_component_image`](index.html#sherpa.astro.ui.get_model_component_image)(id[, model]) | Return the data used by image_model_component. |
> | [`get_model_component_plot`](index.html#sherpa.astro.ui.get_model_component_plot)(id[, model]) | Return the data used to create the model-component plot. |
> | [`get_model_contour`](index.html#sherpa.astro.ui.get_model_contour)([id]) | Return the data used by contour_model. |
> | [`get_model_contour_prefs`](index.html#sherpa.astro.ui.get_model_contour_prefs)() | Return the preferences for contour_model. |
> | [`get_model_image`](index.html#sherpa.astro.ui.get_model_image)([id]) | Return the data used by image_model. |
> | [`get_model_pars`](index.html#sherpa.astro.ui.get_model_pars)(model) | Return the names of the parameters of a model. |
> | [`get_model_plot`](index.html#sherpa.astro.ui.get_model_plot)([id]) | Return the data used to create the model plot. |
> | [`get_model_plot_prefs`](index.html#sherpa.astro.ui.get_model_plot_prefs)() | Return the preferences for plot_model. |
> | [`get_model_type`](index.html#sherpa.astro.ui.get_model_type)(model) | Describe a model expression. |
> | [`get_num_par`](index.html#sherpa.astro.ui.get_num_par)([id]) | Return the number of parameters in a model expression. |
> | [`get_num_par_frozen`](index.html#sherpa.astro.ui.get_num_par_frozen)([id]) | Return the number of frozen parameters in a model expression. |
> | [`get_num_par_thawed`](index.html#sherpa.astro.ui.get_num_par_thawed)([id]) | Return the number of thawed parameters in a model expression. |
> | [`get_order_plot`](index.html#sherpa.astro.ui.get_order_plot)([id, orders]) | Return the data used by plot_order. |
> | [`get_par`](index.html#sherpa.astro.ui.get_par)(par) | Return a parameter of a model component. |
> | [`get_pdf_plot`](index.html#sherpa.astro.ui.get_pdf_plot)() | Return the data used to plot the last PDF. |
> | [`get_photon_flux_hist`](index.html#sherpa.astro.ui.get_photon_flux_hist)([lo, hi, id, num, …]) | Return the data displayed by plot_photon_flux. |
> | [`get_pileup_model`](index.html#sherpa.astro.ui.get_pileup_model)([id]) | Return the pile up model for a data set. |
> | [`get_prior`](index.html#sherpa.astro.ui.get_prior)(par) | Return the prior function for a parameter (MCMC). |
> | [`get_proj`](index.html#sherpa.astro.ui.get_proj)() | Return the confidence-interval estimation object. |
> | [`get_proj_opt`](index.html#sherpa.astro.ui.get_proj_opt)([name]) | Return one or all of the options for the confidence interval method. |
> | [`get_proj_results`](index.html#sherpa.astro.ui.get_proj_results)() | Return the results of the last proj run. |
> | [`get_projection_results`](index.html#sherpa.astro.ui.get_projection_results)() | Return the results of the last proj run. |
> | [`get_psf`](index.html#sherpa.astro.ui.get_psf)([id]) | Return the PSF model defined for a data set. |
> | [`get_psf_contour`](index.html#sherpa.astro.ui.get_psf_contour)([id]) | Return the data used by contour_psf. |
> | [`get_psf_image`](index.html#sherpa.astro.ui.get_psf_image)([id]) | Return the data used by image_psf. |
> | [`get_psf_plot`](index.html#sherpa.astro.ui.get_psf_plot)([id]) | Return the data used by plot_psf. |
> | [`get_pvalue_plot`](index.html#sherpa.astro.ui.get_pvalue_plot)([null_model, alt_model, …]) | Return the data used by plot_pvalue. |
> | [`get_pvalue_results`](index.html#sherpa.astro.ui.get_pvalue_results)() | Return the data calculated by the last plot_pvalue call. |
> | [`get_quality`](index.html#sherpa.astro.ui.get_quality)([id, bkg_id]) | Return the quality flags for a PHA data set. |
> | [`get_rate`](index.html#sherpa.astro.ui.get_rate)([id, filter, bkg_id]) | Return the count rate of a PHA data set. |
> | [`get_ratio_contour`](index.html#sherpa.astro.ui.get_ratio_contour)([id]) | Return the data used by contour_ratio. |
> | [`get_ratio_image`](index.html#sherpa.astro.ui.get_ratio_image)([id]) | Return the data used by image_ratio. |
> | [`get_ratio_plot`](index.html#sherpa.astro.ui.get_ratio_plot)([id]) | Return the data used by plot_ratio. |
> | [`get_reg_proj`](index.html#sherpa.astro.ui.get_reg_proj)([par0, par1, id, otherids, …]) | Return the region-projection object. |
> | [`get_reg_unc`](index.html#sherpa.astro.ui.get_reg_unc)([par0, par1, id, otherids, …]) | Return the region-uncertainty object. |
> | [`get_resid_contour`](index.html#sherpa.astro.ui.get_resid_contour)([id]) | Return the data used by contour_resid. |
> | [`get_resid_image`](index.html#sherpa.astro.ui.get_resid_image)([id]) | Return the data used by image_resid. |
> | [`get_resid_plot`](index.html#sherpa.astro.ui.get_resid_plot)([id]) | Return the data used by plot_resid. |
> | [`get_response`](index.html#sherpa.astro.ui.get_response)([id, bkg_id]) | Return the response information applied to a PHA data set. |
> | [`get_rmf`](index.html#sherpa.astro.ui.get_rmf)([id, resp_id, bkg_id]) | Return the RMF associated with a PHA data set. |
> | [`get_sampler`](index.html#sherpa.astro.ui.get_sampler)() | Return the current MCMC sampler options. |
> | [`get_sampler_name`](index.html#sherpa.astro.ui.get_sampler_name)() | Return the name of the current MCMC sampler. |
> | [`get_sampler_opt`](index.html#sherpa.astro.ui.get_sampler_opt)(opt) | Return an option of the current MCMC sampler. |
> | [`get_scatter_plot`](index.html#sherpa.astro.ui.get_scatter_plot)() | Return the data used to plot the last scatter plot. |
> | [`get_source`](index.html#sherpa.astro.ui.get_source)([id]) | Return the source model expression for a data set. |
> | [`get_source_component_image`](index.html#sherpa.astro.ui.get_source_component_image)(id[, model]) | Return the data used by image_source_component. |
> | [`get_source_component_plot`](index.html#sherpa.astro.ui.get_source_component_plot)(id[, model]) | Return the data used by plot_source_component. |
> | [`get_source_contour`](index.html#sherpa.astro.ui.get_source_contour)([id]) | Return the data used by contour_source. |
> | [`get_source_image`](index.html#sherpa.astro.ui.get_source_image)([id]) | Return the data used by image_source. |
> | [`get_source_plot`](index.html#sherpa.astro.ui.get_source_plot)([id, lo, hi]) | Return the data used by plot_source. |
> | [`get_specresp`](index.html#sherpa.astro.ui.get_specresp)([id, filter, bkg_id]) | Return the effective area values for a PHA data set. |
> | [`get_split_plot`](index.html#sherpa.astro.ui.get_split_plot)() | Return the plot attributes for displays with multiple plots. |
> | [`get_stat`](index.html#sherpa.astro.ui.get_stat)([name]) | Return the fit statistic. |
> | [`get_stat_info`](index.html#sherpa.astro.ui.get_stat_info)() | Return the statistic values for the current models. |
> | [`get_stat_name`](index.html#sherpa.astro.ui.get_stat_name)() | Return the name of the current fit statistic. |
> | [`get_staterror`](index.html#sherpa.astro.ui.get_staterror)([id, filter, bkg_id]) | Return the statistical error on the dependent axis of a data set. |
> | [`get_syserror`](index.html#sherpa.astro.ui.get_syserror)([id, filter, bkg_id]) | Return the systematic error on the dependent axis of a data set. |
> | [`get_trace_plot`](index.html#sherpa.astro.ui.get_trace_plot)() | Return the data used to plot the last trace. |
> | [`group`](index.html#sherpa.astro.ui.group)([id, bkg_id]) | Turn on the grouping for a PHA data set. |
> | [`group_adapt`](index.html#sherpa.astro.ui.group_adapt)(id[, min, bkg_id, maxLength, …]) | Adaptively group to a minimum number of counts. |
> | [`group_adapt_snr`](index.html#sherpa.astro.ui.group_adapt_snr)(id[, min, bkg_id, …]) | Adaptively group to a minimum signal-to-noise ratio. |
> | [`group_bins`](index.html#sherpa.astro.ui.group_bins)(id[, num, bkg_id, tabStops]) | Group into a fixed number of bins. |
> | [`group_counts`](index.html#sherpa.astro.ui.group_counts)(id[, num, bkg_id, maxLength, …]) | Group into a minimum number of counts per bin. |
> | [`group_snr`](index.html#sherpa.astro.ui.group_snr)(id[, snr, bkg_id, maxLength, …]) | Group into a minimum signal-to-noise ratio. |
> | [`group_width`](index.html#sherpa.astro.ui.group_width)(id[, num, bkg_id, tabStops]) | Group into a fixed bin width. |
> | [`guess`](index.html#sherpa.astro.ui.guess)([id, model, limits, values]) | Estimate the parameter values and ranges given the loaded data. |
> | [`ignore`](index.html#sherpa.astro.ui.ignore)([lo, hi]) | Exclude data from the fit. |
> | [`ignore2d`](index.html#sherpa.astro.ui.ignore2d)([val]) | Exclude a spatial region from all data sets. |
> | [`ignore2d_id`](index.html#sherpa.astro.ui.ignore2d_id)(ids[, val]) | Exclude a spatial region from a data set. |
> | [`ignore2d_image`](index.html#sherpa.astro.ui.ignore2d_image)([ids]) | Exclude pixels using the region defined in the image viewer. |
> | [`ignore_bad`](index.html#sherpa.astro.ui.ignore_bad)([id, bkg_id]) | Exclude channels marked as bad in a PHA data set. |
> | [`ignore_id`](index.html#sherpa.astro.ui.ignore_id)(ids[, lo, hi]) | Exclude data from the fit for a data set. |
> | [`image_close`](index.html#sherpa.astro.ui.image_close)() | Close the image viewer. |
> | [`image_data`](index.html#sherpa.astro.ui.image_data)([id, newframe, tile]) | Display a data set in the image viewer. |
> | [`image_deleteframes`](index.html#sherpa.astro.ui.image_deleteframes)() | Delete all the frames open in the image viewer. |
> | [`image_fit`](index.html#sherpa.astro.ui.image_fit)([id, newframe, tile, deleteframes]) | Display the data, model, and residuals for a data set in the image viewer. |
> | [`image_getregion`](index.html#sherpa.astro.ui.image_getregion)([coord]) | Return the region defined in the image viewer. |
> | [`image_kernel`](index.html#sherpa.astro.ui.image_kernel)([id, newframe, tile]) | Display the 2D kernel for a data set in the image viewer. |
> | [`image_model`](index.html#sherpa.astro.ui.image_model)([id, newframe, tile]) | Display the model for a data set in the image viewer. |
> | [`image_model_component`](index.html#sherpa.astro.ui.image_model_component)(id[, model, newframe, …]) | Display a component of the model in the image viewer. |
> | [`image_open`](index.html#sherpa.astro.ui.image_open)() | Start the image viewer. |
> | [`image_psf`](index.html#sherpa.astro.ui.image_psf)([id, newframe, tile]) | Display the 2D PSF model for a data set in the image viewer. |
> | [`image_ratio`](index.html#sherpa.astro.ui.image_ratio)([id, newframe, tile]) | Display the ratio (data/model) for a data set in the image viewer. |
> | [`image_resid`](index.html#sherpa.astro.ui.image_resid)([id, newframe, tile]) | Display the residuals (data - model) for a data set in the image viewer. |
> | [`image_setregion`](index.html#sherpa.astro.ui.image_setregion)(reg[, coord]) | Set the region to display in the image viewer. |
> | [`image_source`](index.html#sherpa.astro.ui.image_source)([id, newframe, tile]) | Display the source expression for a data set in the image viewer. |
> | [`image_source_component`](index.html#sherpa.astro.ui.image_source_component)(id[, model, …]) | Display a component of the source expression in the image viewer. |
> | [`image_xpaget`](index.html#sherpa.astro.ui.image_xpaget)(arg) | Return the result of an XPA call to the image viewer. |
> | [`image_xpaset`](index.html#sherpa.astro.ui.image_xpaset)(arg[, data]) | Return the result of an XPA call to the image viewer. |
> | [`int_proj`](index.html#sherpa.astro.ui.int_proj)(par[, id, otherids, replot, fast, …]) | Calculate and plot the fit statistic versus fit parameter value. |
> | [`int_unc`](index.html#sherpa.astro.ui.int_unc)(par[, id, otherids, replot, min, …]) | Calculate and plot the fit statistic versus fit parameter value. |
> | [`link`](index.html#sherpa.astro.ui.link)(par, val) | Link a parameter to a value. |
> | [`list_bkg_ids`](index.html#sherpa.astro.ui.list_bkg_ids)([id]) | List all the background identifiers for a data set. |
> | [`list_data_ids`](index.html#sherpa.astro.ui.list_data_ids)() | List the identifiers for the loaded data sets. |
> | [`list_functions`](index.html#sherpa.astro.ui.list_functions)([outfile, clobber]) | Display the functions provided by Sherpa. |
> | [`list_iter_methods`](index.html#sherpa.astro.ui.list_iter_methods)() | List the iterative fitting schemes. |
> | [`list_methods`](index.html#sherpa.astro.ui.list_methods)() | List the optimization methods. |
> | [`list_model_components`](index.html#sherpa.astro.ui.list_model_components)() | List the names of all the model components. |
> | [`list_model_ids`](index.html#sherpa.astro.ui.list_model_ids)() | List of all the data sets with a source expression. |
> | [`list_models`](index.html#sherpa.astro.ui.list_models)([show]) | List the available model types. |
> | [`list_priors`](index.html#sherpa.astro.ui.list_priors)() | Return the priors set for model parameters, if any. |
> | [`list_response_ids`](index.html#sherpa.astro.ui.list_response_ids)([id, bkg_id]) | List all the response identifiers of a data set. |
> | [`list_samplers`](index.html#sherpa.astro.ui.list_samplers)() | List the MCMC samplers. |
> | [`list_stats`](index.html#sherpa.astro.ui.list_stats)() | List the fit statistics. |
> | [`load_arf`](index.html#sherpa.astro.ui.load_arf)(id[, arg, resp_id, bkg_id]) | Load an ARF from a file and add it to a PHA data set. |
> | [`load_arrays`](index.html#sherpa.astro.ui.load_arrays)(id, *args) | Create a data set from array values. |
> | [`load_ascii`](index.html#sherpa.astro.ui.load_ascii)(id[, filename, ncols, colkeys, …]) | Load an ASCII file as a data set. |
> | [`load_ascii_with_errors`](index.html#sherpa.astro.ui.load_ascii_with_errors)(id[, filename, …]) | Load an ASCII file with asymmetric errors as a data set. |
> | [`load_bkg`](index.html#sherpa.astro.ui.load_bkg)(id[, arg, use_errors, bkg_id]) | Load the background from a file and add it to a PHA data set. |
> | [`load_bkg_arf`](index.html#sherpa.astro.ui.load_bkg_arf)(id[, arg]) | Load an ARF from a file and add it to the background of a PHA data set. |
> | [`load_bkg_rmf`](index.html#sherpa.astro.ui.load_bkg_rmf)(id[, arg]) | Load a RMF from a file and add it to the background of a PHA data set. |
> | [`load_conv`](index.html#sherpa.astro.ui.load_conv)(modelname, filename_or_model, …) | Load a 1D convolution model. |
> | [`load_data`](index.html#sherpa.astro.ui.load_data)(id[, filename]) | Load a data set from a file. |
> | [`load_filter`](index.html#sherpa.astro.ui.load_filter)(id[, filename, bkg_id, ignore, …]) | Load the filter array from a file and add to a data set. |
> | [`load_grouping`](index.html#sherpa.astro.ui.load_grouping)(id[, filename, bkg_id]) | Load the grouping scheme from a file and add to a PHA data set. |
> | [`load_image`](index.html#sherpa.astro.ui.load_image)(id[, arg, coord, dstype]) | Load an image as a data set. |
> | [`load_multi_arfs`](index.html#sherpa.astro.ui.load_multi_arfs)(id, filenames[, resp_ids]) | Load multiple ARFs for a PHA data set. |
> | [`load_multi_rmfs`](index.html#sherpa.astro.ui.load_multi_rmfs)(id, filenames[, resp_ids]) | Load multiple RMFs for a PHA data set. |
> | [`load_pha`](index.html#sherpa.astro.ui.load_pha)(id[, arg, use_errors]) | Load a PHA data set. |
> | [`load_psf`](index.html#sherpa.astro.ui.load_psf)(modelname, filename_or_model, …) | Create a PSF model. |
> | [`load_quality`](index.html#sherpa.astro.ui.load_quality)(id[, filename, bkg_id]) | Load the quality array from a file and add to a PHA data set. |
> | [`load_rmf`](index.html#sherpa.astro.ui.load_rmf)(id[, arg, resp_id, bkg_id]) | Load a RMF from a file and add it to a PHA data set. |
> | [`load_staterror`](index.html#sherpa.astro.ui.load_staterror)(id[, filename, bkg_id]) | Load the statistical errors from a file. |
> | [`load_syserror`](index.html#sherpa.astro.ui.load_syserror)(id[, filename, bkg_id]) | Load the systematic errors from a file. |
> | [`load_table`](index.html#sherpa.astro.ui.load_table)(id[, filename, ncols, colkeys, …]) | Load a FITS binary file as a data set. |
> | [`load_table_model`](index.html#sherpa.astro.ui.load_table_model)(modelname, filename[, method]) | Load tabular or image data and use it as a model component. |
> | [`load_template_interpolator`](index.html#sherpa.astro.ui.load_template_interpolator)(name, …) | Set the template interpolation scheme. |
> | [`load_template_model`](index.html#sherpa.astro.ui.load_template_model)(modelname, templatefile) | Load a set of templates and use it as a model component. |
> | [`load_user_model`](index.html#sherpa.astro.ui.load_user_model)(func, modelname[, filename]) | Create a user-defined model. |
> | [`load_user_stat`](index.html#sherpa.astro.ui.load_user_stat)(statname, calc_stat_func[, …]) | Create a user-defined statistic. |
> | [`load_xstable_model`](index.html#sherpa.astro.ui.load_xstable_model)(modelname, filename) | Load a XSPEC table model. |
> | [`normal_sample`](index.html#sherpa.astro.ui.normal_sample)([num, sigma, correlate, id, …]) | Sample the fit statistic by taking the parameter values from a normal distribution. |
> | [`notice`](index.html#sherpa.astro.ui.notice)([lo, hi]) | Include data in the fit. |
> | [`notice2d`](index.html#sherpa.astro.ui.notice2d)([val]) | Include a spatial region of all data sets. |
> | [`notice2d_id`](index.html#sherpa.astro.ui.notice2d_id)(ids[, val]) | Include a spatial region of a data set. |
> | [`notice2d_image`](index.html#sherpa.astro.ui.notice2d_image)([ids]) | Include pixels using the region defined in the image viewer. |
> | [`notice_id`](index.html#sherpa.astro.ui.notice_id)(ids[, lo, hi]) | Include data from the fit for a data set. |
> | [`pack_image`](index.html#sherpa.astro.ui.pack_image)([id]) | Convert a data set into an image structure. |
> | [`pack_pha`](index.html#sherpa.astro.ui.pack_pha)([id]) | Convert a PHA data set into a file structure. |
> | [`pack_table`](index.html#sherpa.astro.ui.pack_table)([id]) | Convert a data set into a table structure. |
> | [`paramprompt`](index.html#sherpa.astro.ui.paramprompt)([val]) | Should the user be asked for the parameter values when creating a model? |
> | [`plot`](index.html#sherpa.astro.ui.plot)(*args) | Create one or more plot types. |
> | [`plot_arf`](index.html#sherpa.astro.ui.plot_arf)([id, resp_id, replot, overplot, …]) | Plot the ARF associated with a data set. |
> | [`plot_bkg`](index.html#sherpa.astro.ui.plot_bkg)([id, bkg_id, replot, overplot, …]) | Plot the background values for a PHA data set. |
> | [`plot_bkg_chisqr`](index.html#sherpa.astro.ui.plot_bkg_chisqr)([id, bkg_id, replot, …]) | Plot the chi-squared value for each point of the background of a PHA data set. |
> | [`plot_bkg_delchi`](index.html#sherpa.astro.ui.plot_bkg_delchi)([id, bkg_id, replot, …]) | Plot the ratio of residuals to error for the background of a PHA data set. |
> | [`plot_bkg_fit`](index.html#sherpa.astro.ui.plot_bkg_fit)([id, bkg_id, replot, overplot, …]) | Plot the fit results (data, model) for the background of a PHA data set. |
> | [`plot_bkg_fit_delchi`](index.html#sherpa.astro.ui.plot_bkg_fit_delchi)([id, bkg_id, replot, …]) | Plot the fit results, and the residuals, for the background of a PHA data set. |
> | [`plot_bkg_fit_ratio`](index.html#sherpa.astro.ui.plot_bkg_fit_ratio)([id, bkg_id, replot, …]) | Plot the fit results, and the data/model ratio, for the background of a PHA data set. |
> | [`plot_bkg_fit_resid`](index.html#sherpa.astro.ui.plot_bkg_fit_resid)([id, bkg_id, replot, …]) | Plot the fit results, and the residuals, for the background of a PHA data set. |
> | [`plot_bkg_model`](index.html#sherpa.astro.ui.plot_bkg_model)([id, bkg_id, replot, …]) | Plot the model for the background of a PHA data set. |
> | [`plot_bkg_ratio`](index.html#sherpa.astro.ui.plot_bkg_ratio)([id, bkg_id, replot, …]) | Plot the ratio of data to model values for the background of a PHA data set. |
> | [`plot_bkg_resid`](index.html#sherpa.astro.ui.plot_bkg_resid)([id, bkg_id, replot, …]) | Plot the residual (data-model) values for the background of a PHA data set. |
> | [`plot_bkg_source`](index.html#sherpa.astro.ui.plot_bkg_source)([id, lo, hi, bkg_id, …]) | Plot the model expression for the background of a PHA data set. |
> | [`plot_cdf`](index.html#sherpa.astro.ui.plot_cdf)(points[, name, xlabel, replot, …]) | Plot the cumulative density function of an array of values. |
> | [`plot_chisqr`](index.html#sherpa.astro.ui.plot_chisqr)([id, replot, overplot, clearwindow]) | Plot the chi-squared value for each point in a data set. |
> | [`plot_data`](index.html#sherpa.astro.ui.plot_data)([id, replot, overplot, clearwindow]) | Plot the data values. |
> | [`plot_delchi`](index.html#sherpa.astro.ui.plot_delchi)([id, replot, overplot, clearwindow]) | Plot the ratio of residuals to error for a data set. |
> | [`plot_energy_flux`](index.html#sherpa.astro.ui.plot_energy_flux)([lo, hi, id, num, bins, …]) | Display the energy flux distribution. |
> | [`plot_fit`](index.html#sherpa.astro.ui.plot_fit)([id, replot, overplot, clearwindow]) | Plot the fit results (data, model) for a data set. |
> | [`plot_fit_delchi`](index.html#sherpa.astro.ui.plot_fit_delchi)([id, replot, overplot, …]) | Plot the fit results, and the residuals, for a data set. |
> | [`plot_fit_ratio`](index.html#sherpa.astro.ui.plot_fit_ratio)([id, replot, overplot, …]) | Plot the fit results, and the ratio of data to model, for a data set. |
> | [`plot_fit_resid`](index.html#sherpa.astro.ui.plot_fit_resid)([id, replot, overplot, …]) | Plot the fit results, and the residuals, for a data set. |
> | [`plot_kernel`](index.html#sherpa.astro.ui.plot_kernel)([id, replot, overplot, clearwindow]) | Plot the 1D kernel applied to a data set. |
> | [`plot_model`](index.html#sherpa.astro.ui.plot_model)([id, replot, overplot, clearwindow]) | Plot the model for a data set. |
> | [`plot_model_component`](index.html#sherpa.astro.ui.plot_model_component)(id[, model, replot, …]) | Plot a component of the model for a data set. |
> | [`plot_order`](index.html#sherpa.astro.ui.plot_order)([id, orders, replot, overplot, …]) | Plot the model for a data set convolved by the given response. |
> | [`plot_pdf`](index.html#sherpa.astro.ui.plot_pdf)(points[, name, xlabel, bins, …]) | Plot the probability density function of an array of values. |
> | [`plot_photon_flux`](index.html#sherpa.astro.ui.plot_photon_flux)([lo, hi, id, num, bins, …]) | Display the photon flux distribution. |
> | [`plot_psf`](index.html#sherpa.astro.ui.plot_psf)([id, replot, overplot, clearwindow]) | Plot the 1D PSF model applied to a data set. |
> | [`plot_pvalue`](index.html#sherpa.astro.ui.plot_pvalue)(null_model, alt_model[, …]) | Compute and plot a histogram of likelihood ratios by simulating data. |
> | [`plot_ratio`](index.html#sherpa.astro.ui.plot_ratio)([id, replot, overplot, clearwindow]) | Plot the ratio of data to model for a data set. |
> | [`plot_resid`](index.html#sherpa.astro.ui.plot_resid)([id, replot, overplot, clearwindow]) | Plot the residuals (data - model) for a data set. |
> | [`plot_scatter`](index.html#sherpa.astro.ui.plot_scatter)(x, y[, name, xlabel, ylabel, …]) | Create a scatter plot. |
> | [`plot_source`](index.html#sherpa.astro.ui.plot_source)([id, lo, hi, replot, overplot, …]) | Plot the source expression for a data set. |
> | [`plot_source_component`](index.html#sherpa.astro.ui.plot_source_component)(id[, model, replot, …]) | Plot a component of the source expression for a data set. |
> | [`plot_trace`](index.html#sherpa.astro.ui.plot_trace)(points[, name, xlabel, replot, …]) | Create a trace plot of row number versus value. |
> | [`proj`](index.html#sherpa.astro.ui.proj)(*args) | Estimate parameter confidence intervals using the projection method. |
> | [`projection`](index.html#sherpa.astro.ui.projection)(*args) | Estimate parameter confidence intervals using the projection method. |
> | [`reg_proj`](index.html#sherpa.astro.ui.reg_proj)(par0, par1[, id, otherids, replot, …]) | Plot the statistic value as two parameters are varied. |
> | [`reg_unc`](index.html#sherpa.astro.ui.reg_unc)(par0, par1[, id, otherids, replot, …]) | Plot the statistic value as two parameters are varied. |
> | [`resample_data`](index.html#sherpa.astro.ui.resample_data)([id, niter, seed]) | Resample data with asymmetric error bars. |
> | [`reset`](index.html#sherpa.astro.ui.reset)([model, id]) | Reset the model parameters to their default settings. |
> | [`restore`](index.html#sherpa.astro.ui.restore)([filename]) | Load in a Sherpa session from a file. |
> | [`sample_energy_flux`](index.html#sherpa.astro.ui.sample_energy_flux)([lo, hi, id, num, …]) | Return the energy flux distribution of a model. |
> | [`sample_flux`](index.html#sherpa.astro.ui.sample_flux)([modelcomponent, lo, hi, id, …]) | Return the flux distribution of a model. |
> | [`sample_photon_flux`](index.html#sherpa.astro.ui.sample_photon_flux)([lo, hi, id, num, …]) | Return the photon flux distribution of a model. |
> | [`save`](index.html#sherpa.astro.ui.save)([filename, clobber]) | Save the current Sherpa session to a file. |
> | [`save_all`](index.html#sherpa.astro.ui.save_all)([outfile, clobber]) | Save the information about the current session to a text file. |
> | [`save_arrays`](index.html#sherpa.astro.ui.save_arrays)(filename, args[, fields, ascii, …]) | Write a list of arrays to a file. |
> | [`save_data`](index.html#sherpa.astro.ui.save_data)(id[, filename, bkg_id, ascii, clobber]) | Save the data to a file. |
> | [`save_delchi`](index.html#sherpa.astro.ui.save_delchi)(id[, filename, bkg_id, ascii, …]) | Save the ratio of residuals (data-model) to error to a file. |
> | [`save_error`](index.html#sherpa.astro.ui.save_error)(id[, filename, bkg_id, ascii, …]) | Save the errors to a file. |
> | [`save_filter`](index.html#sherpa.astro.ui.save_filter)(id[, filename, bkg_id, ascii, …]) | Save the filter array to a file. |
> | [`save_grouping`](index.html#sherpa.astro.ui.save_grouping)(id[, filename, bkg_id, ascii, …]) | Save the grouping scheme to a file. |
> | [`save_image`](index.html#sherpa.astro.ui.save_image)(id[, filename, ascii, clobber]) | Save the pixel values of a 2D data set to a file. |
> | [`save_model`](index.html#sherpa.astro.ui.save_model)(id[, filename, bkg_id, ascii, …]) | Save the model values to a file. |
> | [`save_pha`](index.html#sherpa.astro.ui.save_pha)(id[, filename, bkg_id, ascii, clobber]) | Save a PHA data set to a file. |
> | [`save_quality`](index.html#sherpa.astro.ui.save_quality)(id[, filename, bkg_id, ascii, …]) | Save the quality array to a file. |
> | [`save_resid`](index.html#sherpa.astro.ui.save_resid)(id[, filename, bkg_id, ascii, …]) | Save the residuals (data-model) to a file. |
> | [`save_source`](index.html#sherpa.astro.ui.save_source)(id[, filename, bkg_id, ascii, …]) | Save the model values to a file. |
> | [`save_staterror`](index.html#sherpa.astro.ui.save_staterror)(id[, filename, bkg_id, …]) | Save the statistical errors to a file. |
> | [`save_syserror`](index.html#sherpa.astro.ui.save_syserror)(id[, filename, bkg_id, ascii, …]) | Save the systematic errors to a file. |
> | [`save_table`](index.html#sherpa.astro.ui.save_table)(id[, filename, ascii, clobber]) | Save a data set to a file as a table. |
> | [`set_analysis`](index.html#sherpa.astro.ui.set_analysis)(id[, quantity, type, factor]) | Set the units used when fitting and displaying spectral data. |
> | [`set_areascal`](index.html#sherpa.astro.ui.set_areascal)(id[, area, bkg_id]) | Change the fractional area factor of a PHA data set. |
> | [`set_arf`](index.html#sherpa.astro.ui.set_arf)(id[, arf, resp_id, bkg_id]) | Set the ARF for use by a PHA data set. |
> | [`set_backscal`](index.html#sherpa.astro.ui.set_backscal)(id[, backscale, bkg_id]) | Change the area scaling of a PHA data set. |
> | [`set_bkg`](index.html#sherpa.astro.ui.set_bkg)(id[, bkg, bkg_id]) | Set the background for a PHA data set. |
> | [`set_bkg_full_model`](index.html#sherpa.astro.ui.set_bkg_full_model)(id[, model, bkg_id]) | Define the convolved background model expression for a PHA data set. |
> | [`set_bkg_model`](index.html#sherpa.astro.ui.set_bkg_model)(id[, model, bkg_id]) | Set the background model expression for a PHA data set. |
> | [`set_bkg_source`](index.html#sherpa.astro.ui.set_bkg_source)(id[, model, bkg_id]) | Set the background model expression for a PHA data set. |
> | [`set_conf_opt`](index.html#sherpa.astro.ui.set_conf_opt)(name, val) | Set an option for the confidence interval method. |
> | [`set_coord`](index.html#sherpa.astro.ui.set_coord)(id[, coord]) | Set the coordinate system to use for image analysis. |
> | [`set_counts`](index.html#sherpa.astro.ui.set_counts)(id[, val, bkg_id]) | Set the dependent axis of a data set. |
> | [`set_covar_opt`](index.html#sherpa.astro.ui.set_covar_opt)(name, val) | Set an option for the covariance method. |
> | [`set_data`](index.html#sherpa.astro.ui.set_data)(id[, data]) | Set a data set. |
> | [`set_default_id`](index.html#sherpa.astro.ui.set_default_id)(id) | Set the default data set identifier. |
> | [`set_dep`](index.html#sherpa.astro.ui.set_dep)(id[, val, bkg_id]) | Set the dependent axis of a data set. |
> | [`set_exposure`](index.html#sherpa.astro.ui.set_exposure)(id[, exptime, bkg_id]) | Change the exposure time of a PHA data set. |
> | [`set_filter`](index.html#sherpa.astro.ui.set_filter)(id[, val, bkg_id, ignore]) | Set the filter array of a data set. |
> | [`set_full_model`](index.html#sherpa.astro.ui.set_full_model)(id[, model]) | Define the convolved model expression for a data set. |
> | [`set_grouping`](index.html#sherpa.astro.ui.set_grouping)(id[, val, bkg_id]) | Apply a set of grouping flags to a PHA data set. |
> | [`set_iter_method`](index.html#sherpa.astro.ui.set_iter_method)(meth) | Set the iterative-fitting scheme used in the fit. |
> | [`set_iter_method_opt`](index.html#sherpa.astro.ui.set_iter_method_opt)(optname, val) | Set an option for the iterative-fitting scheme. |
> | [`set_method`](index.html#sherpa.astro.ui.set_method)(meth) | Set the optimization method. |
> | [`set_method_opt`](index.html#sherpa.astro.ui.set_method_opt)(optname, val) | Set an option for the current optimization method. |
> | [`set_model`](index.html#sherpa.astro.ui.set_model)(id[, model]) | Set the source model expression for a data set. |
> | [`set_model_autoassign_func`](index.html#sherpa.astro.ui.set_model_autoassign_func)([func]) | Set the method used to create model component identifiers. |
> | [`set_par`](index.html#sherpa.astro.ui.set_par)(par[, val, min, max, frozen]) | Set the value, limits, or behavior of a model parameter. |
> | [`set_pileup_model`](index.html#sherpa.astro.ui.set_pileup_model)(id[, model]) | Include a model of the Chandra ACIS pile up when fitting PHA data. |
> | [`set_prior`](index.html#sherpa.astro.ui.set_prior)(par, prior) | Set the prior function to use with a parameter. |
> | [`set_proj_opt`](index.html#sherpa.astro.ui.set_proj_opt)(name, val) | Set an option for the projection method. |
> | [`set_psf`](index.html#sherpa.astro.ui.set_psf)(id[, psf]) | Add a PSF model to a data set. |
> | [`set_quality`](index.html#sherpa.astro.ui.set_quality)(id[, val, bkg_id]) | Apply a set of quality flags to a PHA data set. |
> | [`set_rmf`](index.html#sherpa.astro.ui.set_rmf)(id[, rmf, resp_id, bkg_id]) | Set the RMF for use by a PHA data set. |
> | [`set_sampler`](index.html#sherpa.astro.ui.set_sampler)(sampler) | Set the MCMC sampler. |
> | [`set_sampler_opt`](index.html#sherpa.astro.ui.set_sampler_opt)(opt, value) | Set an option for the current MCMC sampler. |
> | [`set_source`](index.html#sherpa.astro.ui.set_source)(id[, model]) | Set the source model expression for a data set. |
> | [`set_stat`](index.html#sherpa.astro.ui.set_stat)(stat) | Set the statistical method. |
> | [`set_staterror`](index.html#sherpa.astro.ui.set_staterror)(id[, val, fractional, bkg_id]) | Set the statistical errors on the dependent axis of a data set. |
> | [`set_syserror`](index.html#sherpa.astro.ui.set_syserror)(id[, val, fractional, bkg_id]) | Set the systematic errors on the dependent axis of a data set. |
> | [`set_xlinear`](index.html#sherpa.astro.ui.set_xlinear)([plottype]) | New plots will display a linear X axis. |
> | [`set_xlog`](index.html#sherpa.astro.ui.set_xlog)([plottype]) | New plots will display a logarithmically-scaled X axis. |
> | [`set_ylinear`](index.html#sherpa.astro.ui.set_ylinear)([plottype]) | New plots will display a linear Y axis. |
> | [`set_ylog`](index.html#sherpa.astro.ui.set_ylog)([plottype]) | New plots will display a logarithmically-scaled Y axis. |
> | [`show_all`](index.html#sherpa.astro.ui.show_all)([id, outfile, clobber]) | Report the current state of the Sherpa session. |
> | [`show_bkg`](index.html#sherpa.astro.ui.show_bkg)([id, bkg_id, outfile, clobber]) | Show the details of the PHA background data sets. |
> | [`show_bkg_model`](index.html#sherpa.astro.ui.show_bkg_model)([id, bkg_id, outfile, clobber]) | Display the background model expression used to fit a data set. |
> | [`show_bkg_source`](index.html#sherpa.astro.ui.show_bkg_source)([id, bkg_id, outfile, clobber]) | Display the background model expression for a data set. |
> | [`show_conf`](index.html#sherpa.astro.ui.show_conf)([outfile, clobber]) | Display the results of the last conf evaluation. |
> | [`show_covar`](index.html#sherpa.astro.ui.show_covar)([outfile, clobber]) | Display the results of the last covar evaluation. |
> | [`show_data`](index.html#sherpa.astro.ui.show_data)([id, outfile, clobber]) | Summarize the available data sets. |
> | [`show_filter`](index.html#sherpa.astro.ui.show_filter)([id, outfile, clobber]) | Show any filters applied to a data set. |
> | [`show_fit`](index.html#sherpa.astro.ui.show_fit)([outfile, clobber]) | Summarize the fit results. |
> | [`show_kernel`](index.html#sherpa.astro.ui.show_kernel)([id, outfile, clobber]) | Display any kernel applied to a data set. |
> | [`show_method`](index.html#sherpa.astro.ui.show_method)([outfile, clobber]) | Display the current optimization method and options. |
> | [`show_model`](index.html#sherpa.astro.ui.show_model)([id, outfile, clobber]) | Display the model expression used to fit a data set. |
> | [`show_proj`](index.html#sherpa.astro.ui.show_proj)([outfile, clobber]) | Display the results of the last proj evaluation. |
> | [`show_psf`](index.html#sherpa.astro.ui.show_psf)([id, outfile, clobber]) | Display any PSF model applied to a data set. |
> | [`show_source`](index.html#sherpa.astro.ui.show_source)([id, outfile, clobber]) | Display the source model expression for a data set. |
> | [`show_stat`](index.html#sherpa.astro.ui.show_stat)([outfile, clobber]) | Display the current fit statistic. |
> | [`simulfit`](index.html#sherpa.astro.ui.simulfit)([id]) | Fit a model to one or more data sets. |
> | [`subtract`](index.html#sherpa.astro.ui.subtract)([id]) | Subtract the background estimate from a data set. |
> | [`t_sample`](index.html#sherpa.astro.ui.t_sample)([num, dof, id, otherids, numcores]) | Sample the fit statistic by taking the parameter values from a Student’s t-distribution. |
> | [`thaw`](index.html#sherpa.astro.ui.thaw)(*args) | Allow model parameters to be varied during a fit. |
> | [`ungroup`](index.html#sherpa.astro.ui.ungroup)([id, bkg_id]) | Turn off the grouping for a PHA data set. |
> | [`uniform_sample`](index.html#sherpa.astro.ui.uniform_sample)([num, factor, id, otherids, …]) | Sample the fit statistic by taking the parameter values from a uniform distribution. |
> | [`unlink`](index.html#sherpa.astro.ui.unlink)(par) | Unlink a parameter value. |
> | [`unpack_arf`](index.html#sherpa.astro.ui.unpack_arf)(arg) | Create an ARF data structure. |
> | [`unpack_arrays`](index.html#sherpa.astro.ui.unpack_arrays)(*args) | Create a sherpa data object from arrays of data. |
> | [`unpack_ascii`](index.html#sherpa.astro.ui.unpack_ascii)(filename[, ncols, colkeys, …]) | Unpack an ASCII file into a data structure. |
> | [`unpack_bkg`](index.html#sherpa.astro.ui.unpack_bkg)(arg[, use_errors]) | Create a PHA data structure for a background data set. |
> | [`unpack_data`](index.html#sherpa.astro.ui.unpack_data)(filename, *args, **kwargs) | Create a sherpa data object from a file. |
> | [`unpack_image`](index.html#sherpa.astro.ui.unpack_image)(arg[, coord, dstype]) | Create an image data structure. |
> | [`unpack_pha`](index.html#sherpa.astro.ui.unpack_pha)(arg[, use_errors]) | Create a PHA data structure. |
> | [`unpack_rmf`](index.html#sherpa.astro.ui.unpack_rmf)(arg) | Create a RMF data structure. |
> | [`unpack_table`](index.html#sherpa.astro.ui.unpack_table)(filename[, ncols, colkeys, dstype]) | Unpack a FITS binary file into a data structure. |
> | [`unsubtract`](index.html#sherpa.astro.ui.unsubtract)([id]) | Undo any background subtraction for the data set. |
#### The sherpa.ui.utils module[¶](#module-sherpa.ui.utils)
The [`sherpa.ui.utils`](#module-sherpa.ui.utils) module contains the
[`Session`](index.html#sherpa.ui.utils.Session) class that provides the data-management code used by the
[`sherpa.ui`](index.html#module-sherpa.ui) module.
> Classes
> | [`ModelWrapper`](index.html#sherpa.ui.utils.ModelWrapper)(session, modeltype[, args, kwargs]) | |
> | [`Session`](index.html#sherpa.ui.utils.Session)() | |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of ModelWrapper, Session
#### The sherpa.astro.ui.utils module[¶](#module-sherpa.astro.ui.utils)
The [`sherpa.astro.ui.utils`](#module-sherpa.astro.ui.utils) module contains the Astronomy-specific
[`Session`](index.html#sherpa.astro.ui.utils.Session) class that provides the data-management code used by the [`sherpa.astro.ui`](index.html#module-sherpa.astro.ui) module.
> Classes
> | [`Session`](index.html#sherpa.astro.ui.utils.Session)() | |
##### Class Inheritance Diagram[¶](#class-inheritance-diagram)
Inheritance diagram of Session
Bug Reports[¶](#bug-reports)
---
If you have found a bug in Sherpa please report it. The preferred way is to create a new issue on the Sherpa
[GitHub issue page](https://github.com/sherpa/sherpa/issues/); that requires creating a free account on GitHub if you do not have one.
For those using Sherpa as part of
[CIAO](http://cxc.harvard.edu/ciao/), please use the
[CXC HelpDesk system](http://cxc.harvard.edu/helpdesk/).
Please include an example that demonstrates the issue that will allow the developers to reproduce and fix the problem. You may be asked to also provide information about your operating system and a full Python stack trace; the Sherpa developers will walk you through obtaining a stack trace if it is necessary.
Contributing to Sherpa development[¶](#contributing-to-sherpa-development)
---
Contributions to Sherpa - whether it be bug reports,
documentation updates, or new code - are highly encouraged.
At present we do not have any explicit documentation on how to contribute to Sherpa, but it is similar to other open-source packages such as
[AstroPy](http://docs.astropy.org/en/stable/index.html#contributing).
The developer documentation is also currently lacking.
### Glossary[¶](#glossary)
ARF The Ancillary Response Function used to describe the effective area curve of an X-ray telescope; that is, the area of the telescope and detector tabulated as a function of energy. The
[FITS](#term-fits) format used to represent ARFs is defined in the [OGIP](#term-ogip) Calibration Memo
[CAL/GEN/02-002](https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_002/cal_gen_92_002.html).
Astropy A community Python library for Astronomy:
<https://www.astropy.org/>.
CIAO The data reduction and analysis provided by the [CXC](#term-cxc)
for users of the Chandra X-ray telescope. Sherpa is provided as part of CIAO, but can also be used separately. The CIAO system is available from <http://cxc.harvard.edu/ciao/>.
Crates The Input/Output library provided as part of [CIAO](#term-ciao).
It provides read and write access to FITS data files, with speciality support for X-ray data formats that follow
[OGIP](#term-ogip) standards (such as [ARF](#term-arf) and [RMF](#term-rmf)
files).
CXC The [Chandra X-ray Center](http://cxc.harvard.edu/).
DS9 An external image viewer designed to allow users to interact with gridded data sets (2D and 3D). It is used by Sherpa to display image data, and is available from <http://ds9.si.edu/>. It uses the [XPA](#term-xpa) messaging system to communicate with external processes.
FITS The [Flexible Image Transport System](https://en.wikipedia.org/wiki/FITS) is a common data format in Astronomy, originally defined to represent imaging data from radio telescopes, but it has since been extended to contain a mixture of imaging and tabular data. Information on the various standards related to the format is available from the
[FITS documentation page](https://fits.gsfc.nasa.gov/fits_documentation.html)
at [HEASARC](#term-heasarc).
HEASARC NASA’s High Energy Astrophysics Science Archive Research Center at Goddard Space Flight Center:
<https://heasarc.gsfc.nasa.gov/>.
matplotlib The matplotlib plotting package, which is documented at
<https://matplotlib.org/>, is used to provide the plotting support in Sherpa.
OGIP The Office for Guest Investigator Programs (OGIP) was a division of the Laboratory for High Energy Astrophysics at Goddard Space Flight Center. The activities of that group have now become the responsibility of the [HEASARC FITS Working Group (HFWG)](https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/ofwg_intro.html),
which supports the use of high-energy astrophysics data through multimission standards and archive access. Of particular note for users of Sherpa are the standard documents produced by this group that define the data formats and standards used by high-energy Astrophysics missions.
PHA The standard file format used to store astronomical X-ray spectral data. The format is defined as part of the
[OGIP](#term-ogip) set of standards, in particular OGIP memos
[OGIP/92-007](https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html)
and
[OGIP/92-007a](https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007a/ogip_92_007a.html).
Confusingly, PHA can also refer to the Pulse Height Amplitude (the amount of charge detected) of an event, which is one of the two channel types that can be found in a PHA format file.
PSF The Point Spread Function. This represents the response of an imaging system to a delta function: e.g. what is the shape that a point source would produce when observed by a system. It is dependent on the optical design of the system but can also be influenced by other factors (e.g. for ground-based observatories the atmosphere can add additional blurring).
RMF The Redistribution Matrix Function used to describe the response of an Astronomical X-ray detector. It is a matrix containing the probability of detecting a photon of a given energy at a given detector channel. The [FITS](#term-fits) format used to represent RMFs is defined in the
[OGIP](#term-ogip) Calibration Memo
[CAL/GEN/02-002](https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_002/cal_gen_92_002.html).
WCS The phrase World Coordinate System for an Astronomical data set represents the mapping between the measured position on the detector and a “celestial” coordinate. The most common case is in providing a location on the sky (e.g. in
[Equatorial](https://en.wikipedia.org/wiki/Equatorial_coordinate_system)
or [Galactic](https://en.wikipedia.org/wiki/Galactic_coordinate_system)
coordinates)
for a given image pixel, but it can also be used to map between row on a spectrograph and the corresponding wavelength of light.
XPA The [XPA messaging system](http://hea-www.harvard.edu/saord/xpa/)
is used by [DS9](#term-ds9) to communicate with external programs. Sherpa uses this functionality to control DS9 - by sending it images to display and retrieving any regions a user may have created on the image data.
The command-line tools used for this communication may be available via the package manager for a particular operating system, such as
[xpa-tools for Ubuntu](https://packages.ubuntu.com/xenial/xpa-tools),
or they can be
[built from source](https://github.com/ericmandel/xpa).
XSPEC This can refer to either the X-ray Spectral fitting package,
or the models from this package. XSPEC is distributed by
[HEASARC](#term-heasarc) and its home page is
<https://heasarc.gsfc.nasa.gov/xanadu/xspec/>. Sherpa can be built with support for the
[models from XSPEC](https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XSappendixExternal.html).
Sherpa can be built to use XSPEC versions 12.10.1 (patch level a or later), 12.10.0, 12.9.1, or 12.9.0.
At present there is no developer mailing list for Sherpa.
TSrepr | cran | R | Package ‘TSrepr’
October 12, 2022
Type Package
Title Time Series Representations
Version 1.1.0
Date 2020-07-12
Description Methods for representations (i.e. dimensionality reduction, preprocessing, feature
extraction) of time series to help more accurate and effective time series data mining.
Non-data adaptive, data adaptive, model-based and data dictated (clipped) representation
methods are implemented. Also various normalisation methods (min-max, z-score, Box-Cox,
Yeo-Johnson) and forecasting accuracy measures are implemented.
License GPL-3 | file LICENSE
Encoding UTF-8
LazyData true
Depends R (>= 2.10)
Imports Rcpp (>= 0.12.12), MASS, quantreg, wavelets, mgcv, dtt
LinkingTo Rcpp
RoxygenNote 7.1.0
URL https://petolau.github.io/package/,
https://github.com/PetoLau/TSrepr/
BugReports https://github.com/PetoLau/TSrepr/issues
Suggests knitr, rmarkdown, ggplot2, data.table, moments, testthat
VignetteBuilder knitr
NeedsCompilation yes
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-3501-8783>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-07-13 06:50:15 UTC
R topics documented:
clippin... 3
coefCom... 4
denorm_ata... 5
denorm_boxco... 5
denorm_min_ma... 6
denorm_y... 7
denorm_... 8
elec_loa... 9
fast_sta... 9
maap... 10
ma... 11
map... 11
mas... 12
mda... 13
ms... 13
norm_ata... 14
norm_boxco... 15
norm_min_ma... 15
norm_min_max_lis... 16
norm_min_max_param... 17
norm_y... 18
norm_... 18
norm_z_lis... 19
norm_z_param... 20
repr_dc... 21
repr_df... 22
repr_dw... 23
repr_ex... 24
repr_feacli... 25
repr_feacliptren... 26
repr_featren... 27
repr_ga... 28
repr_lis... 29
repr_l... 31
repr_matri... 32
repr_pa... 34
repr_pi... 35
repr_pl... 36
repr_sa... 37
repr_seas_profil... 38
repr_sm... 39
repr_windowin... 39
rle... 41
rms... 41
smap... 42
trendin... 43
TSrep... 44
clipping Creates bit-level (clipped) representation from a vector
Description
The clipping computes a bit-level (clipped) representation from a vector.
Usage
clipping(x)
Arguments
x the numeric vector (time series)
Details
Clipping transforms a time series to a bit-level representation.
It is defined as follows:
repr_t = 1 if x_t > mu, 0 otherwise,
where x_t is a value of the time series and mu is the average of the time series.
Value
the integer vector of zeros and ones
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2006) A bit level representation
for time series data mining with shape based similarity. Data Mining and Knowledge Discovery
13(1):11-40
<NAME>, and <NAME> (2018) Interpretable multiple data streams clustering with clipped streams
representation for the improvement of electricity consumption forecasting. Data Mining and
Knowledge Discovery. Springer. DOI: 10.1007/s10618-018-0598-2
See Also
trending
Examples
clipping(rnorm(50))
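A further illustration (a minimal sketch based only on the definition in Details, not taken from the package manual):
x <- rnorm(50)
all(clipping(x) == as.integer(x > mean(x))) # expected TRUE under that definition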
coefComp Functions for linear regression model coefficients extraction
Description
The functions compute regression coefficients from a linear model.
Usage
lmCoef(X, Y)
rlmCoef(X, Y)
l1Coef(X, Y)
Arguments
X the model (design) matrix of independent variables
Y the vector of dependent variable (time series)
Value
The numeric vector of regression coefficients
Author(s)
<NAME>, <<EMAIL>>
See Also
lm, rlm, rq
Examples
design_matrix <- matrix(rnorm(10), ncol = 2)
lmCoef(design_matrix, rnorm(5))
rlmCoef(design_matrix, rnorm(5))
l1Coef(design_matrix, rnorm(5))
denorm_atan Arctangent denormalisation
Description
The denorm_atan denormalises time series from the Arctangent normalisation.
Usage
denorm_atan(x)
Arguments
x the numeric vector (time series)
Value
the numeric vector of denormalised values
Author(s)
<NAME>, <<EMAIL>>
See Also
denorm_z, denorm_min_max
Examples
denorm_atan(runif(50))
denorm_boxcox Two-parameter Box-Cox denormalisation
Description
The denorm_boxcox denormalises time series by two-parameter Box-Cox method.
Usage
denorm_boxcox(x, lambda = 0.1, gamma = 0)
Arguments
x the numeric vector (time series) to be denormalised
lambda the numeric value - power transformation parameter (default is 0.1)
gamma the non-negative numeric value - parameter for holding the time series positive
(offset) (default is 0)
Value
the numeric vector of denormalised values
Author(s)
<NAME>, <<EMAIL>>
See Also
denorm_z, denorm_min_max, denorm_atan
Examples
denorm_boxcox(runif(50))
denorm_min_max Min-Max denormalisation
Description
The denorm_min_max denormalises time series by min-max method.
Usage
denorm_min_max(x, min, max)
Arguments
x the numeric vector (time series)
min the minimum value
max the maximum value
Value
the numeric vector of denormalised values
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, <NAME> (2018) Clustering-based forecasting method for individual consumers electricity
load using time series representations. Open Comput Sci, 8(1):38–50, DOI: 10.1515/comp-2018-0006
See Also
norm_min_max, norm_min_max_list
Examples
# Normalise values and save normalisation parameters:
norm_res <- norm_min_max_list(rnorm(50, 5, 2))
# Denormalise new data with previous computed parameters:
denorm_min_max(rnorm(50, 4, 2), min = norm_res$min, max = norm_res$max)
denorm_yj Yeo-Johnson denormalisation
Description
The denorm_yj denormalises time series by the Yeo-Johnson method.
Usage
denorm_yj(x, lambda = 0.1)
Arguments
x the numeric vector (time series) to be denormalised
lambda the numeric value - power transformation parameter (default is 0.1)
Value
the numeric vector of denormalised values
Author(s)
<NAME>, <<EMAIL>>
See Also
denorm_z, denorm_min_max, denorm_boxcox
Examples
denorm_yj(runif(50))
denorm_z Z-score denormalisation
Description
The denorm_z denormalises time series by z-score method.
Usage
denorm_z(x, mean, sd)
Arguments
x the numeric vector (time series)
mean the mean value
sd the standard deviation value
Value
the numeric vector of denormalised values
Author(s)
<NAME>, <<EMAIL>>
References
Laurinec P, <NAME> (2018) Clustering-based forecasting method for individual consumers electricity
load using time series representations. Open Comput Sci, 8(1):38–50, DOI: 10.1515/comp-2018-0006
See Also
norm_z, norm_z_list
Examples
# Normalise values and save normalisation parameters:
norm_res <- norm_z_list(rnorm(50, 5, 2))
# Denormalise new data with previous computed parameters:
denorm_z(rnorm(50, 4, 2), mean = norm_res$mean, sd = norm_res$sd)
elec_load 2 weeks of electricity load data from 50 consumers.
Description
A dataset containing the electricity consumption time series of 50 consumers with a length of 2
weeks. Every day has 48 measurements (half-hourly data). Each row represents one consumer's
time series.
Usage
elec_load
Format
A data frame with 50 rows and 672 variables.
Source
Anonymized.
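A usage sketch (not part of the original manual); frequency 48 reflects the half-hourly sampling described above.
data(elec_load)
dim(elec_load) # 50 consumers x 672 measurements
plot(ts(unlist(elec_load[1, ]), frequency = 48)) # first consumer's two weeks of load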
fast_stat Fast statistic functions (helpers)
Description
Fast statistic functions (helpers) for representations computation.
Usage
maxC(x)
minC(x)
meanC(x)
sumC(x)
medianC(x)
Arguments
x the numeric vector
Value
the numeric value
Author(s)
<NAME>, <<EMAIL>>
Examples
maxC(rnorm(50))
minC(rnorm(50))
meanC(rnorm(50))
sumC(rnorm(50))
medianC(rnorm(50))
maape MAAPE
Description
The maape computes MAAPE (Mean Arctangent Absolute Percentage Error) of a forecast.
Usage
maape(x, y)
Arguments
x the numeric vector of real values
y the numeric vector of forecasted values
Value
the numeric value in %
Author(s)
<NAME>, <<EMAIL>>
References
Sungil Kim, Heeyoung Kim (2016) A new metric of absolute percentage error for intermittent
demand forecasts, International Journal of Forecasting 32(3):669-679
Examples
maape(runif(50), runif(50))
mae MAE
Description
The mae computes MAE (Mean Absolute Error) of a forecast.
Usage
mae(x, y)
Arguments
x the numeric vector of real values
y the numeric vector of forecasted values
Value
the numeric value
Author(s)
<NAME>, <<EMAIL>>
Examples
mae(runif(50), runif(50))
mape MAPE
Description
The mape computes MAPE (Mean Absolute Percentage Error) of a forecast.
Usage
mape(x, y)
Arguments
x the numeric vector of real values
y the numeric vector of forecasted values
Value
the numeric value in %
Author(s)
<NAME>, <<EMAIL>>
Examples
mape(runif(50), runif(50))
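A tiny hand-checkable sketch, assuming the usual MAPE formula mean(|x - y| / x) * 100:
mape(c(100, 200), c(110, 180)) # each point is off by 10 %, so a value of about 10 is expected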
mase MASE
Description
The mase computes MASE (Mean Absolute Scaled Error) of a forecast.
Usage
mase(real, forecast, naive)
Arguments
real the numeric vector of real values
forecast the numeric vector of forecasted values
naive the numeric vector of naive forecast
Value
the numeric value
Author(s)
<NAME>, <<EMAIL>>
Examples
mase(rnorm(50), rnorm(50), rnorm(50))
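A sketch (not from the manual) using a lag-1 naive forecast as the scaling benchmark:
real <- as.numeric(AirPassengers)
forecast <- real + rnorm(length(real), 0, 5) # a hypothetical forecast
naive <- c(real[1], real[-length(real)]) # lag-1 naive forecast
mase(real, forecast, naive) # values below 1 suggest the forecast beats the naive benchmark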
mdae MdAE
Description
The mdae computes MdAE (Median Absolute Error) of a forecast.
Usage
mdae(x, y)
Arguments
x the numeric vector of real values
y the numeric vector of forecasted values
Value
the numeric value
Author(s)
<NAME>, <<EMAIL>>
Examples
mdae(runif(50), runif(50))
mse MSE
Description
The mse computes MSE (Mean Squared Error) of a forecast.
Usage
mse(x, y)
Arguments
x the numeric vector of real values
y the numeric vector of forecasted values
Value
the numeric value
Author(s)
<NAME>, <<EMAIL>>
Examples
mse(runif(50), runif(50))
norm_atan Arctangent normalisation
Description
The norm_atan normalises time series by the Arctangent function to the (-1, 1) range.
Usage
norm_atan(x)
Arguments
x the numeric vector (time series)
Value
the numeric vector of normalised values
Author(s)
<NAME>, <<EMAIL>>
See Also
norm_z, norm_min_max
Examples
norm_atan(rnorm(50))
norm_boxcox Two-parameter Box-Cox normalisation
Description
The norm_boxcox normalises time series by two-parameter Box-Cox normalisation.
Usage
norm_boxcox(x, lambda = 0.1, gamma = 0)
Arguments
x the numeric vector (time series)
lambda the numeric value - power transformation parameter (default is 0.1)
gamma the non-negative numeric value - parameter for holding the time series positive
(offset) (default is 0)
Value
the numeric vector of normalised values
Author(s)
<NAME>, <<EMAIL>>
See Also
norm_z, norm_min_max, norm_atan
Examples
norm_boxcox(runif(50))
norm_min_max Min-Max normalisation
Description
The norm_min_max normalises time series by min-max method.
Usage
norm_min_max(x)
Arguments
x the numeric vector (time series)
Value
the numeric vector of normalised values
Author(s)
<NAME>, <<EMAIL>>
See Also
norm_z
Examples
norm_min_max(rnorm(50))
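A quick check, assuming the usual min-max formula (x - min(x)) / (max(x) - min(x)):
x <- rnorm(50)
range(norm_min_max(x)) # expected to be 0 and 1
all.equal(norm_min_max(x), (x - min(x)) / (max(x) - min(x))) # expected TRUE under that formula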
norm_min_max_list Min-Max normalisation list
Description
The norm_min_max_list normalises time series by the min-max method and returns normalisation
parameters (min and max).
Usage
norm_min_max_list(x)
Arguments
x the numeric vector (time series)
Value
the list composed of:
norm_values the numeric vector of normalised values of time series
min the min value
max the max value
Author(s)
<NAME>, <<EMAIL>>
See Also
norm_z_list
Examples
norm_min_max_list(rnorm(50))
norm_min_max_params Min-Max normalisation with parameters
Description
The norm_min_max_params normalises time series by min-max method with defined parameters.
Usage
norm_min_max_params(x, min, max)
Arguments
x the numeric vector (time series)
min the numeric value
max the numeric value
Value
the numeric vector of normalised values
Author(s)
<NAME>, <<EMAIL>>
See Also
norm_z_params
Examples
norm_min_max_params(rnorm(50), 0, 1)
norm_yj Yeo-Johnson normalisation
Description
The norm_yj normalises time series by Yeo-Johnson normalisation.
Usage
norm_yj(x, lambda = 0.1)
Arguments
x the numeric vector (time series)
lambda the numeric value - power transformation parameter (default is 0.1)
Value
the numeric vector of normalised values
Author(s)
<NAME>, <<EMAIL>>
See Also
norm_z, norm_min_max, norm_boxcox
Examples
norm_yj(runif(50))
norm_z Z-score normalisation
Description
The norm_z normalises time series by z-score.
Usage
norm_z(x)
Arguments
x the numeric vector (time series)
Value
the numeric vector of normalised values
Author(s)
<NAME>, <<EMAIL>>
See Also
norm_min_max
Examples
norm_z(runif(50))
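A quick check, assuming the usual z-score formula (x - mean(x)) / sd(x):
x <- runif(50)
z <- norm_z(x)
c(mean(z), sd(z)) # expected to be approximately 0 and 1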
norm_z_list Z-score normalisation list
Description
The norm_z_list normalises time series by z-score and returns normalisation parameters (mean
and standard deviation).
Usage
norm_z_list(x)
Arguments
x the numeric vector (time series)
Value
the list composed of:
norm_values the numeric vector of normalised values of time series
mean the mean value
sd the standard deviation
Author(s)
<NAME>, <<EMAIL>>
See Also
norm_min_max_list
Examples
norm_z_list(runif(50))
norm_z_params Z-score normalisation with parameters
Description
The norm_z_params normalises time series by z-score with defined mean and standard deviation.
Usage
norm_z_params(x, mean, sd)
Arguments
x the numeric vector (time series)
mean the numeric value
sd the numeric value - standard deviation
Value
the numeric vector of normalised values
Author(s)
<NAME>, <<EMAIL>>
See Also
norm_min_max_params
Examples
norm_z_params(runif(50), 0.5, 1)
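A sketch of the intended train/test workflow, reusing the parameters returned by norm_z_list:
train <- rnorm(100, 5, 2)
test <- rnorm(20, 5, 2)
pars <- norm_z_list(train) # normalise training data and keep mean and sd
norm_z_params(test, pars$mean, pars$sd) # apply the training parameters to new data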
repr_dct DCT representation
Description
The repr_dct computes DCT (Discrete Cosine Transform) representation from a time series.
Usage
repr_dct(x, coef = 10)
Arguments
x the numeric vector (time series)
coef the number of coefficients to extract from DCT
Details
The length of the final time series representation is equal to the coef parameter.
Value
the numeric vector of DCT coefficients
Author(s)
<NAME>, <<EMAIL>>
See Also
repr_dft, repr_dwt, dtt
Examples
repr_dct(rnorm(50), coef = 4)
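A minimal length check, following the Details above (the representation length equals coef):
length(repr_dct(rnorm(50), coef = 4)) # expected 4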
repr_dft DFT representation by FFT
Description
The repr_dft computes DFT (Discrete Fourier Transform) representation from a time series by
FFT (Fast Fourier Transform).
Usage
repr_dft(x, coef = 10)
Arguments
x the numeric vector (time series)
coef the number of coefficients to extract from FFT
Details
The length of the final time series representation is equal to the coef parameter.
Value
the numeric vector of DFT coefficients
Author(s)
<NAME>, <<EMAIL>>
See Also
repr_dwt, repr_dct, fft
Examples
repr_dft(rnorm(50), coef = 4)
repr_dwt DWT representation
Description
The repr_dwt computes DWT (Discrete Wavelet Transform) representation (coefficients) from a
time series.
Usage
repr_dwt(x, level = 4, filter = "d4")
Arguments
x the numeric vector (time series)
level the level of DWT transformation (default is 4)
filter the filter name (default is "d4"). Can be: "haar", "d4", "d6", ..., "d20", "la8",
"la10", ..., "la20", "bl14", "bl18", "bl20", "c6", "c12", ..., "c30". See more info
at wt.filter.
Details
This function extracts DWT coefficients. You can use various wavelet filters, see all of them at
wt.filter. The number of extracted coefficients depends on the level selected. The final
representation has length equal to floor(n / 2^level), where n is the length of the original time series.
Value
the numeric vector of DWT coefficients
Author(s)
<NAME>, <<EMAIL>>
References
Laurinec P, <NAME> (2016) Comparison of representations of time series for clustering smart meter
data. In: Lecture Notes in Engineering and Computer Science: Proceedings of The World Congress
on Engineering and Computer Science 2016, pp 458-463
See Also
repr_dft, repr_dct, dwt
Examples
# Interpretation: DWT with Daubechies filter of length 4 and
# 3rd level of DWT coefficients extracted.
repr_dwt(rnorm(50), filter = "d4", level = 3)
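A length check following the floor(n / 2^level) rule from the Details (a minimal sketch):
x <- rnorm(64)
length(repr_dwt(x, level = 3)) # floor(64 / 2^3) = 8 expected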
repr_exp Exponential smoothing seasonal coefficients as representation
Description
The repr_exp computes exponential smoothing seasonal coefficients.
Usage
repr_exp(x, freq, alpha = TRUE, gamma = TRUE)
Arguments
x the numeric vector (time series)
freq the frequency of the time series
alpha the smoothing factor (default is TRUE - automatic determination of the smoothing
factor), or a number between 0 and 1
gamma the seasonal smoothing factor (default is TRUE - automatic determination of the
seasonal smoothing factor), or a number between 0 and 1
Details
This function extracts exponential smoothing seasonal coefficients and uses them as a time series
representation. You can set the smoothing factors (alpha, gamma) manually, but the automatic
method (set to TRUE) is recommended. The trend component is not included in the computations.
Value
the numeric vector of seasonal coefficients
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, <NAME> (2016) Comparison of representations of time series for clustering smart meter
data. In: Lecture Notes in Engineering and Computer Science: Proceedings of The World Congress
on Engineering and Computer Science 2016, pp 458-463
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Ezzeddine AB (2016) Adaptive
time series forecasting of energy consumption using optimized cluster analysis. In: Data Mining
Workshops (ICDMW), 2016 IEEE 16th International Conference on, IEEE, pp 398-405
See Also
repr_lm, repr_gam, repr_seas_profile, HoltWinters
Examples
repr_exp(rnorm(96), freq = 24)
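A sketch on a series with a clear daily pattern; the number of returned coefficients is assumed to equal freq:
x <- rep(sin(2 * pi * (1:24) / 24), 4) + rnorm(96, 0, 0.1)
coefs <- repr_exp(x, freq = 24)
length(coefs) # assumed to be 24, one seasonal coefficient per position within the period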
repr_feaclip FeaClip representation of time series
Description
The repr_feaclip computes a representation of a time series based on feature extraction from the
bit-level (clipped) representation.
Usage
repr_feaclip(x)
Arguments
x the numeric vector (time series)
Details
FeaClip is a method of time series representation based on feature extraction from run lengths (RLE)
of the bit-level (clipped) representation. It extracts 8 key features from the clipped representation.
They are as follows:
repr = { max_1 - max. from run lengths of ones,
         sum_1 - sum of run lengths of ones,
         max_0 - max. from run lengths of zeros,
         crossings - length of RLE encoding - 1,
         f_0 - number of first zeros,
         l_0 - number of last zeros,
         f_1 - number of first ones,
         l_1 - number of last ones }.
Value
the numeric vector of length 8
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, and <NAME> (2018) Interpretable multiple data streams clustering with clipped streams
representation for the improvement of electricity consumption forecasting. Data Mining and
Knowledge Discovery. Springer. DOI: 10.1007/s10618-018-0598-2
See Also
repr_featrend, repr_feacliptrend
Examples
repr_feaclip(rnorm(50))
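A small sketch on a series with an obvious level shift, where the eight features listed in Details are easy to read off:
x <- c(rep(0, 10), rep(10, 10), rep(0, 5)) # below the mean, above the mean, below the mean
repr_feaclip(x) # a numeric vector of the 8 features described above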
repr_feacliptrend FeaClipTrend representation of time series
Description
The repr_feacliptrend computes a representation of a time series based on feature extraction from
bit-level representations (clipping and trending).
Usage
repr_feacliptrend(x, func, pieces = 2L, order = 4L)
Arguments
x the numeric vector (time series)
func the aggregation function for FeaTrend procedure (sumC or maxC)
pieces the number of parts of time series to split
order the order of simple moving average
Details
FeaClipTrend combines the FeaClip and FeaTrend representation methods. See the documentation of
these two methods (check the See Also section).
Value
the numeric vector of frequencies of features
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, and <NAME> (2018) Interpretable multiple data streams clustering with clipped streams
representation for the improvement of electricity consumption forecasting. Data Mining and
Knowledge Discovery. Springer. DOI: 10.1007/s10618-018-0598-2
See Also
repr_featrend, repr_feaclip
Examples
repr_feacliptrend(rnorm(50), maxC, 2, 4)
repr_featrend FeaTrend representation of time series
Description
The repr_featrend computes a representation of a time series based on feature extraction from the
bit-level (trending) representation.
Usage
repr_featrend(x, func, pieces = 2L, order = 4L)
Arguments
x the numeric vector (time series)
func the aggregation function; can be sumC or maxC or a similar aggregation
function
pieces the number of parts of time series to split (default to 2)
order the order of simple moving average (default to 4)
Details
FeaTrend is a method of time series representation based on feature extraction from run lengths (RLE)
of the bit-level (trending) representation. It extracts a number of features from the trending
representation based on the number of pieces defined. From every piece, 2 features are extracted. You
can define which feature will be extracted; recommended functions are max and sum. For example, if
max is selected, then the maximum values of run lengths of ones and zeros are extracted.
Value
the numeric vector of the length pieces
Author(s)
<NAME>, <<EMAIL>>
See Also
repr_feaclip, repr_feacliptrend
Examples
# default settings
repr_featrend(rnorm(50), maxC)
# compute FeaTrend for 4 pieces and make more smoothed ts by order = 8
repr_featrend(rnorm(50), sumC, 4, 8)
repr_gam GAM regression coefficients as representation
Description
The repr_gam computes seasonal GAM regression coefficients. Additional exogenous variables
can also be added.
Usage
repr_gam(x, freq = NULL, xreg = NULL)
Arguments
x the numeric vector (time series)
freq the frequency of the time series. Can be a vector of two frequencies (seasonalities)
or just an integer for one frequency.
xreg the numeric vector or the data.frame with additional exogenous regressors
Details
This model-based representation method extracts regression coefficients from a GAM (Generalized
Additive Model). The extraction of seasonal regression coefficients is automatic. The maximum
number of seasonalities is 2, so it is possible to compute a representation for double-seasonal time
series. The first seasonality (frequency) set is the main one, so, for example, if we have an hourly time
series (freq = c(24, 24*7)), the number of extracted daily seasonal coefficients is 24 and the number of
weekly seasonal coefficients is 7, because the length of the second seasonality representation is always
freq_1 / freq_2. The smooth function for seasonal variables is set to a cubic regression spline. There
is also the possibility to add other independent variables (xreg).
Value
the numeric vector of GAM regression coefficients
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, <NAME> (2016) Comparison of representations of time series for clustering smart meter
data. In: Lecture Notes in Engineering and Computer Science: Proceedings of The World Congress
on Engineering and Computer Science 2016, pp 458-463
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Ezzeddine AB (2016) Adaptive
time series forecasting of energy consumption using optimized cluster analysis. In: Data Mining
Workshops (ICDMW), 2016 IEEE 16th International Conference on, IEEE, pp 398-405
<NAME>, <NAME> (2018) Clustering-based forecasting method for individual consumers electricity
load using time series representations. Open Comput Sci, 8(1):38–50, DOI: 10.1515/comp-2018-0006
See Also
repr_lm, repr_exp, gam
Examples
repr_gam(rnorm(96), freq = 24)
repr_list Computation of a list of representations from a list of time series with
different lengths
Description
The repr_list computes a list of representations from a list of time series.
Usage
repr_list(
x,
func = NULL,
args = NULL,
normalise = FALSE,
func_norm = norm_z,
windowing = FALSE,
win_size = NULL
)
Arguments
x the list of time series, where time series can have different lengths
func the function that computes representation
args the list of additional (or required) parameters of func (function that computes
representation)
normalise normalise (scale) time series before representations computation? (default is
FALSE)
func_norm the normalisation function (default is norm_z)
windowing perform windowing? (default is FALSE)
win_size the size of the window
Details
This function computes a representation for every member of a list of time series (that can have
different lengths) and returns a list of time series representations. It can be combined with windowing
(see repr_windowing) and normalisation of time series.
Value
the numeric list of representations of time series
Author(s)
<NAME>, <<EMAIL>>
See Also
repr_windowing, repr_matrix
Examples
# Create random list of time series with different lengths
list_ts <- list(rnorm(sample(8:12, 1)), rnorm(sample(8:12, 1)), rnorm(sample(8:12, 1)))
repr_list(list_ts, func = repr_sma,
args = list(order = 3))
# return normalised representations, and normalise time series by min-max normalisation
repr_list(list_ts, func = repr_sma,
args = list(order = 3), normalise = TRUE, func_norm = norm_min_max)
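A sketch combining repr_list with the windowing and win_size arguments shown in Usage (FeaClip computed per window):
list_ts2 <- list(rnorm(200), rnorm(180), rnorm(220))
repr_list(list_ts2, func = repr_feaclip, windowing = TRUE, win_size = 50)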
repr_lm Regression coefficients from linear model as representation
Description
The repr_lm computes seasonal regression coefficients from a linear model. Additional exogenous
variables can also be added.
Usage
repr_lm(x, freq = NULL, method = "lm", xreg = NULL)
Arguments
x the numeric vector (time series)
freq the frequency of the time series. Can be vector of two frequencies (seasonalities)
or just an integer of one frequency.
method the linear regression method to use. It can be "lm", "rlm" or "l1".
xreg the data.frame with additional exogenous regressors or the single numeric vector
Details
This model-based representation method extracts regression coefficients from a linear model. The
extraction of seasonal regression coefficients is automatic. The maximum number of seasonalities is
2, so it is possible to compute the representation for double-seasonal time series. The first seasonality
(frequency) is the main one; for example, for an hourly time series (freq = c(24, 24*7)), the number
of extracted daily seasonal coefficients is 24 and the number of weekly seasonal coefficients is 7,
because the length of the second seasonality representation is always freq_2 / freq_1. There is also
the possibility to add other independent variables (xreg).
You have three possibilities for the selection of a linear model method.
• "lm" is classical OLS regression.
• "rlm" is a robust linear model using the Huber psi function, implemented in the MASS package.
• "l1" is an L1 quantile regression model (also a robust linear regression method), implemented in
the quantreg package.
Value
the numeric vector of regression coefficients
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, <NAME> (2016) Comparison of representations of time series for clustering smart meter
data. In: Lecture Notes in Engineering and Computer Science: Proceedings of The World Congress
on Engineering and Computer Science 2016, pp 458-463
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Ezzeddine AB (2016) Adaptive
time series forecasting of energy consumption using optimized cluster analysis. In: Data Mining
Workshops (ICDMW), 2016 IEEE 16th International Conference on, IEEE, pp 398-405
<NAME>, <NAME> (2018) Clustering-based forecasting method for individual consumers elec-
tricity load using time series representations. Open Comput Sci, 8(1):38–50, DOI: 10.1515/comp-
2018-0006
See Also
repr_gam, repr_exp
Examples
# Extracts 24 seasonal regression coefficients from the time series by linear model
repr_lm(rnorm(96), freq = 24, method = "lm")
# Try also robust linear models ("rlm" and "l1")
repr_lm(rnorm(96), freq = 24, method = "rlm")
repr_lm(rnorm(96), freq = 24, method = "l1")
repr_matrix Computation of matrix of representations from matrix of time series
Description
The repr_matrix computes a matrix of representations from a matrix of time series.
Usage
repr_matrix(
x,
func = NULL,
args = NULL,
normalise = FALSE,
func_norm = norm_z,
windowing = FALSE,
win_size = NULL
)
Arguments
x the matrix, data.frame or data.table of time series, where time series are in rows
of the table
func the function that computes representation
args the list of additional (or required) parameters of func (function that computes
representation)
normalise normalise (scale) time series before representations computation? (default is
FALSE)
func_norm the normalisation function (default is norm_z)
windowing perform windowing? (default is FALSE)
win_size the size of the window
Details
This function computes a representation for every row of a matrix of time series and returns a matrix
of time series representations. It can be combined with windowing (see repr_windowing) and
normalisation of time series.
Value
the numeric matrix of representations of time series
Author(s)
<NAME>, <<EMAIL>>
See Also
repr_windowing, repr_list
Examples
# Create random matrix of time series
mat_ts <- matrix(rnorm(100), ncol = 10)
repr_matrix(mat_ts, func = repr_paa,
args = list(q = 5, func = meanC))
# return normalised representations, and normalise time series by min-max normalisation
repr_matrix(mat_ts, func = repr_paa,
args = list(q = 2, func = meanC), normalise = TRUE, func_norm = norm_min_max)
# with windowing
repr_matrix(mat_ts, func = repr_feaclip, windowing = TRUE, win_size = 5)
repr_paa PAA - Piecewise Aggregate Approximation
Description
The repr_paa computes PAA representation from a vector.
Usage
repr_paa(x, q, func)
Arguments
x the numeric vector (time series)
q the integer of the length of the "piece"
func the aggregation function. Can be meanC, medianC, sumC, minC or maxC or
similar aggregation function
Details
PAA with the possibility to use an arbitrary aggregation function. The original method uses the
average as the aggregation function.
Value
the numeric vector
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, <NAME>, <NAME>, <NAME> (2001) Dimensionality Reduction for Fast Simi-
larity Search in Large Time Series Databases. Knowledge and Information Systems 3(3):263-286
See Also
repr_dwt, repr_dft, repr_dct, repr_sma
Examples
repr_paa(rnorm(11), 2, meanC)
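A quick length check illustrating the Details, assuming the series length is divisible by q (behaviour for non-divisible lengths is not shown here):
# 48 values aggregated in pieces of length 4 should give 12 values
length(repr_paa(rnorm(48), 4, meanC))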
repr_pip PIP representation
Description
The repr_pip computes PIP (Perceptually Important Points) representation from a time series.
Usage
repr_pip(x, times = 10, return = "points")
Arguments
x the numeric vector (time series)
times the number of important points to extract (default 10)
return what to return? Can be important points ("points"), places of important points
in a vector ("places") or "both" (data.frame).
Value
the values based on the argument return (see above)
Author(s)
<NAME>, <<EMAIL>>
References
Fu TC, Chung FL, Luk R, and Ng CM (2008) Representing financial time series based on data point
importance. Engineering Applications of Artificial Intelligence, 21(2):277-300
Examples
repr_pip(rnorm(100), times = 12, return = "both")
repr_pla PLA representation
Description
The repr_pla computes PLA (Piecewise Linear Approximation) representation from a time series.
Usage
repr_pla(x, times = 10, return = "points")
Arguments
x the numeric vector (time series)
times the number of important points to extract (default 10)
return what to return? Can be "points" (segments), places of points (segments) in a
vector ("places") or "both" (data.frame).
Value
the values based on the argument return (see above)
Author(s)
<NAME>, <<EMAIL>>
References
Zhu Y, <NAME>, <NAME> (2007) A Piecewise Linear Representation Method of Time Series Based on
Feature Points. Knowledge-Based Intelligent Information and Engineering Systems 4693:1066-
1072
Examples
repr_pla(rnorm(100), times = 12, return = "both")
repr_sax SAX - Symbolic Aggregate Approximation
Description
The repr_sax creates SAX symbols for a univariate time series.
Usage
repr_sax(x, q = 2, a = 6, eps = 0.01)
Arguments
x the numeric vector (time series)
q the integer of the length of the "piece" in PAA
a the integer of the alphabet size
eps is the minimum threshold for variance in x and should be a numeric value. If x
has a smaller variance than eps, it will be represented as a word using the middle
letter of the alphabet.
Value
the character vector of SAX representation
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, <NAME>, <NAME>, <NAME> (2003) A symbolic representation of time series, with impli-
cations for streaming algorithms. Proceedings of the 8th ACM SIGMOD Workshop on Research
Issues in Data Mining and Knowledge Discovery - DMKD’03
See Also
repr_paa, repr_pla
Examples
x <- rnorm(48)
repr_sax(x, q = 4, a = 5)
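A sketch of the eps behaviour described above: a constant series has (near) zero variance, so it should be represented with the middle letter of the alphabet:
# near-zero variance input
repr_sax(rep(1, 16), q = 4, a = 5)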
repr_seas_profile Mean seasonal profile of time series
Description
The repr_seas_profile computes mean seasonal profile representation from a time series.
Usage
repr_seas_profile(x, freq, func)
Arguments
x the numeric vector (time series)
freq the integer of the length of the season
func the aggregation function. Can be meanC or medianC or similar aggregation
function.
Details
This function computes the mean seasonal profile representation of a seasonal time series. The length
of the representation equals the chosen seasonality (frequency) of the time series. The aggregation
function is arbitrary (mean or median are usually the best choices).
Value
the numeric vector
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, <NAME> (2016) Comparison of representations of time series for clustering smart meter
data. In: Lecture Notes in Engineering and Computer Science: Proceedings of The World Congress
on Engineering and Computer Science 2016, pp 458-463
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Ezzeddine AB (2016) Adaptive
time series forecasting of energy consumption using optimized cluster analysis. In: Data Mining
Workshops (ICDMW), 2016 IEEE 16th International Conference on, IEEE, pp 398-405
<NAME>, <NAME> (2018) Clustering-based forecasting method for individual consumers elec-
tricity load using time series representations. Open Comput Sci, 8(1):38–50, DOI: 10.1515/comp-
2018-0006
See Also
repr_lm, repr_gam, repr_exp
Examples
repr_seas_profile(rnorm(48*10), 48, meanC)
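As stated in the Details, the length of the representation should equal the chosen frequency; a quick check:
# expected to return 48 values for freq = 48
length(repr_seas_profile(rnorm(48*10), 48, meanC))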
repr_sma Simple Moving Average representation
Description
The repr_sma computes Simple Moving Average (SMA) from a time series.
Usage
repr_sma(x, order)
Arguments
x the numeric vector (time series)
order the order of simple moving average
Value
the numeric vector of smoothed values of the length = length(x) - order + 1
Author(s)
<NAME>, <<EMAIL>>
Examples
repr_sma(rnorm(50), 4)
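A quick check of the output length stated in the Value section:
# length(x) - order + 1 = 50 - 4 + 1 = 47 smoothed values
length(repr_sma(rnorm(50), 4))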
repr_windowing Windowing of time series
Description
The repr_windowing computes representations from windows of a vector.
Usage
repr_windowing(x, win_size, func = NULL, args = NULL)
Arguments
x the numeric vector (time series)
win_size the length of the window
func the function for representation computation. For example repr_feaclip or
repr_trend.
args the list of additional arguments to the func (representation computation func-
tion). The args list must be named.
Details
This function applies the specified representation method (function) to every non-overlapping window
(subsequence, piece) of a time series.
Value
the numeric vector
Author(s)
<NAME>, <<EMAIL>>
References
<NAME>, and <NAME> (2018) Interpretable multiple data streams clustering with clipped streams
representation for the improvement of electricity consumption forecasting. Data Mining and Knowl-
edge Discovery. Springer. DOI: 10.1007/s10618-018-0598-2
See Also
repr_paa, repr_matrix
Examples
# func without arguments
repr_windowing(rnorm(48), win_size = 24, func = repr_feaclip)
# func with arguments
repr_windowing(rnorm(48), win_size = 24, func = repr_featrend,
args = list(func = maxC, order = 2, pieces = 2))
rleC RLE (Run Length Encoding) written in C++
Description
The rleC computes RLE from bit-level (clipping or trending representation) vector.
Usage
rleC(x)
Arguments
x the integer vector (from clipping or trending)
Value
the list of values and counts of zeros and ones
Examples
# clipping
clipped <- clipping(rnorm(50))
rleC(clipped)
# trending
trended <- trending(rnorm(50))
rleC(trended)
rmse RMSE
Description
The rmse computes RMSE (Root Mean Squared Error) of a forecast.
Usage
rmse(x, y)
Arguments
x the numeric vector of real values
y the numeric vector of forecasted values
Value
the numeric value
Author(s)
<NAME>, <<EMAIL>>
Examples
rmse(runif(50), runif(50))
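Assuming the usual definition of RMSE (this is a sketch, not taken from the package source), the result should match computing it directly:
# rmse(x, y) should equal sqrt(mean((x - y)^2))
x <- runif(50); y <- runif(50)
sqrt(mean((x - y)^2))
rmse(x, y)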
smape sMAPE
Description
The smape computes sMAPE (Symmetric Mean Absolute Percentage Error) of a forecast.
Usage
smape(x, y)
Arguments
x the numeric vector of real values
y the numeric vector of forecasted values
Value
the numeric value in %
Author(s)
<NAME>, <<EMAIL>>
Examples
smape(runif(50), runif(50))
trending Creates bit-level (trending) representation from a vector
Description
The trending computes bit-level (trending) representation from a vector.
Usage
trending(x)
Arguments
x the numeric vector (time series)
Details
Trending transforms time series to bit-level representation.
It is defined as follows:
repr_t = 1 if x_t − x_{t+1} < 0, and 0 otherwise,
where xt is a value of a time series.
Value
the integer vector of zeros and ones
Author(s)
<NAME>, <<EMAIL>>
See Also
clipping
Examples
trending(rnorm(50))
TSrepr TSrepr package
Description
The package contains methods for computing time series representations. Time series representation
methods are used for dimensionality and noise reduction, for emphasising the main characteristics of
time series data, and for speeding up subsequent use of machine learning methods.
Details
Package: TSrepr
Type: Package
Date: 2018-01-26 - Inf
License: GPL-3
The following functions for time series representations are included in the package:
• repr_paa - Piecewise Aggregate Approximation (PAA)
• repr_dwt - Discrete Wavelet Transform (DWT)
• repr_dft - Discrete Fourier Transform (DFT)
• repr_dct - Discrete Cosine Transform (DCT)
• repr_sma - Simple Moving Average (SMA)
• repr_pip - Perceptually Important Points (PIP)
• repr_sax - Symbolic Aggregate Approximation (SAX)
• repr_pla - Piecewise Linear Approximation (PLA)
• repr_seas_profile - Mean seasonal profile
• repr_lm - Model-based seasonal representations based on linear model (lm, rlm, l1)
• repr_gam - Model-based seasonal representations based on generalized additive model (GAM)
• repr_exp - Exponential smoothing seasonal coefficients
• repr_feaclip - Feature extraction from clipping representation (FeaClip)
• repr_featrend - Feature extraction from trending representation (FeaTrend)
• repr_feacliptrend - Feature extraction from clipping and trending representation (FeaClip-
Trend)
There are also implemented additional useful functions as:
• repr_windowing - applies above mentioned representations to every window of a time series
• repr_matrix - applies above mentioned representations to every row of a matrix of time series
• repr_list - applies above mentioned representations to every member of a list of time series
• norm_z, norm_min_max, norm_boxcox, norm_yj, norm_atan - normalisation functions
• norm_z_params, norm_min_max_params - normalisation functions with defined parameters
• norm_z_list, norm_min_max_list - normalisation functions with output also of scaling param-
eters
• denorm_z, denorm_min_max, denorm_boxcox, denorm_yj, denorm_atan - denormalisation
functions
Author(s)
<NAME>
Maintainer: <NAME> <<EMAIL>> |
myclasp | npm | JavaScript | clasp
===
> Develop [Apps Script](https://developers.google.com/apps-script/) projects locally using clasp (*C*ommand *L*ine *A*pps *S*cript *P*rojects).
**To get started, try out the [codelab](https://g.co/codelabs/clasp)!**
Features
---
**🗺️ Develop Locally:** `clasp` allows you to develop your Apps Script projects locally. That means you can check your code into source control, collaborate with other developers, and use your favorite tools to develop Apps Script.
**🔢 Manage Deployment Versions:** Create, update, and view your multiple deployments of your project.
**📁 Structure Code:** `clasp` automatically converts your flat project on [script.google.com](https://script.google.com) into **folders**. For example:
* *On script.google.com*:
+ `tests/slides.gs`
+ `tests/sheets.gs`
* *locally*:
+ `tests/`
- `slides.js`
- `sheets.js`
**🔷 Write Apps Script in TypeScript:** Write your Apps Script projects using TypeScript features:
* Arrow functions
* Optional structural typing
* Classes
* Type inference
* Interfaces
* [And more...](https://github.com/google/clasp/blob/HEAD/docs/typescript.md)
**➡️ Run Apps Script:** Execute your Apps Script from the command line. Features:
* *Instant* deployment.
* Suggested functions Autocomplete (Fuzzy)
* Easily add custom Google OAuth scopes
* [And more...](https://github.com/google/clasp/blob/HEAD/docs/run.md)
Install
---
First download `clasp`:
```
sudo npm i @google/clasp -g
```
Then enable the Apps Script API: <https://script.google.com/home/usersettings>
(If that fails, run this:)
```
sudo npm i -g grpc @google/clasp --unsafe-perm
```
Commands
---
The following commands provide basic Apps Script project management.
> Note: Most of them require you to `clasp login` and `clasp create/clone` before using the rest of the commands.
```
clasp
```
* [`clasp login [--no-localhost] [--creds <file>]`](#login)
* [`clasp logout`](#logout)
* [`clasp create [--title <title>] [--type <type>] [--rootDir <dir>] [--parentId <id>]`](#create)
* [`clasp clone <scriptId | scriptURL> [versionNumber]`](#clone)
* [`clasp pull [--versionNumber]`](#pull)
* [`clasp push [--watch] [--force]`](#push)
* [`clasp status [--json]`](#status)
* [`clasp open [scriptId] [--webapp] [--creds]`](#open)
* [`clasp deployments`](#deployments)
* [`clasp deploy [--versionNumber <version>] [--description <description>] [--deploymentId <id>]`](#deploy)
* [`clasp undeploy [deploymentId]`](#undeploy)
* [`clasp version [description]`](#version)
* [`clasp versions`](#versions)
* [`clasp list`](#list)
### Advanced Commands
> **NOTE**: These commands require Project ID/credentials setup (see below).
* [`clasp logs [--json] [--open] [--setup] [--watch]`](#logs)
* [`clasp run [functionName] [--nondev]`](#run)
* [`clasp apis list`](#apis)
* [`clasp apis enable <api>`](#apis)
* [`clasp apis disable <api>`](#apis)
* [`clasp setting <key> [value]`](#setting)
Reference
---
### Login
Logs the user in. Saves the client credentials to a `.clasprc.json` file.
#### Options
* `--no-localhost`: Do not run a local server, manually enter code instead.
* `--creds <file>`: Use custom credentials used for `clasp run`. Saves a `.clasprc.json` file to current working directory. This file should be private!
### Logout
Logs out the user by deleting client credentials.
#### Examples
* `clasp logout`
### Create
Creates a new script project. Prompts the user for the script type if not specified.
#### Options
* `--type [docs/sheets/slides/forms]`: If specified, creates a new add-on attached to a Document, Spreadsheet, Presentation, or Form. If `--parentId` is specified, this value is ignored.
* `--title <title>`: A project title.
* `--rootDir <dir>`: Local directory in which clasp will store your project files. If not specified, clasp will default to the current directory.
* `--parentId <id>`: A project parent Id.
+ The Drive ID of a parent file that the created script project is bound to. This is usually the ID of a Google Doc, Google Sheet, Google Form, or Google Slides file. If not set, a standalone script project is created.
+ i.e. `https://docs.google.com/presentation/d/{id}/edit`
#### Examples
* `clasp create`
* `clasp create --type standalone` (default)
* `clasp create --type docs`
* `clasp create --type sheets`
* `clasp create --type slides`
* `clasp create --type forms`
* `clasp create --type webapp`
* `clasp create --type api`
* `clasp create --title "My Script"`
* `clasp create --rootDir ./dist`
* `clasp create --parentId "1D_Gxyv*****************************NXO7o"`
These options can be combined like so:
* `clasp create --title "My Script" --parentId "1D_Gxyv*****************************NXO7o" --rootDir ./dist`
### Clone
Clones the script project from script.google.com.
#### Options
* `scriptId | scriptURL`: The script ID *or* script URL to clone.
* `versionNumber`: The version of the script to clone.
#### Examples
* `clasp clone "1<KEY>"`
* `clasp clone "https://script.google.com/d/15ImUCpyi1Jsd8yF8Z6wey_7cw<KEY>"`
### Pull
Fetches a project from either a provided or saved script ID.
Updates local files with Apps Script project.
#### Options
* `--versionNumber`: The version number of the project to retrieve.
#### Examples
* `clasp pull`
* `clasp pull --versionNumber 23`
### Push
Force writes all local files to script.google.com.
Ignores files:
* That start with a `.`
* That don't have an accepted file extension
* That are ignored (filename matches a glob pattern in the `.claspignore` file)
#### Options
* `-f` `--force`: Forcibly overwrites the remote manifest.
* `-w` `--watch`: Watches local file changes. Pushes files every few seconds.
#### Examples
* `clasp push`
* `clasp push -f`
* `clasp push --watch`
### Status
Lists files that will be written to the server on `push`.
Ignores files:
* That start with a `.`
* That don't have an accepted file extension
* That are ignored (filename matches a glob pattern in the ignore file)
#### Options
* `--json`: Show status in JSON form.
#### Examples
* `clasp status`
* `clasp status --json`
### Open
Opens the current directory's `clasp` project on script.google.com. Provide a `scriptId` to open a different script. Can also open web apps.
#### Options
* `scriptId`: The optional script project to open.
* `--webapp`: open web application in a browser.
* `--creds`: Open the URL to create credentials.
#### Examples
* `clasp open`
* `clasp open [scriptId]`
* `clasp open --webapp`
* `clasp open --creds`
### Deployments
List deployments of a script.
#### Examples
* `clasp deployments`
### Deploy
Creates a version and deploys a script.
The response gives the version of the deployment.
#### Options
* `-V <version>` `--versionNumber <version>`: The project version to deploy at.
* `-d <description>` `--description <description>`: The deployment description.
* `-i <id>` `--deploymentId <id>`: The deployment ID to redeploy.
#### Examples
* `clasp deploy` (create new deployment and new version)
* `clasp deploy --versionNumber 4` (create new deployment)
* `clasp deploy --description "Updates sidebar logo."` (deploy with description)
* `clasp deploy --deploymentId 123` (create new version)
* `clasp deploy -V 7 -d "Updates sidebar logo." -i 456`
### Undeploy
Undeploys a deployment of a script.
#### Options
* `deploymentId`: An optional deployment ID.
#### Examples
* `clasp undeploy` (undeploy the last deployment.)
* `clasp undeploy "123"`
### Version
Creates an immutable version of the script.
#### Options
* `description`: The description of the script version.
#### Examples
* `clasp version`
* `clasp version "Bump the version."`
### Versions
List versions of a script.
#### Examples
* `clasp versions`
### List
Lists your most recent Apps Script projects.
#### Examples
* `clasp list # helloworld1 – xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx ...`
Advanced Commands
---
> **NOTE**: These commands require Project ID/credentials setup (see below).
### Logs
Prints out the most recent *StackDriver logs*. These are logs from `console.log`, not `Logger.log`.
#### Options
* `--json`: Output logs in json format.
* `--open`: Open StackDriver logs in a browser.
* `--setup`: Setup StackDriver logs.
* `--watch`: Retrieves the newest logs every 5 seconds.
#### Examples
```
clasp logs
ERROR Sat Apr 07 2019 10:58:31 GMT-0700 (PDT) myFunction      my log error
INFO  Sat Apr 07 2019 10:58:31 GMT-0700 (PDT) myFunction      info message
```
* `clasp logs --json`
* `clasp logs --open`
* `clasp logs --watch`
### Run
Remotely executes an Apps Script function.
To use this command you must:
1. Log in with your credentials (`clasp login --creds creds.json`)
2. Deploy the Script as an API executable (easiest done via the GUI at the moment).
3. Enable any APIs that are used by the script.
4. Have the following in your `appsscript.json`:
```
"executionApi": { "access": "ANYONE"}
```
#### Options
* `functionName`: The name of the function in the script that you want to run.
* `nondev`: If true, runs the function in non-devMode.
#### Examples
* `clasp run 'sendEmail'`
### List/Enable/Disable Google APIs
List available APIs. Enables and disables Google APIs.
#### List APIs
Lists Google APIs that can be enabled as [Advanced Services](https://developers.google.com/apps-script/guides/services/advanced).
* `clasp apis`
* `clasp apis list`
#### Enable/Disable APIs
Enables or disables APIs with the Google Cloud project. These APIs are used via services like GmailApp and Advanced Services like BigQuery.
The API name can be found using `clasp apis list`.
* `clasp apis enable drive`
* `clasp apis disable drive`
### Help
Displays the help function.
#### Examples
* `clasp help`
### Setting
Update `.clasp.json` settings file.
If `settingKey` is omitted it prints the current settings.
If `newValue` is omitted it returns the current setting value.
#### Options
* `settingKey`: The key in `.clasp.json` you want to change
* `newValue`: The new value for the setting
#### Examples
* `clasp setting`
* `clasp setting scriptId`
* `clasp setting scriptId new-id`
Guides
---
### Ignore File (`.claspignore`)
Like `.gitignore`, `.claspignore` allows you to ignore files that you do not wish to upload on `clasp push`. Steps:
1. Create a file called `.claspignore` in your project's root directory.
2. Add patterns to be excluded from `clasp push`. *Note*: The `.claspignore` file is parsed with [Anymatch](https://github.com/micromatch/anymatch), which is different from `.gitignore`, especially for directories. To ignore a directory, use syntax like `**/node_modules/**`.
A sample `.claspignore` ignoring everything except the manifest and `build/main.js`:
```
**/**
!build/main.js
!appsscript.json
```
Project Settings File (`.clasp.json`)
---
When running `clone` or `create`, a file named `.clasp.json` is created in the current directory to describe `clasp`'s configuration for the current project. Example `.clasp.json`:
```
{ "scriptId": "", "rootDir": "build/", "projectId": "project-id-xxxxxxxxxxxxxxxxxxx", "fileExtension": "ts", "filePushOrder": ["file1.ts", "file2.ts"]}
```
The following configuration values can be used:
### `scriptId` (required)
Specifies the id of the Google Script project that clasp will target. It is the part located between `/d/` and `/edit` in your project's URL: `https://script.google.com/d/<SCRIPT_ID>/edit`.
### `rootDir` (optional)
Specifies the **local** directory in which clasp will store your project files. If not specified, clasp will default to the current directory.
### `projectId` (optional)
Specifies the id of the Google Cloud Platform project that clasp will target.
The Google Script project is associated with the Google Cloud Platform.
1. Run `clasp open`.
2. Click `Resources > Cloud Platform project...`.
3. Specify the project ID `project-id-xxxxxxxxxxxxxxxxxxx`.
Even if you do not set this manually, clasp will prompt you for it when required.
### `fileExtension` (optional)
Specifies the file extension for **local** script files in your Apps Script project.
### `filePushOrder` (optional)
Specifies the files that should be pushed first, useful for scripts that rely on order of execution. All other files are pushed after this list of files.
Troubleshooting
---
The library requires **Node version >= 6.0.0**. Use this script to check your version and **upgrade Node if necessary**:
```
node -v # Check Node version
sudo npm install n -g
sudo n latest
```
README Badge
---
Using clasp for your project? Add a README badge to show it off:
```
[![clasp](https://img.shields.io/badge/built%20with-clasp-4285f4.svg)](https://github.com/google/clasp)
```
Develop clasp
---
See [the develop guide](https://github.com/google/clasp/blob/HEAD/docs/develop.md) for instructions on how to build `clasp`. It's not that hard!
Contributing
---
The main purpose of this tool is to enable local Apps Script development.
If you have a core feature or use-case you'd like to see, find a GitHub issue or create a detailed proposal of the use-case.
PRs are very welcome! See the [issues](https://github.com/google/clasp/issues) (especially **good first issue** and **help wanted**).
### How to Submit a Pull Request
1. Look over the test cases in `tests/test.ts`, try cases that the PR may affect.
2. Run [tslint](https://palantir.github.io/tslint/): `npm run lint`.
3. Submit a pull request after testing your feature to make sure it works.
⚡ Powered by the [Apps Script API](https://developers.google.com/apps-script/api/).
Readme
---
### Keywords
* Apps
* Script
* SDK
* API
* script.google.com
* extension
* add-on |
@fiori-for-react/utils | npm | JavaScript | @fiori-for-react/utils
===
Helper Utils for fiori-for-react
Installation
---
```
yarn add @fiori-for-react/utils
OR
npm install @fiori-for-react/utils --save
```
Modules
---
###
StyleClassHelper
Concatenate multiple CSS Modules into an instance of this helper class and apply them to a React component.
Example:
```
import { StyleClassHelper } from '@fiori-for-react/utils';
import style from 'YOUR_STYLESHEET';
const classes = StyleClassHelper.of(style.text);
classes.put(style.anotherClass);
classes.put(style.thirdClass);
const MyComponent = props => (
<div className={classes}>
My Component
</div>
);
export default MyComponent;
```
Contribute
---
Please check our [Contribution Guidelines](https://github.com/SAP/fiori-for-react/blob/master/CONTRIBUTING.md).
License
---
Copyright (c) 2019 SAP SE or an SAP affiliate company. All rights reserved.
This file is licensed under the Apache Software License, Version 2.0 except as noted otherwise in the [LICENSE](https://github.com/SAP/fiori-for-react/blob/master/LICENSE) file.
Readme
---
### Keywords
none |
github.com/profclems/glab | go | Go | README
[¶](#section-readme)
---
### GLab
![GLab](https://user-images.githubusercontent.com/9063085/90530075-d7a58580-e14a-11ea-9727-4f592f7dcf2e.png)
[![Go Report Card](https://goreportcard.com/badge/github.com/profclems/glab)](https://goreportcard.com/report/github.com/profclems/glab)
[![Gitter](https://badges.gitter.im/glabcli/community.svg)](https://gitter.im/glabcli/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
[![Reddit](https://img.shields.io/reddit/subreddit-subscribers/glab_cli?style=social)](https://reddit.com/r/glab_cli)
[![Twitter Follow](https://img.shields.io/twitter/follow/glab_cli?style=social)](https://twitter.com/glab_cli)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go#version-control)
GLab is an open source GitLab CLI tool bringing GitLab to your terminal next to where you are already working with `git` and your code without switching between windows and browser tabs. Work with issues, merge requests, **watch running pipelines directly from your CLI** among other features.
Inspired by [gh](https://github.com/cli/cli), the official GitHub CLI tool.
`glab` is available for repositories hosted on GitLab.com and self-hosted GitLab Instances. `glab` supports multiple authenticated GitLab instances and automatically detects the authenticated hostname from the remotes available in the working git directory.
![image](https://user-images.githubusercontent.com/41906128/88968573-0b556400-d29f-11ea-8504-8ecd9c292263.png)
### Table of Contents
* [Usage](#readme-usage)
* [Demo](#readme-demo)
* [Documentation](#readme-documentation)
* [Installation](#readme-installation)
+ [Quick Install (Bash)](#readme-quick-install-bash)
+ [Windows](#readme-windows)
- [WinGet](#readme-winget)
- [Scoop](#readme-scoop)
- [EXE Installer](#readme-exe-installer)
+ [Linux](#readme-linux)
- [Linuxbrew (Homebrew)](#readme-linuxbrew-homebrew)
- [Snapcraft](#readme-snapcraft)
- [Arch Linux](#readme-arch-linux)
- [KISS Linux](#readme-kiss-linux)
- [Alpine Linux](#readme-alpine-linux)
* [Install a pinned version from edge](#readme-install-a-pinned-version-from-edge)
* [Alpine Linux Docker-way](#readme-alpine-linux-docker-way)
- [Nix/NixOS](#readme-nixnixos)
+ [macOS](#readme-macos)
- [Homebrew](#readme-homebrew)
- [MacPorts](#readme-macports)
+ [Building From Source](#readme-building-from-source)
- [Prerequisites](#readme-prerequisites-for-building-from-source-are)
* [Authentication](#readme-authentication)
* [Configuration](#readme-configuration)
* [Environment Variables](#readme-environment-variables)
* [What about lab](#readme-what-about-lab)
* [Issues](#readme-issues)
* [Contributing](#readme-contributing)
+ [Support glab 💖](#readme-support-glab-)
- [Individuals](#readme-individuals)
- [Backers](#readme-backers)
* [License](#readme-license)
#### Usage
```
glab <command> <subcommand> [flags]
```
#### Demo
[![asciicast](https://asciinema.org/a/368622.svg)](https://asciinema.org/a/368622)
#### Documentation
Read the [documentation](https://glab.readthedocs.io/) for usage instructions.
#### Installation
Download a binary suitable for your OS at the [releases page](https://github.com/profclems/glab/releases/latest).
##### Quick Install
**Supported Platforms**: Linux and macOS
###### Homebrew
```
brew install glab
```
Updating (Homebrew):
```
brew upgrade glab
```
Alternatively, you can install `glab` by shell script:
```
curl -sL https://j.mp/glab-cli | sudo sh
```
or
```
curl -s https://raw.githubusercontent.com/profclems/glab/trunk/scripts/install.sh | sudo sh
```
*Installs into `/usr/bin`*
**NOTE**: Please take care when running scripts in this fashion. Consider peeking at the install script itself and verify that it works as intended.
##### Windows
Available for download via [WinGet](https://github.com/microsoft/winget-cli), [scoop](https://scoop.sh), or downloadable EXE installer file.
###### WinGet
```
winget install glab
```
Updating (WinGet):
```
winget install glab
```
###### Scoop
```
scoop install glab
```
Updating (Scoop):
```
scoop update glab
```
###### EXE Installer
EXE installers are available for download on the [releases page](https://github.com/profclems/glab/releases/latest).
##### Linux
Prebuilt binaries available at the [releases page](https://github.com/profclems/glab/releases/latest).
###### Linuxbrew (Homebrew)
```
brew install glab
```
Updating (Homebrew):
```
brew upgrade glab
```
###### Snapcraft
[![Get it from the Snap Store](https://snapcraft.io/static/images/badges/en/snap-store-black.svg)](https://snapcraft.io/glab)
Make sure you have [snap installed on your Linux Distro](https://snapcraft.io/docs/installing-snapd).
1. `sudo snap install --edge glab`
2. `sudo snap connect glab:ssh-keys` to grant ssh access
###### Arch Linux
`glab` is available through the [community/glab](https://archlinux.org/packages/community/x86_64/glab/) package or download and install an archive from the [releases page](https://github.com/profclems/glab/releases/latest). Arch Linux also supports [snap](https://snapcraft.io/docs/installing-snap-on-arch-linux).
```
yay -Sy glab
```
or any other [AUR helper](https://wiki.archlinux.org/index.php/AUR_helpers) of your choice.
###### KISS Linux
> WARNING: It seems that KISS Linux may no longer be actively maintained, so links to its web domain have been removed from this README.
`glab` is available on the [KISS Linux Community Repo](https://github.com/kisslinux/community) as `gitlab-glab`.
If you already have the community repo configured in your `KISS_PATH` you can install `glab` through your terminal.
```
kiss b gitlab-glab && kiss i gitlab-glab
```
###### Alpine Linux
`glab` is available on the [Alpine Community Repo](https://git.alpinelinux.org/aports/tree/community/glab?h=master) as `glab`.
Install
We use `--no-cache` so we don't need to do an `apk update` before.
```
apk add --no-cache glab
```
Install a pinned version from edge
To ensure that edge is used by default to get the latest updates, we need the edge repository under `/etc/apk/repositories`. Afterwards you can install it with `apk add --no-cache glab@edge`.
We use `--no-cache` so we don't need to do an `apk update` before.
```
echo "@edge http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories apk add --no-cache glab@edge
```
Alpine Linux Docker-way
Use edge directly
```
FROM alpine:3.13
RUN apk add --no-cache glab
```
Fetching latest glab version from edge
```
FROM alpine:3.13
RUN echo "@edge http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories
RUN apk add --no-cache glab@edge
```
##### Nix/NixOS
Nix/NixOS users can install from [nixpkgs](https://search.nixos.org/packages?channel=unstable&show=glab&from=0&size=30&sort=relevance&query=glab):
```
nix-env -iA nixos.glab
```
##### macOS
###### Homebrew
`glab` is available via [Homebrew](https://formulae.brew.sh/formula/glab)
```
brew install glab
```
Updating:
```
brew upgrade glab
```
###### MacPorts
`glab` is also available via [MacPorts](https://ports.macports.org/port/glab/summary)
```
sudo port install glab
```
Updating:
```
sudo port selfupdate && sudo port upgrade glab
```
##### Building From Source
If a supported binary for your OS is not found at the [releases page](https://github.com/profclems/glab/releases/latest), you can build from source:
###### Prerequisites for building from source
* `make`
* Go 1.13+
1. Verify that you have Go 1.13+ installed
```
$ go version
go version go1.14
```
If `go` is not installed, follow instructions on [the Go website](https://golang.org/doc/install).
2. Clone this repository
```
git clone https://github.com/profclems/glab.git
cd glab
```
If you have $GOPATH/bin or $GOBIN in your $PATH, you can just install with `make install` (install glab in $GOPATH/bin) and **skip steps 3 and 4**.
3. Build the project
```
make
```
4. Change PATH to find newly compiled `glab`
```
export PATH=$PWD/bin:$PATH
```
5. Run `glab version` to confirm that it worked
#### Authentication
Get a GitLab access token at <https://gitlab.com/-/profile/personal_access_tokens> or <https://gitlab.example.com/-/profile/personal_access_tokens> if self-hosted
* start interactive setup
```
glab auth login
```
* authenticate against gitlab.com by reading the token from a file
```
glab auth login --stdin < myaccesstoken.txt
```
* authenticate against a self-hosted GitLab instance by reading from a file
```
glab auth login --hostname salsa.debian.org --stdin < myaccesstoken.txt
```
* authenticate with token and hostname (Not recommended for shared environments)
```
glab auth login --hostname gitlab.example.org --token xxxxx
```
#### Configuration
By default, `glab` follows the XDG Base Directory [Spec](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html): global configuration file is saved at `~/.config/glab-cli`. Local configuration file is saved at `.git/glab-cli` in the current working git directory. Advanced workflows may override the location of the global configuration by setting the `GLAB_CONFIG_DIR` environment variable.
**To set configuration globally**
```
glab config set --global editor vim
```
**To set configuration for current directory (must be a git repository)**
```
glab config set editor vim
```
**To set configuration for a specific host**
Use the `--host` flag to set configuration for a specific host. This is always stored in the global config file with or without the `global` flag.
```
glab config set editor vim --host gitlab.example.org
```
#### Environment Variables
```
GITLAB_TOKEN: an authentication token for API requests. Setting this avoids being prompted to authenticate and overrides any previously stored credentials.
Can be set in the config with 'glab config set token xxxxxx'
GITLAB_URI or GITLAB_HOST: specify the url of the gitlab server if self hosted (eg: https://gitlab.example.com). Default is https://gitlab.com.
GITLAB_API_HOST: specify the host where the API endpoint is found. Useful when there are separate [sub]domains or hosts for git and the API endpoint: defaults to the hostname found in the git URL
REMOTE_ALIAS or GIT_REMOTE_URL_VAR: git remote variable or alias that contains the gitlab url.
Can be set in the config with 'glab config set remote_alias origin'
VISUAL, EDITOR (in order of precedence): the editor tool to use for authoring text.
Can be set in the config with 'glab config set editor vim'
BROWSER: the web browser to use for opening links.
Can be set in the config with 'glab config set browser mybrowser'
GLAMOUR_STYLE: environment variable to set your desired markdown renderer style Available options are (dark|light|notty) or set a custom style https://github.com/charmbracelet/glamour#styles
NO_COLOR: set to any value to avoid printing ANSI escape sequences for color output.
FORCE_HYPERLINKS: set to 1 to force hyperlinks to be output, even when not outputting to a TTY
```
#### What about [Lab](https://github.com/zaquestion/lab)?
Both `glab` and [lab](https://github.com/zaquestion/lab) are open-source tools with the same goal of bringing GitLab to your command line and simplifying the developer workflow. In many ways, `lab` is to [hub](https://github.com/github/hub) what `glab` is to [gh](https://github.com/cli/cli).
If you want a tool that’s more opinionated and intended to help simplify your GitLab workflows from the command line, then `glab` is for you. However, if you're looking for a tool like [hub](https://github.com/github/hub) that feels like using git and allows you to interact with GitLab, you might consider using [lab](https://github.com/zaquestion/lab).
Some `glab` commands such as `ci view` and `ci trace` were adopted from [lab](https://github.com/zaquestion/lab).
#### Issues
If you have an issue: report it on the [issue tracker](https://github.com/profclems/glab/issues)
#### Contributing
Feel like contributing? That's awesome! We have a [contributing guide](https://github.com/profclems/glab/blob/trunk/CONTRIBUTING.md) and [Code of conduct](https://github.com/profclems/glab/blob/trunk/CODE_OF_CONDUCT.md) to help guide you
##### Contributors
###### Individuals
This project exists thanks to all the people who contribute. [[Contribute](https://github.com/profclems/glab/blob/trunk/.github/CONTRIBUTING.md)].
[![](https://opencollective.com/glab/contributors.svg?width=890)](https://opencollective.com/glab/contribute)
###### Organizations
[![Fosshost.org](https://fosshost.org/img/fosshost-logo.png)](https://fosshost.org)
#### License
Copyright © [<NAME>](https://clementsam.tech)
`glab` is open-sourced software licensed under the [MIT](https://github.com/profclems/glab/blob/v1.22.0/LICENSE) license.
None |
strum | rust | Rust | Crate strum
===
Strum
---
![Build Status](https://travis-ci.org/Peternator7/strum.svg?branch=master)
![Latest Version](https://img.shields.io/crates/v/strum.svg)
![Rust Documentation](https://docs.rs/strum/badge.svg)
Strum is a set of macros and traits for working with enums and strings easier in Rust.
The full version of the README can be found on GitHub.
Including Strum in Your Project
---
Import strum and `strum_macros` into your project by adding the following lines to your Cargo.toml. `strum_macros` contains the macros needed to derive all the traits in Strum.
```
[dependencies]
strum = "0.25"
strum_macros = "0.25"
# You can also access strum_macros exports directly through strum using the "derive" feature
strum = { version = "0.25", features = ["derive"] }
```
Modules
---
* `additional_attributes`: Documentation for Additional Attributes
Enums
---
* `ParseError`: The `ParseError` enum is a collection of all the possible reasons an enum can fail to parse from a string.
Traits
---
* `AsStaticRef` (deprecated): A cheap reference-to-reference conversion. Used to convert a value to a reference value with `'static` lifetime within generic code.
* `EnumCount`: A trait for capturing the number of variants in an enum. This trait can be autoderived by `strum_macros`.
* `EnumMessage`: Associates additional pieces of information with an enum. This can be autoimplemented by deriving `EnumMessage` and annotating your variants with `#[strum(message="...")]`.
* `EnumProperty`: A trait that makes it possible to store additional information with enum variants. This trait is designed to be used with the macro of the same name in the `strum_macros` crate. Currently, only string literals are supported in attributes; the other methods will be implemented as additional attribute types become stabilized.
* `IntoEnumIterator`: This trait designates that an `Enum` can be iterated over. It can be auto generated using `strum_macros` on your behalf.
* `VariantIterator`
* `VariantMetadata`
* `VariantNames`: A trait for retrieving the names of each variant in an enum. This trait can be autoderived by `strum_macros`.
Derive Macros
---
* `AsRefStr` (`derive`): Converts enum variants to `&'static str`.
* `AsStaticStr` (`derive`)
* `Display` (`derive`): Converts enum variants to strings.
* `EnumCount` (`derive`): Adds a constant `usize` equal to the number of variants.
* `EnumDiscriminants` (`derive`): Generates a new type with only the discriminant names.
* `EnumIs`: Generates `is_*()` methods for each variant, e.g. `Color.is_red()`.
* `EnumIter` (`derive`): Creates a new type that iterates over the variants of an enum.
* `EnumMessage` (`derive`): Adds a verbose message to an enum variant.
* `EnumProperty` (`derive`): Adds custom properties to enum variants.
* `EnumString` (`derive`): Converts strings to enum variants based on their name.
* `EnumVariantNames` (`derive`): Implements `Strum::VariantNames`, which adds an associated constant `VARIANTS` that is an array of discriminant names.
* `FromRepr` (`derive`): Adds a function to an enum that allows accessing variants by their discriminant.
* `IntoStaticStr` (`derive`): Implements `From<MyEnum> for &'static str` on an enum.
* `ToString` (`derive`): Implements `std::string::ToString` on an enum.
Module strum::additional_attributes
===
Documentation for Additional Attributes
---
### Attributes on Enums
Strum supports several custom attributes to modify the generated code. At the enum level, the following attributes are supported:
* `#[strum(serialize_all = "case_style")]` attribute can be used to change the case used when serializing to and deserializing from strings. This feature is enabled by withoutboats/heck and supported case styles are:
+ `camelCase`
+ `PascalCase`
+ `kebab-case`
+ `snake_case`
+ `SCREAMING_SNAKE_CASE`
+ `SCREAMING-KEBAB-CASE`
+ `lowercase`
+ `UPPERCASE`
+ `title_case`
+ `mixed_case`
+ `Train-Case`
```
use strum_macros;
#[derive(Debug, Eq, PartialEq, strum_macros::Display)]
#[strum(serialize_all = "snake_case")]
enum Brightness {
DarkBlack,
Dim {
glow: usize,
},
#[strum(serialize = "bright")]
BrightWhite,
}
assert_eq!(
String::from("dark_black"),
Brightness::DarkBlack.to_string().as_ref()
);
assert_eq!(
String::from("dim"),
Brightness::Dim { glow: 0 }.to_string().as_ref()
);
assert_eq!(
String::from("bright"),
Brightness::BrightWhite.to_string().as_ref()
);
```
* You can also apply the `#[strum(ascii_case_insensitive)]` attribute to the enum,
and this has the same effect of applying it to every variant.
### Attributes on Variants
Custom attributes are applied to a variant by adding `#[strum(parameter="value")]` to the variant.
* `serialize="..."`: Changes the text that `FromStr()` looks for when parsing a string. This attribute can be applied multiple times to an element and the enum variant will be parsed if any of them match.
* `to_string="..."`: Similar to `serialize`. This value will be included when using `FromStr()`. More importantly,
this specifies what text to use when calling `variant.to_string()` with the `Display` derivation, or when calling `variant.as_ref()` with `AsRefStr`.
* `default`: Applied to a single variant of an enum. The variant must be a tuple-like variant with a single piece of data that can be created from a `&str`, i.e. `T: From<&str>`.
The generated code will now return the variant with the input string captured, as shown below, instead of failing.
```
// Replaces this:
_ => Err(strum::ParseError::VariantNotFound)
// With this in generated code:
default => Ok(Variant(default.into()))
```
The plugin will fail if the data doesn’t implement From<&str>. You can only have one `default`
on your enum.
* `disabled`: removes variant from generated code.
* `ascii_case_insensitive`: makes the comparison to this variant case insensitive (ASCII only).
If the whole enum is marked `ascii_case_insensitive`, you can specify `ascii_case_insensitive = false`
to disable case insensitivity on this variant.
* `message=".."`: Adds a message to enum variant. This is used in conjunction with the `EnumMessage`
trait to associate a message with a variant. If `detailed_message` is not provided,
then `message` will also be returned when `get_detailed_message` is called.
* `detailed_message=".."`: Adds a more detailed message to a variant. If this value is omitted, then
`message` will be used in its place.
* Structured documentation, as in `/// ...`: If using `EnumMessage`, it is accessible via `get_documentation()`.
* `props(key="value")`: Enables associating additional information with a given variant.
Enum strum::ParseError
===
```
pub enum ParseError {
VariantNotFound,
}
```
The `ParseError` enum is a collection of all the possible reasons an enum can fail to parse from a string.
Variants
---
### VariantNotFound
Trait Implementations
---
### impl Clone for ParseError
#### fn clone(&self) -> ParseError
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ParseError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ParseError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Error for ParseError
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`) Provides type based access to context intended for error reports.
### impl Hash for ParseError
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized
Feeds a slice of this type into the given `Hasher`.
### impl PartialEq for ParseError
#### fn eq(&self, other: &ParseError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Copy for ParseError
### impl Eq for ParseError
### impl StructuralEq for ParseError
### impl StructuralPartialEq for ParseError
Auto Trait Implementations
---
### impl RefUnwindSafe for ParseError
### impl Send for ParseError
### impl Sync for ParseError
### impl Unpin for ParseError
### impl UnwindSafe for ParseError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<E> Provider for E where E: Error + ?Sized
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`provide_any`) Data providers should implement this method to provide *all* values they are able to provide by using `demand`.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T> ToString for T where T: Display + ?Sized
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Trait strum::AsStaticRef
===
```
pub trait AsStaticRef<T>where
T: ?Sized,{
// Required method
fn as_static(&self) -> &'static T;
}
```
👎Deprecated since 0.22.0: please use `#[derive(IntoStaticStr)]` instead.
A cheap reference-to-reference conversion. Used to convert a value to a reference value with `'static` lifetime within generic code.
Required Methods
---
#### fn as_static(&self) -> &'static T
👎Deprecated since 0.22.0: please use `#[derive(IntoStaticStr)]` instead.
Implementors
---
Trait strum::EnumCount
===
```
pub trait EnumCount {
const COUNT: usize;
}
```
A trait for capturing the number of variants in Enum. This trait can be autoderived by
`strum_macros`.
Required Associated Constants
---
#### const COUNT: usize
Implementors
---
Trait strum::EnumMessage
===
```
pub trait EnumMessage {
// Required methods
fn get_message(&self) -> Option<&'static str>;
fn get_detailed_message(&self) -> Option<&'static str>;
fn get_documentation(&self) -> Option<&'static str>;
fn get_serializations(&self) -> &'static [&'static str];
}
```
Associates additional pieces of information with an Enum. This can be autoimplemented by deriving `EnumMessage` and annotating your variants with
`#[strum(message="...")]`.
Example
---
```
// You need to bring the type into scope to use it!!!
use strum::EnumMessage;
#[derive(PartialEq, Eq, Debug, EnumMessage)]
enum Pet {
#[strum(message="I have a dog")]
#[strum(detailed_message="My dog's name is Spots")]
Dog,
/// I am documented.
#[strum(message="I don't have a cat")]
Cat,
}
let my_pet = Pet::Dog;
assert_eq!("I have a dog", my_pet.get_message().unwrap());
```
Required Methods
---
#### fn get_message(&self) -> Option<&'static str>
#### fn get_detailed_message(&self) -> Option<&'static str>
#### fn get_documentation(&self) -> Option<&'static str>
Get the doc comment associated with a variant if it exists.
#### fn get_serializations(&self) -> &'static [&'static str]
Implementors
---
Trait strum::EnumProperty
===
```
pub trait EnumProperty {
// Required method
fn get_str(&self, prop: &str) -> Option<&'static str>;
// Provided methods
fn get_int(&self, _prop: &str) -> Option<usize> { ... }
fn get_bool(&self, _prop: &str) -> Option<bool> { ... }
}
```
`EnumProperty` is a trait that makes it possible to store additional information with enum variants. This trait is designed to be used with the macro of the same name in the `strum_macros` crate. Currently, the only string literals are supported in attributes, the other methods will be implemented as additional attribute types become stabilized.
Example
---
```
// You need to bring the type into scope to use it!!!
use strum::EnumProperty;
#[derive(PartialEq, Eq, Debug, EnumProperty)]
enum Class {
#[strum(props(Teacher="Ms.Frizzle", Room="201"))]
History,
#[strum(props(Teacher="Mr.Smith"))]
#[strum(props(Room="103"))]
Mathematics,
#[strum(props(Time="2:30"))]
Science,
}
let history = Class::History;
assert_eq!("Ms.Frizzle", history.get_str("Teacher").unwrap());
```
Required Methods
---
#### fn get_str(&self, prop: &str) -> Option<&'static str>
Provided Methods
---
#### fn get_int(&self, _prop: &str) -> Option<usize>
#### fn get_bool(&self, _prop: &str) -> Option<bool>
Implementors
---
Trait strum::IntoEnumIterator
===
```
pub trait IntoEnumIterator: Sized {
type Iterator: Iterator<Item = Self>;
// Required method
fn iter() -> Self::Iterator;
}
```
This trait designates that an `Enum` can be iterated over. It can be auto generated using `strum_macros` on your behalf.
Example
---
```
// You need to bring the type into scope to use it!!!
use strum::{EnumIter, IntoEnumIterator};
#[derive(EnumIter, Debug)]
enum Color {
Red,
Green { range: usize },
Blue(usize),
Yellow,
}
// Iterate over the items in an enum and perform some function on them.
fn generic_iterator<E, F>(pred: F)
where
E: IntoEnumIterator,
F: Fn(E),
{
for e in E::iter() {
pred(e)
}
}
generic_iterator::<Color, _>(|color| println!("{:?}", color));
```
Required Associated Types
---
#### type Iterator: Iterator<Item = Self>
Required Methods
---
#### fn iter() -> Self::Iterator
Implementors
---
Trait strum::VariantNames
===
```
pub trait VariantNames {
const VARIANTS: &'static [&'static str];
}
```
A trait for retrieving the names of each variant in Enum. This trait can be autoderived by `strum_macros`.
Required Associated Constants
---
#### const VARIANTS: &'static [&'static str]
Names of the variants of this enum
Implementors
---
Derive Macro strum::AsRefStr
===
```
#[derive(AsRefStr)]
{
// Attributes available to this derive:
#[strum]
}
```
Available on **crate feature `derive`** only.
Converts enum variants to `&'static str`.
Implements `AsRef<str>` on your enum using the same rules as
`Display` for determining what string is returned. The difference is that `as_ref()` returns a `&str` instead of a `String` so you don’t allocate any additional memory with each call.
```
// You need to bring the AsRef trait into scope to use it
use std::convert::AsRef;
use strum_macros::AsRefStr;
#[derive(AsRefStr, Debug)]
enum Color {
#[strum(serialize = "redred")]
Red,
Green {
range: usize,
},
Blue(usize),
Yellow,
}
// uses the serialize string for Display
let red = Color::Red;
assert_eq!("redred", red.as_ref());
// by default the variants Name
let yellow = Color::Yellow;
assert_eq!("Yellow", yellow.as_ref());
// or for string formatting
println!(
"blue: {} green: {}",
Color::Blue(10).as_ref(),
Color::Green { range: 42 }.as_ref()
);
```
Derive Macro strum::Display
===
```
#[derive(Display)]
{
// Attributes available to this derive:
#[strum]
}
```
Available on **crate feature `derive`** only.
Converts enum variants to strings.
Deriving `Display` on an enum prints out the given enum. This enables you to perform round-trip style conversions from enum into string and back again for unit style variants. `Display`
chooses which serialization to use based on the following criteria:
1. If there is a `to_string` property, this value will be used. There can only be one per variant.
2. Of the various `serialize` properties, the value with the longest length is chosen. If that behavior isn’t desired, you should use `to_string`.
3. The name of the variant will be used if there are no `serialize` or `to_string` attributes.
```
// You need to bring the ToString trait into scope to use it
use std::string::ToString;
use strum_macros::Display;
#[derive(Display, Debug)]
enum Color {
#[strum(serialize = "redred")]
Red,
Green {
range: usize,
},
Blue(usize),
Yellow,
}
// uses the serialize string for Display
let red = Color::Red;
assert_eq!(String::from("redred"), format!("{}", red));
// by default the variants Name
let yellow = Color::Yellow;
assert_eq!(String::from("Yellow"), yellow.to_string());
// or for string formatting
println!(
"blue: {} green: {}",
Color::Blue(10),
Color::Green { range: 42 }
);
```
Derive Macro strum::EnumCount
===
```
#[derive(EnumCount)]
{
// Attributes available to this derive:
#[strum]
}
```
Available on **crate feature `derive`** only.
Add a constant `usize` equal to the number of variants.
For a given enum generates implementation of `strum::EnumCount`,
which adds a static property `COUNT` of type usize that holds the number of variants.
```
use strum::{EnumCount, IntoEnumIterator};
use strum_macros::{EnumCount as EnumCountMacro, EnumIter};
#[derive(Debug, EnumCountMacro, EnumIter)]
enum Week {
Sunday,
Monday,
Tuesday,
Wednesday,
Thursday,
Friday,
Saturday,
}
assert_eq!(7, Week::COUNT);
assert_eq!(Week::iter().count(), Week::COUNT);
```
Derive Macro strum::EnumDiscriminants
===
```
#[derive(EnumDiscriminants)]
{
// Attributes available to this derive:
#[strum]
#[strum_discriminants]
}
```
Available on **crate feature `derive`** only.
Generate a new type with only the discriminant names.
Given an enum named `MyEnum`, generates another enum called `MyEnumDiscriminants` with the same variants but without any data fields. This is useful when you wish to determine the variant of an `enum` but one or more of the variants contains a non-`Default` field. `From`
implementations are generated so that you can easily convert from `MyEnum` to
`MyEnumDiscriminants`.
By default, the generated enum has the following derives: `Clone, Copy, Debug, PartialEq, Eq`.
You can add additional derives using the `#[strum_discriminants(derive(AdditionalDerive))]`
attribute.
Note: the variant attributes passed to the discriminant enum are filtered to avoid compilation errors due to derive mismatches; thus only `#[doc]`, `#[cfg]`, `#[allow]`, and `#[deny]`
are passed through by default. If you want to specify a custom attribute on the discriminant variant, wrap it with the `#[strum_discriminants(...)]` attribute.
```
// Bring trait into scope
use std::str::FromStr;
use strum::{IntoEnumIterator, EnumMessage};
use strum_macros::{EnumDiscriminants, EnumIter, EnumString, EnumMessage};
#[derive(Debug)]
struct NonDefault;
// simple example
#[derive(Debug, EnumDiscriminants)]
#[strum_discriminants(derive(EnumString, EnumMessage))]
enum MyEnum {
#[strum_discriminants(strum(message = "Variant zero"))]
Variant0(NonDefault),
Variant1 { a: NonDefault },
}
// You can rename the generated enum using the `#[strum_discriminants(name(OtherName))]` attribute:
#[derive(Debug, EnumDiscriminants)]
#[strum_discriminants(derive(EnumIter))]
#[strum_discriminants(name(MyVariants))]
enum MyEnumR {
Variant0(bool),
Variant1 { a: bool },
}
// test simple example
assert_eq!(
MyEnumDiscriminants::Variant0,
MyEnumDiscriminants::from_str("Variant0").unwrap()
);
// test rename example combined with EnumIter
assert_eq!(
vec![MyVariants::Variant0, MyVariants::Variant1],
MyVariants::iter().collect::<Vec<_>>()
);
// Make use of the auto-From conversion to check whether an instance of `MyEnum` matches a
// `MyEnumDiscriminants` discriminant.
assert_eq!(
MyEnumDiscriminants::Variant0,
MyEnum::Variant0(NonDefault).into()
);
assert_eq!(
MyEnumDiscriminants::Variant0,
MyEnumDiscriminants::from(MyEnum::Variant0(NonDefault))
);
// Make use of the EnumMessage on the `MyEnumDiscriminants` discriminant.
assert_eq!(
MyEnumDiscriminants::Variant0.get_message(),
Some("Variant zero")
);
```
It is also possible to specify the visibility (e.g. `pub`/`pub(crate)`/etc.)
of the generated enum. By default, the generated enum inherits the visibility of the parent enum it was generated from.
```
use strum_macros::EnumDiscriminants;
// You can set the visibility of the generated enum using the `#[strum_discriminants(vis(..))]` attribute:
mod inner {
use strum_macros::EnumDiscriminants;
# #[allow(dead_code)]
#[derive(Debug, EnumDiscriminants)]
#[strum_discriminants(vis(pub))]
#[strum_discriminants(name(PubDiscriminants))]
enum PrivateEnum {
Variant0(bool),
Variant1 { a: bool },
}
}
// test visibility example, `PrivateEnum` should not be accessible here
assert_ne!(
inner::PubDiscriminants::Variant0,
inner::PubDiscriminants::Variant1,
);
```
Derive Macro strum::EnumIs
===
```
#[derive(EnumIs)]
{
// Attributes available to this derive:
#[strum]
}
```
Generates `is_*()` methods for each variant.
E.g. `Color.is_red()`.
```
use strum_macros::EnumIs;
#[derive(EnumIs, Debug)]
enum Color {
Red,
Green { range: usize },
}
assert!(Color::Red.is_red());
assert!(Color::Green{range: 0}.is_green());
```
Derive Macro strum::EnumIter
===
```
#[derive(EnumIter)]
{
// Attributes available to this derive:
#[strum]
}
```
Available on **crate feature `derive`** only.
Creates a new type that iterates over the variants of an enum.
Iterate over the variants of an Enum. Any additional data on your variants will be set to `Default::default()`.
The macro implements `strum::IntoEnumIterator` on your enum and creates a new type called `YourEnumIter` that is the iterator object.
You cannot derive `EnumIter` on any type with a lifetime bound (`<'a>`) because the iterator would surely create unbounded lifetimes.
```
// You need to bring the trait into scope to use it!
use strum::IntoEnumIterator;
use strum_macros::EnumIter;
#[derive(EnumIter, Debug, PartialEq)]
enum Color {
Red,
Green { range: usize },
Blue(usize),
Yellow,
}
// It's simple to iterate over the variants of an enum.
for color in Color::iter() {
println!("My favorite color is {:?}", color);
}
let mut ci = Color::iter();
assert_eq!(Some(Color::Red), ci.next());
assert_eq!(Some(Color::Green {range: 0}), ci.next());
assert_eq!(Some(Color::Blue(0)), ci.next());
assert_eq!(Some(Color::Yellow), ci.next());
assert_eq!(None, ci.next());
```
Derive Macro strum::EnumMessage
===
```
#[derive(EnumMessage)]
{
// Attributes available to this derive:
#[strum]
}
```
Available on **crate feature `derive`** only.
Add a verbose message to an enum variant.
Encode strings into the enum itself. The `strum_macros::EnumMessage` macro implements the `strum::EnumMessage` trait.
`EnumMessage` looks for `#[strum(message="...")]` attributes on your variants.
You can also provide a `detailed_message="..."` attribute to create a separate, more detailed message than the first.
`EnumMessage` also exposes the variants' doc comments through `get_documentation()`. This is useful in some scenarios,
but `get_message` should generally be preferred. Rust doc comments are intended for developer-facing documentation,
not end-user messaging.
```
// You need to bring the trait into scope to use it
use strum::EnumMessage;
use strum_macros;
#[derive(strum_macros::EnumMessage, Debug)]
#[allow(dead_code)]
enum Color {
/// Danger color.
#[strum(message = "Red", detailed_message = "This is very red")]
Red,
#[strum(message = "Simply Green")]
Green { range: usize },
#[strum(serialize = "b", serialize = "blue")]
Blue(usize),
}
// Generated code looks more or less like this:
/*
impl ::strum::EnumMessage for Color {
fn get_message(&self) -> ::core::option::Option<&'static str> {
match self {
&Color::Red => ::core::option::Option::Some("Red"),
&Color::Green {..} => ::core::option::Option::Some("Simply Green"),
_ => None
}
}
fn get_detailed_message(&self) -> ::core::option::Option<&'static str> {
match self {
&Color::Red => ::core::option::Option::Some("This is very red"),
&Color::Green {..}=> ::core::option::Option::Some("Simply Green"),
_ => None
}
}
fn get_documentation(&self) -> ::std::option::Option<&'static str> {
match self {
&Color::Red => ::std::option::Option::Some("Danger color."),
_ => None
}
}
fn get_serializations(&self) -> &'static [&'static str] {
match self {
&Color::Red => {
static ARR: [&'static str; 1] = ["Red"];
&ARR
},
&Color::Green {..}=> {
static ARR: [&'static str; 1] = ["Green"];
&ARR
},
&Color::Blue (..) => {
static ARR: [&'static str; 2] = ["b", "blue"];
&ARR
},
}
}
}
*/
let c = Color::Red;
assert_eq!("Red", c.get_message().unwrap());
assert_eq!("This is very red", c.get_detailed_message().unwrap());
assert_eq!("Danger color.", c.get_documentation().unwrap());
assert_eq!(["Red"], c.get_serializations());
```
Derive Macro strum::EnumProperty
===
```
#[derive(EnumProperty)]
{
// Attributes available to this derive:
#[strum]
}
```
Available on **crate feature `derive`** only.
Add custom properties to enum variants.
Enables the encoding of arbitrary constants into enum variants. This method currently only supports adding additional string values. Other types of literals are still experimental in the rustc compiler. The generated code works by nesting match statements.
The first match statement matches on the type of the enum, and the inner match statement matches on the name of the property requested. This design works well for enums with a small number of variants and properties, but scales linearly with the number of variants so may not be the best choice in all situations.
```
use strum_macros;
// bring the trait into scope
use strum::EnumProperty;
#[derive(strum_macros::EnumProperty, Debug)]
#[allow(dead_code)]
enum Color {
#[strum(props(Red = "255", Blue = "255", Green = "255"))]
White,
#[strum(props(Red = "0", Blue = "0", Green = "0"))]
Black,
#[strum(props(Red = "0", Blue = "255", Green = "0"))]
Blue,
#[strum(props(Red = "255", Blue = "0", Green = "0"))]
Red,
#[strum(props(Red = "0", Blue = "0", Green = "255"))]
Green,
}
let my_color = Color::Red;
let display = format!(
"My color is {:?}. It's RGB is {},{},{}",
my_color,
my_color.get_str("Red").unwrap(),
my_color.get_str("Green").unwrap(),
my_color.get_str("Blue").unwrap()
);
assert_eq!("My color is Red. It\'s RGB is 255,0,0", &display);
```
Derive Macro strum::EnumString
===
```
#[derive(EnumString)]
{
// Attributes available to this derive:
#[strum]
}
```
Available on **crate feature `derive`** only.
Converts strings to enum variants based on their name.
Auto-derives `std::str::FromStr` on the enum (for Rust 1.34 and above, `std::convert::TryFrom<&str>`
will be derived as well). Each variant of the enum will match on its own name.
This can be overridden using `serialize="DifferentName"` or `to_string="DifferentName"`
on the attribute as shown below.
Multiple deserializations can be added to the same variant. If the variant contains additional data,
they will be set to their default values upon deserialization.
The `default` attribute can be applied to a tuple variant with a single data parameter. When a match isn’t found, the given variant will be returned and the input string will be captured in the parameter.
Note that the implementation of `FromStr` by default only matches on the name of the variant. There is an option to match on different case conversions through the
`#[strum(serialize_all = "snake_case")]` type attribute.
See the Additional Attributes Section for more information on using this feature.
If you have a large enum, you may want to consider using the `use_phf` attribute here. It leverages perfect hash functions to parse much quicker than a standard `match`. (MSRV 1.46)
Example: how to use `EnumString`
---
```
use std::str::FromStr;
use strum_macros::EnumString;
#[derive(Debug, PartialEq, EnumString)]
enum Color {
Red,
// The Default value will be inserted into range if we match "Green".
Green {
range: usize,
},
// We can match on multiple different patterns.
#[strum(serialize = "blue", serialize = "b")]
Blue(usize),
// Notice that we can disable certain variants from being found
#[strum(disabled)]
Yellow,
// We can make the comparison case insensitive (however Unicode is not supported at the moment)
#[strum(ascii_case_insensitive)]
Black,
}
/*
//The generated code will look like:
impl std::str::FromStr for Color {
type Err = ::strum::ParseError;
fn from_str(s: &str) -> ::core::result::Result<Color, Self::Err> {
match s {
"Red" => ::core::result::Result::Ok(Color::Red),
"Green" => ::core::result::Result::Ok(Color::Green { range:Default::default() }),
"blue" => ::core::result::Result::Ok(Color::Blue(Default::default())),
"b" => ::core::result::Result::Ok(Color::Blue(Default::default())),
s if s.eq_ignore_ascii_case("Black") => ::core::result::Result::Ok(Color::Black),
_ => ::core::result::Result::Err(::strum::ParseError::VariantNotFound),
}
}
}
*/
// simple from string
let color_variant = Color::from_str("Red").unwrap();
assert_eq!(Color::Red, color_variant);
// short version works too
let color_variant = Color::from_str("b").unwrap();
assert_eq!(Color::Blue(0), color_variant);
// was disabled for parsing = returns parse-error
let color_variant = Color::from_str("Yellow");
assert!(color_variant.is_err());
// however the variant is still normally usable
println!("{:?}", Color::Yellow);
let color_variant = Color::from_str("bLACk").unwrap();
assert_eq!(Color::Black, color_variant);
```
Derive Macro strum::EnumVariantNames
===
```
#[derive(EnumVariantNames)]
{
// Attributes available to this derive:
#[strum]
}
```
Available on **crate feature `derive`** only.
Implements `strum::VariantNames`, which adds an associated constant `VARIANTS` that is an array of discriminant names.
Adds an `impl` block for the `enum` that adds a static `VARIANTS` array of `&'static str` that are the discriminant names.
This will respect the `serialize_all` attribute on the `enum` (like `#[strum(serialize_all = "snake_case")]`).
```
// import the macros needed
use strum_macros::{EnumString, EnumVariantNames};
// You need to import the trait, to have access to VARIANTS
use strum::VariantNames;
#[derive(Debug, EnumString, EnumVariantNames)]
#[strum(serialize_all = "kebab-case")]
enum Color {
Red,
Blue,
Yellow,
RebeccaPurple,
}
assert_eq!(["red", "blue", "yellow", "rebecca-purple"], Color::VARIANTS);
```
Derive Macro strum::FromRepr
===
```
#[derive(FromRepr)]
{
// Attributes available to this derive:
#[strum]
}
```
Available on **crate feature `derive`** only.
Adds a function to the enum that allows accessing variants by their discriminant.
This macro adds a standalone function to obtain an enum variant by its discriminant. The macro adds
`from_repr(discriminant: usize) -> Option<YourEnum>` as a standalone function on the enum. For variants with additional data, the returned variant will use the `Default` trait to fill the data. The discriminant follows the same rules as `rustc`. The first discriminant is zero and each successive variant has a discriminant of one greater than the previous variant, except where an explicit discriminant is specified. The type of the discriminant will match the `repr` type if it is specified.
When the macro is applied using rustc >= 1.46 and when there is no additional data on any of the variants, the `from_repr` function is marked `const`. rustc >= 1.46 is required to allow `match` statements in `const fn`. The no additional data requirement is due to the inability to use `Default::default()` in a `const fn`.
You cannot derive `FromRepr` on any type with a lifetime bound (`<'a>`) because the function would surely create unbounded lifetimes.
```
use strum_macros::FromRepr;
#[derive(FromRepr, Debug, PartialEq)]
enum Color {
Red,
Green { range: usize },
Blue(usize),
Yellow,
}
assert_eq!(Some(Color::Red), Color::from_repr(0));
assert_eq!(Some(Color::Green {range: 0}), Color::from_repr(1));
assert_eq!(Some(Color::Blue(0)), Color::from_repr(2));
assert_eq!(Some(Color::Yellow), Color::from_repr(3));
assert_eq!(None, Color::from_repr(4));
// Custom discriminant tests
#[derive(FromRepr, Debug, PartialEq)]
#[repr(u8)]
enum Vehicle {
Car = 1,
Truck = 3,
}
assert_eq!(None, Vehicle::from_repr(0));
```
On versions of rust >= 1.46, the `from_repr` function is marked `const`.
```
use strum_macros::FromRepr;
#[derive(FromRepr, Debug, PartialEq)]
#[repr(u8)]
enum Number {
One = 1,
Three = 3,
}
const fn number_from_repr(d: u8) -> Option<Number> {
Number::from_repr(d)
}
assert_eq!(None, number_from_repr(0));
assert_eq!(Some(Number::One), number_from_repr(1));
assert_eq!(None, number_from_repr(2));
assert_eq!(Some(Number::Three), number_from_repr(3));
assert_eq!(None, number_from_repr(4));
```
Derive Macro strum::IntoStaticStr
===
```
#[derive(IntoStaticStr)]
{
// Attributes available to this derive:
#[strum]
}
```
Available on **crate feature `derive`** only.
Implements `From<MyEnum> for &'static str` on an enum.
Implements `From<YourEnum>` and `From<&'a YourEnum>` for `&'static str`. This is useful for turning an enum variant into a static string.
The Rust `std` provides a blanket impl of the reverse direction - i.e. `impl Into<&'static str> for YourEnum`.
```
use strum_macros::IntoStaticStr;
#[derive(IntoStaticStr)]
enum State<'a> {
Initial(&'a str),
Finished,
}
fn verify_state<'a>(s: &'a str) {
let mut state = State::Initial(s);
// The following won't work because the lifetime is incorrect:
// let wrong: &'static str = state.as_ref();
// using the trait implemented by the derive works however:
let right: &'static str = state.into();
assert_eq!("Initial", right);
state = State::Finished;
let done: &'static str = state.into();
assert_eq!("Finished", done);
}
verify_state(&"hello world".to_string());
```
Derive Macro strum::ToString
===
```
#[derive(ToString)]
{
// Attributes available to this derive:
#[strum]
}
```
Available on **crate feature `derive`** only.
Implements `std::string::ToString` on an enum.
```
// You need to bring the ToString trait into scope to use it
use std::string::ToString;
use strum_macros;
#[derive(strum_macros::ToString, Debug)]
enum Color {
#[strum(serialize = "redred")]
Red,
Green {
range: usize,
},
Blue(usize),
Yellow,
}
// uses the serialize string for Display
let red = Color::Red;
assert_eq!(String::from("redred"), red.to_string());
// by default the variants Name
let yellow = Color::Yellow;
assert_eq!(String::from("Yellow"), yellow.to_string());
```
Mindbody is the go-to platform for revenue growth in wellness technology, offering you a wealth of opportunities to serve businesses and consumers alike.
At Mindbody, we go the extra mile to help your solutions succeed, with our Partner Store, hosted events, dedicated partner team and more.
You’ll have free access to our sandbox and endpoints tailored to your needs. And it doesn’t stop there – we offer developer-focused guidance at every step, so you can start easily and launch quickly.
var client = new RestClient("https://api.mindbodyonline.com/public/v6/payroll/commissions");
var request = new RestRequest(Method.GET);
request.AddHeader("SiteId", "-99");
request.AddHeader("Api-Key", "b5fd5260655140668edde9209c55b87e");
IRestResponse response = client.Execute(request);

var client = new RestClient("https://api.mindbodyonline.com/public/v6/client/clientcompleteinfo?ClientId=100013562");
var request = new RestRequest(Method.GET);
request.AddHeader("SiteId", "-99");
request.AddHeader("Api-Key", "b5fd5260655140668edde9209c55b87e");
request.AddHeader("Authorization", "bae2ee52ca8341b9b478cb7a95181ff373e14dca2e794f0b9e590070e2ebbe6a");
IRestResponse response = client.Execute(request);

var client = new RestClient("https://api.mindbodyonline.com/public/v6/site/paymenttypes");
var request = new RestRequest(Method.GET);
request.AddHeader("SiteId", "-99");
request.AddHeader("Api-Key", "b5fd5260655140668edde9209c55b87e");
request.AddHeader("Authorization", "bae2ee52ca8341b9b478cb7a95181ff373e14dca2e794f0b9e590070e2ebbe6a");
request.AddHeader("Content-Type", "application/json");
IRestResponse response = client.Execute(request);
"After having worked with the Mindbody API for many years, the improvements in features and functionality, especially the addition of webhooks, have enabled StudioEase to take our innovative concepts and bring them to the marketplace. Without a strong API connection, we would not have been able to expand on the core MindBody infrastructure."
Fill out a brief form to set up your free account and sandbox environment.
Start experimenting and testing your code before it goes live. Our free sandbox makes it simple.
Ready to go live? Tell us about your integration, submit your request for the credentials, and launch. IT’S THAT EASY.
Date: 2023-10-01
If you would like to review your past call volume to predict future bill amounts, log in to your Developer Portal Account. Select ‘Reports’ on the left-hand side of your Account screen and go to Invoice Details. Under Invoice details, select the billing cycle you would like to review, and the total call count for that billing cycle will be located to the right of the Total Location Count.
Date: 2020-04-17
* Mindbody Webhooks API
* Events
* Event Base
* Site
* Business Day Closure
* Location
* Appointment
* appointmentBooking.created
* appointmentBooking.updated
* appointmentBooking.cancelled
* appointmentAddOn.created
* appointmentAddOn.deleted
* Class Schedule
* Class
* Class Booking
* Class Waitlist
* Class Description
* Client
* Merge
* Membership
* Contract
* Sale
* Staff
The Mindbody Webhooks API notifies you when specific events are triggered by a business that uses Mindbody. You can use this API to create an application that shows near real-time updates to a business’s data without having to long-poll the Public API. This API is Mindbody’s implementation of an HTTP push API.
The Webhooks API uses the HTTP protocol, so it works with any language that has an HTTP library. Requests use standard HTTP verbs such as `GET` , `POST` , and `DELETE` . All endpoints accept and return JSON. The API documentation uses the JSON data types defined by W3Schools. The resource documentation describes requests and responses in detail for each endpoint.
## Getting Started
* Go to our developer portal and create your developer account.
* While logged into your developer account, request to go live with your developer account. Make sure you are logged in before you click this link.
* Once Mindbody approves you for live access, activate the link between your developer account and at least one Mindbody business. Follow the instructions for activation in the Public API documentation.
* While logged into your developer account, go to the API Keys section on the API Credentials page of your developer portal account, and create an API Key named something like “Webhooks.”
* Use the POST subscription endpoint to create a subscription for one or more events. This subscription should point to a webhook that you use for testing.
* Use the PATCH subscription endpoint to activate your subscription.
* Thoroughly test your application.
* Optionally, use the DELETE subscription endpoint to deactivate the webhook you used for testing.
## Authentication and Security
If you make a call without an API-Key header, you receive a 401 HTTP status code and an error response body.
The Webhooks API uses API keys from your developer account, which are located on the API Credentials page. You can use the same API keys created for the Public API for the Webhooks API. If you already have an API key issued during the Webhooks open beta, that key is still valid for use.
To make a successful API call, pass an API key as an `API-Key` header.
If you make a call and the API-Key header contains an invalid value, you receive a 401 HTTP status code and an error response body.
### X-Mindbody Signature Header
```
/// <summary>
/// Validates whether the webhook hash is contained in the request and is correct
/// </summary>
public class ValidateWebhookSignatureAttribute : ActionFilterAttribute
{
    private const string EnvSignatureKey = "ENVSIGNATUREKEY";

    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        // Find signature in request headers
        if (!actionContext.Request.Headers.TryGetValues("X-Mindbody-Signature", out var values))
        {
            actionContext.Response = actionContext.Request.CreateErrorResponse(HttpStatusCode.BadRequest, "Request not signed. Expected X-Mindbody-Signature not found on request.");
            return;
        }

        var requestHash = values.First();

        // Get signature key stored from subscription creation
        var signatureKey = Environment.GetEnvironmentVariable(EnvSignatureKey);
        if (string.IsNullOrWhiteSpace(signatureKey))
        {
            throw new NullReferenceException($"Signature key, {EnvSignatureKey}, not found in the environment");
        }

        string computedHash;
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(signatureKey)))
        {
            // Read request body, encode with UTF-8 and compute hash
            var payload = actionContext.Request.Content.ReadAsStringAsync().Result;
            var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
            computedHash = $"sha256={Convert.ToBase64String(hash)}";
        }

        // Compare the computed hash with the request's hash
        if (computedHash != requestHash)
        {
            actionContext.Response = actionContext.Request.CreateErrorResponse(HttpStatusCode.BadRequest, "Invalid signature. X-Mindbody-Signature value was not as expected.");
        }

        base.OnActionExecuting(actionContext);
    }
}
```
When Mindbody sends an event notification to a registered webhook, we include a signature header so that you know the request was sent from our servers. The header name is `X-Mindbody-Signature` and the value is a `UTF-8` encoded hash. We strongly recommend that you verify this signature when you receive a request at your webhook endpoint.
To validate the webhook signature, you:
* Encode the request body using an HMAC-SHA-256 library in your coding language of choice. When you do this, use the `messageSignatureKey` returned from the POST Subscription endpoint as the key for the algorithm.
* Prepend `sha256=` to the encoded signature.
* Compare the string with the header value. If they match, then the message came from our servers.
Look at the C# example above to see how to do this.
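If your integration is not written in C#, the same check translates directly to other languages. Below is a minimal Python sketch under the same assumptions (the raw request body is available as bytes, and the `messageSignatureKey` was saved from the POST Subscription response); the function name is arbitrary.

```
import base64
import hashlib
import hmac

def is_valid_mindbody_signature(raw_body: bytes, signature_header: str, message_signature_key: str) -> bool:
    """Return True if the X-Mindbody-Signature header matches the request body."""
    # HMAC-SHA-256 over the UTF-8 request body, keyed with the subscription's messageSignatureKey
    digest = hmac.new(message_signature_key.encode("utf-8"), raw_body, hashlib.sha256).digest()
    # Prepend "sha256=" to the Base64-encoded hash, matching the header format
    expected = "sha256=" + base64.b64encode(digest).decode("utf-8")
    # Constant-time comparison with the header value
    return hmac.compare_digest(expected, signature_header)
```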
### HTTPS
All calls to the Webhooks API must use HTTPS connections, using TLS v1.2 or higher. Connections made using an older version of TLS may not work correctly.
## Webhook URL Requirements
To correctly handle events delivered to your webhook URL, please keep the following information in mind when designing your application:
* Webhook URLs must accept HTTPS connections using TLS v1.2 or higher.
* Webhook URLs must accept both
`POST` and `HEAD` HTTP requests. Mindbody uses `POST` requests to deliver events to the URL, and `HEAD` requests to check the URL’s validity when creating a subscription. * Events are not guaranteed to be delivered only once.
* Events are not guaranteed to be delivered in chronological order.
* If Mindbody does not receive a
`2xx` response within 10 seconds of posting an event to your webhook URL, we try to resend the event every 15 minutes. After 3 hours, Mindbody stops trying to resend the event. To avoid some failures and prevent retries, add incoming events to a queue for future processing so you can return a response to our API as quickly as possible.
Follow these steps to be successful in your integration:
* Receive your webhook.
* Queue your webhook.
* Respond to your webhook.
* Process your data.
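The sketch below illustrates that receive, queue, respond, process flow using Python's standard library. The port, handler name, and in-memory queue are illustrative assumptions, and TLS would be terminated in front of this process (for example, by a reverse proxy), since webhook URLs must be HTTPS.

```
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from queue import Queue

events = Queue()

class WebhookHandler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        # Mindbody sends HEAD requests to validate the URL when a subscription is created.
        self.send_response(200)
        self.end_headers()

    def do_POST(self):
        # Receive and queue the raw event, then respond right away
        # so the 10-second response window is never missed.
        length = int(self.headers.get("Content-Length", 0))
        events.put(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

def process_events():
    # Work through queued events outside the request/response cycle.
    while True:
        raw_event = events.get()
        # ... verify the signature, parse the JSON, apply the change ...
        events.task_done()

threading.Thread(target=process_events, daemon=True).start()
HTTPServer(("", 8080), WebhookHandler).serve_forever()  # TLS is terminated by a proxy in front
```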
## Best Practices
To avoid duplicates if your endpoints receive multiple copies of a single event, we recommend that you make your event processing idempotent. For example, you can log processed events, and make sure that your application does not reprocess events that have already been logged.
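For example, here is a minimal Python sketch of that logging approach, assuming the `messageId` field documented under Event Base is used as the deduplication key; a real integration would persist processed IDs in a durable store rather than in memory.

```
import json

processed_message_ids = set()  # use a durable store (e.g. a database table) in production

def handle_event(raw_event: bytes) -> None:
    event = json.loads(raw_event)
    message_id = event["messageId"]
    if message_id in processed_message_ids:
        return  # duplicate delivery; skip reprocessing
    processed_message_ids.add(message_id)
    # ... apply the change described by event["eventId"] and event["eventData"] ...
```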
To ensure your system has the most up-to-date data, please sync your cached data using the Public API every 24 hours.
## Pairing with the Public API
Because the event notification posted to your webhook only contains certain data, you may need to use the Public API to collect additional information about an item that has changed. Each event documented here tells you which fields you can use to query the Public API for more information.
### Transaction Key
Example header included with an UpdateClient API request
```
Transaction-Key: "c1ad6020-ee69-4dc3-ba8b-562727a60dbf"
```
Example payload included with the resulting client.updated event
```
{
"messageId": "6AMKfY2dBB8MqvfiPzkZcr",
"eventId": "client.updated",
"eventSchemaVersion": 1,
"eventInstanceOriginationDateTime": "2020-04-17T19:03:14Z",
"transactionKey": "c1ad6020-ee69-4dc3-ba8b-562727a60dbf",
"eventData": {
"siteId": -1517,
...
}
}
```
Many actions taken via Public API endpoints will trigger a webhook event dispatch. For example, if POST UpdateClient is called, a client.updated event will be dispatched if any updates to a client are made. To optimize the number of calls made to the Public API, a transaction key may be used with API calls to track events dispatched as a result of API actions.
When calling the Public API, integrations may populate an optional `Transaction-Key` header. This header is a string limited to 50 characters (anything beyond 50 will be truncated). Events dispatched as a result of API calls will set a `transactionKey` property in event payloads. This property will only be populated if the API source matches the Webhooks subscription source. API keys may differ, but the sourcename making the API call must be the same one used to create the Webhooks subscription. If the header is not populated, the event did not originate from an API call, or the sourcename does not match, then `transactionKey` will not exist in the event payload.
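As an illustration, the Python sketch below tags a Public API call with a `Transaction-Key` header and then checks the key echoed back in webhook payloads. The `requests` library, the placeholder header values, and the UpdateClient request body are assumptions made for the example; substitute the real call your integration makes.

```
import uuid

import requests

transaction_key = str(uuid.uuid4())  # any string up to 50 characters

# Tag the Public API call that triggers the change (placeholders are illustrative).
requests.post(
    "https://api.mindbodyonline.com/public/v6/client/updateclient",
    headers={
        "Api-Key": "{yourAPIKey}",
        "SiteId": "{siteId}",
        "Authorization": "{staffUserToken}",
        "Transaction-Key": transaction_key,
    },
    json={"Client": {"Id": "{clientId}", "LastName": "NewLastName"}, "CrossRegionalUpdate": False},
)

def caused_by_my_call(event: dict) -> bool:
    # Events triggered by the call above echo the same key in "transactionKey".
    return event.get("transactionKey") == transaction_key
```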
## Base URL
https://mb-api.mindbodyonline.com/push/api/v1
## Versioning
If Mindbody introduces an API change that breaks preexisting API contracts, we create a new API version number and provide you with a transition guide.
Current Version: 1
Previous Version: N/A
## Dates and Times
The Webhooks API returns dates and times in ISO 8601 format as defined by the RFC 3339 specification. All returned dates and times are UTC dates and times. For example, a class that occurs on January 5th, 2017 at 2:15PM (EST) is represented as
```
"2017-01-05T19:15:00Z"
```
because EST is five hours behind UTC. All date/time pairs are returned in the format `YYYY-MM-DDTHH:mm:ssZ` .
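For example, a short Python sketch that parses this format into a timezone-aware UTC value (the helper name is arbitrary):

```
from datetime import datetime, timezone

def parse_api_datetime(value: str) -> datetime:
    """Parse the API's YYYY-MM-DDTHH:mm:ssZ timestamps as UTC."""
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

parse_api_datetime("2017-01-05T19:15:00Z")  # 2:15PM EST expressed as 19:15 UTC
```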
## Rate Limits
Rate limits differ from one endpoint to the next. There are two kinds of rate limits: daily call limits and system limits. Your daily call limits are defined when you receive your API Key. System limits are in place to ensure faster response times and stability. If you need Mindbody to increase your rate limits, contact your account manager.
## Errors
In the Webhooks API, HTTP response codes indicate the status of a request.
* Codes in the 200 range indicate success.
* Codes in the 400 range indicate errors in the provided information, for example, a required parameter was omitted, a parameter violated its minimum or maximum constraint, and so on.
* Codes in the 500 range indicate errors from our servers. Please contact us immediately if you receive a 500 error.
The Webhooks API returns errors for many reasons. We recommend that you write code that gracefully handles all possible errors as documented for each endpoint.
In addition to HTTP codes, the API returns a JSON error response object; its fields are described in the following table.
Name | Type | Description |
| --- | --- | --- |
errors | list of objects | Contains a list of all the errors generated by the call. |
errorCode | string | A code for the specific error returned. You can safely parse these values to numbers. |
errorType | string | A categorical code indicating which aspect of the request was invalid. |
errorMessage | string | A brief message explaining why you received the error. |
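As a sketch of such handling, the helper below assumes a `requests`-style response object and surfaces every entry in the `errors` list; how your application recovers from each `errorType` is up to you.

```
def raise_for_webhooks_error(response) -> None:
    """Raise a readable error when the Webhooks API returns a non-2xx response."""
    if 200 <= response.status_code < 300:
        return
    errors = response.json().get("errors", [])
    details = "; ".join(
        f"{e['errorType']} ({e['errorCode']}): {e['errorMessage']}" for e in errors
    )
    raise RuntimeError(f"Webhooks API returned HTTP {response.status_code}: {details}")
```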
## Subscription Deactivation
Your subscription may be deactivated because Mindbody stopped receiving a `2xx` HTTP status code from the `webhookUrl` when posting events. When this occurs, Mindbody will send an email to the developer portal email account with which the subscription was created. After you have resolved the delivery issues, please update the subscription’s status to Active using the PATCH Subscription endpoint. In the meantime, we recommend using the Mindbody Public API to perform a manual sync of cached data.
# Resources
## Subscriptions
A subscription lets you choose which event notifications are sent to a webhook URL. Mindbody must activate subscriptions before the associated webhook can receive event notifications.
# Event Schema Versions
The event schema version controls the payload that is sent to a webhook. Currently, there is only one version for every available event type, so this field is always `1` .
$request = new HttpRequest();
$request->setUrl('https://mb-api.mindbodyonline.com/push/api/v1/subscriptions');
$request->setMethod(HTTP_METH_GET);
conn.request("GET", "/push/api/v1/subscriptions", headers=headers)
This endpoint searches for subscriptions associated with your developer portal account:
You can retrieve a specific subscription by calling GET (by ID).
```
{
"items" : [
{
"subscriptionId" : "0b2f2a18-5003-4d4e-a793-d16c95f72496",
"status" : "PendingActivation",
"subscriptionCreationDateTime" : "2018-01-01T08:00:00Z",
"statusChangeDate" : "2018-01-01T08:00:00Z",
"statusChangeMessage" : null,
"statusChangeUser" : "ACMEDeveloper",
"eventIds" : [
"classSchedule.created",
"classSchedule.updated",
"classSchedule.cancelled",
"class.updated"
],
"eventSchemaVersion" : 1,
"referenceId" : "2bf12eec-30b5-492d-95eb-803c1705ddf4",
"webhookUrl" : "https://acmebusinessdoman.com/webhook"
},
...
]
}
```
### GET (by ID)
$request = new HttpRequest();
$request->setUrl('https://mb-api.mindbodyonline.com/push/api/v1/subscriptions/{subscriptionId}');
$request->setMethod(HTTP_METH_GET);
conn.request("GET", "/api/v1/subscriptions/{subscriptionId}", headers=headers)
This endpoint finds and returns the single subscription associated with the passed ID.
Name | Type | Description |
| --- | --- | --- |
subscriptionId | string | Returns the single subscription identified by this ID (a GUID). This is the |
Partial example of response content structure:
```
{
"subscriptionId" : "0b2f2a18-5003-4d4e-a793-d16c95f72496",
"status" : "PendingActivation",
...
}
```
This response object is the same as one of the objects contained in the `items` field in the GET Subscriptions response. See that endpoint’s documentation for detailed information about the response object’s fields.
### POST
```
curl -X POST \
-A "{yourAppName}" \
-H "API-Key: {yourAPIKey}" \
-H "Content-Type: application/json" \
-d '{requestBody}' \
"https://mb-api.mindbodyonline.com/push/api/v1/subscriptions"
```
```
var client = new RestClient("https://mb-api.mindbodyonline.com/push/api/v1/subscriptions");
var request = new RestRequest(Method.POST);
request.AddHeader("content-type", "application/json");
request.AddHeader("api-key", "{yourAPIKey}");
request.AddParameter("application/json", "{requestBody}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```
$request = new HttpRequest();
$request->setUrl('https://mb-api.mindbodyonline.com/push/api/v1/subscriptions');
$request->setMethod(HTTP_METH_POST);
$request->setHeaders(array(
'content-type' => 'application/json',
'api-key' => '{yourAPIKey}'
));
$request->setBody('{requestBody}');
payload = "{requestBody}"
headers = {
'api-key': "{yourAPIKey}",
'content-type': "application/json"
}
conn.request("POST", "/push/api/v1/subscriptions", payload, headers)
request = Net::HTTP::Post.new(url)
request["api-key"] = '{yourAPIKey}'
request["content-type"] = 'application/json'
request.body = "{requestBody}"
This endpoint creates a pending subscription that is linked to your developer portal account. After you have created a subscription, you can activate it using the PATCH Subscription endpoint.
```
{
"eventIds" : [
"classSchedule.created",
"classSchedule.updated",
"classSchedule.cancelled"
],
"eventSchemaVersion" : 1,
"referenceId" : "7796d1bd-5554-46d4-a8fc-7016f8142b13",
"webhookUrl" : "https://acmebusinessdoman.com/webhook"
}
```
Name | Type | Description |
| --- | --- | --- |
eventIds | list of strings | The events you want to be sent to the specified |
eventSchemaVersion | number | The event schema version for this subscription. |
referenceId | string | An arbitrary field that you can set to a value of your choice. Mindbody stores and returns this value for the subscription you are creating. Most commonly, this field stores a GUID that you can use in your application. |
webhookUrl | string | The URL that Mindbody posts the event notifications to. Webhook URL Requirements lists considerations and requirements for this URL. |
```
{
"subscriptionId" : "0b2f2a18-5003-4d4e-a793-d16c95f72496",
"status" : "PendingActivation",
"subscriptionCreationDateTime" : "2018-01-01T08:00:00Z",
"statusChangeDate" : "2018-01-01T08:00:00Z",
"statusChangeMessage" : null,
"statusChangeUser" : "ACMEDeveloper",
"eventIds" : [
"classSchedule.created",
"classSchedule.updated",
"classSchedule.cancelled"
],
"eventSchemaVersion" : 1,
"referenceId" : "2bf12eec-30b5-492d-95eb-803c1705ddf4",
"webhookUrl" : "https://acmebusinessdoman.com/webhook",
"messageSignatureKey" : "<KEY>
}
```
```
{
"errors": [{
"errorCode": "14000013",
"errorType": "invalidWebhookURLMissingHTTPS",
"errorMessage": "URLs must start with HTTPS."
}]
}
```
HTTP Status Code | errorCode | errorType | Description |
| --- | --- | --- | --- |
400 | 14000010 | invalidValueWebhookURLRequired | The |
400 | 14000011 | invalidValueWebhookURLCannotBeBlank | The |
400 | 14000012 | invalidValueWebhookURLTooLong | The |
400 | 14000013 | invalidWebhookURLMissingHTTPS | The |
400 | 14000014 | invalidWebhookURLNoResponse | The |
400 | 14000003 | invalidEventSchemaVersionRequired | The |
400 | 14000005 | invalidEventSchemaVersion | The |
400 | 14000006 | invalidValueEventIdsRequired | The |
400 | 14000007 | invalidValueEventIdsCannotBeBlank | The |
400 | 14000008 | invalidValueEventIds | One or more of the passed |
400 | 14000009 | invalidValueReferenceIdTooLong | The |
401 | 14010001 | missingAPIKeyHeader | An |
403 | 14030001 | forbidden | You are not authorized to use this endpoint. |
403 | 14030002 | developerAccountNotActive | Your developer portal account does not have live access. Request to go live, then re-try your request. |
### PATCH
Example patch request to reactivate a subscription and update the WebhookUrl
```
curl -X PATCH \
'https://mb-api.mindbodyonline.com/push/api/v1/subscriptions/{subscriptionId}' \
-H 'Api-Key: {yourApiKey}' \
-H 'Content-Type: application/json' \
-A '{yourAppName}' \
-d '{
"Status": "Active",
"WebhookUrl": "https://yournewwebhookurl.com"
}'
```
```
var client = new RestClient("https://mb-api.mindbodyonline.com/push/api/v1/subscriptions/{subscriptionId}");
var request = new RestRequest(Method.PATCH);
request.AddHeader("Api-Key", "{yourApiKey}");
request.AddParameter("application/json", "{\n\t\"Status\": \"Active\",\n\t\"WebhookUrl\": \"https://yournewwebhookurl.com\"\n}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```
HttpRequest::methodRegister('PATCH');
$request = new HttpRequest();
$request->setUrl('https://mb-api.mindbodyonline.com/push/api/v1/subscriptions/{subscriptionId}');
$request->setMethod(HttpRequest::HTTP_METH_PATCH);
$request->setHeaders(array(
'Api-Key' => '{yourApiKey}',
'Content-Type' => 'application/json'
));
$request->setBody('{
"Status": "Active",
"WebhookUrl": "https://yournewwebhookurl.com"
}');
try {
$response = $request->send();
echo $response->getBody();
} catch (HttpException $ex) {
echo $ex;
}
payload = "{\n\t\"Status\": \"Active\",\n\t\"WebhookUrl\": \"https://yournewwebhookurl.com\"\n}"
headers = {
'Api-Key': "{yourApiKey}",
'Content-Type': "application/json"
}
conn.request("PATCH", "/push/api/v1/subscriptions/{subscriptionId}", payload, headers)
http = Net::HTTP.new(url.host, url.port)
request = Net::HTTP::Patch.new(url)
request["Api-Key"] = '{yourApiKey}'
request["Content-Type"] = 'application/json'
request.body = "{\n\t\"Status\": \"Active\",\n\t\"WebhookUrl\": \"https://yournewwebhookurl.com\"\n}"
This endpoint can activate a new subscription or reactivate an inactive subscription that is associated with your developer portal account, by updating the status. You can also update your subscription’s eventIds, eventSchemaVersion, referenceId, and webhookUrl.
Name | Type | Description |
| --- | --- | --- |
subscriptionId | string | The subscription’s ID (a GUID). |
Name | Type | Description |
| --- | --- | --- |
eventIds | list of strings | A list of event IDs that you want to update or subscribe to. |
eventSchemaVersion | number | The event schema version associated with the subscription. Currently, this is always |
referenceId | string | An arbitrary field that you can set to a value of your choice. Mindbody stores and returns this value for the subscription you are activating. Most commonly, this field stores a GUID that you can use in your application. |
webhookUrl | string | The URL registered as the target of the webhook deliveries. Mindbody posts the event notifications to this URL. Webhook URL Requirements lists considerations and requirements for this URL. |
status | string | The subscription’s desired status. Possible values are |
Example Response
```
{
"subscriptionId": "85c8b693-e9ab-8f29-81e0-a9239ddb27d8",
"status": "Active",
"subscriptionCreationDateTime": "2019-03-19T06:52:21Z",
"statusChangeDate": "2019-03-26T09:04:09Z",
"statusChangeMessage": "TestSource changed status from DeactivatedByAdmin to Active",
"statusChangeUser": "TestSource",
"eventIds": [
"classRosterBooking.created",
"classRosterBookingStatus.updated",
"classRosterBooking.cancelled",
"classWaitlistRequest.created",
"classWaitlistRequest.cancelled",
"siteBusinessDayClosure.created",
"siteBusinessDayClosure.cancelled",
"client.created",
"client.updated",
"client.deactivated",
"clientProfileMerger.created",
"clientMembershipAssignment.created",
"clientMembershipAssignment.cancelled",
"clientSale.created",
"site.created",
"site.updated",
"site.deactivated",
"location.created",
"location.updated",
"location.deactivated",
"staff.created",
"staff.updated",
"staff.deactivated",
"appointmentBooking.created",
"appointmentBooking.cancelled",
"appointmentAddOn.created",
"appointmentAddOn.deleted"
],
"eventSchemaVersion": 1,
"referenceId": "12345678",
"webhookUrl": "https://yournewwebhookurl.com"
}
```
HTTP Status Code | errorCode | errorType | Description |
| --- | --- | --- | --- |
400 | 14000021 | invalidSubscriptionId | The |
404 | 14040001 | resourceNotFound | The |
403 | 14030001 | forbidden | You are not authorized to use this endpoint. |
400 | 14000010 | invalidValueWebhookURLRequired | The |
400 | 14000011 | invalidValueWebhookURLCannotBeBlank | The |
400 | 14000012 | invalidValueWebhookURLTooLong | The |
400 | 14000016 | invalidWebhookUrlBadFormat | The |
400 | 14000013 | invalidValueWebhooksURLMissingHTTPS | The |
400 | 14000014 | invalidWebhookURLNoResponse | The |
400 | 14000015 | invalidWebhookUrlError | The |
400 | 14000006 | invalidValueEventIdsRequired | The |
400 | 14000007 | invalidValueEventIdsCannotBeBlank | The |
400 | 14000008 | invalidValueEventIds | One or more of the passed |
400 | 14000009 | invalidValueReferenceIdTooLong | The |
400 | 14000003 | invalidEventSchemaVersionRequired | The |
400 | 14000005 | invalidEventSchemaVersion | The |
400 | 14000020 | subscriptionStatusNotValid | The status of the subscription is not valid. |
### DELETE
```
curl -X DELETE \
-H "API-Key: {yourAPIKey}" \
-A "{yourAppName}" \
"https://mb-api.mindbodyonline.com/push/api/v1/subscriptions/{subscriptionId}"
```
```
var client = new RestClient("https://mb-api.mindbodyonline.com/push/api/v1/subscriptions/{subscriptionId}");
var request = new RestRequest(Method.DELETE);
request.AddHeader("api-key", "{yourAPIKey}");
IRestResponse response = client.Execute(request);
```
$request = new HttpRequest();
$request->setUrl('https://mb-api.mindbodyonline.com/push/api/v1/subscriptions/{subscriptionId}');
$request->setMethod(HTTP_METH_DELETE);
conn.request("DELETE", "/api/v1/subscriptions/{subscriptionId}", headers=headers)
request = Net::HTTP::Delete.new(url)
request["api-key"] = '{yourAPIKey}'
This endpoint deactivates a subscription associated with the passed ID.
Name | Type | Description |
| --- | --- | --- |
subscriptionId | string | The subscription ID (a GUID) that you are deactivating. This ID is the |
```
{
"message" : "Subscription deactivated successfully.",
"deactivationDateTime" : "2018-01-01T08:00:00Z",
"subscriptionId" : "0b2f2a18-5003-4d4e-a793-d16c95f72496",
"referenceId" : "2bf12eec-30b5-492d-95eb-803c1705ddf4"
}
```
Name | Type | Description |
| --- | --- | --- |
message | string | A message about the deactivation request. Unless an error occurs, this message always |
deactivationDateTime | string | The UTC date and time when the deactivation took place. |
subscriptionId | string | The subscription’s ID (a GUID). |
referenceId | string | The subscription’s reference ID, assigned when the subscription was created. |
HTTP Status Code | errorCode | errorType | Description |
| --- | --- | --- | --- |
400 | 14000018 | genericDomainError | The subscription has already been deactivated. |
401 | 14010001 | missingAPIKeyHeader | An |
403 | 14030001 | forbidden | You are not authorized to use this endpoint. |
403 | 14030002 | developerAccountNotActive | Your developer portal account does not have live access. Request to go live, then re-try your request. |
404 | 14040001 | resourceNotFound | The |
## Metrics
Metrics allow you to check the state of all the subscriptions associated with your Public API account and to see their statistics.
```
var client = new RestClient("https://mb-api.mindbodyonline.com/push/api/v1/metrics");
var request = new RestRequest(Method.GET);
request.AddHeader("API-Key", "{yourAPIKey}");
IRestResponse response = client.Execute(request);
```
$request = new HttpRequest();
$request->setUrl('https://mb-api.mindbodyonline.com/push/api/v1/metrics');
$request->setMethod(HTTP_METH_GET);
$request->setHeaders(array(
'API-Key' => '{yourAPIKey}'
));
headers = {
'API-Key': "{yourAPIKey}"
}
conn.request("GET", "/api/v1/metrics", headers=headers)
url = URI("https://mb-api.mindbodyonline.com/push/api/v1/metrics")
request = Net::HTTP::Get.new(url)
request["API-Key"] = '{yourAPIKey}'
https://mb-api.mindbodyonline.com/push/api/v1/metrics
This endpoint gets metrics for all the subscriptions associated with your Public API developer account.
```
{
"items": [
{
"subscriptionId" : "0b2f2a18-5003-4d4e-a793-d16c95f72496",
"status" : "Active",
"statusChangeDate" : null,
"creationDateTime" : "2018-01-01T08:00:00Z",
"messagesAttempted" : 10,
"messagesDelivered" : 8,
"messagesUndelivered" : 1,
"messagesFailed" : 1
}
]
}
```
Name | Type | Description |
| --- | --- | --- |
subscriptionId | string | The subscription’s ID (a GUID). |
status | string | The subscription’s current status. |
statusChangeDate | string | The UTC date and time when the subscription’s |
creationDateTime | string | The UTC date and time when the subscription was created. |
messagesAttempted | number | The number of event notifications Mindbody attempted to deliver to the subscription’s |
messagesDelivered | number | The number of event notifications Mindbody successfully delivered to the subscription’s |
messagesUndelivered | number | The number of event notifications where Mindbody received a failure response from the subscription’s |
messagesFailed | number | The number of event notifications that Mindbody stopped trying to send after 3 hours. |
# Events
Webhook API events are only delivered to subscriptions that are active at the time the event occurs. Events are not stored and delivered to subscriptions at a later time.
## Event Base
Webhooks API events are structured identically, except for the event-specific information contained in the `eventData` property.
# Event Body
```
{
"messageId": "ASwFMoA2Q5UKw69g3RDbvU",
"eventId": "site.created",
"eventSchemaVersion": 1,
"eventInstanceOriginationDateTime": "2018-04-18T10:02:55Z",
"eventData": {eventDataObject}
}
```
Name | Type | Description |
| --- | --- | --- |
messageId | string | The event’s unique ID. |
eventId | string | The ID that can be passed in the POST Subscription request’s |
eventSchemaVersion | number | The message’s event schema version. Currently, this value is always |
eventInstanceOriginationDateTime | string | The date and time when a Mindbody business triggered the event. |
eventData | object | The event data object. This value can be determined by the |
## Site
Site events encapsulate changes made to a Mindbody business’s details.
### site.created
Mindbody sends this event when a new business subscribes to Mindbody’s services. This event is useful for integrations with cross-regional businesses. It is not needed when you develop an integration with a single Mindbody business.
```
{
"siteId": 123,
"name": "ACME Yoga",
"description": "A place for all ACME practitioners to refine their yoga-craft.",
"logoURL": "https://clients.mindbodyonline.com/studios/ACMEYoga/logo_mobile.png?osv=637207744820527893",
"pageColor1": "#000000",
"pageColor2": "#000000",
"pageColor3": "#000000",
"pageColor4": "#000000",
"pageColor5": "#000000",
"acceptsVisa": true,
"acceptsDiscover": false,
"acceptsMasterCard": true,
"acceptsAmericanExpress": false,
"isEmailContactApproved": true,
"isSmsPackageEnabled": false,
"subscriptionLevel": "Accelerate 2.0",
"isActive": true
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
name | string | The name of the business. |
description | string | A description of the business. |
logoURL | string | A URL that points to the logo image for the business. |
pageColor1 | string | A hex code for a color the business owner uses in their marketing. This color can be used to “theme” an integration so that it matches the configured color-scheme for the business. |
pageColor2 | string | The hex code for a second color, to be used in the same manner as |
pageColor3 | string | The hex code for a third color, to be used in the same manner as |
pageColor4 | string | The hex code for a fourth color, to be used in the same manner as |
pageColor5 | string | The hex code for a fifth color, to be used in the same manner as |
acceptsVisa | bool | When |
acceptsDiscover | bool | When |
acceptsMasterCard | bool | When |
acceptsAmericanExpress | bool | When |
isEmailContactApproved | bool | When |
isSmsPackageEnabled | bool | When |
pricingLevel | string | The Mindbody pricing level for the business. This property is now deprecated. Use subscriptionLevel instead. |
subscriptionLevel | string | The Mindbody subscription level for the business. |
isActive | bool | When |
### site.updated
Mindbody sends this event when the business changes any of the fields in the site.created event object.
### site.deactivated
Mindbody sends this event when a business is deactivated.
## Business Day Closure
Mindbody sends site business day closure events when a business owner or staff member schedules or cancels a closed business day.
### siteBusinessDayClosure.created
Mindbody sends this event when a business schedules a day or date-range for which the business is closed.
```
{
"siteId": 123,
"businessDayClosureId": 10,
"nameClosedDay": "Memorial Day",
"startDateTime": "2018-05-28T00:00:00",
"endDateTime": "2018-05-29T00:00:00",
"serviceCategoriesAffectedIds": [1, 3, 6]
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
businessDayClosureId | int | The ID of the business day closure. |
nameClosedDay | string | The display name of the closure. |
startDateTime | string | The first day in a date range for which the business is to be closed. |
endDateTime | string | The last day in a date range for which the business is to be closed. |
serviceCategoriesAffectedIds | list of int | The service category IDs that are to be removed from the business’ schedule between the closure’s |
### siteBusinessDayClosure.cancelled
Mindbody sends this event when a business removes a closed business day from their schedule.
```
{
"siteId": 123,
"businessDayClosureId": 10
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
businessDayClosureId | int | The ID of the closure that the business removed from their schedule. |
## Location
Mindbody sends location events when changes are made to locations of a business.
### location.created
Mindbody sends this event when a business owner asks Mindbody to add a new location for their business.
```
{
"siteId": 123,
"locationId": 2,
"name": "<NAME> (Downtown)",
"description": "Our downtown location.",
"hasClasses": true,
"phoneExtension": null,
"addressLine1": "123 ABC Ct",
"addressLine2": null,
"city": "San Luis Obispo",
"state": "CA",
"postalCode": "93401",
"phone": "8055551234",
"latitude": 150.0,
"longitude": 120.0,
"tax1": .10,
"tax2": 0,
"tax3": 0,
"tax4": 0,
"tax5": 0,
"webColor5": "#000000"
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
locationId | number | The location ID. |
name | string | The location name. |
description | string | The location description. |
hasClasses | bool | When true, the location offers classes. |
phoneExtension | string | The location’s phone number extension. |
addressLine1 | string | Line one of the location’s street address. |
addressLine2 | string | Line two of the location’s street address. |
city | string | The city the location is in. |
state | string | The state the location is in. |
postalCode | string | The location’s postal code. |
phone | string | The location’s phone number. |
latitude | number | The location’s latitude coordinate. |
longitude | number | The location’s longitude coordinate. |
tax1 | number | One of the tax rates used at the location to tax products and services. |
tax2 | number | One of the tax rates used at the location to tax products and services. |
tax3 | number | One of the tax rates used at the location to tax products and services. |
tax4 | number | One of the tax rates used at the location to tax products and services. |
tax5 | number | One of the tax rates used at the location to tax products and services. |
webColor5 | string | A hex code for a color the business owner uses in their marketing. This color can be used to “theme” an integration so that it matches the configured color-scheme for this location. |
### location.updated
Mindbody sends this event when a business changes any of the fields in the location.created event object.
### location.deactivated
Mindbody sends this event when a business deactivates one of its locations.
## Appointment
Mindbody sends appointment events when an appointment is booked or cancelled at a business.
### appointmentBooking.created
Mindbody sends this event when a client or staff member adds an appointment to a staff member’s schedule.
```
{
"siteId": 123,
"appointmentId": 121,
"status": "Scheduled",
"isConfirmed": true,
"hasArrived": false,
"locationId": 1,
"clientId": "100000120",
"clientFirstName": "John",
"clientLastName": "Smith",
"clientEmail": "<EMAIL>",
"clientPhone": "8055551234",
"staffId": 5,
"staffFirstName": "Jane",
"staffLastName": "Doe",
"startDateTime": "2018-03-15T17:12:00Z",
"endDateTime": "2018-03-15T18:12:00Z",
"durationMinutes": 60,
"genderRequested": null,
"resources": [
{
"id": 1,
"name": "Room A"
}
],
"notes": null,
"formulaNotes": null,
"icdCodes": [
{
"code": "123abc",
"description": "Deep muscular repair"
}
],
"providerId": null,
"sessionTypeId": 17,
"appointmentName": "60 min deep muscular repair massage"
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
appointmentId | number | The appointment ID. |
status | string | The appointment status. |
isConfirmed | bool | When true, the appointment is confirmed. |
hasArrived | bool | When true, the client has arrived for the appointment. |
locationId | int | The ID of the location where the appointment is booked. |
clientId | string | The public ID of the client for whom the appointment is booked. |
clientFirstName | string | The client’s first name. |
clientLastName | string | The client’s last name. |
clientEmail | string | The client’s email address. |
clientPhone | string | The client’s phone number. |
staffId | number | The ID of the staff member who is to provide the appointment service. |
staffFirstName | string | The staff member’s first name. |
staffLastName | string | The staff member’s last name. |
startDateTime | string | The UTC date and time when the appointment starts. |
endDateTime | string | The UTC date and time when the appointment ends. |
durationMinutes | number | The duration of the appointment in minutes. |
genderRequested | string | Indicates which gender of staff member the client prefers for the appointment. |
resources | list of objects | Contains a list of the room(s) where this appointment may be held, or resource(s) it may use. |
resources[].id | number | The room/resource ID. |
resources[].name | string | The room/resource name. |
notes | string | Appointment notes added by staff members. |
formulaNotes | string | The most recent formula note added to the client’s profile by a staff member. |
icdCodes | list of objects | Contains a list of the ICD or CPT codes attached to the appointment. |
icdCodes[].code | string | The ICD or CPT code. |
icdCodes[].description | string | A brief description of the ICD or CPT code. |
providerId | string | The staff member’s provider ID. |
sessionTypeId | number | The session type associated with the new appointment. |
appointmentName | string | The name of the appointment type. |
### appointmentBooking.updated
Mindbody sends this event when a change is made to any of the properties in the appointmentBooking.created event object.
This event object is the same as the appointmentBooking.created event object.
### appointmentBooking.cancelled
Mindbody sends this event when a business removes an appointment from the schedule.
```
{
"siteId": 123,
"appointmentId": 121
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
appointmentId | number | The ID of the appointment that was cancelled. |
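The created, updated, and cancelled events above carry enough information to keep a local copy of the appointment schedule in sync. The sketch below is a minimal, hypothetical Python example of that pattern; the `local_appointments` store and handler names are illustrative and not part of the Mindbody API.
```
# Minimal sketch: keeping a local appointment store in sync with webhook events.
# `local_appointments` is a hypothetical in-memory store keyed by appointmentId.
local_appointments = {}

def handle_appointment_booking_created(event_data: dict) -> None:
    # appointmentBooking.created and .updated share the same payload shape,
    # so an upsert keyed on appointmentId covers both events.
    local_appointments[event_data["appointmentId"]] = event_data

def handle_appointment_booking_cancelled(event_data: dict) -> None:
    # The cancelled payload only carries siteId and appointmentId.
    local_appointments.pop(event_data["appointmentId"], None)
```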
### appointmentAddOn.created
Mindbody sends this event when an add-on is created.
```
{
"messageId": "MpPJtHF3732inxZ7n9BrxS",
"eventId": "appointmentAddOn.created",
"eventSchemaVersion": 1,
"eventInstanceOriginationDateTime": "2020-08-05T16:51:31Z",
"eventData": {
"siteId": 123,
"appointmentId": 30275,
"addOnAppointmentId": 30276,
"addOnName": "Hot stones add-on",
"clientId": "100000120",
"staffId": 12
  }
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
appointmentId | number | The appointment ID the add-on was added to. |
addOnAppointmentId | number | The appointment add-on ID. |
addOnName | string | The name of the appointment add-on. |
clientId | string | The public ID of the client for whom the appointment add-on is booked. |
staffId | number | The ID of the staff member who is to provide the appointment add-on service. |
### appointmentAddOn.deleted
Mindbody sends this event when an add-on is deleted.
```
{
"messageId": "BCzdSUNDmR9aTuzpEiHH7e",
"eventId": "appointmentAddOn.deleted",
"eventSchemaVersion": 1,
"eventInstanceOriginationDateTime": "2020-08-15T00:04:05Z",
"eventData": {
"siteId": 123,
"addOnAppointmentId": 30276
}
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
addOnAppointmentId | number | The appointment add-on ID. |
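The appointmentAddOn payloads above show the delivery envelope (messageId, eventId, eventSchemaVersion, eventInstanceOriginationDateTime, eventData). A webhook receiver can route each delivery to the right handler using eventId. The following is a minimal sketch in Python under the assumption that deliveries arrive as JSON over HTTP; the handler registry and handler functions are hypothetical.
```
import json

# Hypothetical handler registry keyed by the envelope's eventId value.
HANDLERS = {
    "appointmentAddOn.created": lambda data: print("add-on created", data["addOnAppointmentId"]),
    "appointmentAddOn.deleted": lambda data: print("add-on deleted", data["addOnAppointmentId"]),
}

def dispatch(raw_body: bytes) -> None:
    # Parse the envelope and hand eventData to the matching handler.
    envelope = json.loads(raw_body)
    handler = HANDLERS.get(envelope["eventId"])
    if handler is None:
        # Unknown or unhandled event types can be acknowledged and ignored.
        return
    handler(envelope["eventData"])
```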
## Class Schedule
Mindbody sends class schedule events when changes are made to a scheduled class. Class schedules group classes together and define on which days and at what times classes occur. Class schedules represent the group of classes added to a business, not individual classes.
### classSchedule.created
Mindbody sends this event when a business schedules a new class offering.
If you receive this event and are caching classes, you can call the `GetClasses` endpoint in the Public API and add the new classes to your system’s caches.
```
{
"siteId": 123,
"locationId": 1,
"classScheduleId": 8,
"classDescriptionId": 15,
"resources": [
{
"id": 1,
"name": "Room A"
}
],
"maxCapacity": 24,
"webCapacity": 20,
"staffId": 5,
"staffName": "<NAME>",
"isActive": true,
"startDate": "2018-07-22",
"endDate": "2020-07-22",
"startTime": "10:30:00",
"endTime": "11:30:00",
"daysOfWeek": [
"Monday",
"Wednesday",
"Friday"
],
"assistantOneId": 123,
"assistantOneName": "<NAME>",
"assistantTwoId": 345,
"assistantTwoName": "<NAME>"
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
locationId | number | The ID of the location where the class schedule was added. |
classScheduleId | number | The class schedule’s ID. |
classDescriptionId | number | The class schedule’s class description ID. Used to link to a class description. |
resources | list of objects | Contains a list of the room(s) where the classes offered in this class schedule may be held, or resource(s) that any of the classes offered in this class schedule may use. |
resources[].id | number | The room/resource ID. |
resources[].name | string | The room/resource name. |
maxCapacity | number | The total number of spaces available in each class associated with the class schedule. |
webCapacity | number | The total number of spaces that clients can reserve. |
staffId | number | The ID of the staff member that is teaching the class. |
staffName | string | The staff member’s name. |
isActive | bool | When true, the class schedule is active. |
startDate | string | The first date on which classes in this schedule are held. |
endDate | string | The last date on which classes in this schedule are held. |
startTime | string | The time of day when the class starts. |
endTime | string | The time of day when the class ends. |
daysOfWeek | list of strings | The days of the week on which this class schedule adds classes to the business schedule. |
assistantOneId | nullable number | The first assistant’s staff ID. |
assistantOneName | string | The first assistant’s name. |
assistantTwoId | nullable number | The second assistant’s staff ID. |
assistantTwoName | string | The second assistant’s name. |
### classSchedule.updated
Mindbody sends this event when a business changes any of the fields in the classSchedule.created event object. Note that if the class schedule’s `endDate` changes, classes are added or removed from the schedule accordingly.
If you receive this event and are caching classes, you can call the `GetClasses` endpoint in the Public API and update your system’s caches.
### classSchedule.cancelled
Mindbody sends this event when a business cancels a class schedule. When this occurs, all of the classes associated with the schedule are removed.
If you receive this event and are caching classes, you can remove classes associated with the deleted class schedule from your system’s caches.
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
classScheduleId | number | The class schedule’s ID. |
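As noted above, integrations that cache classes can refresh their caches from the Public API when class schedule events arrive. The sketch below is a hedged illustration only: the GetClasses endpoint is named in this document, but the base URL, headers, query parameters, and response field names used here are assumptions to be checked against the GetClasses documentation, and `class_cache` is a hypothetical local store.
```
import requests

# Assumed Public API conventions; verify against the GetClasses documentation.
BASE_URL = "https://api.mindbodyonline.com/public/v6"
HEADERS = {"Api-Key": "YOUR_API_KEY", "SiteId": "123"}

class_cache = {}  # hypothetical local cache keyed by class ID

def refresh_classes_for_schedule(class_schedule_id: int) -> None:
    # Re-pull classes for the affected schedule and upsert them into the cache.
    response = requests.get(
        f"{BASE_URL}/class/classes",
        headers=HEADERS,
        params={"request.classScheduleIds": class_schedule_id},
    )
    response.raise_for_status()
    for cls in response.json().get("Classes", []):
        class_cache[cls["Id"]] = cls

def drop_classes_for_schedule(class_schedule_id: int) -> None:
    # classSchedule.cancelled removes every class on the schedule.
    stale_ids = [cid for cid, c in class_cache.items()
                 if c.get("ClassScheduleId") == class_schedule_id]
    for class_id in stale_ids:
        class_cache.pop(class_id, None)
```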
## Class
Mindbody sends class events when changes are made to a single class instance, that is, one class on a specific date at a specific time, not a range of classes.
### class.updated
Mindbody sends this event when a business changes a single class.
```
{
"siteId": 123,
"locationId": 1,
"classId": 201,
"classScheduleId": 5,
"isCancelled": false,
"isStaffASubstitute": false,
"isWaitlistAvailable": true,
"isIntendedForOnlineViewing": true,
"staffId": 10,
"staffName": "<NAME>",
"startDateTime": "2018-07-17T12:00:00Z",
"endDateTime": "2018-07-17T13:00:00Z",
"classDescriptionId": 21,
"assistantOneId": 456,
"assistantOneName": "<NAME>",
"assistantTwoId": null,
"assistantTwoName": null,
"resources": [
{
"id": 1,
"name": "Room A"
}
]
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
locationId | number | The ID of the location where the class takes place. |
classId | number | The individual class’s ID. |
classScheduleId | number | The ID of the class schedule to which the class belongs. |
isCancelled | bool | When true, the class is cancelled. |
isStaffASubstitute | bool | When true, the staff member teaching the class is a substitute. |
isWaitlistAvailable | bool | When true, a waiting list is available for the class. |
isIntendedForOnlineViewing | bool | When true, the class is intended to be shown on the business’s online schedule. |
staffId | number | The ID of the staff member teaching the class. |
staffName | string | The name of the staff member teaching the class. |
startDateTime | string | The UTC date and time when the class starts. |
endDateTime | string | The UTC date and time when the class ends. |
classDescriptionId | number | The class schedule’s class description ID. Used to link to a class description. |
assistantOneId | nullable number | The first assistant’s staff ID. |
assistantOneName | string | The first assistant’s name. |
assistantTwoId | nullable number | The second assistant’s staff ID. |
assistantTwoName | string | The second assistant’s name. |
resources | list of objects | Contains a list of the room(s) where this individual class may be held, or resource(s) it may use. |
resources[].id | number | The room/resource ID. |
resources[].name | string | The room/resource name. |
## Class Booking
Mindbody sends class booking events when a client is booked into a class at a business.
### classRosterBooking.created
Mindbody sends this event when a client is booked into a class.
```
{
"siteId": 123,
"locationId": 1,
"classId": 201,
"classRosterBookingId": 11,
"classStartDateTime": "2018-07-17T12:00:00Z",
"classEndDateTime": "2018-07-17T13:00:00Z",
"signedInStatus": "SignedIn",
"staffId": 12,
"staffName": "<NAME>",
"maxCapacity": 20,
"webCapacity": 15,
"totalBooked": 10,
"webBooked": 8,
"totalWaitlisted": 0,
"clientId": "100000009",
"clientUniqueId": 100000009,
"clientFirstName": "John",
"clientLastName": "Smith",
"clientEmail": "<EMAIL>",
"clientPhone": "8055551234",
"clientPassId": "112",
"clientPassSessionsTotal": 10,
"clientPassSessionsDeducted": 1,
"clientPassSessionsRemaining": 9,
"clientPassActivationDateTime": "2017-07-17T00:00:00Z",
"clientPassExpirationDateTime": "2018-07-17T00:00:00Z",
"bookingOriginatedFromWaitlist": false,
"clientsNumberOfVisitsAtSite": 6,
"itemId": 12345,
"itemName": "Yoga 5 Pass",
"itemSiteId": 1234567
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
locationId | number | The ID of the location where the class being booked takes place. |
classId | number | The ID of the class for which the booking was made. |
classRosterBookingId | number | The booking ID. |
classStartDateTime | string | The UTC date and time when the class starts. |
classEndDateTime | string | The UTC date and time when the class ends. |
signedInStatus | string | The current status of the booking. |
staffId | number | The ID of the staff member who teaches the class. |
staffName | string | The name of the staff member who teaches the class. |
maxCapacity | number | The total number of spaces available on the class roster. |
webCapacity | number | The total number of bookings that can be made by clients from a mobile application, the Public API, or the online schedule for the business. |
totalBooked | number | The number of spaces booked in the class. |
webBooked | number | The number of spaces booked in the class that originated from a mobile app, the Public API, or the online schedule for the business. |
totalWaitlisted | number | The number of clients currently on the waiting list for the class. |
clientId | string | The public ID of the client booking the class. |
clientUniqueId | number | The client’s system generated ID at the business. This value cannot be changed by business owners and is always unique across all clients at the business. |
clientFirstName | string | The client’s first name. |
clientLastName | string | The client’s last name. |
clientEmail | string | The client’s email. |
clientPhone | string | The client’s phone number. |
clientPassId | string | The ID of the pass used to pay for the booking. This value is |
clientPassSessionsTotal | nullable number | The total number of visits the pass can pay for. This value is |
clientPassSessionsDeducted | nullable number | The total number of visits that the pass has already paid for. This value is |
clientPassSessionsRemaining | nullable number | The total number of visits remaining on this pass. The value is |
clientPassActivationDateTime | string | The date on and after which the pass can pay for visits. The pass will pay for visits before this date if the business has disabled the Pricing Option Activation Dates - Enforce option on their General Setup & Options page. This value is |
clientPassExpirationDateTime | string | The date after which the pass expires and can no longer pay for visits. This value is |
bookingOriginatedFromWaitlist | bool | When true, the booking originated from the class waiting list. |
clientsNumberOfVisitsAtSite | number | The total number of visits the client has made to the business. |
itemId | number | The business’s ID for the pricing option used to pay for the class booking. This is |
itemName | string | The business’s name for the pricing option used to pay for the class booking. |
itemSiteId | number | For a cross-regional booking, this is the site ID at which the pricing option used to pay for the class booking was purchased. For a non-cross-regional booking, the value will be |
### classRosterBookingStatus.updated
Mindbody sends this event when a booking is altered in a way that does not result in it being cancelled.
```
{
"siteId": 123,
"locationId": 1,
"classId": 201,
"classRosterBookingId": 11,
"classDateTime": "2018-07-17T12:00:00Z",
"signedInStatus": "SignedIn",
"staffId": 12,
"clientId": "100000009",
"clientUniqueId": 100000009,
"clientFirstName": "John",
"clientLastName": "Smith",
"clientEmail": "<EMAIL>",
"clientPhone": "8055551234",
"itemId": 2401,
"itemName": "Hot Yoga Drop In",
"itemSiteId": 12345,
"clientPassId": "46791"
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
locationId | number | The ID of the location where the class being booked takes place. |
classId | number | The ID of the class for which the booking was made. |
classRosterBookingId | number | The booking ID. |
classDateTime | string | The UTC date and time when the class starts. |
signedInStatus | string | The current status of the booking. |
staffId | number | The ID of the staff member who teaches the class. |
clientId | string | The public ID of the booking client. |
clientUniqueId | number | The client’s system generated ID for whom you want to update class roster. This value cannot be changed by business owners and is always unique across all clients at the business. |
clientFirstName | string | The booking client’s first name. |
clientLastName | string | The booking client’s last name. |
clientEmail | string | The booking client’s email. |
clientPhone | string | The booking client’s phone number. |
itemId | number | The business’s ID for the pricing option used to pay for the class booking. This is |
itemName | string | The business’s name for the pricing option used to pay for the class booking. |
itemSiteId | number | For a cross-regional booking, this is the site ID at which the pricing option used to pay for the class booking was purchased. For a non-cross-regional booking, the value will be |
clientPassId | string | The ID of the pass used to pay for the booking. This value is |
### classRosterBooking.cancelled
Mindbody sends this event when a booking is cancelled.
```
{
"siteId": 123,
"locationId": 1,
"classId": 201,
"classRosterBookingId": 11,
"clientId": "100000009",
"clientUniqueId": 100000009
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
locationId | number | The ID of the location where the class being booked takes place. |
classId | number | The ID of the class for which the booking was made. |
classRosterBookingId | number | The ID of the booking. |
clientId | string | The public ID of the booking client. |
clientUniqueId | number | The client’s system generated ID for whom you want to cancel class roster. This value cannot be changed by business owners and is always unique across all clients at the business. |
## Class Waitlist
Mindbody sends class waiting list events when a client is added to or removed from a class waiting list.
### classWaitlistRequest.created
Mindbody sends this event when a client is added to a class waiting list.
```
{
"siteId": 123,
"locationId": 1,
"classId": 201,
"classScheduleId": 6,
"waitlistEntryId": 157,
"waitlistMaxSize": 5,
"clientId": "100000009",
"clientUniqueId": 100000009,
"clientEmail": "<EMAIL>",
"clientPhone": "8055551234",
"classStartDateTime": "2018-07-17T12:00:00Z",
"classEndDateTime": "2018-07-17T13:00:00Z",
"clientsNumberOfVisitsAtSite": 6
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
locationId | number | The ID of the location where the class with this waiting list takes place. |
classId | number | The ID of the class the waiting list is for. |
classScheduleId | number | The class schedule’s ID. |
waitlistEntryId | number | The ID for this specific client and waiting list pairing. |
waitlistMaxSize | number | The total number of spaces available on the waiting list. |
clientId | string | The public ID of the client being added to the waiting list. |
clientUniqueId | number | The client’s system generated ID whom you want to add to a class waiting list. This value cannot be changed by business owners and is always unique across all clients at the business. |
clientEmail | string | The email of the client who is trying to book the class. |
clientPhone | string | The phone number of the client who is trying to book the class. |
classStartDateTime | string | The UTC date and time when the class starts. |
classEndDateTime | string | The UTC date and time when the class ends. |
clientsNumberOfVisitsAtSite | number | The total number of visits the client has made to the business. |
### classWaitlistRequest.cancelled
Mindbody sends this event when a client is removed from a class waiting list.
```
{
"siteId": 123,
"waitlistEntryId": 157
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
waitlistEntryId | number | The ID for this specific client and waiting list pairing. |
## Class Description
Mindbody sends class description events when an owner or staff member changes a class name or description.
### classDescription.updated
Mindbody sends this event when a change is made to a class’s name or description. When this event is sent, you should update all class names and descriptions for classes associated with class schedules that use this description.
If you receive this event and are caching class descriptions, you can call the `GetClassDescriptions` endpoint in the Public API and refresh your system’s caches.
```
{
"siteId": 123,
"id": 11,
"name": "Beginning Hatha Yoga",
"description": "A great, gentle introduction to the Hatha practice."
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
id | number | The ID of the class description. |
name | string | The name for all classes associated with class schedules that use this class description. |
description | string | The description for all classes associated with class schedules that use this class description. |
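Because the classDescription.updated payload carries the new name and description, an integration that caches classes can apply the change directly instead of re-pulling. The sketch below is illustrative only; the `class_cache` structure and the field names on the cached side are hypothetical.
```
def handle_class_description_updated(event_data: dict, class_cache: dict) -> None:
    # Update every cached class whose schedule uses this class description.
    for cls in class_cache.values():
        if cls.get("classDescriptionId") == event_data["id"]:
            cls["name"] = event_data["name"]
            cls["description"] = event_data["description"]
```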
## Client
Mindbody sends client events when a staff member or client changes the client’s information.
### client.created
Mindbody sends this event when a new client is added to a business.
```
{
"siteId": 123,
"clientId": "100000009",
"clientUniqueId": 100000009,
"creationDateTime": "2018-08-28T06:45:58Z",
"status": "Non-Member",
"firstName": "John",
"lastName": "Smith",
"email": "<EMAIL>",
"mobilePhone": "8055551234",
"homePhone": null,
"workPhone": null,
"addressLine1": "123 ABC Ct",
"addressLine2": ,
"city": "San Luis Obispo",
"state": "CA",
"postalCode": "93401",
"country": "US",
"birthDateTime": "1989-07-02T00:00:00Z",
"gender": "Male",
"appointmentGenderPreference": null,
"firstAppointmentDateTime": "2018-08-29T06:45:58Z",
"referredBy": null,
"isProspect": false,
"isCompany": false,
"isLiabilityReleased": true,
"liabilityAgreementDateTime": "2018-08-29T06:45:58Z",
"homeLocation": 1,
"clientNumberOfVisitsAtSite": 2,
"indexes": [
{
"indexName": "LongtermGoal",
"indexValue": "IncreasedFlexibility"
}
],
"sendPromotionalEmails": true,
"sendScheduleEmails": true,
"sendAccountEmails": true,
"sendPromotionalTexts": false,
"sendScheduleTexts": false,
"sendAccountTexts": false,
"creditCardLastFour": "1445",
"creditCardExpDate": "2025-06-30T00:00:00Z",
"directDebitLastFour": "7754",
"notes": "Notes about the client.",
"photoUrl": "https://clients.mindbodyonline.com/studios/ACMEYoga/clients/100000009_large.jpg?osv=637136734414821811",
"previousEmail": null
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
clientId | string | The client’s public ID, used frequently in the Public API. |
clientUniqueId | number | The client’s system generated ID when a new client is added to a business. This value cannot be changed by business owners and is always unique across all clients at the business. |
creationDateTime | string | The UTC date and time when the client was added to the business. |
status | string | The client’s membership status. Because each business can add custom statuses, these values can differ from one business to another. The Mindbody standard status values follow: |
firstName | string | The client’s first name. |
lastName | string | The client’s last name. |
email | string | The client’s email address. |
mobilePhone | string | The client’s mobile phone number. |
homePhone | string | The client’s home phone number. |
workPhone | string | The client’s work phone number. |
addressLine1 | string | Line one of the client’s street address. |
addressLine2 | string | Line two of the client’s street address. |
city | string | The city in which the client’s address is located. |
state | string | The state in which the client’s address is located. |
postalCode | string | The client’s postal code. |
country | string | The country in which client’s address is located. |
birthDateTime | string | The client’s birth date. |
gender | string | The client’s gender. Note that this field may be any value, depending on the gender options configured by the business. |
appointmentGenderPreference | string | Indicates which gender of staff member the client prefers to provide their appointment services. |
firstAppointmentDateTime | string | The UTC date and time of the client’s first visit to the site. |
referredBy | string | How the client was referred to the business. |
isProspect | bool | When true, the client is a prospect. |
isCompany | bool | When true, the client record represents a company rather than an individual. |
isLiabilityReleased | bool | When true, the client has agreed to the business’s liability waiver. |
liabilityAgreementDateTime | string | The UTC date and time when the client agreed to the business’s liability waiver. |
homeLocation | number | The business location where the client will most frequently obtain services. |
clientNumberOfVisitsAtSite | number | The total number of visits the client has made to the business. |
indexes | list of objects | Contains information about the client’s client indexes and their values. |
indexes.indexName | string | The name of the client index. |
indexes.indexValue | string | The value of the client index. |
sendPromotionalEmails | bool | When true, the client has opted to receive promotional emails. |
sendScheduleEmails | bool | When true, the client has opted to receive schedule emails. |
sendAccountEmails | bool | When true, the client has opted to receive account emails. |
sendPromotionalTexts | bool | When true, the client has opted to receive promotional text messages. |
sendScheduleTexts | bool | When true, the client has opted to receive schedule text messages. |
sendAccountTexts | bool | When true, the client has opted to receive account text messages. |
creditCardLastFour | string | The last four characters of the client’s stored credit card. |
creditCardExpDate | string | The expiration date of the client’s stored credit card. |
directDebitLastFour | string | The last four characters of the client’s stored bank account. |
notes | string | The first thousand characters of the client’s account notes. |
photoUrl | string | The URL of the client’s profile picture. |
previousEmail | string | The client’s email address before the client was updated. When |
### client.updated
Mindbody sends this event when a business or client changes any of the properties in the client.created event object.
Mindbody strongly recommends that developers call the `GetClients` endpoint in the Public API daily to re-pull all clients and update caches.
### client.deactivated
Mindbody sends this event when a business or client deactivates a client at the business.
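A sketch of the recommended daily re-pull is shown below. The GetClients endpoint is named in this document, but the URL, headers, paging parameters, and response field names are assumptions to be checked against the Public API documentation.
```
import requests

BASE_URL = "https://api.mindbodyonline.com/public/v6"          # assumed base URL
HEADERS = {"Api-Key": "YOUR_API_KEY", "SiteId": "123"}          # assumed auth headers

def pull_all_clients(page_size: int = 200) -> list:
    """Page through GetClients and return every client record."""
    clients, offset = [], 0
    while True:
        response = requests.get(
            f"{BASE_URL}/client/clients",
            headers=HEADERS,
            params={"request.limit": page_size, "request.offset": offset},
        )
        response.raise_for_status()
        page = response.json().get("Clients", [])
        clients.extend(page)
        if len(page) < page_size:
            return clients
        offset += page_size
```
A scheduled job can run this once per day and replace or reconcile the local client cache with the returned records.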
## Merge
Mindbody sends merge events when an owner or staff member merges one client into another.
### clientProfileMerger.created
Mindbody sends this event when a business merges one client into another.
```
{
"siteId": 123,
"mergeDateTime": "2016-08-28T06:45:58Z",
"mergedByStaffId": 11,
"keptClientId": "100000009",
"keptClientUniqueId": 100000009,
"removedClientUniqueId": 100000008
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
mergeDateTime | string | The UTC date and time when the merge happened. |
mergedByStaffId | number | The ID of the staff member who merged the two clients. |
keptClientId | string | The client’s public ID, used frequently in the Public API. |
keptClientUniqueId | number | The client’s system generated ID whom you want to keep in the system. This value cannot be changed by business owners and is always unique across all clients at the business. |
removedClientUniqueId | number | The client’s system generated ID whom you want to remove from the system. This value cannot be changed by business owners and is always unique across all clients at the business. |
## Membership
Mindbody sends membership events when a client gets or loses a membership.
### clientMembershipAssignment.created
Mindbody sends this event when a client gets a new membership.
```
{
"siteId": 123,
"clientId": "100000009",
"clientUniqueId": 100000009,
"clientFirstName": "John",
"clientLastName": "Smith",
"clientEmail": "<EMAIL>",
"membershipId": 12,
"membershipName": "Gold Level Member"
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
clientId | string | The client’s public ID, used frequently in the Public API. |
clientUniqueId | number | The client’s system generated ID whom a new membership gets assigned. This value cannot be changed by business owners and is always unique across all clients at the business. |
clientFirstName | string | The client’s first name. |
clientLastName | string | The client’s last name. |
clientEmail | string | The client’s email address. |
membershipId | number | The ID of the membership that is being added to the client’s account. |
membershipName | string | The name of the membership that is being added to the client’s account. |
### clientMembershipAssignment.cancelled
Mindbody sends this event when a business removes a membership from a client’s account.
```
{
"siteId": 123,
"clientId": "100000009",
"membershipId": 12
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
clientId | string | The client’s public ID, used frequently in the Public API. |
membershipId | number | The ID of the membership that is being added to the client’s account. |
## Contract
Mindbody sends contract events when changes are made to a client’s contract.
### clientContract.created
Mindbody sends this event when a client purchases a contract, or a staff member sells a contract to a client.
```
{
"siteId": 123,
"clientId": "100000009",
"clientUniqueId": 100000009,
"clientFirstName": "John",
"clientLastName": "Smith",
"clientEmail": "<EMAIL>",
"agreementDateTime": "2018-03-20T10:29:42Z",
"contractSoldByStaffId": 12,
"contractSoldByStaffFirstName": "Jane",
"contractSoldByStaffLastName": "Doe",
"contractOriginationLocation": 1,
"contractId": 3,
"contractName": "Gold Membership Contract",
"clientContractId": 117,
"contractStartDateTime": "2018-03-20T00:00:00Z",
"contractEndDateTime": "2019-03-20T00:00:00Z",
"isAutoRenewing": true
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
clientId | string | The client’s public ID, used frequently in the Public API. |
clientUniqueId | number | The client’s system generated ID for whom contract is purchased or a staff member sells a contract. This value cannot be changed by business owners and is always unique across all clients at the business. |
clientFirstName | string | The client’s first name. |
clientLastName | string | The client’s last name. |
clientEmail | string | The client’s email address. |
agreementDateTime | string | The UTC date and time when the client agreed to the contract’s terms and conditions. |
contractSoldByStaffId | number | The ID of the staff member who sold the contract to the client. This value is |
contractSoldByStaffFirstName | string | The first name of the staff member who sold the contract to the client. This value is |
contractSoldByStaffLastName | string | The last name of the staff member who sold the contract to the client. This value is |
contractOriginationLocation | number | The ID of the location from which the contract was sold. |
contractId | number | The contract’s ID. |
contractName | string | The contract’s name. |
clientContractId | number | The unique identifier for the contract and client pairing. |
contractStartDateTime | string | The date when the contract’s billing cycle starts. |
contractEndDateTime | string | The date when the contract’s billing cycle ends. |
isAutoRenewing | bool | When true, the contract renews automatically at the end of its term. |
### clientContract.updated
Mindbody sends this event when a client’s contract is changed. Note that both contract suspensions and terminations count as changes, not as cancellations.
```
{
"siteId": 123,
"agreementDateTime": "2018-03-20T10:29:42Z",
"clientId": "100000009",
"clientUniqueId": 100000009,
"clientContractId": 117,
"contractStartDateTime": "2018-03-20T00:00:00Z",
"contractEndDateTime": "2019-03-20T00:00:00Z",
"isAutoRenewing": true
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
agreementDateTime | string | The UTC date and time when the client agreed to the contract’s terms and conditions. |
clientId | string | The client’s public ID, used frequently in the Public API. |
clientUniqueId | number | The client’s system generated ID for whom contract is updated (suspensions and terminations). This value cannot be changed by business owners and is always unique across all clients at the business. |
clientContractId | number | The unique identifier for the contract and client pairing. |
contractStartDateTime | string | The date when the contract’s billing cycle starts. |
contractEndDateTime | string | The date when the contract’s billing cycle ends. |
isAutoRenewing | bool | When true, the contract renews automatically at the end of its term. |
### clientContract.cancelled
Mindbody sends this event when a client’s contract is deleted.
```
{
"siteId": 123,
"clientId": "100000009",
"clientUniqueId": 100000009,
"clientContractId": 117
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
clientId | string | The client’s public ID, used frequently in the Public API. |
clientUniqueId | number | The client’s system generated ID for whom contract is deleted. This value cannot be changed by business owners and is always unique across all clients at the business. |
clientContractId | number | The unique identifier for the contract and client pairing. |
## Sale
Mindbody sends sale events when a sale is made to a client. A staff member at the business may make the sale or clients may create their own sales using a mobile application, the Public API, or the online schedule for the business.
### clientSale.created
Mindbody sends this event when a sale is made to a client.
```
{
"siteId": 123,
"saleId": 96,
"purchasingClientId": "100000009",
"payments": [
{
"paymentId": 103,
"paymentMethodId": 14,
"paymentMethodName": "Cash",
"paymentAmountPaid": 150,
"paymentLastFour": null,
"paymentNotes": null
}
],
"saleDateTime": "2018-05-03T16:52:23Z",
"soldById": 10,
"soldByName": "<NAME>",
"locationId": 1,
"totalAmountPaid": 150,
"items": [
{
"itemId": 78,
"type": "Service",
"name": "<NAME>",
"amountPaid": 150,
"amountDiscounted": 0,
"quantity": 1,
"recipientClientId": "100000009",
"paymentReferenceId": 44
}
]
}
```
Name | Type | Description |
| --- | --- | --- |
siteId | number | The Mindbody ID for the business. |
saleId | number | The ID of the sale. |
purchasingClientId | string | The client’s public ID, used heavily in the Public API. |
payments | list of objects | Contains information about the payment methods used for the sale. |
payments.paymentId | number | The payment’s ID. |
payments.paymentMethodId | number | The ID of the payment method associated with the payment. |
payments.paymentMethodName | string | The name of the payment method associated with the payment. |
payments.paymentAmountPaid | number | The monetary amount of the payment. |
payments.paymentLastFour | string | The last four digits of the credit card associated with the payment. This value is |
payments.paymentNotes | string | Any payment notes left on this payment by a staff member at the time of the sale. |
saleDateTime | string | The UTC date and time when the sale took place. |
soldById | number | The ID of the client or staff member who made the sale. |
soldByName | string | The name of the client or staff member who made the sale. |
locationId | number | The location where the sale took place. This value is |
totalAmountPaid | number | The sale total, including taxes and discounts. |
items | list of objects | Contains information about the items sold. |
items.itemId | number | The item’s product ID. |
items.type | string | The item type. |
items.name | string | The item’s name. |
items.amountPaid | number | The total amount paid for the item, including taxes and discounts. |
items.amountDiscounted | number | The amount that was not paid because the item was discounted. |
items.quantity | number | When this value is more than one, it indicates that this item record represents multiple items. For example, if the item is a 10 Punch Pass and the quantity is 2, then this single item represents two punch passes. |
items.recipientClientId | string | The public ID of the client who received the product. |
items.paymentReferenceId | number | The item’s payment reference ID. |
## Staff
Mindbody sends staff events when an owner or staff member changes staff member information.
### staff.created
Mindbody sends this event when a business adds a new staff member.
```
{
"staffId": 12,
"siteId": 123,
"addressLine1": "123 ABC Ct",
"addressLine2": null,
"staffFirstName": "Jane",
"staffLastName": "Doe",
"city": "San Luis Obispo",
"state": "CA",
"country": "US",
"postalCode": "93401",
"sortOrder": 1,
"isIndependentContractor": true,
"alwaysAllowDoubleBooking": false,
"providerIds": [
"688135485"
],
"imageUrl": "https://clients.mindbodyonline.com/studios/ACMEYoga/staff/12_large.jpg?osv=637160121420806704",
"biography": null,
"gender": "Female"
}
```
Name | Type | Description |
| --- | --- | --- |
staffId | number | The staff member’s ID. |
siteId | number | The Mindbody ID for the business. |
addressLine1 | string | Line one of the staff member’s street address. |
addressLine2 | string | Line two of the staff member’s street address. |
staffFirstName | string | The staff member’s first name. |
staffLastName | string | The staff member’s last name. |
city | string | The city in which the staff member’s address is located. |
state | string | The state in which the staff member’s address is located. |
country | string | The country in which the staff member’s address is located. |
postalCode | string | The staff member’s postal code. |
sortOrder | number | The staff member’s sort weight. Smaller weights should appear at the top of lists. |
isIndependentContractor | bool | When |
alwaysAllowDoubleBooking | bool | When |
providerIds | list of strings | Contains all of the provider IDs associated with the staff member. |
imageUrl | string | A URL to an image of the staff member. |
biography | string | The staff member’s biography text. |
gender | string | The staff member’s gender. |
### staff.updated
Mindbody sends this event when someone updates the information about a staff member.
### staff.deactivated
Mindbody sends this event when a business deactivates one of its staff members.
# Mindbody API Release Notes
Date: 2018-06-14
Categories:
Tags:
## Mindbody API Release Notes
### July 2023 - Endpoint Updates
POST UpdateClientIndex
POST DeactivatePromoCode
We added new functionality to deactivate an existing promocode record at the specified business.
Get ClientContracts
We updated the endpoint to include Subtotal and Tax properties to UpcomingAutopayEvents object.
POST PurchaseAccountCredit
POST PurchaseGiftCard
### June 2023 - Endpoint Updates
POST CheckoutShoppingCart
We updated the endpoint to support cross regional gift cards.
Get BookableItems
We updated the endpoint to return Public Display for staff availability.
### April-May 2023 - Endpoint Updates
POST AddClassSchedule
We removed requiring pricing options.
Get ClientContracts
We updated the endpoint to include ProductId returned in UpcomingAutopayEvent object.
We updated the endpoint to return membership requirements.
### Mar 2023 Release – Billing Usage Reports
Your Developer Account on the Mindbody Developer Portal now includes a 'Reports' section with the following new reports, which provide a summary of your API activity and billing usage details per billing cycle:
* Invoice Detail Report
* Activity by Studio Report
* Booking Detail Report (if applicable)
### Jan 2023 Release – Documentation Enhancements
This release features many exciting usability enhancements in API Documentation and improvements to API Developer Experience.
* Global Search – Search for any text or Endpoints.
* Language-specific API documentation in HTTP, Python, .NET, Ruby and PHP.
* Live API Playground containing:
  - Usage examples for each endpoint
  - Reactive code samples
  - A ‘Try it out’ section for live API calls
* SDKs in 4 supported languages: Python, .NET, Ruby, PHP
* Responsive UI
* Improvements in page load times
### November 2022 - Endpoint Updates
POST TerminateContract
We added a new endpoint to terminate a client contract.
POST SuspendContract
We added a new endpoint to suspend a client contract.
POST AddClassSchedule
We added a new endpoint to add a class schedule.
POST AddEnrollmentSchedule
We added a new endpoint to add an enrollment schedule.
DELETE RemoveFromAppointmentWaitlist
We added a new endpoint to remove appointments from the waitlist.
POST AddAppointment
We added new functionality to add appointments to the waitlist.
Get SessionTypes
We have modified the endpoint to return the online description in API response.
POST AddOrUpdateAppointments
Fixed Bug 1092785: We have fixed the bug where clients were able to update appointments that were booked for another client.
Fixed Bug 1145389: We have fixed the bug where TNMB users were getting an "Invalid credentials" error when trying to access v5.0 endpoints.
Get ClientServices
Fixed Bug 1148072: We have fixed a bug where services were not returned if a restriction had been added in the use-at-location settings.
POST PurchaseContract
Fixed Bug 1140280: We have fixed a bug where billing details were updated even when saveinfo was passed as false in the request.
# New Endpoints - Mindbody Consumer APIs
Mindbody Consumer APIs are designed to help companies build integrated solutions that complement consumers' wellness behaviors, enabling them to grow their business by connecting with the largest health, beauty, and wellness community in the world.
GET Consumer Visits
GET Consumer Purchases
DELETE Consumer
Disconnects a consumer from a partner.
GET Business Directory
Returns a list of the businesses on the Mindbody Platform.
POST RemoveClientFromClass
We added a new property, "VisitId", to the RemoveClientFromClass endpoint. Using VisitId, you can remove a specific client visit from a class.
POST ValidateStaffLogin
Fixed Bug 1108743: We have fixed the bug where partners were getting an error when trying to create a staff token using the ValidateStaffLogin endpoint.
POST GetServices
Fixed Bug 1107273: We fixed a bug where an authentication error occurred for staff credentials at The New Mindbody Experience sites.
GET Categories.
POST GetTransactions.
Fixed Bug 1100245: We fixed a bug in the AddClientDirectDebitInfo API where, if a validation error occurred while adding direct debit information, the client's existing information was cleared from their profile instead of being retained.
Fixed Bug 1100243: We fixed a bug in the AddClientDirectDebitInfo API where validation was missing for incorrect staff permissions, so no validation message was returned from the Public API.
GET ContactLogs.
Fixed Bug 1062893: We fixed a bug where the ShowSystemGenerated filter was not working correctly in the GET ContactLogs endpoint; when passed as false, system-generated emails were still returned in the response.
GET ActiveClientMemberships.
Fixed Bug 1088323: We fixed a bug where not all of a client's memberships were returned by the API when the client had multiple memberships; only the top-priority active membership was returned.
GET ClientServices.
Fixed Bug 1126064: We fixed a bug where client services were not returned when the location filter was passed as an API parameter.
### July 2022 - Endpoint Updates
POST CheckoutShoppingCart. We have changed the validation message in the CheckoutShoppingCart endpoint when purchasing a product with the instore value set to false.
### June 2022 - Endpoint Updates
GET Clients. We added a limitation to the GET Clients request: a maximum of 20 client IDs can now be passed in the ClientIds parameter of the endpoint.
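To work within this limit, an integration can split a larger ID list into batches of at most 20 and issue one GET Clients request per batch. The sketch below only shows the batching logic; `fetch_clients` is a hypothetical wrapper around the GET Clients call.
```
def chunk_client_ids(client_ids: list, batch_size: int = 20):
    """Yield ID batches no larger than the 20-ID limit on GET Clients."""
    for start in range(0, len(client_ids), batch_size):
        yield client_ids[start:start + batch_size]

# Usage: issue one GET Clients request per batch.
# for batch in chunk_client_ids(all_ids):
#     fetch_clients(batch)  # hypothetical wrapper around GET Clients
```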
POST UserTokens.
Fixed: Bug 1091144- We fixed an issue where an access token was not created for staff credentials at New Mindbody Experience sites.
Fixed: Bug 1093286- We fixed an issue where promotions not applicable to online sales were accepted even when instore was passed as false.
Fixed: Bug 928626- We fixed an issue where a high response time for the RemoveClientFromClass endpoint was causing high CPU usage. With this fix, we improved response time by approximately 8 seconds.
POST GetActiveSessionTimes.
Fixed: Bug 1093286- We fixed an issue where an entry was not made in the database when the "GetActiveSessionTimes" endpoint was executed (v5/5.1) without SourceName/Password and the API key was passed in the header.
All V5.0/V5.1 endpoints.
Fixed: Bug 1103114- We fixed an issue where API results were returned when the API key of an inactive source was passed in the header.
### May 2022 - Endpoint Updates
POST CheckoutShoppingCart.
Fixed: Bug 1015018- We fixed an issue where the payment method was displayed as n/a when the payment failed.
POST AddClientToClass.
Fixed: Bug 1078863 - We fixed an issue where the Reservation Confirmations (Single) email was not being sent when the User Token was not passed.
GET ClientContracts.
Fixed: Bug 1087955 - We fixed an issue where the client contract was showing up multiple times in the response of GET ClientContracts.
POST AddOrUpdateAppointments.
Fixed: Bug 1022345- We fixed an issue where consumers were able to use off-peak time access pricing options to pay for peak service appointments in v5.
GET Contracts.
Fixed: Bug 1026950 - We fixed an issue where GET Contracts returned no results when called with offset parameters.
POST Availabilities
This endpoint adds availabilities and unavailabilities for a staff member.
Put Availabilities
This endpoint updates information for a specific availability or unavailability of the staff.
DELETE Availability
This endpoint deletes the availability or unavailability of a staff member.
GET Semesters
This endpoint retrieves the business class semesters.
GET MobileProviders
This endpoint fetches the list of mobile providers supported by the business.
GET ProspectStages
This endpoint fetches the list of prospect stages for potential clients.
GET SalesReps
This endpoint fetches the basic details of the staff members that are sales representatives.
Post RemoveClientsfromClasses
This endpoint can be utilized for removing multiple clients from multiple classes in one request.
Post ReturnSale
This endpoint can be used to return sales for a specified sale ID in business mode.
Get Courses
This endpoint fetches data related to courses depending on the access level.
Post CancelSingleClass
This endpoint will cancel a single class from the studio.
GET Clients
Fixed: Bug 826656 - We fixed the issue where the "Get clients" request returned an error: "Something went wrong. Please try again." for certain profiles included in the response.
GET BookableItems.
Fixed: Bug 1039434 - We fixed an issue where higher call volume usually resulted in the performance degradation of the endpoint.
Fixed: Bug 1075498 - We fixed an issue where CheckoutShoppingCart returns a strange error in API that looks like an SQL query when the Strong Customer Authentication (SCA) challenge is enabled.
### February 2022 - Endpoint Updates
GET ClientSchedule
This endpoint fetches the schedule of the client for a given studio.
PUT Products
This endpoint updates the retail price and an online price for products.
PUT SaleDate
This endpoint updates the SaleDate and returns the details of the sale.
PUT Services
This endpoint updates the retail price and an online price for services.
GET StaffImageURL
This endpoint fetches the image-URL of the staff, for a given studio if it exists.
GET Classes
Fixed: Bug 675732 - We fixed an issue where all classes show as 'False' for "IsEnrolled", even though the client is registered for those classes.
Fixed: Bug 1003855 - We fixed an issue where, in some circumstances, the API returned a 500 internal server error in the response even though the cancellation still took place.
# V6.0 endpoint access
Fixed: Bug 1048497 - We fixed an issue where Inactive partners were able to access V6 endpoints
# GET Packages
Fixed: Bug 925367 - We fixed an issue in GET Packages where package items were not showing the correct tax rate; it was always shown as "0.0".
### Nov 2021 - Endpoints Update
# OnlineDescription
The OnlineDescription field is included in the response of the GetStaffAppointments endpoint and indicates the online description associated with the appointment. Documentation can be found here.
### Oct 2021 - Endpoints Update
# Recipient Client ID
The Recipient Client ID is included in the GetSales endpoint and indicates the ID of the client for whom the purchase was made. Please refer to the RecipientClientId field. Documentation can be found here.
# Update Product Price
The new endpoint lets you update the retail and online price of a product. Documentation can be found here.
### Sept 2021 API Updates
# Custom Staff ID
The custom staff ID field is included in the GetStaff, UpdateStaff, and AddStaff endpoints. Please refer to the EmpID field. Documentation can be found here.
### Aug 2021 - Endpoint updates
# POST CheckoutShoppingCart
This endpoint now gives you the ability to control whether taxes are applied and calculated for a purchase. Through the new boolean parameter CalculateTax, you can control whether or not taxes are applied. Documentation can be found here.
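As a rough illustration, the request body below shows where the CalculateTax flag would sit. Only CalculateTax is taken from these notes; the other fields are placeholder assumptions and should be taken from the CheckoutShoppingCart documentation.
```
# Hypothetical CheckoutShoppingCart request body showing the CalculateTax flag.
checkout_request = {
    "ClientId": "100000009",   # placeholder client ID
    "Items": [],               # cart items per the CheckoutShoppingCart docs
    "Payments": [],            # payment info per the CheckoutShoppingCart docs
    "CalculateTax": False,     # new flag: skip applying and calculating taxes
}
```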
### July 2021 - New Endpoint
# GET ActiveClientsMemberships
Retrieve a list of memberships associated with each client. This endpoint works for multiple clients. Documentation can be found here.
### June 2021 – New and Updated Endpoints
# GET StaffSessionTypes
Retrieve a list of active session types for a specific staff member. Documentation can be found here.
# POST AddStaffAvailability
Add staff availability or unavailability for a given staff member. Documentation can be found here.
# POST AssignStaffSessionType
Assign a staff member to an appointment session type with staff specific properties such as time length and pay rate. Documentation can be found here.
# POST UpdateStaffPermissions
Assign a permission group to a staff member. Documentation can be found here.
# GET Clients and POST UpdateClient
These endpoints now include information about the state of a client suspension in the response. Documentation can be found here.
# GET ClientRewards
Retrieve current client reward balance and journal of reward transactions. Documentation can be found here.
# POST UpdateClientRewards
Adjust client reward balance by earning or redeeming rewards points for a given client. Documentation can be found here.
# GET ClientFormulaNotes
This endpoint has been updated to retrieve cross-regional formula notes for a client, or for a specific appointment. Documentation can be found here.
# POST AddClientFormulaNote
Add a formula note for a specified client or specified client appointment. Documentation can be found here.
# DELETE ClientFormulaNote
Delete an existing formula note. Documentation can be found here.
# GET ClientContracts
This endpoint has been updated to return ContractID in the response. Documentation can be found here.
# POST PurchaseGiftCard
This endpoint has been updated to support two new properties in the request: BarcodeId and SenderName. Documentation can be found here.
### April 2021 - Enhancements in Existing Endpoints
# GET Sales
This endpoint, which is used to retrieve sales data, has been enhanced with 23 new properties relevant to the sale detail. Documentation can be found here.
# GET Products
This endpoint, which is used to retrieve product data, has been enhanced with 8 new properties relevant to the product detail. Documentation can be found here.
# GET ClientVisits
This endpoint, which is used to retrieve client visit data, has been enhanced with 4 new properties relevant to the visit. Documentation can be found here.
# GET Clients
This endpoint, which gives the details of clients, now also includes an 'AccountBalance' property that gives the balance of a client. Documentation can be found here.
### April 2021 Updates - New Endpoints
Exciting news! We have introduced 4 new endpoints in Public API. Do check them out.
# GET Categories
This endpoint returns the list of revenue and product revenue categories configured for the site. Documentation can be found here.
# GET Transactions
This endpoint retrieves the payment transaction data associated with a sale. Documentation can be found here.
# GET ClientCompleteInfo
This endpoint gives all the details for a specific client including services, membership, arrivals etc. Documentation can be found here.
# GET PaymentTypes
This endpoint gives the list of payment types configured for the studio. Documentation can be found here.
# GET ProductsInventory
This endpoint gives the inventory data for products for a given studio. Documentation can be found here.
### February 2021 Updates - New endpoints
Exciting new endpoints are available in Public API!
# POST AddStaff
This endpoint creates a new staff member record at the specified business. Documentation can be found here.
# POST UpdateStaff
This endpoint can be used to update an existing staff member record at the specified business. Documentation can be found here.
# POST AddPromoCode
This endpoint can be used to create a new promocode record at the site. Documentation can be found here.
# GET PromoCodes
This endpoint returns a list of promocodes at the specified business. Documentation can be found here.
### Upcoming site maintenance
We’re upgrading the servers that power Booker to AWS, providing you with industry leading security and uptime. Scheduled downtime will occur on Sunday, February 21 from 1:00-5:00AM EST while we complete this upgrade.
### SCA Regulatory Updates
Strong Customer Authentication (SCA) is a new European regulatory requirement to reduce fraud and make online payments more secure. To accept payments and meet SCA requirements, we have made additions to the CheckoutShoppingCart, PurchaseContract, and PurchaseGiftCard endpoints to facilitate an SCA challenge. This only impacts EU customers who use the Public API to conduct consumer transactions using Stripe. New optional request fields that have been added:
* ConsumerPresent (Boolean) - Use this to indicate that the consumer is present or otherwise able to successfully negotiate an SCA challenge. It is not a good idea to have this always be false as that could very likely lead to a bank declining all transactions for the merchant. Defaults to false.
* PaymentAuthenticationCallbackUrl (String) - If ConsumerPresent is true, and the bank requests SCA, upon completion of the SCA challenge, the consumer will be redirected back to this URL. Unfortunately, at this time there is no indication as to whether the consumer confirmed or denied the transaction. This field is only needed if ConsumerPresent is true.
* CheckoutShoppingCart only: TransactionIDs (List of integers) - If any of the credit card payments are indicating an SCA challenge, then a second call into CheckoutShoppingCart is required where these challenged or pre-authorized credit card transactions will be needed to complete the process and capture the funds. These will be provided to you in the response from the first call to CheckoutShoppingCart along with AuthenticationUrls for any card authorizations that have an SCA challenge indicated. Note that if no SCA challenge is required, these will not be provided in the response and no second call into CheckoutShoppingCart is needed. Also note that this list may only contain 1 integer depending on whether your integration allows for multiple payments which is supported by the CheckoutShoppingCart endpoint.
One new element has been added to the CheckoutShoppingCart response object:
Transactions (List of TransactionResponse)
* TransactionID - integer
* AuthenticationUrl - string (optional valid URL provided by the bank)
If no SCA challenge is indicated, none of this information is needed or returned, and no second call into CheckoutShoppingCart is needed. If it is provided, at least one of the indicated transactions will have an AuthenticationUrl where the consumer will need to accept or decline the transaction; upon doing so, the consumer will be redirected to the PaymentAuthenticationCallbackUrl indicated in the request.
If your integration supports multiple credit card payments, there is a chance that more than one of them will have an SCA challenge indicated. It is up to your application to track this and only make the second call into CheckoutShoppingCart when all the challenges have been addressed. Your integration will need to provide all of these TransactionIDs in the second call back into CheckoutShoppingCart. Any TransactionResponse indicating only a TransactionID with no AuthenticationUrl simply means that transaction has been pre-authorized and has no SCA challenge indicated.
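As a rough illustration of this two-call flow, here is a minimal Python sketch using the `requests` library. The endpoint path, header names, and payload shape are assumptions for illustration only; consult the CheckoutShoppingCart documentation for the authoritative request format.
```
import requests

# Hypothetical endpoint path and headers - adjust to your integration's configuration.
CHECKOUT_URL = "https://api.mindbodyonline.com/public/v6/sale/checkoutshoppingcart"
HEADERS = {"Api-Key": "<api-key>", "SiteId": "<site-id>", "Authorization": "<staff-user-token>"}

def checkout_with_sca(cart_payload):
    # First call: indicate that the consumer is present and where to send them
    # after the bank's SCA challenge.
    cart_payload.update({
        "ConsumerPresent": True,
        "PaymentAuthenticationCallbackUrl": "https://example.com/sca-return",
    })
    first = requests.post(CHECKOUT_URL, json=cart_payload, headers=HEADERS).json()

    transactions = first.get("Transactions") or []
    if not transactions:
        return first  # No SCA challenge indicated; the sale is already complete.

    # Send the consumer to every AuthenticationUrl the bank provided. A transaction
    # with only a TransactionID and no AuthenticationUrl is pre-authorized and
    # needs no challenge.
    for txn in transactions:
        if txn.get("AuthenticationUrl"):
            print("Redirect consumer to:", txn["AuthenticationUrl"])

    # Second call: once all challenges are addressed, resubmit the cart with the
    # TransactionIDs from the first response so the funds can be captured.
    cart_payload["TransactionIDs"] = [t["TransactionID"] for t in transactions]
    return requests.post(CHECKOUT_URL, json=cart_payload, headers=HEADERS).json()
```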
One new element has been added to the PurchaseContract/PurchaseGiftCard response object:
PaymentProcessingFailures (List of PaymentProcessingFailure)
* Type - string
* Message - string
* AuthenticationRedirectUrl - string (optional valid URL provided by the bank)
If no SCA challenge is indicated, PurchaseContract.Status = Success, none of this information is needed or returned, and no second call into PurchaseContract is needed. If it is provided, at least one of the indicated PaymentProcessingFailures will have an AuthenticationRedirectUrl where the consumer will need to accept or decline the transaction; upon doing so, the consumer will be redirected to the PaymentAuthenticationCallbackUrl indicated in the request.
Because PurchaseContract leverages a shopping cart via the Mindbody Marketplace, no additional information needs to be passed back in the 2nd call into PurchaseContract to finalize the order. Note: The cart will only remain valid for 15 minutes, after which time the cart is abandoned and any authorized credit card transactions will be voided.
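For PurchaseContract and PurchaseGiftCard, a hedged sketch of inspecting the new response element might look like the following; the field names mirror the release notes above, and anything else is illustrative.
```
def sca_redirect_url(purchase_response):
    """Return the bank's AuthenticationRedirectUrl from a PurchaseContract or
    PurchaseGiftCard response when an SCA challenge is indicated, else None."""
    failures = purchase_response.get("PaymentProcessingFailures") or []
    for failure in failures:
        if failure.get("AuthenticationRedirectUrl"):
            # Send the consumer here; after accepting or declining they are
            # redirected to the PaymentAuthenticationCallbackUrl from the request.
            return failure["AuthenticationRedirectUrl"]
    return None
```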
### December 2020 Updates - GetClasses V6 Will Not Show Hidden Classes When Unauthenticated
Beginning mid-December, we will be providing more support for cancelled classes that are hidden from the schedule. GetClasses V6 will no longer return classes that are marked as "hidden" when there is no authentication header present (read more about hidden classes). Unauthenticated users should never have access to see these hidden classes.
Hidden classes will continue to be returned when a staff with valid permission is authenticated.
UPDATE:
This was released on 12/16/2020
### Added Validation for Adding/Updating a Client's HomeLocation
The following client HomeLocation validation was added to match the current behavior in the core software:
When adding a new client using AddOrUpdateClients in v5.0 or v5.1 or AddClient in v6, if the HomeLocation property is not provided in the request, then the HomeLocation will be set to default to the first active location ID of the studio.
On update or add, if the HomeLocation ID is provided in the request, it must be a valid active location ID. Zero is also a valid value for HomeLocation ID on update or a staff-level add, which means "all locations" or "no preferred location".
### Class/Enrollment Booking Endpoints V5.x + V6.0 Respects "Make Unpaid Reservation" Staff Permission
On 10/30/2020 at 10:00 AM PDT we released an update to the AddClientsToClasses, AddClientToClass, and AddClientsToEnrollments endpoints in V5.x + V6.0 which affects whether staff can create an unpaid class reservation.
When the Make Unpaid Reservation staff permission is disabled for the User Token or User Credentials supplied in the request, then the endpoint will return the following error:
Staff member does not have permission to make unpaid reservations.
If this permission is enabled, the staff will be allowed to complete the unpaid reservation successfully. This functionality now matches that of our core web software.
Please note that all staff members that do not have this permission will experience a change in behavior and will no longer be allowed to make unpaid reservations.
### Get ClassDescriptions V6 Update to Fix Duplicate Results
On 10/26/2020 we released an update to the Get ClassDescriptions V6 endpoint. Now when providing a date filter in the request we return one ClassDescription object per Class. The old behavior would return a ClassDescription object for each scheduled occurrence of the Class in the date filter range.
### GetClasses V6 Update to Respect Consumer Mode Show Open Class Spaces Setting
On 10/13/2020 at 10:00 AM PDT we released an update to the GetClasses endpoint in V6 which affects whether class capacity properties are returned based on authorization and a class setting.
When the General Setup & Options - Consumer Mode Show # Open Class Spaces setting is disabled and no User Token is included in the request Authorization header, then the following class capacity properties will be returned as null in the response:
* MaxCapacity
* TotalBooked
* TotalBookedWaitlist
* TotalWebBooked
* WebCapacity
If this setting is enabled, the properties will be returned. If a user token is passed that was generated using Staff level credentials or higher (or using Source Credentials), then the class capacity properties will always be returned regardless of the setting, which applies to consumer mode only. This functionality matches that of previous versions of the API.
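As a small defensive-coding sketch (Python, with field names as described above), an integration can treat the capacity properties as optional so the consumer-mode behavior does not break a spaces-remaining display:
```
def open_spaces(class_item):
    """Return the number of open spaces for a class from GET Classes V6, or None
    when capacity fields are withheld (setting disabled and no staff user token)."""
    max_capacity = class_item.get("MaxCapacity")
    total_booked = class_item.get("TotalBooked")
    if max_capacity is None or total_booked is None:
        return None  # Capacity hidden; do not display a spaces-remaining count.
    return max(max_capacity - total_booked, 0)
```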
### Developer Portal Sign-in Upgrade
On 10/10/2020 at 9:00 PM PDT Mindbody will be upgrading the sign-in workflow for the Developer Portal, so you can authenticate using a single Mindbody account. If you've downloaded and signed up for the Mindbody app, good news—you already have a Mindbody account!
If your existing Mindbody account uses the same email you have on file with the Developer Portal, please use your Mindbody account password to sign in after the upgrade.
If you do not already have a Mindbody account for the email you have on file in the Developer Portal, one will be created for you as part of the upgrade. You will then be asked to verify your email address upon your next sign in.
We recommend taking the time to ensure your Developer Portal account information is up to date to avoid any potential sign-in confusion.
### August 2020 Updates - Create/Delete Appointment Add-Ons and Webhooks
August brought us the ability to create or delete add-ons for an appointment, as well as created/deleted webhooks for add-ons.
# Post AddAppointmentAddOn
This endpoint books an add-on on top of an existing, regular appointment. To book an add-on, you must use an existing appointment ID and session type ID. Documentation can be found here.
# Delete DeleteAppointmentAddOn
This endpoint can be used to early-cancel a booked appointment add-on. Documentation can be found here.
# New appointmentAddOn.deleted Webhook added!
### July 2020 Updates - Updated Staff Endpoints, Get Add-Ons, And Get Available Dates
July brought us a workflow optimization to check on scheduled availability for up to a 30 day range for a specific type of appointment, as well as expanded visibility into and information regarding add-ons.
# Get Staff
Now allows developers to filter staff for a particular appointment type. Documentation can be found here.
# Get AppointmentAvailableDates
New workflow optimization for finding availabilities based on employee schedules and appointment type. Documentation can be found here.
# Get AppointmentAddOns
List available add-ons, and optionally filter on only those that can be performed by a particular staff member. Documentation can be found here.
# Get StaffAppointments
Now includes an array of add-ons that are booked for the returned appointments. Documentation can be found here.
### Live Stream Classes Are Here!
Mindbody's Virtual Wellness Platform is a fully integrated video on-demand and live stream platform enabling businesses to create sustainable, hybrid business models with virtual offerings directly through the Mindbody business software. Consumers will now be able to book, pay for and experience virtual classes and wellness services through one application.
We added a ContentFormats property to the Program shared resource returned from Public API V6 endpoints. Possible values:
* `"InPerson"` - The program does not offer virtual classes.
* `"LiveStream:Mindbody"` - The program offers classes with virtual live streams hosted on the Mindbody Virtual Wellness Platform.
* `"LiveStream:Other"` - The program offers classes with virtual live streams that are not hosted on the Mindbody Virtual Wellness Platform.
We added a VirtualStreamLink property to the Class shared resource returned from Public API v6 endpoints. This URL is only returned for Classes that:
* Offer virtual live streams hosted on the Mindbody Virtual Wellness Platform and
* have not yet started
Live stream links are usually generated the day of the scheduled class.
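A minimal sketch of how an integration might use these two properties, assuming only the field names described above:
```
def live_stream_link(program, class_item):
    """Return a join link for a Mindbody-hosted live stream class, if available.

    `program` is the Program shared resource (with ContentFormats) and
    `class_item` is the Class shared resource (with VirtualStreamLink).
    """
    formats = program.get("ContentFormats") or []
    if "LiveStream:Mindbody" not in formats:
        return None  # Not hosted on the Mindbody Virtual Wellness Platform.
    # VirtualStreamLink is only returned for classes that have not yet started,
    # and links are usually generated on the day of the class.
    return class_item.get("VirtualStreamLink")
```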
### May 2020 Updates - Addressing Duplicate Client Creation in Public API
Starting the week of May 18th, 2020, all versions of the Public API will no longer allow duplicate clients to be created. Duplicate client accounts are a commonly referenced issue that will be addressed in upcoming changes to the Public API. In the future, duplicate clients will no longer present issues for your integration as we change our API responses to better serve your needs. Duplicates are defined as client records with the same first name, last name, and email. MINDBODY recommends updating integrations to handle these new error responses when using AddOrUpdateClients, AddClient or UpdateClient across Public API V5.0 through V6.0. In V5.0 and V5.1, error codes in the 300s range will be returned with a message indicating duplicate creation was attempted. In V6.0, an HTTP status code 400 and an error message will be returned. If you are a business that would like to use this update earlier than May, please contact your account manager or Contact API Support.
### Public API V6.0 POST AddArrival Updates
Cross-regional arrivals are now supported in V6.0! When used on a site that is part of a region, the following new logic will apply:
* When a client exists within the region but not at the studio where the arrival is being logged, a local client record will be automatically created.
* If the local client does not have an applicable local membership or pricing option, a membership or pricing option will be automatically used if it exists elsewhere within the region.
Developers calling POST AddArrival V6.0 must provide a staff user token with staff assigned the LaunchSignInScreen permission. Anonymous calls to AddArrival V6.0 will fail with the error message "Authorization Required".
### Public API V6.0 GET Services Updates
We have recently added some additional properties to the GET Services endpoint to help improve your purchase workflows. Changes include "SellOnline", "Membership" and "IsIntroOffer". Find out more in the documentation here.
### Past and Upcoming Bug Fixes - Public API & Webhooks
# Week of March 30th, 2020 - Tentatively Planned For
* A new endpoint GET ClientDuplicates will be available in Public API V6 to check for duplicate clients.
* Developers using UploadClientPhoto will no longer experience outdated images returned in the GET Clients response.
* Developers will receive documentation for new endpoints GET Genders and GET ClientDuplicates.
* Developers will no longer see code examples with `http`, which has been updated to `https`.
* Developers will see an updated list of error codes found in versions 5.0 & 5.1.
# Week of March 23rd, 2020
* Developers calling POST AddClient V6.0 will no longer receive a false error when attempting to add a new client to a new site with no clients.
* Developers calling POST AddAppointment and UpdateAppointment will now be informed when attempting to use an unrecognized gender preference.
* POST AddAppointment will no longer throw a 401 error when attempting to send a booking confirmation email.
* Developers will be able to add custom gender values for V6.0 POST AddClient, V6.0 POST UpdateClient, V5.x AddOrUpdateClient, and V5.1 UpdateClientCrossRegional.
* Developers will now be able to view all previous release note entries. Scroll through prior notes below.
# Week of March 16th, 2020
* In Public API V6.0 when using POST PurchaseGiftCard using the Test flag and then attempting to make a live sale will no longer error.
* Customers using the Developer Portal for Public API will see (1) a minor update to the cross-regional usage tutorial, (2) a minor correction to the Handling Errors section of the V6.0 documentation, and (3) a new button that allows a client to download the Public API Postman collection.
# Week of March 9th, 2020
* In Public API V5.x, when using AddOrUpdateClients we now require an email address when adding a new client or updating a client profile, per the fields checked under Required Fields. This applies to both consumer mode and staff mode requests.
# Week of March 2nd, 2020
* In Public API (all versions) when using GetClientVisits we fixed an issue where arrival visits were not returned in the response.
* In Public API (all versions) when using GetClientVisits we fixed an issue where duplicate appointment visits were returned in the response.
* In Public API V6.0 when using AddAppointment or UpdateAppointment and "GenderPreference" parameter is left out or set to "None", the preference will be set to No Preference.
### Recent Webhooks Updates
We have recently added some additional properties to the Webhooks client, classRosterBooking, class and classSchedule payloads to help improve your class booking experiences.
* The client payloads have been updated to now return you details on billing info, client photos and prior email addresses used for client records. These are great pieces of information to use when building a comprehensive profile overview for your integration.
* The classRosterBooking payloads now include item details so you can relate these back to the Services[].ProductId returned by GET Services in Public API. This is helpful if you would like to see what services a client used during their booking.
* The class and classSchedule payloads have been updated to now include details for assistants assigned to classes. In addition, the classSchedule payload has been updated to only send when a change is made to the entire schedule. If you previously used this to indicate single class instance changes, please use the class.updated webhook instead. This includes class cancellations.
### Developer Onboarding Changes - February 2020
Starting February 19, 2020 all Developer Accounts created on that date or after will be unable to use Public API V5.0 or V5.1. All developers are encouraged to use V6.0, which is MINDBODY’s latest release of the Public API. Existing accounts will not be affected by this change.
# Onboarding FAQ
* How will I know if I cannot use Public API V5.0/V5.1?
* You will know if your account does not have access if it was created on or after February 19, 2020 or receives error code 108 and message "Your developer account does not have access to this legacy version".
* I need something that is only in V5.0/V5.1, what do I do now?
* Tell Contact API Support and feedback will be collected for the product and engineering teams to review.
* Can an exception be made for my account so I can use V5.0/V5.1?
* Tell Contact API Support and feedback will be collected for the product and engineering teams to review.
* Does this mean V5.0 and/or V5.1 is being deprecated?
* This is not a deprecation announcement for V5.0 and V5.1 of the Public API. When that happens, a broad announcement will be made that is clear on timelines and changes regarding deprecation plans.
* Why are you stopping onboarding to V5.0/V5.1?
* V6.0 of the Public API has been in stable release for close to a year and based on feedback and adoption rates we feel it is time to move all new developers to the latest version where stability, security and ease of use are much greater.
* What should I do if I'm currently using V5.0/V5.1?
* We recommend all developers regularly plan to update their integrations to MINDBODY, but also recognize this can be a large change so at this time we will continue to provide support for our V5.x users.
* How can I find the V5.0/V5.1 documentation if I can no longer pick the version of documentation to view?
* Please login to your account and the option to view the documentation will be available. Alternatively, you can use this link to view after logging in.
### Addressing Duplicate Client Creation in Public API
Changes were made as of February 28, 2020 to projected error codes. Error 329 will now be 331, and error 330 will be 332. The requirement of email, first and last name on client creation and update has been removed and will use existing requirements for first and last name. Please account for these changes.
Starting the week of May 11th, 2020 all versions of the Public API will no longer allow duplicate clients to be created. Duplicate client accounts are a commonly referenced issue that will be addressed in upcoming changes to the Public API. In the future, duplicate clients will no longer present issues for your integration as we change our API responses to better serve your needs.
Duplicates are defined as client records with the same first name, last name and email.
MINDBODY recommends updating integrations to handle these new error responses when using AddOrUpdateClients, AddClient or UpdateClient across Public API V5.0 through V6.0. In V5.0 and V5.1, error codes in the 300s range will be returned with a message indicating duplicate creation was attempted. In V6.0, an HTTP status code 400 and an error message will be returned.
If you are a business that would like to use this update earlier than May, please contact your account manager or Contact API Support.
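A hedged example of handling the V6 duplicate-client rejection in Python follows; the endpoint path and headers are placeholders, and only the HTTP 400 behavior comes from the notes above.
```
import requests

ADD_CLIENT_URL = "https://api.mindbodyonline.com/public/v6/client/addclient"  # placeholder path

def add_client(payload, headers):
    resp = requests.post(ADD_CLIENT_URL, json=payload, headers=headers)
    if resp.status_code == 400:
        # In V6 a duplicate (same first name, last name and email) is rejected
        # with HTTP 400 and an error message describing the duplicate attempt.
        print("Client rejected (possible duplicate):", resp.text)
        return None
    resp.raise_for_status()
    return resp.json()
```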
### November 2019 Updates - New Membership Features, Appointment Booking Updated Webhook & Much More!
November brought us even more updates to the API Platform. Check out the changes below and see what was released in the last month.
# GET Memberships - New Endpoint
GET Memberships allows developers to retrieve membership details and map to GET ActiveClientMemberships. See the documentation here for more details.
# GET ActiveClientMemberships Public API V6.0 - Cross-Regional Lookups!
The GET ActiveClientMemberships endpoint was updated to allow developers to look up memberships cross-regionally now. No need to make multiple calls anymore. Documentation can be found here.
A new Webhook event type “appointmentBooking.updated” was just added. This webhook will provide developers with real-time updates for appointment changes that occur throughout the MINDBODY software.
# Visit Resource Updates in Public API V6.0
GET ClientVisits and POST AddClientToClass now return additional data on the service id and name used for booking.
# Cross-Regional Waitlist Now Supported in V6.0
The POST AddClientToClass endpoint now supports cross-regional waitlist bookings. Documentation can be found here.
# GET ClientServices now returns ProductId
GET ClientServices V6.0 returns the product id so that it can be related directly back to GET Services V6.0.
### October 2019 Updates
October was a busy month for our API Platform. Check out the updates below and see what was released in the last month.
# Direct Debit/ACH Support in Public API V6.0 - Additional Payment Options for Checkout!
A long requested addition to the Public API is now live. We support direct debit/ACH in the Public API and are ready for checkout. See the documentation here for more details.
# GET Sales Public API V6.0 Improvements - More Data and Service Mapping!
There is now even more data and documentation available for developers to surface about sales. Use these updates to get the barcode Id and then map the responses to Services and Products. Documentation can be found here.
# GET GiftCardBalance Public API V6.0 - Check Those Balances!
Now developers can check prepaid gift card balances as often as they would like without going through the checkout workflow. See the documentation for the new endpoint here.
### IP Whitelisting Available Now
Developers can now secure their integration with the MINDBODY Public API through the usage of IP Whitelists. This additional security measure gives developers an additional layer of protection when interacting with customer data. It is not mandatory to use, but it is highly recommended. This is documented in the Developer Portal under the Authentication section.
### The Stable Release of the Public API V6.0 is Here!
MINDBODY is happy to announce that Version 6.0 of the Public API is now stable and ready for use in your applications. Here are a few of the highlights of the new release.
# Transitioning to RESTful APIs
No more SOAP and no more parsing XML! V6.0 has REST-like endpoints that use GET and POST calls to retrieve data in an easy-to-read JSON format.
# New and Improved Documentation
The documentation has been extensively rewritten. Query parameters and response elements for all endpoints are described in detail. Many of the endpoints have helpful code examples of both requests and responses in cURL, C#, PHP, Python, and Ruby. The tutorials detail how to accomplish common workflows, like booking an appointment or getting a shopping cart total.
# Standardized Error Handling
New error messages consist of an error code and a specific description to help you quickly pinpoint the source of the problem. For example, you might receive an error code of "ClassOnlineSignUpsFull." The description for this error code reads: "The class is full. No more clients are allowed to sign up for this class online."
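As a sketch of what consuming these standardized errors could look like (the exact envelope shape is an assumption, not taken from the reference documentation):
```
def describe_error(error_body):
    """Extract a readable string from an assumed V6 error body such as
    {"Error": {"Code": "ClassOnlineSignUpsFull", "Message": "The class is full. ..."}}."""
    error = error_body.get("Error", {})
    return f'{error.get("Code", "UnknownError")}: {error.get("Message", "")}'
```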
# Support for Gift Cards
There is now an endpoint that allows a client to purchase a gift card from a business in a variety of designs. The card can be emailed to the recipient on a specific day, and a card title and a personal message can be added.
If you don't yet have a developer account, check out our FAQs, or go to our Getting Started section for step-by-step help to:
* create a MINDBODY developer account
* use the MINDBODY sandbox for development and testing of your application
* request approval from MINDBODY to take your application live
* once you're approved, follow the activation process to take your application live
### Announcing the Stable Release of the Webhooks API!
As of April 1, 2019, MINDBODY proudly announces that our Webhooks API V1.0 release has been promoted to a Stable Release, ready for use in your applications. You can use this API to create an application that shows near real-time updates to business data without having to long-poll the Public API. This API is MINDBODY's implementation of an HTTP push API. The MINDBODY Webhooks API notifies you when specific events that you subscribe to occur in the data of a business that uses MINDBODY. For example, using the Webhooks API, you can create a subscription that notifies you as soon as a new client has been created so that your application can send the client a welcome email. Another typical use is to create a subscription that notifies you when a client has booked a class or an appointment, so that you can reduce your call volume to the Public API.
# Release Highlights
We have made some changes to the Webhooks API during its beta period. Here are a few of the most important changes now in the stable release:
* API Key Updates - You can now use the same API key mechanism for the Public API and for the Webhooks API. Create a key for each of your integrations on the API Credentials page in the Developers Portal and manage all your keys in one place.
* Authorization Updates - Now that we have implemented API keys for Webhooks, the Authorization header is no longer required in calls.
* PATCH Subscription Endpoint - We have added a new endpoint that lets you activate and reactivate the subscriptions that are associated with your developer portal account. You no longer need to contact MINDBODY to have this done for you. Note that you can only activate or update one subscription at a time using this endpoint. A rough sketch of such a call follows this list.
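The sketch below shows activating a subscription via the PATCH Subscription endpoint; the base URL, header name, and body field are assumptions drawn from these notes rather than the reference documentation.
```
import requests

# Assumed base URL and field names; consult the Webhooks API reference for the
# authoritative values before using this in a real integration.
SUBSCRIPTIONS_URL = "https://mb-api.mindbodyonline.com/push/api/v1/subscriptions"

def activate_subscription(subscription_id, api_key):
    # The PATCH Subscription endpoint activates or reactivates one subscription
    # at a time, without needing to contact MINDBODY.
    resp = requests.patch(
        f"{SUBSCRIPTIONS_URL}/{subscription_id}",
        json={"Status": "Active"},
        headers={"Api-Key": api_key},
    )
    resp.raise_for_status()
    return resp.json()
```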
### Public API V6.0 now in Open Beta!
Check out the latest version of the Public API and explore all the new updates we've been making to the platform!
# Highlights
* Transitioning to RESTful APIs
* Standardized Error Handling
* Detailed Documentation
* Gift Card Support
* And much more!
Disclaimer: You should expect releases that are labeled beta to have frequent, unannounced changes, so we recommend that you do not use beta features in your production offerings.
Please send all feedback to Contact API Support
### Watch the latest MINDBODY Developer Videos
# BOLD 2018: <NAME> on FitMetrix
At the Partner Track at MINDBODY BOLD 2018, <NAME> (FitMetrix) explains how to create a fantastic API integration. Watch it here.
# API Keys - MINDBODY
How to use API keys on the MINDBODY Public API. Watch it here.
### API Updates 6/14/2018
# GDPR
The v5.0 and v5.1 GetClients endpoint has been updated to return email opt-in statuses to meet new GDPR requirements. The GetClients output has not changed; it will still return:
* PromotionalEmailOptIn for newsletters and marketing campaigns
* EmailOptIn for scheduling notifications and reminders
Please note GetClients has not been updated to include SMS opt-in statuses; this work will be completed during a later phase. We recommend calling GetClients to refresh your data-caching layer if you have not done so since June 13, 2018.
# Deprecation of TLS 1.0 and 1.1 Support
On June 18th, 2018, MINDBODY will no longer accept connections made over TLS 1.0 or TLS 1.1. MINDBODY will only accept TLS 1.2. You may have to change the code in your API integration to accommodate this security change. This update is mandatory, as MINDBODY must be PCI compliant.
The Payment Card Industry (PCI) Data Security Standard has stipulated that the TLS 1.0 and 1.1 encryption protocols can no longer be used for secure communications; only TLS 1.2 or higher is acceptable. The PCI DSS standards can be read in full here.
To ensure compliance with the PCI DSS standards and security best practices, MINDBODY will disable support for TLS 1.0 and 1.1 on all our web-facing systems on Monday, June 18th, 2018. After June 18, 2018, MINDBODY Partners who attempt to access the MINDBODY platform with TLS 1.0 or 1.1 will have their connection refused; only TLS 1.2 will be accepted.
If your requests to MINDBODY use TLS 1.0 or 1.1 your automation will need to be updated to exclusively use TLS 1.2. This may involve updating your programming language or tool set to a more modern version, or changing configurations to specifically enable TLS 1.2 and disabling TLS 1.0 and 1.1. If you do not make this change you will not be able to access the MINDBODY API platform.
If your requests to MINDBODY already use TLS 1.2, no change is needed.
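For integrations on older runtimes, one way to verify or enforce the new minimum in Python is to mount a transport adapter that refuses anything below TLS 1.2 (modern Python and OpenSSL builds already negotiate TLS 1.2+ by default, so this sketch is mostly illustrative):
```
import ssl
import requests
from requests.adapters import HTTPAdapter

class TLS12Adapter(HTTPAdapter):
    """Transport adapter that refuses to negotiate anything older than TLS 1.2."""
    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2
        kwargs["ssl_context"] = ctx
        super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://", TLS12Adapter())
# All requests made through `session` now use TLS 1.2 or newer.
```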
If you have questions, please Contact API Support.
Thank you for working with us to keep our shared customers safe.
The MINDBODY Team
### API Updates 5/30/2018
SOAP API Version 5.1 Promotion to Stable Release
We are pleased to announce that Version 5.1 of our SOAP API has been promoted to Stable Release. We now recommend developers use v5.1 in their production applications.
### API Updates 1/18/2018
Beginning 1 January 2018, any business subject to French taxation is also subject to the new French NF525 law that requires businesses that record payments via a cash drawer, accounting or management software, to use a software which meets certain conditions of sales data inalterability, security, storage and archiving.
To allow for compliance with the NF525 law, certain functionality related to editing of sales has been removed from UpdateClientServices and UpdateSaleDate in version 5.0 and 5.1 for MINDBODY businesses located in French territories. Developers will now be returned the following response:
```
<Status>InternalException</Status>
<ErrorCode>1100</ErrorCode>
<Message>Endpoint not accessible for the subscriber's country</Message>
```
If you have any questions, please contact us at <EMAIL>.
### API Updates 9/26/2017
SOAP API Version 5.1 (beta) Released
We are pleased to announce that Version 5.1 of our SOAP API has been released. This update contains support for many Cross-Regional use cases, as well as Contracts/Autopays. As a beta release, feel free to begin starting with your dev/test efforts. We are actively collecting feedback at Contact API Support. However, we do not recommend using these new features for production use until we make a GA release.
Getting Started with Version 5.1 (beta)
Switching to version 5.1 is simple. Wherever you are using /api/0_5 in your API calls, use /api/0_5_1 instead. This is relevant for accessing all routes (WSDL, SOAPAction, etc.). Example: https://api.mindbodyonline.com/0_5/ClientService.asmx?wsdl becomes https://api.mindbodyonline.com/0_5_1/ClientService.asmx?wsdl
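A tiny helper, shown only to make the substitution concrete, can build the per-service WSDL URLs for either version:
```
V5_0_BASE = "https://api.mindbodyonline.com/0_5"
V5_1_BASE = "https://api.mindbodyonline.com/0_5_1"

def wsdl_url(service_name, base=V5_1_BASE):
    # wsdl_url("ClientService") -> "https://api.mindbodyonline.com/0_5_1/ClientService.asmx?wsdl"
    return f"{base}/{service_name}.asmx?wsdl"
```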
API Support for Contracts/Autopays with Credit Cards
* The Sale Service now has two new endpoints: GetContracts and PurchaseContracts.
* GetClientContracts, part of the Client Service, has been updated to include additional information about clients' previously purchased contracts.
Cross-Regional Class Bookings
* The Class Service has been updated with a new endpoint, AddClientToClass, which now supports Cross-Regional class bookings.
* When used on a Cross-Regional class booking, RemoveClientsFromClasses has been updated to return sessions back to the series (ClientService) that was used to book into a class.
Cross-Regional Client Lookups
The following endpoints now allow for Cross-Regional lookups, simply by passing in a CrossRegionalLookup=true request parameter (a rough sketch follows the list below).
* GetClientServices
* GetClientSchedule
* GetClientVisits
* GetClientContracts
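A rough sketch of such a lookup using the zeep SOAP client is shown below; everything other than the CrossRegionalLookup flag (the credentials structure, client ID, and operation arguments) is a placeholder and must match the actual v5.1 WSDL.
```
from zeep import Client

# Placeholder credentials and request fields; only CrossRegionalLookup=true is
# taken from the notes above.
client = Client("https://api.mindbodyonline.com/0_5_1/ClientService.asmx?wsdl")
request = {
    "SourceCredentials": {"SourceName": "<source>", "Password": "<password>",
                          "SiteIDs": {"int": [12345]}},
    "ClientID": "100000001",
    "CrossRegionalLookup": True,  # enables the cross-regional lookup behavior
}
response = client.service.GetClientServices(Request=request)
```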
Cross-Regional Client Updates
* Do you need to update client information (based on RSSID / ClientID) across all of the sites within a Cross-Regional Organization? Now you can using UpdateClientCrossRegional
Updates to Additional Endpoints
* GetSales now returns the ClientID associated with a sale, as well as information about the items associated with the sale.
* GetClients now allows you to pass in a LastModifiedDate which, when set, will result in only clients who have been modified on or after the specified date being returned in the response.
* GetClasses now allows you to pass in a LastModifiedDate which, when set, will result in only classes that have been modified on or after the specified date being returned in the response.
* GetClassVisits now allows you to pass in a LastModifiedDate which, when set, will result in only visits that have been modified on or after the specified date being returned in the response.
### API Updates 6/19/2017
Upcoming Changes to Version 5.0
GetClients
Summary: The response for GetClients is being updated to more closely resemble the information that a staff member can see within our core software. Currently, when developers execute a GetClients request with Fields.Clients.ClientCreditCard and the credentials of a staff member that has the “View client billing information” permission enabled and the “View client profile” permission disabled, the credit card property is returned in the response. However, this same staff member is not able to view the client's stored credit card through core. Updated Behavior:
* If “View client billing information” is enabled and “View client profile” is disabled, then the response will not contain credit card response properties.
* If “View client billing information” is disabled and “View client profile” is disabled, then the response will not contain credit card response properties.
* If “View client billing information” is enabled and “View client profile” is enabled, then the response will contain credit card response properties.
AddOrUpdateAppointments
Current Behavior: When developers execute an AddOrUpdateAppointments request to update an existing appointment using staff credentials that do NOT have the “Modify appointments” permissions enabled, the Response.ErrorCode is set to ‘200’ with a Response.Status of ‘Success’. The message within the <Appointments> response property correctly displays “Permission error - User does not have permission to edit appointment.”
Updated Behavior: When user executes an AddOrUpdateAppointments request to update an existing appointment using staff credentials that do NOT have the “Modify appointments” permissions enabled, the Response.ErrorCode is set to ‘201’ with a Response.Status of ‘FailedAction’. The message within the <Appointments> response property correctly displays “Permission error - User does not have permission to edit appointment.”
GetActiveClientMemberships
Current Behavior: This method only returns the most recent membership, even when the client has more than one active membership. Updated Behavior: This method will return all of a client's active memberships, not just the most recent one.
GetStaffAppointments
Current Behavior: Appointments scheduled on "All/Business Closed" holidays are returned in the GetStaffAppointments response. However, our core software does not allow staff to view these appointments in the appointment schedule screen and instead displays the holiday message. Updated Behavior: The GetStaffAppointments response will not display any appointments that are scheduled on "All/Business Closed" holidays.
### API Updates 5/26/2017
* AddClientsToClasses
* Developers were experiencing an edge case where users could be added to classes even if the classes were full. This has been resolved.
* AddOrUpdateAppointments
* The behavior of this method has been fixed to allow for the late cancelling of appointment requests. This error message will be replaced with a “200 Success: Appointment.Action.Updated” status in the response.
* Some developers were reporting issues when updating the StartDateTime of existing appointments. These issues have been resolved.
* Scheduling restrictions for businesses that use the “Scheduling Suspension” feature have been updated to be consistent with how our core software works.
* Previous Behavior: When a user executes an AddOrUpdateAppointments request to update an existing appointment using staff credentials that do NOT have the “Modify appointments” permission enabled, the Response.ErrorCode is set to ‘200’ with a Response.Status of ‘Success’. The message within the <Appointments> response property correctly displays “Permission error - User does not have permission to edit appointment.” Updated Behavior: The same request now returns a Response.ErrorCode of ‘201’ with a Response.Status of ‘FailedAction’; the message within the <Appointments> response property still correctly displays “Permission error - User does not have permission to edit appointment.”
### API Updates 5-25-2017
* A bug in our latest update of this method was causing classes scheduled on holidays / closed days to be returned in the response of Get Classes. With a release happening on 5/25/2017, this issue will be resolved and those classes will no longer be included in the response object.
* When a user passes in a <PageSize> value of ‘0’, the GetClasses response will contain all classes within the supplied request parameter range.
* GetClasses will now return valid resource information for each class returned in the response. Please note that resource information will only be returned when the Classes.Resource field request parameter is included in the GetClasses request.
### API Updates 4-19-2017
SOAP v.5 – ClassService.AddClientsToEnrollments
SOAP v.5 – AppointmentService.AddOrUpdateAppointments
SOAP v.5 – SalesService.CheckoutShoppingCart
* Method updated to prevent deactivated pricing options from being purchased.
SOAP v.5 - ClassService.GetClasses
* Method has been optimized to reduce network strain and improve processing times. As part of this work, the following changes have been introduced:
* Update to return classes with service categories set to “inactive”.
* Update to return deactivated resources still assigned to active classes.
* Update to return ‘Substitute=False’ when assigned class has been cancelled.
* Update to return staffID of originally scheduled staff member for cancelled classes. This is a change from previous behavior of staffID being returned as “-1”.
* Method no longer requires “Classes.Resource” to be passed in request to return resources/room information. Instead the ‘Resource’ object will now be returned by default for requests using “XMLDetail=Full”.
* Update to no longer return an error for requests including an inactive ‘LocationID’.
* Fix to resolve issue with the following error being thrown when requesting client relationships: [Incorrect syntax near ')'.]
SOAP v.5 - ClassService.RemoveClientsFromClasses
SOAP v.5 – ClassService.UpdateClientVisits
# Mindbody API Terms of Use
## Mindbody API Terms of Use
Last Updated: September 30, 2023
Please read these terms of use carefully before using an API offered by Mindbody, Inc. (“Mindbody”).
By accessing or using any application programming interfaces offered by Mindbody (collectively, the "API" or "Mindbody API"), you and, if applicable, the company you represent (collectively, "you" or "your") agree to be bound by these API Terms of Use (the "Agreement"). This Agreement is a legal contract between you and Mindbody. If you do not have the authority to enter this Agreement, or if you do not agree with the Agreement, you may not access or use the API.
We may revise and update this Agreement from time to time in our sole discretion. All changes are effective immediately when we post them and apply to all access to and use of the API thereafter. Your continued use of the API following the posting of a revised Agreement means that you accept and agree to the changes. You are expected to check this page from time to time so you are aware of any changes, as they are binding on you.
This Agreement incorporates the following documents (as may be updated from time to time) by reference:
* Mindbody Terms of Service
* Mindbody Security Policy
* Mindbody Branding Requirements
* Mindbody Professional Services Agreement
* Mindbody Privacy Policy
### 1. Definitions
"Application" means any application that you develop using the Mindbody API to use, search, display, upload, and/or modify the Mindbody Content.
"Mindbody Content" means all content, data, and other information made available on the Mindbody websites or software ("Mindbody Website") and the consumer-facing downloadable mobile app made available by Mindbody and known as the "Mindbody App," which allows consumers to find, book and pay for classes and other services offered by participating Subscribers.
"Mindbody Data" means any data or information Mindbody obtains or accesses from its customers and/or end users, including any Personal Data.
"Personal Data" means any personal data or personal information (or analogous term) as defined under applicable privacy and data protection laws.
"Service" means the Mindbody services, including, but not limited to, online business management software services designed specifically for businesses in the wellness industry, made available through the Mindbody Website and Mindbody App.
"Subscriber" is defined in the Mindbody Privacy Policy.
### 2. License
Subject to the terms and conditions of this Agreement, Mindbody grants you a revocable, limited, non-exclusive, non-sublicensable, non-transferable license to access and use the API solely for the purpose of developing, testing, displaying, and distributing your Application. Mindbody may revoke this license at any time for any reason. You will not, and will not permit any person, directly or indirectly, to reverse engineer, disassemble, reconstruct, decompile, translate, modify, copy, rent, modify, or alter, other than as explicitly permitted hereunder, create derivative works of the API or any other portion of the Service.
### 3. Modifications
Mindbody reserves the right to modify the Service or the API (or any part thereof) at any time in its sole discretion. Upon release of any new versions of the API, Mindbody reserves the right to require you to obtain and use the most recent version of the API in order to obtain functionality of your Application with the Service.
### 4. Support
Mindbody may, but is under no obligation to, provide basic technical support in connection with your use of the API. Any such support will be provided via web forums, FAQs, or other internet-based documentation made available to authorized developers in Mindbody’s sole discretion.
### 5. Application Guidelines
You may develop, display or distribute Applications that interact with the API. You agree that you are solely responsible for any Application that you develop, and that any such Application must comply with Mindbody Branding Requirements, where applicable.
### 6. API Call Limitations
Mindbody may limit the number of API calls you are permitted to make during any given period. Mindbody will determine call limits based on various factors, including the ways your Applications may be used or the anticipated volume of use associated with your Applications. If you exceed the call limits established by Mindbody, we reserve the right to charge you for excess API calls, in accordance with Section 10, or to terminate your access to the API, in accordance with Section 11. In no event will unused API calls roll over to the next day or month, as applicable.
### 7. Ownership
You acknowledge and agree that the Service, the Mindbody Content, including Mindbody’s trademarks and logos, and the API are protected by applicable intellectual property laws and treaties (whether those rights happen to be registered or not, and wherever in the world those rights may exist). As between you and Mindbody, the Service, the Mindbody Content, including Mindbody’s trademarks and logos, and the API, together with any and all intellectual property rights contained in the foregoing, are and will at all times remain the sole and exclusive property of Mindbody. You agree that at no time during or after the termination of this Agreement will you attempt to register any trademarks (including domain names) that are derived from or confusingly similar to those of Mindbody, or will you buy or otherwise arrange to use any such domains to redirect internet content to your site. All uses by you of Mindbody’s logos or trademarks shall inure to the sole benefit of Mindbody.
### 8. Non-Permitted Purposes; API Restrictions
You are responsible for your own conduct, and the conduct of any third party accessing the API on your behalf, while using the API and for any consequences thereof. You will use the API only for purposes that are legal, proper and in accordance with this Agreement and any applicable policies or guidelines provided by Mindbody, as they may be amended from time to time. You may not share your access credentials with any Mindbody competitor or otherwise enable a Mindbody competitor to have access. In addition to the other restrictions contained herein, you agree that when using the API, you will not do the following, attempt to do the following, or permit your end users or other third parties to do the following:
* Disparage Mindbody or knowingly tarnish the name, reputation, image or goodwill of Mindbody in connection with your Application or use of the API;
* Modify, obscure, circumvent, interfere with, disrupt, or disable any element of the API or its access control features;
* Extract, provide or otherwise use any data elements from the Mindbody Data to enhance the data files of third parties;
* Require end users to create a source name for a commercial integration or provide their log in credentials to a third-party developer;
* Use the API in a product or service that competes with products or services offered by Mindbody, including the Service;
* Attempt to or circumvent any security measures or technical limitations of the API or the Service;
* Use the API in any manner or for any purpose that violates any law or regulation, any right of any person, including but not limited to the intellectual property rights of such person, or any privacy and data protection laws, or to engage in activities that would violate any fiduciary relationship, any applicable local, state, federal, or international law, or any regulations having the force of law, or which otherwise may be harmful, in Mindbody’s sole discretion, to Mindbody, its providers, or Subscribers or end users of the Service;
* Sell, lease, or sublicense the API or access thereto;
* Use the API in a manner that detrimentally affects the stability of Mindbody’s servers or the behavior of other applications using the API;
* Create or disclose metrics about, or perform any statistical analysis of, the API or the Service;
* Use the API on behalf of any third party;
* Make API calls exceeding a reasonable amount per day, as determined in Mindbody’s sole discretion and in accordance with this Agreement;
* Frame, crawl, screen scrape, extract, or data mine Mindbody Content or Mindbody Data;
* Use any robot, spider, site search/retrieval application, or other device to retrieve or index any portion of the Service or collect information about Subscribers or users of the Service for any unauthorized purpose;
* Use the API to aggregate, consolidate or otherwise arrange, display or make available Mindbody Content or Mindbody Data in combination with any third party or any Mindbody competitors’ content or data, for any commercial purpose or in any manner that Mindbody determines could diminish the value or integrity of its business or brand;
* Use the API in an Application containing any of the following content: adult content; pyramid schemes, chain letters or disruptive commercial messages or advertisements; infringing or obscene content; content promoting or instructing about illegal activities or promoting physical harm or injury against any group or individual; content infringing any patent, trademark, copyright, trade secret or other proprietary right of any party; content defaming, abusing, harassing, stalking, threatening or violating any rights of privacy and/or publicity; content disparaging of Mindbody or its licensors, licensees, affiliates, or partners; or any other inappropriate or unlawful content;
* Transmit any viruses, worms, defects, Trojan horses, or other disabling code, via the API or otherwise, to the Service or the computers or networks used by Mindbody, users or Subscribers of the Service or any other third parties;
* Access Mindbody Data without the authorization of the Subscriber or use Mindbody Data in any way beyond the purpose for which the Subscriber specifically provided you access under the agreement between you and the Subscriber. For the avoidance of doubt, this restriction is in addition to any other restrictions you agreed to with any applicable Subscriber to the Service;
* Cache (in excess of 48 hours), collect, compile, store, transfer or utilize Mindbody Data or any other data derived from Mindbody, the Service or Mindbody's computer system(s) or database(s), including but not limited to cardholder data, customer addresses, passwords or any other Personal Data about any end user; or
* Leverage or otherwise utilize Mindbody branded search terms for your own purposes.
### 9. Service Providers
You may work with third party service providers as necessary to facilitate your performance and obligations under this Agreement only if you require any such service provider to be bound by conditions and restrictions at least as protective of Mindbody and its Subscribers and users as set forth in this Agreement. You acknowledge and agree that you shall be fully responsible for any act or omission by any service provider you use to facilitate your performance or obligations hereunder. Any such act or omission that amounts to a breach of this Agreement will be deemed a breach by you.
### 10. Fees and Payments
Mindbody calculates and bills its fees and charges on a monthly basis. Commencing 30 days from the date you receive access to Subscriber data ("Effective Date") and continuing on the same day of the month as the Effective Date for each calendar month thereafter until the termination of this Agreement, you shall pay Mindbody any fees charged under this Agreement, as more fully described at https://developers.mindbodyonline.com, as may be amended by Mindbody, in its sole discretion, from time to time. Changes to the fees are effective 30 days after being posted at the above link. Usage fees, if any, will be invoiced on a monthly basis for activity from the previous calendar month. In addition to the API fees, you will be responsible for all other fees associated with use of any Mindbody API. All fees made by you under this Agreement will exclude, and you will pay, any taxes associated with such fees, your Application, or this Agreement.
### 11. Right to Terminate
* Termination, Suspension, or Discontinuance. Mindbody reserves the right to suspend or terminate your API access at any time if: (1) we believe you have violated this Agreement (including the documents incorporated by reference) or, in our sole discretion, if we believe the availability of the API in your Application is not in our or our users' best interests; or (2) otherwise for any reason or no reason at all, without liability for such suspension or termination. We may also impose limits on certain features and services or restrict your access to some or all of the API or the content they provide. Such change, suspension or termination of the API may cause your existing services using the API to stop functioning properly. All of our rights herein may be exercised without prior notice or liability to you.
* Your Termination. You may terminate this Agreement by (1) providing 90 days’ written notice to Mindbody of your intention to terminate your use of the API; (2) ceasing use of the API; and (3) deleting your access credentials.
* Effect of Termination. Upon any termination of this Agreement, you will promptly (1) delete and remove all calls to the API from all web pages, scripts, widgets, applications, and other software in your possession or under your control; (2) destroy and remove any and all copies of the API from all computers, hard drives, networks and other storage media; and (3) upon request, certify in writing to Mindbody that such actions have been taken.
### 12. Your Use of Third Party Services
We may make available third party products or services, including, for example, third party applications, implementation and other consulting services ("Third Party Services"). Any use by you of such Third Party Services, and any exchange of data, including Personal Data, between you and the provider of such Third Party Services, is solely between you and the provider of such Third Party Services. Your access and use of the Third Party Services may also be subject to additional terms and conditions, privacy policies, or other agreements with such third party, and you may be required to authenticate to or create separate accounts to use Third Party Services on the websites or via the technology platforms of their respective providers. Use of Third Party Services is at your own risk and Mindbody disclaims all liability related thereto. Mindbody does not warrant or support Third Party Services, whether or not they are designated as being "certified" or otherwise.
### 13. Disclaimer of Any Warranty
The API, the Service and any and all Mindbody Content and Mindbody Data are provided on an “as is” basis with all faults. To the maximum extent permitted by applicable law, Mindbody and its suppliers disclaim any and all representations and warranties relating to the API, the Service, Mindbody Content, Mindbody Data, and any other services provided by Mindbody, whether express, implied or statutory, including any warranties of merchantability, fitness for a particular purpose, data accuracy, title, non-infringement, non-interference and quiet enjoyment. Mindbody disclaims any warranty that your use of the API will be uninterrupted, error free, secure, timely, complete, reliable, or current. For the avoidance of doubt, you acknowledge and agree that this Agreement does not entitle you to any support for the API. No advice or information, whether oral or in writing, obtained by you from us will create any warranty not expressly stated in this Agreement. All disclaimers of any kind (including in this section and elsewhere in this Agreement) are made on behalf of both Mindbody and its affiliates and their respective shareholders, directors, officers, employees, affiliates, agents, representatives, contractors, licensors, suppliers and service providers (the “Mindbody Parties”).
### 14. Limitation of Liability
You expressly acknowledge and agree that Mindbody shall not, under any circumstances, be liable to you for any indirect, incidental, consequential, special, exemplary, or punitive damages arising out of or in connection with use of the Mindbody API, the Service, the Mindbody Content or the Mindbody Data, including but not limited to, lost profits, goodwill, cost or procurement of substitute goods or services, loss of use, data or other intangible losses whether based on breach of contract, breach of warranty, tort (including negligence, product liability or otherwise), or any other pecuniary loss, arising out of, or in any way connected with the Service or Third Party Services, including but not limited to the use or inability to use the API, any interruption, inaccuracy, error or omission, whether or not Mindbody has been advised of the possibility of such damages. Under no circumstances shall Mindbody be liable to you for any amount.
### 15. Release and Waiver
To the maximum extent permitted by applicable law, you hereby release and waive all claims against Mindbody, and its subsidiaries, affiliates, directors, officers, agents, licensors, co-branders or other partners, and employees from any and all liability for claims, damages (actual and/or consequential), costs and expenses (including litigation costs and attorneys’ fees) of every kind and nature, arising from or in any way related to your use of the Mindbody API. If you are a California resident, you expressly waive your rights under California Civil Code 1542, which states, "A general release does not extend to claims that the creditor or releasing party does not know or suspect to exist in his or her favor at the time of executing the release and that, if known by him or her, would have materially affected his or her settlement with the debtor or released party." You understand that any fact relating to any matter covered by this release may be found to be other than now believed to be true and you accept and assume the risk of such possible differences in fact. In addition, you expressly waive and relinquish any and all rights and benefits which you may have under any other state or federal statute or common law principle of similar effect, to the fullest extent permitted by law.
### 16. Indemnification
You shall indemnify, defend and hold harmless the Mindbody Parties, successors and assigns from and against any and all charges, damages, and expenses (including, but not limited to, reasonable attorneys’ fees) arising out of any third party claim relating to: (1) your breach or alleged breach of this Agreement; (2) any access to or use of the API by you, an affiliate, or an end user; or (3) any actual or alleged violation by you, an affiliate or end user of the intellectual property, privacy or other rights of a third party.
### 17. Confidential Information
"Confidential Information" includes all information provided by Mindbody to you under these this Agreement, including without limitation, Subscriber data (including Personal Data), business plans and processes, and any other information which should be reasonably considered to be confidential in nature. You will not use or disclose Confidential Information other than as required to perform under this Agreement or as otherwise expressly permitted by this Agreement. The parties acknowledge that monetary damages may not be a sufficient remedy for unauthorized use or disclosure of Confidential Information and that Mindbody will be entitled (without waiving any other rights or remedies) to such injunctive or equitable relief as may be deemed proper by a court of competent jurisdiction, without obligation to post any bond. Any information provided by you to Mindbody hereunder is considered by Mindbody to be non-confidential. Mindbody has no duty, express or implied, to pay any compensation for the disclosure or use of any such information provided by you to Mindbody. You acknowledge and agree that any information you provide to Mindbody is solely considered a business relationship under this Agreement and you have no expectation of payment.
### 18. Publicity
You may promote your Application, including talking to traditional and online media and your users about your Application, so long as you do so truthfully and without implying that your Application is created or endorsed by Mindbody. You may not issue any formal press release relating to this Agreement or your relationship with Mindbody without Mindbody’s prior written consent. Mindbody reserves the right to issue publicity and promotional materials mentioning and/or describing your Application without your consent.
### 19. Privacy Policy
By using our API, you are indicating that you’ve read the Mindbody Privacy Policy and agree to its terms. Mindbody may use information in accordance with the Mindbody Privacy Policy. Without limitation, you acknowledge and agree that Mindbody may process your data, including Personal Data of your representatives, for the purpose of performing the Agreement and providing the API and related functions, such as billing and support, as well as to send direct marketing communications to your representatives, data science, aggregation or anonymization, product or service improvement and reporting, and other purposes set out in the Mindbody Privacy Policy. You represent and warrant that you are authorized to process your data and make such data available to Mindbody or its customers for uses as set out in this Agreement and the Mindbody Privacy Policy, including through appropriate notice, disclosures, consent and by your referring individuals to our Privacy Policy (notwithstanding Mindbody’s ability and right, to which you agree, to request consent, and provide notice and its Privacy Policy separately to individuals).
Unless specifically agreed in writing between you and us, in relation to data, including Personal Data, that is subject to European Economic Area, United Kingdom, State of California, or other similar data protection rules, you acknowledge and agree that you are not engaged by us and act as an independent controller, business or service provider (i.e., not as a processor or service provider to us), without prejudice to any use or licensing restriction or condition under the Agreement.
You are solely responsible for ensuring that any contractual arrangements required by applicable law are in place, including with API developers, including any relevant data processing and transfer agreement (including where relevant appropriate standard contractual clauses) between (1) a Subscriber and you (where you are a third party developer), or (2) a third party developer and you (where you are a Subscriber), and that we may freely deny or revoke access to the API if you do not do so.
You represent and warrant that: (1) the Subscriber consents to the disclosure and processing of Personal Data relating to it or its representatives and End Users (as defined in the Mindbody Privacy Policy); and (2) you understand and will comply with your obligations under applicable data protection laws, including the California Consumer Privacy Act, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation) and the (UK) Data Protection Act 2018.
### 20. Onward Transfer of Personal Data
As described in the Mindbody Privacy Policy, Mindbody and its subsidiaries participate in and have certified compliance with the EU-US Data Privacy Framework and the Swiss-US Data Privacy Framework as set forth by the U.S. Department of Commerce.
For transfers of Personal Data from the EEA, we rely on standard contractual clauses (based on Module 3 of the processor to processor standard contractual clauses for the transfer of personal data to third countries pursuant to Regulation (EU) 2016/679, as amended by Regulation (EU) 2021/914, as amended or replaced from time to time by a competent authority under the relevant data protection laws and published here, a copy of which can be obtained by Contacting Us, see below) (the "Standard Contractual Clauses"), which are expressly incorporated herein and take effect in the event of such transfer, and:
* Clause 7 – Docking clause of Module 3 of the Standard Contractual Clauses shall apply;
* Clause 9 – Use of subprocessors of Module 3 of the Standard Contractual Clauses Option 2 (general authorization) shall apply and the “time period” shall be 5 days in accordance with the Sub-Processor Clause in this Privacy Annex;
* Clause 11(a) – Redress of Module 3 of the EU Standard Contractual Clauses the optional language shall not apply;
* Clause 17 – Governing law of Module 3 of the Standard Contractual Clauses “Option 1” shall apply and the “Member State” shall be Ireland;
* Clause 18 – Choice of forum and jurisdiction of Module 3 of the Standard Contractual Clauses shall be Ireland;
* Annex 1 of Module 3 of the Standard Contractual Clauses shall be deemed to be pre-populated with the relevant information of the parties entering into this Agreement and (1) the data subjects, categories of data, special categories of data and processing operations and, as applicable, retention periods will be the same as described in the Agreement and the Mindbody Data Processing Schedule; (2) the frequency of the transfer is continuous; (3) the period for which the data will be retained is set forth in the Agreement, and (4) data importer may transfer data to its Sub-Processors for the duration of the Services for storage, hosting, computing or similar support services. Further, the competent supervisory authority shall be consistent with the member state specified through Clause 13.
* Annex 2 of Module 3 of the Standard Contractual Clauses shall refer to the Security Policy.

For transfers of Personal Data out of the UK, we are relying on the UK Standard Contractual Clauses (Controller to Processor) as amended by the Commissioner for the UK data protection laws and published here. If at any time the UK Government approves the Standard Contractual Clauses for use under the UK Data Protection Laws, then the relevant EU Standard Contractual Clauses shall apply (and shall replace the UK Standard Contractual Clauses), in respect of any relevant UK transfers, subject to any modifications to the Standard Contractual Clauses required by the UK data protection laws (and subject to the governing law of the UK Standard Contractual Clauses being English law and the supervisory authority being the Information Commissioner’s Office (“Commissioner”)). Appendices 1 and 2 to the Standard Contractual Clauses shall be deemed to be pre-populated with the information set forth on the Mindbody Data Processing Schedule.
You agree to (1) use and disclose any Personal Data you receive from Mindbody only for limited and specified purposes that comply with this Agreement, including Section 8, Section 20, and that are consistent with the consent provided by the individual; (2) provide an equal or greater level of protection for the Personal Data as described in the applicable Standard Contractual Clauses, including appropriate technical and organizational measures to protect Personal Data against accidental or unlawful destruction or accidental loss, alteration, unauthorized disclosure or access; (3) notify Mindbody if you can no longer meet your obligations under section (2) of this sentence and, upon notice, cease processing or take reasonable and appropriate steps to stop and remediate unauthorized processing of Personal Data; and (4) assist Mindbody in responding to: (a) lawful requests by public authorities, including to meet national security or law enforcement requirements; and (b) individuals exercising their rights under the Standard Contractual Clauses.
### 21. Records and Audit Rights
Mindbody shall have the right to audit your compliance with payments, copyright, Confidential Information, and any other restrictions and/or obligations in this Agreement.
### 22. General Terms
* Relationship of the Parties. For all purposes of this Agreement, you and Mindbody shall be and act independently and not as partners, joint ventures, agents, employees or employers of the other. You shall not have any authority to assume or create any obligation for or on behalf of Mindbody, express or implied, and you shall not attempt to bind Mindbody to any contract.
* Non-Solicitation. You agree to not solicit for hire any employee of Mindbody with whom you have, at any time, interacted for the purposes of doing business with Mindbody, and will not solicit for hire a director or officer of Mindbody who was or is employed by Mindbody while this Agreement is in place, for the duration of the Agreement or within 12 months after the termination of the Agreement. Nothing in this provision shall be construed to prevent any individual from being hired by you. If you breach this obligation to not solicit Mindbody employees and the solicited employee is hired by you, you shall pay Mindbody an amount equal to 50% of the solicited employee’s new salary as liquidated damages. The parties agree that quantifying losses arising from your solicitation is inherently difficult insofar as the solicitation may impact Mindbody’s ability to retain personnel and the resulting need to recruit, hire and train replacement talent, and further stipulate that the agreed upon sum is not a penalty, but rather a reasonable measure of damages, based upon the parties’ experience and given the nature of losses that may result from your solicitation.
* Severability. If any court of competent jurisdiction finds any provision of this Agreement unenforceable, such provision will be changed and interpreted to accomplish the objectives of such provision to the greatest extent possible under applicable law and the remainder of this Agreement will continue in full force and effect.
* Governing Law. This Agreement will be governed by the laws of the State of California, U.S.A., without giving effect to any principles that would provide for the application of the laws of any other jurisdiction. The United Nations Convention for the International Sale of Goods will not apply to this Agreement.
* Mandatory Informal Dispute Resolution. If you have any dispute with Mindbody arising out of or relating to this Agreement, you agree to notify Mindbody in writing with a brief, written description of the dispute and your contact information, and Mindbody will have 30 days from the date of receipt within which to attempt to resolve the dispute to your reasonable satisfaction. You agree that regardless of any statute or law to the contrary, any claim or cause of action you may have arising out of or related to use of the API or the Service or otherwise under this Agreement must be filed within 1 year after such claim or cause of action arose or you hereby agree to be forever barred from bringing such claim. If the parties are unable to resolve the dispute through good faith negotiations over such 30-day period under this informal process, either party may pursue resolution of the dispute in accordance with the arbitration agreement below. If we can’t resolve a dispute after following the process above, then it must be resolved through arbitration and not in court.
* Arbitration Agreement. All disputes arising out of or related to this Agreement or any aspect of the relationship between you and Mindbody, whether based in contract, tort, statute, fraud, misrepresentation or any other legal theory, that are not resolved pursuant to Section 22.5 above will be resolved through final and binding arbitration before a neutral arbitrator instead of in a court by a judge or jury, and Mindbody and you each hereby waive the right to trial by a jury. You agree that any arbitration under this Agreement will take place on an individual basis; class arbitrations and class actions are not permitted and you are agreeing to give up the ability to participate in a class action. The arbitration will be administered by the American Arbitration Association under its Commercial Arbitration Rules and Mediation Procedures (currently accessible at https://www.adr.org/sites/default/files/Commercial-Rules-Web.pdf) as amended by this Agreement. Any arbitration hearing will be held in San Luis Obispo County, California. The applicable governing law will be as set forth in Section 22.4 (provided that with respect to arbitrability issues, federal arbitration law will govern). The arbitrator’s decision will follow the terms of this Agreement and will be final and binding. The arbitrator will have authority to award temporary, interim or permanent injunctive relief or relief providing for specific performance of this Agreement, but only to the extent necessary to provide relief warranted by the individual claim before the arbitrator. The award rendered by the arbitrator may be confirmed and enforced in any court having jurisdiction thereof.
* No Waiver of Rights by Mindbody. Mindbody’s failure to exercise or enforce any right or provision of this Agreement shall not constitute a waiver of such right or provision.
* Construction. The section headings of this Agreement are for convenience only and are not to be used in interpreting this Agreement. As used in this Agreement, the word "including" means "including but not limited to."
* Entire Agreement. This Agreement constitutes the entire agreement between the parties regarding the subject hereof and supersedes all prior or contemporaneous agreements, understandings, and communication, whether written or oral, relating to such subject matter.
## 1. Introduction
Keeping your data secure, confidential, and readily accessible are Mindbody’s greatest priorities. Our industry-leading security program is based on the concept of defense in depth: securing our organization, and users’ data, at every layer.
Our security program aligns with CIS CSC 20 and NIST Cybersecurity frameworks and our CORE solution is HITRUST CSF certified. Our payments platform is PCI DSS Level 1 service provider certified. While no system can guard against every potential threat, Mindbody’s defensive line is advanced and monitored 24/7, 365 days a year by highly trained professionals.
The focus of Mindbody’s security program is to prevent unauthorized access to user data. To this end, our team of dedicated security practitioners, working in partnership with peers across the company, take exhaustive steps to identify and mitigate risks, implement best practices, and continuously develop ways to improve.
Mindbody’s security team, led by the Chief Information Security Officer (“CISO”), is responsible for the implementation and management of our security program. The CISO is supported by members of the Cybersecurity Team, who focus on Security Architecture, Product Security, Security Engineering and Operations, Detection and Response, and IT Risk and Compliance.
## 2. This Agreement
This Security Policy should be read in conjunction with the Privacy Policy.
This Security Policy contains defined terms, which are defined elsewhere in the Agreement. Please refer to these defined terms in reviewing this Security Policy.
When you access, view or use any part of the Mindbody services, you are accepting the terms and conditions of this Agreement.
If you are agreeing to this Security Policy on behalf of a corporation or other legal entity, you represent that you have the authority to bind such entity and its affiliates to the Agreement. If you do not have such authority, you must not enter into this Agreement and may not use any of our services or content.
Having considered the above preliminary matters and mutual agreements below, the Parties hereby agree as follows:
## 3. Secure by Design
Mindbody’s security team has built a robust, secure development lifecycle, which utilizes manual code reviews, static code analysis, and external/internal red team penetration testing. While we strive to catch all vulnerabilities in the design and testing phases, we realize that sometimes, mistakes happen. With this in mind, we have created a public bug reporting program to facilitate responsible disclosure of potential security vulnerabilities. All identified vulnerabilities are validated for accuracy, triaged, and tracked to resolution.
## 4. Encryption
4.1 Data in transit
All data transmitted between Mindbody users and the Mindbody services is done so using strong encryption protocols. Mindbody supports the latest recommended secure cipher suites to encrypt all traffic in transit, including the use of TLS 1.2 protocols and AES256 encryption.
4.2 Data at Rest
Credit Card and PHI (SOAP notes field) data at rest in Mindbody’s production network is encrypted using industry standards for data encryption. All encryption keys are stored in a secure server on a segregated network with limited access. Mindbody has implemented appropriate safeguards to protect the creation, storage, retrieval, and destruction of secrets such as encryption keys and service account credentials. Each Mindbody user’s data is hosted in our shared infrastructure and logically separated from other users’ data. We use a combination of storage technologies to ensure user data is protected from hardware failures and returns quickly when requested.
## 5. Network Protection
Network access to Mindbody’s production environment from open, public networks (the Internet) is restricted, with only a small number of production services accessible from the Internet. Only those network protocols essential for the delivery of Mindbody’s service to its users are open at our perimeter. Mindbody utilizes third-party Content Distribution Network (“CDN”) services for redundancy and performance of services. In addition to CDN, Distributed Denial of Service (“DDoS”) and bot protections are provided through third-party services. All secure servers are protected by firewalls, best-of-class router technology, TLS encryption, file integrity monitoring, and network intrusion detection that identifies malicious traffic and network attacks.
5.1 Endpoint Security
All workstations issued to Mindbody personnel are configured by Mindbody to comply with our standards for security. These standards require all workstations to be properly configured, updated, tracked, and monitored by Mindbody endpoint management solutions. Mindbody’s default workstation configuration encrypts data at rest, requires strong passwords, and locks when idle. Workstations run up-to-date monitoring software to report potential malware, unauthorized software, or other compromises.
5.2 Access Control
To minimize the risk of data exposure, Mindbody adheres to the principles of least privilege and role-based permissions when provisioning access. Mindbody employees and affiliates are only authorized to access data that they reasonably must handle to fulfill their current job responsibilities. All production access is reviewed internally and is part of compliance with PCI and HITRUST.
To further reduce the risk of unauthorized access to data, Mindbody employs multi-factor authentication for all privileged access to systems with highly-classified data, including our production environment, which hosts our user data.
5.3 System Monitoring, Logging, and Alerting
Mindbody monitors servers, workstations, and networks to maintain and analyze a comprehensive view of the security state of its corporate and production infrastructure. Administrative access, use of privileged commands, and system calls on all servers hosting sensitive data in the Mindbody production network are logged, analyzed, and retained in accordance with PCI and HITRUST requirements.
All networks are monitored using a Security Information and Event Management (“SIEM”) system that gathers logs from all network systems and creates alert triggers based on correlated events. In addition to the internally managed SIEM, Mindbody utilizes third-party incident detection and response services for additional monitoring and analysis.
Intrusion detection sensors throughout our internal network report events to the internal and external SIEM systems for logging and for the creation of alerts and reports.
5.4 Vendor Management
In order to provide you with our services, Mindbody may rely on other service organizations that provide their services to Mindbody (“Subservice Organizations”). Where those Subservice Organizations may impact the security of Mindbody’s production environment, we take appropriate steps to ensure our security posture is maintained by establishing agreements that require Subservice Organizations to adhere to confidentiality commitments we have made to users. Mindbody monitors the effective operation of the Subservice Organization’s safeguards by conducting reviews of all such controls before use.
5.5 Security Compliance Audits and Assessments
Mindbody is continuously monitoring, auditing, and improving the design and operating effectiveness of our security controls. These activities are regularly performed by both third-party credentialed assessors and Mindbody’s internal IT Risk and Compliance teams.
Assessment and audit results are shared with senior management, and all findings are tracked to ensure prompt remediation.
5.6 Penetration Testing
In addition to our compliance audits and assessments, Mindbody engages both internal red teams and independent external entities to conduct application-level and infrastructure-level penetration tests at least annually. The results of these tests are shared with senior management and any potential issues are triaged, prioritized, and remediated promptly.
5.7 Hosting Providers
Our hosting and cloud service providers are PCI compliant and have completed the industry standard SOC 2 certifications. This includes controls and processes such as multi-factor authentication, role-based access controls (“RBAC”), redundant utilities, and strict change management processes.
No computer system or information can ever be fully protected against every possible threat. Mindbody is committed to providing reasonable and appropriate security controls to protect our services, Websites, and information against foreseeable threats. If you have any questions about Mindbody security, you can contact us at <EMAIL>.
## 6. Expectations
6.1 User Expectations
Mindbody maintains the security of Mindbody systems; however, you, as a Mindbody user, are responsible for implementing other security practices. We recommend that you:
* Maintain an appropriate level of security (both physical and logical) for all local systems (including but not limited to networks, desktop computers, credit card readers, tablets, and mobile devices);
* Install appropriate anti-virus and anti-malware protection;
* Enable web browser auto-updates;
* Implement a robust operating system and software patching process;
* Implement secure user and password management processes, including periodic password changes and deleting user accounts promptly after staff departures;
* Replace old peripherals and hardware with more modern and secure alternatives;
* For example, replace systems running unsupported operating systems;
* For example, replace magnetic-swipe readers with EMV devices;
* Use the Mindbody systems as designed;
* Restrict access to consumer data when a team member has no business need to view it;
* Use at least TLS v1.2 when connecting to the internet; and
* Notify Mindbody immediately of any suspected compromise or unusual account activity by sending an email to <EMAIL>.
6.2 Cardholder Data Handling Expectations
Mindbody is certified as a Level 1 Service Provider under PCI DSS Version 3.2.
Any merchant who accepts Visa, MasterCard, American Express, or Discover credit cards for payment is subject to the Payment Card Industry Data Security Standard (“PCI DSS”), which outlines credit card processing merchants' responsibilities for the protection of Cardholder Data. We strongly recommend you follow the requirements of the PCI DSS when handling Cardholder Data. Please refer to the PCI DSS website for a complete list of all rules and restrictions that may apply.
At a minimum, you must:
* Maintain updated anti-virus software on all workstations engaged in credit card processing and remove any programs that the anti-virus software flags as potentially malicious;
* Restrict permission to install software on those computers to trusted users, such as the business owner and/or senior staff;
* Maintain up-to-date versions of operating systems (e.g., Microsoft Windows or Macintosh OS) and applications (e.g., Microsoft Office, Adobe Reader, Java, Google Chrome), with all security updates and patches installed;
* Ensure that every individual that logs into the services has a unique username and password that is known only by that individual;
* Only store credit card account numbers in encrypted credit card fields designed for that purpose; and
* Destroy any hard copy documents that have Cardholder Data written on them.
For a more detailed list of the requirements and responsibilities as a Payment Processing Service user, refer to our Detailed PCI Responsibility Matrix.
DISCLAIMER OF RESPONSIBILITY FOR CARDHOLDER DATA. If you use the optional Payment Processing Service to process payments, Mindbody is responsible for protecting Cardholder Data only after such Cardholder Data is encrypted and received by Mindbody’s system(s). You remain responsible for the proper handling and protection of Cardholder Data until such Cardholder Data is encrypted and received by Mindbody’s system(s).
## 7. Protection of Personal Health Information
Mindbody supports users who are subject to the requirements of the Health Insurance Portability and Accountability Act. Under HIPAA, certain information about a person’s health or health care services is classified as Protected Health Information (“PHI”). If you are subject to HIPAA and wish to use our services with PHI, it is your responsibility to request a Business Associate Agreement (“BAA”) with Mindbody. You are solely responsible for determining whether you are subject to HIPAA requirements. If you are subject to HIPAA and have not entered into a BAA, you must not use any of our digital properties in connection with PHI. You agree to indemnify, defend, and hold harmless Mindbody and its directors, employees, and affiliates against any claim relating to a failure by you to request a BAA with Mindbody.
## 8. API Credentials
Your API credentials are extremely sensitive. If you use our API, you must follow the policies below to ensure that you’re accessing user data in a safe and secure manner. Using your API credentials indicates that you agree to the terms of this Security Policy. If you or a member of your team violates this policy, you could permanently lose access to the Mindbody API.
You must:
* Ensure your API credentials are stored securely at rest and in transit;
* Share your credentials with your team only on a need-to-know basis;
* Never store credentials in source control, private or public;
* Never allow API credentials to be logged, even in development tools;
* Make sure your team understands that the credentials grant access to sensitive and confidential production data;
* Use credentials only server-to-server; and
* Never use credentials in a mobile application.
Mindbody reserves the right to delete any API credentials after 30 days of low activity (less than 100 calls).
## 9. Changes to the Security Policy
We may, in our sole discretion, make changes to this Security Policy from time to time. Any changes we make will become effective when we post a modified version of the Security Policy to Our Website, and we agree the changes will not be retroactive.
If you have any questions regarding this Security Policy, you can contact us by email at <EMAIL> or by postal mail at: MINDBODY, Inc., 651 Tank Farm Road, San Luis Obispo, California 93401, (877) 755-4279
Attention: Security Policy Questions
|
GITHUB_JerryLead_SparkInternals.zip_unzipped_PhysicalView.pdf | free_programming_book | Unknown
[Diagram (PhysicalView.pdf): physical execution view of two Spark jobs. Job 0 has a single stage, Stage 0: ParallelCollectionRDD → flatMap() → FlatMappedRDD → count(), with each partition's result sent to the Driver, which computes sum(result i). Job 1 has two stages: Stage 0 runs ParallelCollectionRDD → mapPartitionsWithContext → MapPartitionsRDD, which is shuffled into Stage 1's ShuffledRDD → mapValues → MappedValuesRDD → count(); records of type Array[(Int, Array[Byte])] are grouped into (Int, Iterable[Array[Byte]]) pairs before the final count. Legend: RDD partition, cached partition, data in the partition/result, ShuffleMapTask, ResultTask, Driver.]
|
github.com/gcla/gowid | go | Go | README
[¶](#section-readme)
---
# Terminal User Interface Widgets in Go
Gowid provides widgets and a framework for making terminal user interfaces. It's written in Go and inspired by [urwid](http://urwid.org).
Widgets out-of-the-box include:
* input components like button, checkbox and an editable text field with support for passwords
* layout components for arranging widgets in columns, rows and a grid
* structured components - a tree, an infinite list and a table
* pre-canned widgets - a progress bar, a modal dialog, a bar graph and a menu
* a VT220-compatible terminal widget, heavily cribbed from urwid 😃
All widgets support interaction with the mouse when the terminal allows.
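To give a flavour of how these pieces fit together, here is a minimal sketch (not taken from the gowid docs) that stacks a checkbox and a button in a pile. The `widgets/button` and `widgets/checkbox` import paths and their `New` constructors are assumptions based on the package layout used elsewhere in this README, so double-check them against the widget docs before copying.

```go
package main

import (
	"github.com/gcla/gowid"
	"github.com/gcla/gowid/widgets/button"   // assumed import path
	"github.com/gcla/gowid/widgets/checkbox" // assumed import path
	"github.com/gcla/gowid/widgets/pile"
	"github.com/gcla/gowid/widgets/text"
)

func main() {
	cb := checkbox.New(false)               // assumed: checkbox starting unchecked
	btn := button.New(text.New("Press me")) // assumed: a button decorating a text widget
	view := pile.New([]gowid.IContainerWidget{
		&gowid.ContainerWidget{IWidget: cb, D: gowid.RenderFixed{}},
		&gowid.ContainerWidget{IWidget: btn, D: gowid.RenderFixed{}},
	})
	app, err := gowid.NewApp(gowid.AppArgs{View: view})
	if err != nil {
		panic(err)
	}
	app.SimpleMainLoop() // quits on keys such as q or ctrl-c via the default key handling
}
```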
Gowid is built on top of the fantastic [tcell](https://github.com/gdamore/tcell) package.
There are many alternatives to gowid - see [Similar Projects](#readme-similar-projects)
The most developed gowid application is currently [termshark](https://termshark.io), a terminal UI for tshark.
### Installation
```
go get github.com/gcla/gowid/...
```
### Examples
Make sure `$GOPATH/bin` is in your PATH (or `~/go/bin` if `GOPATH` isn't set), then tab complete "gowid-" e.g.
```
gowid-fib
```
Here is a port of urwid's [palette](https://github.com/urwid/urwid/raw/master/examples/palette_test.py) example:
[![](https://drive.google.com/uc?export=view&id=1wENPAEOOdPp6eeHvpH0TvYOYnl4Gmy9Q "Click for the larger version.")](https://drive.google.com/uc?export=view&id=1wENPAEOOdPp6eeHvpH0TvYOYnl4Gmy9Q)
Here is urwid's [graph](https://github.com/urwid/urwid/raw/master/examples/graph.py) example:
[![](https://drive.google.com/uc?export=view&id=16p1NFrc3X3ReD-wz7bPXeYF8pCap3U-y "Click for the larger version.")](https://drive.google.com/uc?export=view&id=16p1NFrc3X3ReD-wz7bPXeYF8pCap3U-y)
And urwid's [fibonacci](https://github.com/urwid/urwid/raw/master/examples/fib.py) example:
[![](https://drive.google.com/uc?export=view&id=1fPVYOWt7EMUP18ZQL78OFY7IXwmeeqUO "Click for the larger version.")](https://drive.google.com/uc?export=view&id=1fPVYOWt7EMUP18ZQL78OFY7IXwmeeqUO)
A demonstration of gowid's terminal widget, a port of urwid's [terminal widget](https://github.com/urwid/urwid/raw/master/examples/terminal.py):
[![](https://drive.google.com/uc?export=view&id=1bRtgHoXcy0UESmKZK6JID8FIlkf5T7aL "Click for the larger version.")](https://drive.google.com/uc?export=view&id=1bRtgHoXcy0UESmKZK6JID8FIlkf5T7aL)
Finally, here is an animation of termshark in action:
[![](https://drive.google.com/uc?export=view&id=1vDecxjqwJrtMGJjOObL-LLvi-1pBVByt "Click for the larger version.")](https://drive.google.com/uc?export=view&id=1vDecxjqwJrtMGJjOObL-LLvi-1pBVByt)
### Hello World
This example is an attempt to mimic urwid's ["Hello World"](http://urwid.org/tutorial/index.html) example.
```
package main
import (
"github.com/gcla/gowid"
"github.com/gcla/gowid/widgets/divider"
"github.com/gcla/gowid/widgets/pile"
"github.com/gcla/gowid/widgets/styled"
"github.com/gcla/gowid/widgets/text"
"github.com/gcla/gowid/widgets/vpadding"
)
//===
func main() {
palette := gowid.Palette{
"banner": gowid.MakePaletteEntry(gowid.ColorWhite, gowid.MakeRGBColor("#60d")),
"streak": gowid.MakePaletteEntry(gowid.ColorNone, gowid.MakeRGBColor("#60a")),
"inside": gowid.MakePaletteEntry(gowid.ColorNone, gowid.MakeRGBColor("#808")),
"outside": gowid.MakePaletteEntry(gowid.ColorNone, gowid.MakeRGBColor("#a06")),
"bg": gowid.MakePaletteEntry(gowid.ColorNone, gowid.MakeRGBColor("#d06")),
}
div := divider.NewBlank()
outside := styled.New(div, gowid.MakePaletteRef("outside"))
inside := styled.New(div, gowid.MakePaletteRef("inside"))
helloworld := styled.New(
text.NewFromContentExt(
text.NewContent([]text.ContentSegment{
text.StyledContent("Hello World", gowid.MakePaletteRef("banner")),
}),
text.Options{
Align: gowid.HAlignMiddle{},
},
),
gowid.MakePaletteRef("streak"),
)
f := gowid.RenderFlow{}
view := styled.New(
vpadding.New(
pile.New([]gowid.IContainerWidget{
&gowid.ContainerWidget{IWidget: outside, D: f},
&gowid.ContainerWidget{IWidget: inside, D: f},
&gowid.ContainerWidget{IWidget: helloworld, D: f},
&gowid.ContainerWidget{IWidget: inside, D: f},
&gowid.ContainerWidget{IWidget: outside, D: f},
}),
gowid.VAlignMiddle{},
f),
gowid.MakePaletteRef("bg"),
)
app, _ := gowid.NewApp(gowid.AppArgs{
View: view,
Palette: &palette,
})
app.SimpleMainLoop()
}
```
Running the example above displays this:
[![](https://drive.google.com/uc?export=view&id=1P2kjWagHJmhtWLV0hPQti0fXKidr_WMB "Click for the larger version.")](https://drive.google.com/uc?export=view&id=1P2kjWagHJmhtWLV0hPQti0fXKidr_WMB)
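`SimpleMainLoop()` is a convenience wrapper; if you want a hook for input that no widget claims, you can drive the loop yourself with `MainLoop`. The sketch below is an untested variant of the example above: it assumes `gowid.UnhandledInputFunc` exists as a function adapter for `IUnhandledInput` (it does not appear in the index excerpt further down), and it falls back to `gowid.HandleQuitKeys` so keys such as q or ctrl-c still exit.

```go
// runWithQuitKeys behaves like app.SimpleMainLoop() but exposes a hook for
// any input event that no widget handled.
func runWithQuitKeys(app *gowid.App) {
	app.MainLoop(gowid.UnhandledInputFunc(func(app gowid.IApp, ev interface{}) bool {
		// Place custom global key handling here, then fall back to the
		// stock quit keys.
		return gowid.HandleQuitKeys(app, ev)
	}))
}
```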
### Documentation
* The beginnings of a [tutorial](https://github.com/gcla/gowid/blob/v1.4.0/docs/Tutorial.md)
* A list of most of the [widgets](https://github.com/gcla/gowid/blob/v1.4.0/docs/Widgets.md)
* Some [FAQs](https://github.com/gcla/gowid/blob/v1.4.0/docs/FAQ.md) (which I guessed at...)
* Some gowid [programming tricks](https://github.com/gcla/gowid/blob/v1.4.0/docs/Debugging.md)
### Similar Projects
Gowid is late to the TUI party. There are many options from which to choose - please read <https://appliedgo.net/tui/> for a nice summary for the Go language. Here is a selection:
* [urwid](http://urwid.org/) - one of the oldest, for those working in python
* [tview](https://github.com/rivo/tview) - active, polished, concise, lots of widgets, Go
* [termui](https://github.com/gizak/termui) - focus on graphing and dataviz, Go
* [gocui](https://github.com/jroimartin/gocui) - focus on layout, good input options, mouse support, Go
* [clui](https://github.com/VladimirMarkelov/clui) - active, many widgets, mouse support, Go
* [tui-go](https://github.com/marcusolsson/tui-go) - QT-inspired, experimental, nice examples, Go
### Dependencies
Gowid depends on these great open-source packages:
* [urwid](http://urwid.org) - not a Go-dependency, but the model for most of gowid's design
* [tcell](https://github.com/gdamore/tcell) - a cell based view for text terminals, like xterm, inspired by termbox
* [asciigraph](https://github.com/guptarohit/asciigraph) - lightweight ASCII line-graphs for Go
* [logrus](https://github.com/sirupsen/logrus) - structured pluggable logging for Go
* [testify](https://github.com/stretchr/testify) - tools for testifying that your code will behave as you intend
### Contact
* The author - <NAME> ([<EMAIL>](mailto:<EMAIL>))
### License
[![](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
Documentation
---
### Overview
Package gowid provides widgets and tools for constructing compositional terminal user interfaces.
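As a quick taste of the lower-level primitives catalogued in the index below, here is a minimal sketch (not from the package docs) that builds a Canvas directly; a Canvas is the cell buffer that widgets render into and that gowid ultimately draws to the tcell screen (see `Draw`).

```go
package main

import (
	"fmt"

	"github.com/gcla/gowid"
)

func main() {
	// An 11x3 canvas of blank cells.
	c := gowid.NewCanvasOfSize(11, 3)

	// Fill the middle row with cells built from a string
	// ("hello gowid" is 11 runes, matching the canvas width).
	c.SetLineAt(1, gowid.CellsFromString("hello gowid"))

	// Dump the canvas contents as plain text.
	fmt.Println(gowid.CanvasToString(c))
}
```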
### Index
* [Constants](#pkg-constants)
* [Variables](#pkg-variables)
* [func AddWidgetCallback(c ICallbacks, name interface{}, cb IWidgetChangedCallback)](#AddWidgetCallback)
* [func AppendBlankLines(c IAppendBlankLines, iters int)](#AppendBlankLines)
* [func CanvasToString(c ICanvas) string](#CanvasToString)
* [func ChangeFocus(w IWidget, dir Direction, wrap bool, app IApp) bool](#ChangeFocus)
* [func CopyModeUserInput(w ICopyModeWidget, ev interface{}, size IRenderSize, focus Selector, app IApp) bool](#CopyModeUserInput)
* [func Draw(canvas IDrawCanvas, mode IColorMode, screen tcell.Screen)](#Draw)
* [func FindNextSelectableFrom(w ICompositeMultipleDimensions, start int, dir Direction, wrap bool) (int, bool)](#FindNextSelectableFrom)
* [func FindNextSelectableWidget(w []IWidget, pos int, dir Direction, wrap bool) (int, bool)](#FindNextSelectableWidget)
* [func FixCanvasHeight(c ICanvas, size IRenderSize)](#FixCanvasHeight)
* [func Focus(w IWidget) int](#Focus)
* [func FocusPath(w IWidget) []interface{}](#FocusPath)
* [func HandleQuitKeys(app IApp, event interface{}) bool](#HandleQuitKeys)
* [func KeysEqual(k1, k2 IKey) bool](#KeysEqual)
* [func MakeCanvasRightSize(c IRightSizeCanvas, size IRenderSize)](#MakeCanvasRightSize)
* [func MakeCellStyle(fg TCellColor, bg TCellColor, attr StyleAttrs) tcell.Style](#MakeCellStyle)
* [func PanicIfCanvasNotRightSize(c IRenderBox, size IRenderSize)](#PanicIfCanvasNotRightSize)
* [func PrefPosition(curw interface{}) gwutil.IntOption](#PrefPosition)
* [func QuitFn(app IApp, widget IWidget)](#QuitFn)
* [func RangeOverCanvas(c IRangeOverCanvas, f ICellProcessor)](#RangeOverCanvas)
* [func RemoveWidgetCallback(c ICallbacks, name interface{}, id IIdentity)](#RemoveWidgetCallback)
* [func RenderRoot(w IWidget, t *App)](#RenderRoot)
* [func RunWidgetCallbacks(c ICallbacks, name interface{}, app IApp, data ...interface{})](#RunWidgetCallbacks)
* [func SelectableIfAnySubWidgetsAre(w ICompositeMultipleDimensions) bool](#SelectableIfAnySubWidgetsAre)
* [func SetPrefPosition(curw interface{}, prefPos int, app IApp) bool](#SetPrefPosition)
* [func TranslatedMouseEvent(ev interface{}, x, y int) interface{}](#TranslatedMouseEvent)
* [func UserInputIfSelectable(w IWidget, ev interface{}, size IRenderSize, focus Selector, app IApp) bool](#UserInputIfSelectable)
* [func WriteToCanvas(c IRangeOverCanvas, p []byte) (n int, err error)](#WriteToCanvas)
* [type AddressProvidesID](#AddressProvidesID)
* + [func (a *AddressProvidesID) ID() interface{}](#AddressProvidesID.ID)
* [type App](#App)
* + [func NewApp(args AppArgs) (rapp *App, rerr error)](#NewApp)
* + [func (a *App) ActivateScreen() error](#App.ActivateScreen)
+ [func (a *App) Clips() []ICopyResult](#App.Clips)
+ [func (a *App) Close()](#App.Close)
+ [func (a *App) CopyLevel(lvl ...int) int](#App.CopyLevel)
+ [func (a *App) CopyModeClaimedAt(lvl ...int) int](#App.CopyModeClaimedAt)
+ [func (a *App) CopyModeClaimedBy(id ...IIdentity) IIdentity](#App.CopyModeClaimedBy)
+ [func (a *App) DeactivateScreen()](#App.DeactivateScreen)
+ [func (a *App) GetColorMode() ColorMode](#App.GetColorMode)
+ [func (a *App) GetLastMouseState() MouseState](#App.GetLastMouseState)
+ [func (a *App) GetMouseState() MouseState](#App.GetMouseState)
+ [func (a *App) GetPalette() IPalette](#App.GetPalette)
+ [func (a *App) GetScreen() tcell.Screen](#App.GetScreen)
+ [func (a *App) HandleTCellEvent(ev interface{}, unhandled IUnhandledInput)](#App.HandleTCellEvent)
+ [func (a *App) InCopyMode(on ...bool) bool](#App.InCopyMode)
+ [func (a *App) MainLoop(unhandled IUnhandledInput)](#App.MainLoop)
+ [func (a *App) Quit()](#App.Quit)
+ [func (a *App) Redraw()](#App.Redraw)
+ [func (a *App) RedrawTerminal()](#App.RedrawTerminal)
+ [func (a *App) RefreshCopyMode()](#App.RefreshCopyMode)
+ [func (a *App) RegisterMenu(menu IMenuCompatible)](#App.RegisterMenu)
+ [func (a *App) Run(f IAfterRenderEvent) error](#App.Run)
+ [func (a *App) RunThenRenderEvent(ev IAfterRenderEvent)](#App.RunThenRenderEvent)
+ [func (a *App) Runner() *AppRunner](#App.Runner)
+ [func (a *App) SetColorMode(mode ColorMode)](#App.SetColorMode)
+ [func (a *App) SetPalette(palette IPalette)](#App.SetPalette)
+ [func (a *App) SetSubWidget(widget IWidget, app IApp)](#App.SetSubWidget)
+ [func (a *App) SimpleMainLoop()](#App.SimpleMainLoop)
+ [func (a *App) StartTCellEvents(quit <-chan Unit, wg *sync.WaitGroup)](#App.StartTCellEvents)
+ [func (a *App) StopTCellEvents(quit chan<- Unit, wg *sync.WaitGroup)](#App.StopTCellEvents)
+ [func (a *App) SubWidget() IWidget](#App.SubWidget)
+ [func (a *App) Sync()](#App.Sync)
+ [func (a *App) TerminalSize() (x, y int)](#App.TerminalSize)
+ [func (a *App) UnregisterMenu(menu IMenuCompatible) bool](#App.UnregisterMenu)
* [type AppArgs](#AppArgs)
* [type AppRunner](#AppRunner)
* + [func (st *AppRunner) Start()](#AppRunner.Start)
+ [func (st *AppRunner) Stop()](#AppRunner.Stop)
* [type BackgroundColor](#BackgroundColor)
* + [func MakeBackground(c IColor) BackgroundColor](#MakeBackground)
* + [func (a BackgroundColor) GetStyle(prov IRenderContext) (x IColor, y IColor, z StyleAttrs)](#BackgroundColor.GetStyle)
* [type Callback](#Callback)
* + [func (f Callback) ID() interface{}](#Callback.ID)
* [type CallbackFunction](#CallbackFunction)
* + [func (f CallbackFunction) Call(args ...interface{})](#CallbackFunction.Call)
* [type CallbackID](#CallbackID)
* + [func (f CallbackID) ID() interface{}](#CallbackID.ID)
* [type Callbacks](#Callbacks)
* + [func NewCallbacks() *Callbacks](#NewCallbacks)
* + [func (c *Callbacks) AddCallback(name interface{}, cb ICallback)](#Callbacks.AddCallback)
+ [func (c *Callbacks) CopyOfCallbacks(name interface{}) ([]ICallback, bool)](#Callbacks.CopyOfCallbacks)
+ [func (c *Callbacks) RemoveCallback(name interface{}, cb IIdentity) bool](#Callbacks.RemoveCallback)
+ [func (c *Callbacks) RunCallbacks(name interface{}, args ...interface{})](#Callbacks.RunCallbacks)
* [type Canvas](#Canvas)
* + [func NewCanvas() *Canvas](#NewCanvas)
+ [func NewCanvasOfSize(cols, rows int) *Canvas](#NewCanvasOfSize)
+ [func NewCanvasOfSizeExt(cols, rows int, fill Cell) *Canvas](#NewCanvasOfSizeExt)
+ [func NewCanvasWithLines(lines [][]Cell) *Canvas](#NewCanvasWithLines)
* + [func (c *Canvas) AlignRight()](#Canvas.AlignRight)
+ [func (c *Canvas) AlignRightWith(cell Cell)](#Canvas.AlignRightWith)
+ [func (c *Canvas) AppendBelow(c2 IAppendCanvas, doCursor bool, makeCopy bool)](#Canvas.AppendBelow)
+ [func (c *Canvas) AppendLine(line []Cell, makeCopy bool)](#Canvas.AppendLine)
+ [func (c *Canvas) AppendRight(c2 IMergeCanvas, useCursor bool)](#Canvas.AppendRight)
+ [func (c *Canvas) BoxColumns() int](#Canvas.BoxColumns)
+ [func (c *Canvas) BoxRows() int](#Canvas.BoxRows)
+ [func (c *Canvas) CellAt(col, row int) Cell](#Canvas.CellAt)
+ [func (c *Canvas) ComputeCurrentMaxColumn() int](#Canvas.ComputeCurrentMaxColumn)
+ [func (c *Canvas) CursorCoords() CanvasPos](#Canvas.CursorCoords)
+ [func (c *Canvas) CursorEnabled() bool](#Canvas.CursorEnabled)
+ [func (c *Canvas) Duplicate() ICanvas](#Canvas.Duplicate)
+ [func (c *Canvas) ExtendLeft(cells []Cell)](#Canvas.ExtendLeft)
+ [func (c *Canvas) ExtendRight(cells []Cell)](#Canvas.ExtendRight)
+ [func (c *Canvas) GetMark(name string) (CanvasPos, bool)](#Canvas.GetMark)
+ [func (c *Canvas) ImplementsWidgetDimension()](#Canvas.ImplementsWidgetDimension)
+ [func (c *Canvas) Line(y int, cp LineCopy) LineResult](#Canvas.Line)
+ [func (c *Canvas) MergeUnder(c2 IMergeCanvas, leftOffset, topOffset int, bottomGetsCursor bool)](#Canvas.MergeUnder)
+ [func (c *Canvas) MergeWithFunc(c2 IMergeCanvas, leftOffset, topOffset int, fn CellMergeFunc, ...)](#Canvas.MergeWithFunc)
+ [func (c *Canvas) RangeOverMarks(f func(key string, value CanvasPos) bool)](#Canvas.RangeOverMarks)
+ [func (c *Canvas) RemoveMark(name string)](#Canvas.RemoveMark)
+ [func (c *Canvas) SetCellAt(col, row int, cell Cell)](#Canvas.SetCellAt)
+ [func (c *Canvas) SetCursorCoords(x, y int)](#Canvas.SetCursorCoords)
+ [func (c *Canvas) SetLineAt(row int, line []Cell)](#Canvas.SetLineAt)
+ [func (c *Canvas) SetMark(name string, x, y int)](#Canvas.SetMark)
+ [func (c *Canvas) String() string](#Canvas.String)
+ [func (c *Canvas) TrimLeft(colsToHave int)](#Canvas.TrimLeft)
+ [func (c *Canvas) TrimRight(colsToHave int)](#Canvas.TrimRight)
+ [func (c *Canvas) Truncate(above, below int)](#Canvas.Truncate)
+ [func (c *Canvas) Write(p []byte) (n int, err error)](#Canvas.Write)
* [type CanvasPos](#CanvasPos)
* + [func (c CanvasPos) PlusX(n int) CanvasPos](#CanvasPos.PlusX)
+ [func (c CanvasPos) PlusY(n int) CanvasPos](#CanvasPos.PlusY)
* [type CanvasSizeWrong](#CanvasSizeWrong)
* + [func (e CanvasSizeWrong) Error() string](#CanvasSizeWrong.Error)
* [type Cell](#Cell)
* + [func CellFromRune(r rune) Cell](#CellFromRune)
+ [func CellsFromString(s string) []Cell](#CellsFromString)
+ [func EmptyLine(length int) []Cell](#EmptyLine)
+ [func MakeCell(codePoint rune, fg TCellColor, bg TCellColor, Attr StyleAttrs) Cell](#MakeCell)
* + [func (c Cell) BackgroundColor() TCellColor](#Cell.BackgroundColor)
+ [func (c Cell) ForegroundColor() TCellColor](#Cell.ForegroundColor)
+ [func (c Cell) GetDisplayAttrs() (x TCellColor, y TCellColor, z StyleAttrs)](#Cell.GetDisplayAttrs)
+ [func (c Cell) HasRune() bool](#Cell.HasRune)
+ [func (c Cell) MergeDisplayAttrsUnder(upper Cell) Cell](#Cell.MergeDisplayAttrsUnder)
+ [func (c Cell) MergeUnder(upper Cell) Cell](#Cell.MergeUnder)
+ [func (c Cell) Rune() rune](#Cell.Rune)
+ [func (c Cell) Style() StyleAttrs](#Cell.Style)
+ [func (c Cell) WithBackgroundColor(a TCellColor) Cell](#Cell.WithBackgroundColor)
+ [func (c Cell) WithForegroundColor(a TCellColor) Cell](#Cell.WithForegroundColor)
+ [func (c Cell) WithNoRune() Cell](#Cell.WithNoRune)
+ [func (c Cell) WithRune(r rune) Cell](#Cell.WithRune)
+ [func (c Cell) WithStyle(attr StyleAttrs) Cell](#Cell.WithStyle)
* [type CellMergeFunc](#CellMergeFunc)
* [type CellRangeFunc](#CellRangeFunc)
* + [func (f CellRangeFunc) ProcessCell(cell Cell) Cell](#CellRangeFunc.ProcessCell)
* [type ClickCB](#ClickCB)
* [type ClickCallbacks](#ClickCallbacks)
* + [func (w *ClickCallbacks) OnClick(f IWidgetChangedCallback)](#ClickCallbacks.OnClick)
+ [func (w *ClickCallbacks) RemoveOnClick(f IIdentity)](#ClickCallbacks.RemoveOnClick)
* [type ClickTargets](#ClickTargets)
* + [func MakeClickTargets() ClickTargets](#MakeClickTargets)
* + [func (t ClickTargets) ClickTarget(f func(tcell.ButtonMask, IIdentityWidget))](#ClickTargets.ClickTarget)
+ [func (t ClickTargets) DeleteClickTargets(k tcell.ButtonMask)](#ClickTargets.DeleteClickTargets)
+ [func (t ClickTargets) SetClickTarget(k tcell.ButtonMask, w IIdentityWidget) bool](#ClickTargets.SetClickTarget)
* [type Color](#Color)
* + [func MakeColor(s string) Color](#MakeColor)
+ [func MakeColorSafe(s string) (Color, error)](#MakeColorSafe)
* + [func (c Color) String() string](#Color.String)
* [type ColorByMode](#ColorByMode)
* + [func MakeColorByMode(cols map[ColorMode]IColor) ColorByMode](#MakeColorByMode)
+ [func MakeColorByModeSafe(cols map[ColorMode]IColor) (ColorByMode, error)](#MakeColorByModeSafe)
* + [func (c ColorByMode) ToTCellColor(mode ColorMode) (TCellColor, bool)](#ColorByMode.ToTCellColor)
* [type ColorInverter](#ColorInverter)
* + [func (c ColorInverter) GetStyle(prov IRenderContext) (x IColor, y IColor, z StyleAttrs)](#ColorInverter.GetStyle)
* [type ColorMode](#ColorMode)
* + [func (c ColorMode) String() string](#ColorMode.String)
* [type ColorModeMismatch](#ColorModeMismatch)
* + [func (e ColorModeMismatch) Error() string](#ColorModeMismatch.Error)
* [type ContainerWidget](#ContainerWidget)
* + [func (ww ContainerWidget) Dimension() IWidgetDimension](#ContainerWidget.Dimension)
+ [func (ww *ContainerWidget) SetDimension(d IWidgetDimension)](#ContainerWidget.SetDimension)
+ [func (w *ContainerWidget) SetSubWidget(wi IWidget, app IApp)](#ContainerWidget.SetSubWidget)
+ [func (w *ContainerWidget) String() string](#ContainerWidget.String)
+ [func (w *ContainerWidget) SubWidget() IWidget](#ContainerWidget.SubWidget)
* [type CopyModeClipsEvent](#CopyModeClipsEvent)
* + [func (c CopyModeClipsEvent) When() time.Time](#CopyModeClipsEvent.When)
* [type CopyModeClipsFn](#CopyModeClipsFn)
* + [func (f CopyModeClipsFn) Collect(clips []ICopyResult)](#CopyModeClipsFn.Collect)
* [type CopyModeEvent](#CopyModeEvent)
* + [func (c CopyModeEvent) When() time.Time](#CopyModeEvent.When)
* [type CopyResult](#CopyResult)
* + [func (c CopyResult) ClipName() string](#CopyResult.ClipName)
+ [func (c CopyResult) ClipValue() string](#CopyResult.ClipValue)
* [type DefaultColor](#DefaultColor)
* + [func (r DefaultColor) String() string](#DefaultColor.String)
+ [func (r DefaultColor) ToTCellColor(mode ColorMode) (TCellColor, bool)](#DefaultColor.ToTCellColor)
* [type DimensionError](#DimensionError)
* + [func (e DimensionError) Error() string](#DimensionError.Error)
* [type DimensionsCB](#DimensionsCB)
* [type Direction](#Direction)
* [type EmptyLineTooLong](#EmptyLineTooLong)
* + [func (e EmptyLineTooLong) Error() string](#EmptyLineTooLong.Error)
* [type EmptyPalette](#EmptyPalette)
* + [func MakeEmptyPalette() EmptyPalette](#MakeEmptyPalette)
* + [func (a EmptyPalette) GetStyle(prov IRenderContext) (x IColor, y IColor, z StyleAttrs)](#EmptyPalette.GetStyle)
* [type FocusCB](#FocusCB)
* [type FocusCallbacks](#FocusCallbacks)
* + [func (w *FocusCallbacks) OnFocusChanged(f IWidgetChangedCallback)](#FocusCallbacks.OnFocusChanged)
+ [func (w *FocusCallbacks) RemoveOnFocusChanged(f IIdentity)](#FocusCallbacks.RemoveOnFocusChanged)
* [type FocusPathResult](#FocusPathResult)
* + [func SetFocusPath(w IWidget, path []interface{}, app IApp) FocusPathResult](#SetFocusPath)
* + [func (f FocusPathResult) Error() string](#FocusPathResult.Error)
* [type ForegroundColor](#ForegroundColor)
* + [func MakeForeground(c IColor) ForegroundColor](#MakeForeground)
* + [func (a ForegroundColor) GetStyle(prov IRenderContext) (x IColor, y IColor, z StyleAttrs)](#ForegroundColor.GetStyle)
* [type GrayColor](#GrayColor)
* + [func MakeGrayColor(val string) GrayColor](#MakeGrayColor)
+ [func MakeGrayColorSafe(val string) (GrayColor, error)](#MakeGrayColorSafe)
* + [func (g GrayColor) String() string](#GrayColor.String)
+ [func (s GrayColor) ToTCellColor(mode ColorMode) (TCellColor, bool)](#GrayColor.ToTCellColor)
* [type HAlignCB](#HAlignCB)
* [type HAlignLeft](#HAlignLeft)
* + [func (h HAlignLeft) ImplementsHAlignment()](#HAlignLeft.ImplementsHAlignment)
* [type HAlignMiddle](#HAlignMiddle)
* + [func (h HAlignMiddle) ImplementsHAlignment()](#HAlignMiddle.ImplementsHAlignment)
* [type HAlignRight](#HAlignRight)
* + [func (h HAlignRight) ImplementsHAlignment()](#HAlignRight.ImplementsHAlignment)
* [type HeightCB](#HeightCB)
* [type IAfterRenderEvent](#IAfterRenderEvent)
* [type IApp](#IApp)
* [type IAppendBlankLines](#IAppendBlankLines)
* [type IAppendCanvas](#IAppendCanvas)
* [type IBox](#IBox)
* [type ICallback](#ICallback)
* [type ICallbackRunner](#ICallbackRunner)
* [type ICallbacks](#ICallbacks)
* [type ICanvas](#ICanvas)
* [type ICanvasCellReader](#ICanvasCellReader)
* [type ICanvasLineReader](#ICanvasLineReader)
* [type ICanvasMarkIterator](#ICanvasMarkIterator)
* [type ICellProcessor](#ICellProcessor)
* [type ICellStyler](#ICellStyler)
* [type IChangeFocus](#IChangeFocus)
* [type IClickTracker](#IClickTracker)
* [type IClickable](#IClickable)
* [type IClickableWidget](#IClickableWidget)
* [type IClipboard](#IClipboard)
* [type IClipboardSelected](#IClipboardSelected)
* [type IColor](#IColor)
* [type IColorMode](#IColorMode)
* [type IColumns](#IColumns)
* [type IComposite](#IComposite)
* [type ICompositeMultiple](#ICompositeMultiple)
* [type ICompositeMultipleDimensions](#ICompositeMultipleDimensions)
* [type ICompositeMultipleFocus](#ICompositeMultipleFocus)
* [type ICompositeMultipleWidget](#ICompositeMultipleWidget)
* [type ICompositeWidget](#ICompositeWidget)
* [type IContainerWidget](#IContainerWidget)
* [type ICopyModeClips](#ICopyModeClips)
* [type ICopyModeWidget](#ICopyModeWidget)
* [type ICopyResult](#ICopyResult)
* [type IDrawCanvas](#IDrawCanvas)
* [type IFindNextSelectable](#IFindNextSelectable)
* [type IFocus](#IFocus)
* [type IFocusSelectable](#IFocusSelectable)
* [type IGetFocus](#IGetFocus)
* [type IGetScreen](#IGetScreen)
* [type IHAlignment](#IHAlignment)
* [type IIdentity](#IIdentity)
* [type IIdentityWidget](#IIdentityWidget)
* [type IKey](#IKey)
* [type IKeyPress](#IKeyPress)
* [type IMenuCompatible](#IMenuCompatible)
* [type IMergeCanvas](#IMergeCanvas)
* [type IPalette](#IPalette)
* [type IPreferedPosition](#IPreferedPosition)
* [type IRangeOverCanvas](#IRangeOverCanvas)
* [type IRenderBox](#IRenderBox)
* + [func RenderSize(w IWidget, size IRenderSize, focus Selector, app IApp) IRenderBox](#RenderSize)
* [type IRenderContext](#IRenderContext)
* [type IRenderFixed](#IRenderFixed)
* [type IRenderFlow](#IRenderFlow)
* [type IRenderFlowWith](#IRenderFlowWith)
* [type IRenderMax](#IRenderMax)
* [type IRenderMaxUnits](#IRenderMaxUnits)
* [type IRenderRelative](#IRenderRelative)
* [type IRenderSize](#IRenderSize)
* + [func ComputeHorizontalSubSize(size IRenderSize, d IWidgetDimension) (IRenderSize, error)](#ComputeHorizontalSubSize)
+ [func ComputeHorizontalSubSizeUnsafe(size IRenderSize, d IWidgetDimension) IRenderSize](#ComputeHorizontalSubSizeUnsafe)
+ [func ComputeSubSize(size IRenderSize, w IWidgetDimension, h IWidgetDimension) (IRenderSize, error)](#ComputeSubSize)
+ [func ComputeSubSizeUnsafe(size IRenderSize, w IWidgetDimension, h IWidgetDimension) IRenderSize](#ComputeSubSizeUnsafe)
+ [func ComputeVerticalSubSize(size IRenderSize, d IWidgetDimension, maxCol int, advRow int) (IRenderSize, error)](#ComputeVerticalSubSize)
+ [func ComputeVerticalSubSizeUnsafe(size IRenderSize, d IWidgetDimension, maxCol int, advRow int) IRenderSize](#ComputeVerticalSubSizeUnsafe)
+ [func SubWidgetSize(w ICompositeWidget, size IRenderSize, focus Selector, app IApp) IRenderSize](#SubWidgetSize)
* [type IRenderWithUnits](#IRenderWithUnits)
* [type IRenderWithWeight](#IRenderWithWeight)
* [type IRightSizeCanvas](#IRightSizeCanvas)
* [type IRows](#IRows)
* [type ISelectChild](#ISelectChild)
* [type ISettableComposite](#ISettableComposite)
* [type ISettableDimensions](#ISettableDimensions)
* [type ISettableSubWidgets](#ISettableSubWidgets)
* [type ISubWidgetSize](#ISubWidgetSize)
* [type IUnhandledInput](#IUnhandledInput)
* [type IVAlignment](#IVAlignment)
* [type IWidget](#IWidget)
* + [func CopyWidgets(w []IWidget) []IWidget](#CopyWidgets)
+ [func FindInHierarchy(w IWidget, includeMe bool, pred WidgetPredicate) IWidget](#FindInHierarchy)
* [type IWidgetChangedCallback](#IWidgetChangedCallback)
* [type IWidgetDimension](#IWidgetDimension)
* [type InvalidColor](#InvalidColor)
* + [func (e InvalidColor) Error() string](#InvalidColor.Error)
* [type InvalidTypeToCompare](#InvalidTypeToCompare)
* + [func (e InvalidTypeToCompare) Error() string](#InvalidTypeToCompare.Error)
* [type IsSelectable](#IsSelectable)
* + [func (r *IsSelectable) Selectable() bool](#IsSelectable.Selectable)
* [type Key](#Key)
* + [func MakeKey(ch rune) Key](#MakeKey)
+ [func MakeKeyExt(key tcell.Key) Key](#MakeKeyExt)
+ [func MakeKeyExt2(mod tcell.ModMask, key tcell.Key, ch rune) Key](#MakeKeyExt2)
* + [func (k Key) Key() tcell.Key](#Key.Key)
+ [func (k Key) Modifiers() tcell.ModMask](#Key.Modifiers)
+ [func (k Key) Rune() rune](#Key.Rune)
+ [func (k Key) String() string](#Key.String)
* [type KeyPressCB](#KeyPressCB)
* [type KeyPressCallbacks](#KeyPressCallbacks)
* + [func (w *KeyPressCallbacks) OnKeyPress(f IWidgetChangedCallback)](#KeyPressCallbacks.OnKeyPress)
+ [func (w *KeyPressCallbacks) RemoveOnKeyPress(f IIdentity)](#KeyPressCallbacks.RemoveOnKeyPress)
* [type KeyValueError](#KeyValueError)
* + [func WithKVs(err error, kvs map[string]interface{}) KeyValueError](#WithKVs)
* + [func (e KeyValueError) Cause() error](#KeyValueError.Cause)
+ [func (e KeyValueError) Error() string](#KeyValueError.Error)
+ [func (e KeyValueError) Unwrap() error](#KeyValueError.Unwrap)
* [type LineCanvas](#LineCanvas)
* + [func (c LineCanvas) BoxColumns() int](#LineCanvas.BoxColumns)
+ [func (c LineCanvas) BoxRows() int](#LineCanvas.BoxRows)
+ [func (c LineCanvas) ImplementsWidgetDimension()](#LineCanvas.ImplementsWidgetDimension)
+ [func (c LineCanvas) Line(y int, cp LineCopy) LineResult](#LineCanvas.Line)
+ [func (c LineCanvas) RangeOverMarks(f func(key string, value CanvasPos) bool)](#LineCanvas.RangeOverMarks)
* [type LineCopy](#LineCopy)
* [type LineResult](#LineResult)
* [type LogField](#LogField)
* [type MouseState](#MouseState)
* + [func (m MouseState) LeftIsClicked() bool](#MouseState.LeftIsClicked)
+ [func (m MouseState) MiddleIsClicked() bool](#MouseState.MiddleIsClicked)
+ [func (m MouseState) NoButtonClicked() bool](#MouseState.NoButtonClicked)
+ [func (m MouseState) RightIsClicked() bool](#MouseState.RightIsClicked)
+ [func (m MouseState) String() string](#MouseState.String)
* [type NoColor](#NoColor)
* + [func (r NoColor) String() string](#NoColor.String)
+ [func (r NoColor) ToTCellColor(mode ColorMode) (TCellColor, bool)](#NoColor.ToTCellColor)
* [type NotSelectable](#NotSelectable)
* + [func (r *NotSelectable) Selectable() bool](#NotSelectable.Selectable)
* [type Palette](#Palette)
* + [func (m Palette) CellStyler(name string) (ICellStyler, bool)](#Palette.CellStyler)
+ [func (m Palette) RangeOverPalette(f func(k string, v ICellStyler) bool)](#Palette.RangeOverPalette)
* [type PaletteEntry](#PaletteEntry)
* + [func MakePaletteEntry(fg, bg IColor) PaletteEntry](#MakePaletteEntry)
+ [func MakeStyledPaletteEntry(fg, bg IColor, style StyleAttrs) PaletteEntry](#MakeStyledPaletteEntry)
* + [func (a PaletteEntry) GetStyle(prov IRenderContext) (x IColor, y IColor, z StyleAttrs)](#PaletteEntry.GetStyle)
* [type PaletteRef](#PaletteRef)
* + [func MakePaletteRef(name string) PaletteRef](#MakePaletteRef)
* + [func (a PaletteRef) GetStyle(prov IRenderContext) (x IColor, y IColor, z StyleAttrs)](#PaletteRef.GetStyle)
* [type PrettyModMask](#PrettyModMask)
* + [func (p PrettyModMask) String() string](#PrettyModMask.String)
* [type PrettyTcellKey](#PrettyTcellKey)
* + [func (p *PrettyTcellKey) String() string](#PrettyTcellKey.String)
* [type RGBColor](#RGBColor)
* + [func MakeRGBColor(s string) RGBColor](#MakeRGBColor)
+ [func MakeRGBColorExt(r, g, b int) RGBColor](#MakeRGBColorExt)
+ [func MakeRGBColorExtSafe(r, g, b int) (RGBColor, error)](#MakeRGBColorExtSafe)
+ [func MakeRGBColorSafe(s string) (RGBColor, error)](#MakeRGBColorSafe)
* + [func (rgb RGBColor) RGBA() (r, g, b, a uint32)](#RGBColor.RGBA)
+ [func (r RGBColor) String() string](#RGBColor.String)
+ [func (r RGBColor) ToTCellColor(mode ColorMode) (TCellColor, bool)](#RGBColor.ToTCellColor)
* [type RejectUserInput](#RejectUserInput)
* + [func (r RejectUserInput) UserInput(ev interface{}, size IRenderSize, focus Selector, app IApp) bool](#RejectUserInput.UserInput)
* [type RenderBox](#RenderBox)
* + [func CalculateRenderSizeFallback(w IWidget, size IRenderSize, focus Selector, app IApp) RenderBox](#CalculateRenderSizeFallback)
+ [func MakeRenderBox(columns, rows int) RenderBox](#MakeRenderBox)
* + [func (r RenderBox) BoxColumns() int](#RenderBox.BoxColumns)
+ [func (r RenderBox) BoxRows() int](#RenderBox.BoxRows)
+ [func (r RenderBox) Columns() int](#RenderBox.Columns)
+ [func (r RenderBox) ImplementsWidgetDimension()](#RenderBox.ImplementsWidgetDimension)
+ [func (r RenderBox) Rows() int](#RenderBox.Rows)
+ [func (r RenderBox) String() string](#RenderBox.String)
* [type RenderFixed](#RenderFixed)
* + [func MakeRenderFixed() RenderFixed](#MakeRenderFixed)
* + [func (f RenderFixed) Fixed()](#RenderFixed.Fixed)
+ [func (r RenderFixed) ImplementsWidgetDimension()](#RenderFixed.ImplementsWidgetDimension)
+ [func (f RenderFixed) String() string](#RenderFixed.String)
* [type RenderFlow](#RenderFlow)
* + [func (s RenderFlow) Flow()](#RenderFlow.Flow)
+ [func (r RenderFlow) ImplementsWidgetDimension()](#RenderFlow.ImplementsWidgetDimension)
+ [func (f RenderFlow) String() string](#RenderFlow.String)
* [type RenderFlowWith](#RenderFlowWith)
* + [func MakeRenderFlow(columns int) RenderFlowWith](#MakeRenderFlow)
* + [func (r RenderFlowWith) Columns() int](#RenderFlowWith.Columns)
+ [func (r RenderFlowWith) FlowColumns() int](#RenderFlowWith.FlowColumns)
+ [func (r RenderFlowWith) ImplementsWidgetDimension()](#RenderFlowWith.ImplementsWidgetDimension)
+ [func (r RenderFlowWith) String() string](#RenderFlowWith.String)
* [type RenderMax](#RenderMax)
* + [func (s RenderMax) MaxHeight()](#RenderMax.MaxHeight)
+ [func (f RenderMax) String() string](#RenderMax.String)
* [type RenderWithRatio](#RenderWithRatio)
* + [func (r RenderWithRatio) ImplementsWidgetDimension()](#RenderWithRatio.ImplementsWidgetDimension)
+ [func (f RenderWithRatio) Relative() float64](#RenderWithRatio.Relative)
+ [func (f RenderWithRatio) String() string](#RenderWithRatio.String)
* [type RenderWithUnits](#RenderWithUnits)
* + [func (r RenderWithUnits) ImplementsWidgetDimension()](#RenderWithUnits.ImplementsWidgetDimension)
+ [func (f RenderWithUnits) String() string](#RenderWithUnits.String)
+ [func (f RenderWithUnits) Units() int](#RenderWithUnits.Units)
* [type RenderWithWeight](#RenderWithWeight)
* + [func (r RenderWithWeight) ImplementsWidgetDimension()](#RenderWithWeight.ImplementsWidgetDimension)
+ [func (f RenderWithWeight) String() string](#RenderWithWeight.String)
+ [func (f RenderWithWeight) Weight() int](#RenderWithWeight.Weight)
* [type RunFunction](#RunFunction)
* + [func (f RunFunction) RunThenRenderEvent(app IApp)](#RunFunction.RunThenRenderEvent)
* [type Selector](#Selector)
* + [func (s Selector) And(cond bool) Selector](#Selector.And)
+ [func (s Selector) SelectIf(cond bool) Selector](#Selector.SelectIf)
+ [func (s Selector) String() string](#Selector.String)
* [type StyleAttrs](#StyleAttrs)
* + [func (a StyleAttrs) MergeUnder(upper StyleAttrs) StyleAttrs](#StyleAttrs.MergeUnder)
* [type StyleMod](#StyleMod)
* + [func MakeStyleMod(cur, mod ICellStyler) StyleMod](#MakeStyleMod)
* + [func (a StyleMod) GetStyle(prov IRenderContext) (x IColor, y IColor, z StyleAttrs)](#StyleMod.GetStyle)
* [type StyledAs](#StyledAs)
* + [func MakeStyledAs(s StyleAttrs) StyledAs](#MakeStyledAs)
* + [func (a StyledAs) GetStyle(prov IRenderContext) (x IColor, y IColor, z StyleAttrs)](#StyledAs.GetStyle)
* [type SubWidgetCB](#SubWidgetCB)
* [type SubWidgetCallbacks](#SubWidgetCallbacks)
* + [func (w *SubWidgetCallbacks) OnSetSubWidget(f IWidgetChangedCallback)](#SubWidgetCallbacks.OnSetSubWidget)
+ [func (w *SubWidgetCallbacks) RemoveOnSetSubWidget(f IIdentity)](#SubWidgetCallbacks.RemoveOnSetSubWidget)
* [type SubWidgetsCB](#SubWidgetsCB)
* [type SubWidgetsCallbacks](#SubWidgetsCallbacks)
* + [func (w *SubWidgetsCallbacks) OnSetSubWidgets(f IWidgetChangedCallback)](#SubWidgetsCallbacks.OnSetSubWidgets)
+ [func (w *SubWidgetsCallbacks) RemoveOnSetSubWidgets(f IIdentity)](#SubWidgetsCallbacks.RemoveOnSetSubWidgets)
* [type TCellColor](#TCellColor)
* + [func IColorToTCell(color IColor, def TCellColor, mode ColorMode) TCellColor](#IColorToTCell)
+ [func MakeTCellColor(val string) (TCellColor, error)](#MakeTCellColor)
+ [func MakeTCellColorExt(val tcell.Color) TCellColor](#MakeTCellColorExt)
+ [func MakeTCellNoColor() TCellColor](#MakeTCellNoColor)
* + [func (r TCellColor) String() string](#TCellColor.String)
+ [func (r TCellColor) ToTCell() tcell.Color](#TCellColor.ToTCell)
+ [func (r TCellColor) ToTCellColor(mode ColorMode) (TCellColor, bool)](#TCellColor.ToTCellColor)
* [type UnhandledInputFunc](#UnhandledInputFunc)
* + [func (f UnhandledInputFunc) UnhandledInput(app IApp, ev interface{}) bool](#UnhandledInputFunc.UnhandledInput)
* [type Unit](#Unit)
* [type UrwidColor](#UrwidColor)
* + [func NewUrwidColor(val string) *UrwidColor](#NewUrwidColor)
+ [func NewUrwidColorSafe(val string) (*UrwidColor, error)](#NewUrwidColorSafe)
* + [func (r UrwidColor) String() string](#UrwidColor.String)
+ [func (s *UrwidColor) ToTCellColor(mode ColorMode) (TCellColor, bool)](#UrwidColor.ToTCellColor)
* [type VAlignBottom](#VAlignBottom)
* + [func (v VAlignBottom) ImplementsVAlignment()](#VAlignBottom.ImplementsVAlignment)
* [type VAlignCB](#VAlignCB)
* [type VAlignMiddle](#VAlignMiddle)
* + [func (v VAlignMiddle) ImplementsVAlignment()](#VAlignMiddle.ImplementsVAlignment)
* [type VAlignTop](#VAlignTop)
* + [func (v VAlignTop) ImplementsVAlignment()](#VAlignTop.ImplementsVAlignment)
* [type WidgetCallback](#WidgetCallback)
* + [func MakeWidgetCallback(name interface{}, fn WidgetChangedFunction) WidgetCallback](#MakeWidgetCallback)
* + [func (f WidgetCallback) ID() interface{}](#WidgetCallback.ID)
* [type WidgetCallbackExt](#WidgetCallbackExt)
* + [func MakeWidgetCallbackExt(name interface{}, fn WidgetChangedFunctionExt) WidgetCallbackExt](#MakeWidgetCallbackExt)
* + [func (f WidgetCallbackExt) ID() interface{}](#WidgetCallbackExt.ID)
* [type WidgetChangedFunction](#WidgetChangedFunction)
* + [func (f WidgetChangedFunction) Changed(app IApp, widget IWidget, data ...interface{})](#WidgetChangedFunction.Changed)
* [type WidgetChangedFunctionExt](#WidgetChangedFunctionExt)
* + [func (f WidgetChangedFunctionExt) Changed(app IApp, widget IWidget, data ...interface{})](#WidgetChangedFunctionExt.Changed)
* [type WidgetPredicate](#WidgetPredicate)
* [type WidgetSizeError](#WidgetSizeError)
* + [func (e WidgetSizeError) Error() string](#WidgetSizeError.Error)
* [type WidthCB](#WidthCB)
### Constants [¶](#pkg-constants)
```
const (
StyleNoneSet [tcell](/github.com/gdamore/tcell/v2).[AttrMask](/github.com/gdamore/tcell/v2#AttrMask) = 0 // Just unstyled text.
StyleAllSet [tcell](/github.com/gdamore/tcell/v2).[AttrMask](/github.com/gdamore/tcell/v2#AttrMask) = [tcell](/github.com/gdamore/tcell/v2).[AttrBold](/github.com/gdamore/tcell/v2#AttrBold) | [tcell](/github.com/gdamore/tcell/v2).[AttrBlink](/github.com/gdamore/tcell/v2#AttrBlink) | [tcell](/github.com/gdamore/tcell/v2).[AttrReverse](/github.com/gdamore/tcell/v2#AttrReverse) | [tcell](/github.com/gdamore/tcell/v2).[AttrUnderline](/github.com/gdamore/tcell/v2#AttrUnderline) | [tcell](/github.com/gdamore/tcell/v2).[AttrDim](/github.com/gdamore/tcell/v2#AttrDim)
)
```
These are used as bitmasks - a style is two AttrMasks. The first bitmask says whether or not the style declares a given setting (e.g. underline); if it is declared, the second bitmask says whether that setting is affirmatively on or off.
This allows styles to be layered e.g. the lower style declares underline is on, the upper style does not declare an underline preference, so when layered, the cell is rendered with an underline.
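For illustration, here is a minimal sketch of how that layering plays out with the exported StyleAttrs values; the import path and the printing are assumptions for the example only.
```
package main
import (
	"fmt"
	"github.com/gcla/gowid"
)
func main() {
	lower := gowid.StyleUnderline // declares underline, and declares it on
	upper := gowid.StyleNone      // declares nothing
	merged := lower.MergeUnder(upper)
	// The upper style is silent about underline, so the lower style's
	// underline setting survives the merge.
	fmt.Printf("%+v\n", merged)
}
```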
```
const (
// Mode256Colors represents a terminal with 256-color support.
Mode256Colors = [ColorMode](#ColorMode)([iota](/builtin#iota))
// Mode88Colors represents a terminal with 88-color support such as rxvt.
Mode88Colors
// Mode16Colors represents a terminal with 16-color support.
Mode16Colors
// Mode8Colors represents a terminal with 8-color support.
Mode8Colors
// ModeMonochrome represents a terminal with support for monochrome only.
ModeMonochrome
// Mode24BitColors represents a terminal with 24-bit color support like KDE's terminal.
Mode24BitColors
)
```
```
const (
Forwards = [Direction](#Direction)(1)
Backwards = [Direction](#Direction)(-1)
)
```
### Variables [¶](#pkg-variables)
```
var (
CubeStart = 16 // first index of color cube
CubeSize256 = 6 // one side of the color cube
// ColorNone means no preference if anything is layered underneath
ColorNone = [MakeTCellNoColor](#MakeTCellNoColor)()
// ColorDefault is an affirmative preference for the default terminal color
ColorDefault = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorDefault](/github.com/gdamore/tcell/v2#ColorDefault))
// Some pre-initialized color objects for use in applications e.g.
// MakePaletteEntry(ColorBlack, ColorRed)
ColorBlack = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorBlack](/github.com/gdamore/tcell/v2#ColorBlack))
ColorRed = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorRed](/github.com/gdamore/tcell/v2#ColorRed))
ColorGreen = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorGreen](/github.com/gdamore/tcell/v2#ColorGreen))
ColorLightGreen = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorLightGreen](/github.com/gdamore/tcell/v2#ColorLightGreen))
ColorYellow = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorYellow](/github.com/gdamore/tcell/v2#ColorYellow))
ColorBlue = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorBlue](/github.com/gdamore/tcell/v2#ColorBlue))
ColorLightBlue = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorLightBlue](/github.com/gdamore/tcell/v2#ColorLightBlue))
ColorMagenta = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorDarkMagenta](/github.com/gdamore/tcell/v2#ColorDarkMagenta))
ColorCyan = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorDarkCyan](/github.com/gdamore/tcell/v2#ColorDarkCyan))
ColorWhite = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorWhite](/github.com/gdamore/tcell/v2#ColorWhite))
ColorDarkRed = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorDarkRed](/github.com/gdamore/tcell/v2#ColorDarkRed))
ColorDarkGreen = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorDarkGreen](/github.com/gdamore/tcell/v2#ColorDarkGreen))
ColorDarkBlue = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorDarkBlue](/github.com/gdamore/tcell/v2#ColorDarkBlue))
ColorLightGray = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorLightGray](/github.com/gdamore/tcell/v2#ColorLightGray))
ColorDarkGray = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorDarkGray](/github.com/gdamore/tcell/v2#ColorDarkGray))
ColorPurple = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorPurple](/github.com/gdamore/tcell/v2#ColorPurple))
ColorOrange = [MakeTCellColorExt](#MakeTCellColorExt)([tcell](/github.com/gdamore/tcell/v2).[ColorOrange](/github.com/gdamore/tcell/v2#ColorOrange))
)
```
```
var AllStyleMasks = [...][tcell](/github.com/gdamore/tcell/v2).[AttrMask](/github.com/gdamore/tcell/v2#AttrMask){[tcell](/github.com/gdamore/tcell/v2).[AttrBold](/github.com/gdamore/tcell/v2#AttrBold), [tcell](/github.com/gdamore/tcell/v2).[AttrBlink](/github.com/gdamore/tcell/v2#AttrBlink), [tcell](/github.com/gdamore/tcell/v2).[AttrDim](/github.com/gdamore/tcell/v2#AttrDim), [tcell](/github.com/gdamore/tcell/v2).[AttrReverse](/github.com/gdamore/tcell/v2#AttrReverse), [tcell](/github.com/gdamore/tcell/v2).[AttrUnderline](/github.com/gdamore/tcell/v2#AttrUnderline)}
```
AllStyleMasks is an array of all the styles that can be applied to a Cell.
```
var AppClosingErr = [fmt](/fmt).[Errorf](/fmt#Errorf)("App is closing - no more events accepted.")
```
```
var Focused = [Selector](#Selector){
Focus: [true](/builtin#true),
Selected: [true](/builtin#true),
}
```
```
var IgnoreBase16 = [false](/builtin#false)
```
IgnoreBase16 should be set to true if gowid should not consider colors 0-21 for closest-match when interpolating RGB colors in 256-color space. You might use this if you use base16-shell, for example,
to make use of base16-themes for all terminal applications (<https://github.com/chriskempson/base16-shell>)
```
var NotSelected = [Selector](#Selector){
Focus: [false](/builtin#false),
Selected: [false](/builtin#false),
}
```
```
var Selected = [Selector](#Selector){
Focus: [false](/builtin#false),
Selected: [true](/builtin#true),
}
```
```
var StyleBlink = [StyleAttrs](#StyleAttrs){[tcell](/github.com/gdamore/tcell/v2).[AttrBlink](/github.com/gdamore/tcell/v2#AttrBlink), [tcell](/github.com/gdamore/tcell/v2).[AttrBlink](/github.com/gdamore/tcell/v2#AttrBlink)}
```
StyleBlink specifies the text should blink, but expresses no preference for other text styles.
```
var StyleBlinkOnly = [StyleAttrs](#StyleAttrs){[tcell](/github.com/gdamore/tcell/v2).[AttrBlink](/github.com/gdamore/tcell/v2#AttrBlink), [StyleAllSet](#StyleAllSet)}
```
StyleBlinkOnly specifies the text should blink, and no other styling should apply.
```
var StyleBold = [StyleAttrs](#StyleAttrs){[tcell](/github.com/gdamore/tcell/v2).[AttrBold](/github.com/gdamore/tcell/v2#AttrBold), [tcell](/github.com/gdamore/tcell/v2).[AttrBold](/github.com/gdamore/tcell/v2#AttrBold)}
```
StyleBold specifies the text should be bold, but expresses no preference for other text styles.
```
var StyleBoldOnly = [StyleAttrs](#StyleAttrs){[tcell](/github.com/gdamore/tcell/v2).[AttrBold](/github.com/gdamore/tcell/v2#AttrBold), [StyleAllSet](#StyleAllSet)}
```
StyleBoldOnly specifies the text should be bold, and no other styling should apply.
```
var StyleDim = [StyleAttrs](#StyleAttrs){[tcell](/github.com/gdamore/tcell/v2).[AttrDim](/github.com/gdamore/tcell/v2#AttrDim), [tcell](/github.com/gdamore/tcell/v2).[AttrDim](/github.com/gdamore/tcell/v2#AttrDim)}
```
StyleDim specifies the text should be dim, but expresses no preference for other text styles.
```
var StyleDimOnly = [StyleAttrs](#StyleAttrs){[tcell](/github.com/gdamore/tcell/v2).[AttrDim](/github.com/gdamore/tcell/v2#AttrDim), [StyleAllSet](#StyleAllSet)}
```
StyleDimOnly specifies the text should be dim, and no other styling should apply.
```
var StyleNone = [StyleAttrs](#StyleAttrs){}
```
StyleNone expresses no preference for any text styles.
```
var StyleReverse = [StyleAttrs](#StyleAttrs){[tcell](/github.com/gdamore/tcell/v2).[AttrReverse](/github.com/gdamore/tcell/v2#AttrReverse), [tcell](/github.com/gdamore/tcell/v2).[AttrReverse](/github.com/gdamore/tcell/v2#AttrReverse)}
```
StyleReverse specifies the text should be displayed as reverse-video, but expresses no preference for other text styles.
```
var StyleReverseOnly = [StyleAttrs](#StyleAttrs){[tcell](/github.com/gdamore/tcell/v2).[AttrReverse](/github.com/gdamore/tcell/v2#AttrReverse), [StyleAllSet](#StyleAllSet)}
```
StyleReverseOnly specifies the text should be displayed reverse-video, and no other styling should apply.
```
var StyleUnderline = [StyleAttrs](#StyleAttrs){[tcell](/github.com/gdamore/tcell/v2).[AttrUnderline](/github.com/gdamore/tcell/v2#AttrUnderline), [tcell](/github.com/gdamore/tcell/v2).[AttrUnderline](/github.com/gdamore/tcell/v2#AttrUnderline)}
```
StyleUnderline specifies the text should be underlined, but expresses no preference for other text styles.
```
var StyleUnderlineOnly = [StyleAttrs](#StyleAttrs){[tcell](/github.com/gdamore/tcell/v2).[AttrUnderline](/github.com/gdamore/tcell/v2#AttrUnderline), [StyleAllSet](#StyleAllSet)}
```
StyleUnderlineOnly specifies the text should be underlined, and no other styling should apply.
### Functions [¶](#pkg-functions)
####
func [AddWidgetCallback](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L996) [¶](#AddWidgetCallback)
```
func AddWidgetCallback(c [ICallbacks](#ICallbacks), name interface{}, cb [IWidgetChangedCallback](#IWidgetChangedCallback))
```
####
func [AppendBlankLines](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L902) [¶](#AppendBlankLines)
```
func AppendBlankLines(c [IAppendBlankLines](#IAppendBlankLines), iters [int](/builtin#int))
```
####
func [CanvasToString](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L549) [¶](#CanvasToString)
```
func CanvasToString(c [ICanvas](#ICanvas)) [string](/builtin#string)
```
####
func [ChangeFocus](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1631) [¶](#ChangeFocus)
```
func ChangeFocus(w [IWidget](#IWidget), dir [Direction](#Direction), wrap [bool](/builtin#bool), app [IApp](#IApp)) [bool](/builtin#bool)
```
ChangeFocus is a general algorithm for applying a change of focus to a type. If the type supports IChangeFocus, then that method is called directly. If the type supports IFocusSelectable,
then the next widget is found, and set. Otherwise, if the widget has a child or children, the call is passed to them.
####
func [CopyModeUserInput](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1750) [¶](#CopyModeUserInput)
```
func CopyModeUserInput(w [ICopyModeWidget](#ICopyModeWidget), ev interface{}, size [IRenderSize](#IRenderSize), focus [Selector](#Selector), app [IApp](#IApp)) [bool](/builtin#bool)
```
CopyModeUserInput processes copy mode events in a typical fashion - a widget that wraps one with potentially copyable information could defer to this implementation of UserInput.
####
func [Draw](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L799) [¶](#Draw)
```
func Draw(canvas [IDrawCanvas](#IDrawCanvas), mode [IColorMode](#IColorMode), screen [tcell](/github.com/gdamore/tcell/v2).[Screen](/github.com/gdamore/tcell/v2#Screen))
```
Draw will render a Canvas to a tcell Screen.
####
func [FindNextSelectableFrom](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L848) [¶](#FindNextSelectableFrom)
```
func FindNextSelectableFrom(w [ICompositeMultipleDimensions](#ICompositeMultipleDimensions), start [int](/builtin#int), dir [Direction](#Direction), wrap [bool](/builtin#bool)) ([int](/builtin#int), [bool](/builtin#bool))
```
####
func [FindNextSelectableWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L853) [¶](#FindNextSelectableWidget)
```
func FindNextSelectableWidget(w [][IWidget](#IWidget), pos [int](/builtin#int), dir [Direction](#Direction), wrap [bool](/builtin#bool)) ([int](/builtin#int), [bool](/builtin#bool))
```
####
func [FixCanvasHeight](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L886) [¶](#FixCanvasHeight)
```
func FixCanvasHeight(c [ICanvas](#ICanvas), size [IRenderSize](#IRenderSize))
```
####
func [Focus](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1661) [¶](#Focus)
```
func Focus(w [IWidget](#IWidget)) [int](/builtin#int)
```
####
func [FocusPath](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1684) [¶](#FocusPath)
```
func FocusPath(w [IWidget](#IWidget)) []interface{}
```
FocusPath returns a list of positions, each representing the focus position at that level in the widget hierarchy. The returned list may be shorter than the focus path through the hierarchy - only widgets that have more than one option for the focus will contribute.
####
func [HandleQuitKeys](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L606) [¶](#HandleQuitKeys)
```
func HandleQuitKeys(app [IApp](#IApp), event interface{}) [bool](/builtin#bool)
```
HandleQuitKeys is provided as a simple way to terminate your application using typical
"quit" keys - q/Q, ctrl-c, escape.
####
func [KeysEqual](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1135) [¶](#KeysEqual)
```
func KeysEqual(k1, k2 [IKey](#IKey)) [bool](/builtin#bool)
```
####
func [MakeCanvasRightSize](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L221) [¶](#MakeCanvasRightSize)
```
func MakeCanvasRightSize(c [IRightSizeCanvas](#IRightSizeCanvas), size [IRenderSize](#IRenderSize))
```
####
func [MakeCellStyle](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1003) [¶](#MakeCellStyle)
```
func MakeCellStyle(fg [TCellColor](#TCellColor), bg [TCellColor](#TCellColor), attr [StyleAttrs](#StyleAttrs)) [tcell](/github.com/gdamore/tcell/v2).[Style](/github.com/gdamore/tcell/v2#Style)
```
MakeCellStyle constructs a tcell.Style from gowid colors and styles. The return value can be provided to tcell in order to style a particular region of the screen.
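A small sketch, assuming a tcell.Screen named screen has already been initialized elsewhere:
```
// Build a tcell.Style from gowid colors and attributes...
style := gowid.MakeCellStyle(gowid.ColorWhite, gowid.ColorDarkBlue, gowid.StyleBold)
// ...and use it to draw a single cell directly on the screen.
screen.SetContent(0, 0, 'X', nil, style)
```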
####
func [PanicIfCanvasNotRightSize](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L200) [¶](#PanicIfCanvasNotRightSize)
```
func PanicIfCanvasNotRightSize(c [IRenderBox](#IRenderBox), size [IRenderSize](#IRenderSize))
```
PanicIfCanvasNotRightSize is for debugging - it panics if the size of the supplied canvas does not conform to the size specified by the size argument. For a box argument, columns and rows are checked; for a flow argument, columns are checked.
####
func [PrefPosition](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1548) [¶](#PrefPosition)
```
func PrefPosition(curw interface{}) [gwutil](/github.com/gcla/gowid@v1.4.0/gwutil).[IntOption](/github.com/gcla/gowid@v1.4.0/gwutil#IntOption)
```
PrefPosition repeatedly unpacks composite widgets until it has to stop. It looks for a type that exports a preferred-position API. The widget might be ContainerWidget/StyledWidget/...
####
func [QuitFn](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1010) [¶](#QuitFn)
```
func QuitFn(app [IApp](#IApp), widget [IWidget](#IWidget))
```
QuitFn can be used to construct a widget callback that terminates your application. It can be used as the second argument of the WidgetChangedCallback struct which implements IWidgetChangedCallback.
####
func [RangeOverCanvas](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L341) [¶](#RangeOverCanvas)
```
func RangeOverCanvas(c [IRangeOverCanvas](#IRangeOverCanvas), f [ICellProcessor](#ICellProcessor))
```
RangeOverCanvas applies the supplied function to each cell,
modifying it in place.
####
func [RemoveWidgetCallback](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1000) [¶](#RemoveWidgetCallback)
```
func RemoveWidgetCallback(c [ICallbacks](#ICallbacks), name interface{}, id [IIdentity](#IIdentity))
```
####
func [RenderRoot](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L823) [¶](#RenderRoot)
```
func RenderRoot(w [IWidget](#IWidget), t *[App](#App))
```
RenderRoot is called from the App application object when beginning the widget rendering process. It starts at the root of the widget hierarchy with an IRenderBox size argument equal to the size of the current terminal.
####
func [RunWidgetCallbacks](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L978) [¶](#RunWidgetCallbacks)
```
func RunWidgetCallbacks(c [ICallbacks](#ICallbacks), name interface{}, app [IApp](#IApp), data ...interface{})
```
####
func [SelectableIfAnySubWidgetsAre](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L726) [¶](#SelectableIfAnySubWidgetsAre)
```
func SelectableIfAnySubWidgetsAre(w [ICompositeMultipleDimensions](#ICompositeMultipleDimensions)) [bool](/builtin#bool)
```
SelectableIfAnySubWidgetsAre is useful for various container widgets.
####
func [SetPrefPosition](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1564) [¶](#SetPrefPosition)
```
func SetPrefPosition(curw interface{}, prefPos [int](/builtin#int), app [IApp](#IApp)) [bool](/builtin#bool)
```
####
func [TranslatedMouseEvent](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L78) [¶](#TranslatedMouseEvent)
```
func TranslatedMouseEvent(ev interface{}, x, y [int](/builtin#int)) interface{}
```
TranslatedMouseEvent is supplied with a tcell event and an x and y offset - it returns a tcell mouse event that represents a horizontal and vertical translation.
####
func [UserInputIfSelectable](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L790) [¶](#UserInputIfSelectable)
```
func UserInputIfSelectable(w [IWidget](#IWidget), ev interface{}, size [IRenderSize](#IRenderSize), focus [Selector](#Selector), app [IApp](#IApp)) [bool](/builtin#bool)
```
UserInputIfSelectable will return false if the widget is not selectable; otherwise it will try the widget's UserInput function.
####
func [WriteToCanvas](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L393) [¶](#WriteToCanvas)
```
func WriteToCanvas(c [IRangeOverCanvas](#IRangeOverCanvas), p [][byte](/builtin#byte)) (n [int](/builtin#int), err [error](/builtin#error))
```
WriteToCanvas extracts the logic of implementing io.Writer into a free function that can be used by any canvas implementing ICanvas.
### Types [¶](#pkg-types)
####
type [AddressProvidesID](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L683) [¶](#AddressProvidesID)
```
type AddressProvidesID struct{}
```
AddressProvidesID is a convenience struct that can be embedded in widgets.
It provides an ID() function that simply returns the receiver's address. The ID() function is for widgets that want to implement IIdentity, which is needed by containers that want to compare widgets.
For example, if the user clicks on a button.Widget, the app can be used to save that widget. When the click is released, the button's UserInput function tries to determine whether the mouse was released over the same widget that was clicked. It can do this by comparing the widgets'
ID() values. Note that this will not work if new button widgets are created each time Render/UserInput is called (because the caller will change).
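A hedged sketch of the embedding pattern; MyWidget and its remaining fields are hypothetical:
```
type MyWidget struct {
	gowid.AddressProvidesID // ID() returns the embedded struct's address
	gowid.IsSelectable      // Selectable() returns true
	// ... widget state ...
}
```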
####
func (*AddressProvidesID) [ID](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L685) [¶](#AddressProvidesID.ID)
```
func (a *[AddressProvidesID](#AddressProvidesID)) ID() interface{}
```
####
type [App](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L84) [¶](#App)
```
type App struct {
[IPalette](#IPalette) // App holds an IPalette and provides it to each widget when rendering
TCellEvents chan [tcell](/github.com/gdamore/tcell/v2).[Event](/github.com/gdamore/tcell/v2#Event) // Events from tcell e.g. resize
AfterRenderEvents chan [IAfterRenderEvent](#IAfterRenderEvent) // Functions intended to run on the widget goroutine
[MouseState](#MouseState) // Track which mouse buttons are currently down
[ClickTargets](#ClickTargets) // When mouse is clicked, track potential interaction here
// contains filtered or unexported fields
}
```
App is an implementation of IApp. The App struct conforms to IApp and provides services to a running gowid application, such as access to the palette, the screen and the state of the mouse.
####
func [NewApp](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L233) [¶](#NewApp)
```
func NewApp(args [AppArgs](#AppArgs)) (rapp *[App](#App), rerr [error](/builtin#error))
```
####
func (*App) [ActivateScreen](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L819) [¶](#App.ActivateScreen)
added in v1.1.0
```
func (a *[App](#App)) ActivateScreen() [error](/builtin#error)
```
Let screen be taken over by gowid/tcell. A new screen struct is created because I can't make tcell claim and release the same screen successfully. Clients of the app struct shouldn't cache the screen object returned via GetScreen().
Assumes we own the screen...
####
func (*App) [Clips](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L455) [¶](#App.Clips)
```
func (a *[App](#App)) Clips() [][ICopyResult](#ICopyResult)
```
####
func (*App) [Close](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L565) [¶](#App.Close)
```
func (a *[App](#App)) Close()
```
Close should be called by a gowid application after the user terminates the application.
It will clean up tcell's screen object.
####
func (*App) [CopyLevel](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L336) [¶](#App.CopyLevel)
```
func (a *[App](#App)) CopyLevel(lvl ...[int](/builtin#int)) [int](/builtin#int)
```
####
func (*App) [CopyModeClaimedAt](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L350) [¶](#App.CopyModeClaimedAt)
```
func (a *[App](#App)) CopyModeClaimedAt(lvl ...[int](/builtin#int)) [int](/builtin#int)
```
####
func (*App) [CopyModeClaimedBy](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L357) [¶](#App.CopyModeClaimedBy)
```
func (a *[App](#App)) CopyModeClaimedBy(id ...[IIdentity](#IIdentity)) [IIdentity](#IIdentity)
```
####
func (*App) [DeactivateScreen](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L836) [¶](#App.DeactivateScreen)
added in v1.1.0
```
func (a *[App](#App)) DeactivateScreen()
```
Assumes we own the screen
####
func (*App) [GetColorMode](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L395) [¶](#App.GetColorMode)
```
func (a *[App](#App)) GetColorMode() [ColorMode](#ColorMode)
```
####
func (*App) [GetLastMouseState](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L387) [¶](#App.GetLastMouseState)
```
func (a *[App](#App)) GetLastMouseState() [MouseState](#MouseState)
```
####
func (*App) [GetMouseState](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L383) [¶](#App.GetMouseState)
```
func (a *[App](#App)) GetMouseState() [MouseState](#MouseState)
```
####
func (*App) [GetPalette](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L379) [¶](#App.GetPalette)
```
func (a *[App](#App)) GetPalette() [IPalette](#IPalette)
```
####
func (*App) [GetScreen](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L328) [¶](#App.GetScreen)
```
func (a *[App](#App)) GetScreen() [tcell](/github.com/gdamore/tcell/v2).[Screen](/github.com/gdamore/tcell/v2#Screen)
```
####
func (*App) [HandleTCellEvent](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L481) [¶](#App.HandleTCellEvent)
```
func (a *[App](#App)) HandleTCellEvent(ev interface{}, unhandled [IUnhandledInput](#IUnhandledInput))
```
HandleTCellEvent handles an event from the underlying TCell library,
based on its type (key-press, error, etc.). User input events are sent to onInputEvent, which will check the widget hierarchy to see if the input can be processed; other events might result in gowid updating its internal state, like the size of the underlying terminal.
####
func (*App) [InCopyMode](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L343) [¶](#App.InCopyMode)
```
func (a *[App](#App)) InCopyMode(on ...[bool](/builtin#bool)) [bool](/builtin#bool)
```
####
func (*App) [MainLoop](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L647) [¶](#App.MainLoop)
```
func (a *[App](#App)) MainLoop(unhandled [IUnhandledInput](#IUnhandledInput))
```
MainLoop is the intended gowid entry point for typical applications. After the App is instantiated and the widget hierarchy set up, the application should call MainLoop with a handler for processing input that is not consumed by any widget.
####
func (*App) [Quit](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L806) [¶](#App.Quit)
```
func (a *[App](#App)) Quit()
```
Quit will terminate the gowid main loop.
####
func (*App) [Redraw](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L801) [¶](#App.Redraw)
```
func (a *[App](#App)) Redraw()
```
Redraw will re-render the widget hierarchy.
####
func (*App) [RedrawTerminal](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L714) [¶](#App.RedrawTerminal)
```
func (a *[App](#App)) RedrawTerminal()
```
RedrawTerminal updates the gui, re-drawing frames and buffers. Call this from the widget-handling goroutine only. Intended for use by apps that construct their own main loops and handle gowid events themselves.
####
func (*App) [RefreshCopyMode](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L332) [¶](#App.RefreshCopyMode)
```
func (a *[App](#App)) RefreshCopyMode()
```
####
func (*App) [RegisterMenu](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L724) [¶](#App.RegisterMenu)
```
func (a *[App](#App)) RegisterMenu(menu [IMenuCompatible](#IMenuCompatible))
```
RegisterMenu should be called by any widget that wants to display a menu. The call could be made after initializing the App object. This call adds the menu above the current root of the widget hierarchy - when the App renders from the root down, any open menus will be rendered on top of the original root (using the overlay widget).
####
func (*App) [Run](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L789) [¶](#App.Run)
```
func (a *[App](#App)) Run(f [IAfterRenderEvent](#IAfterRenderEvent)) [error](/builtin#error)
```
Run executes this function on the goroutine that renders widgets and processes their callbacks. Any function that manipulates widget state outside of the Render/UserInput chain should be run this way for thread-safety e.g. a function that changes the UI from a timer event.
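A minimal sketch of the pattern, assuming app is a *gowid.App, the time package is imported, and statusText is a text widget already placed in the hierarchy (its SetText call is used here purely for illustration):
```
go func() {
	for range time.Tick(time.Second) {
		// Queue the UI update so it runs on the widget goroutine.
		app.Run(gowid.RunFunction(func(app gowid.IApp) {
			statusText.SetText(time.Now().Format("15:04:05"), app)
		}))
	}
}()
```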
####
func (*App) [RunThenRenderEvent](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L658) [¶](#App.RunThenRenderEvent)
```
func (a *[App](#App)) RunThenRenderEvent(ev [IAfterRenderEvent](#IAfterRenderEvent))
```
RunThenRenderEvent dispatches the event by calling it with the app as an argument - then it will force the application to re-render itself.
####
func (*App) [Runner](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L624) [¶](#App.Runner)
```
func (a *[App](#App)) Runner() *[AppRunner](#AppRunner)
```
####
func (*App) [SetColorMode](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L391) [¶](#App.SetColorMode)
```
func (a *[App](#App)) SetColorMode(mode [ColorMode](#ColorMode))
```
####
func (*App) [SetPalette](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L375) [¶](#App.SetPalette)
```
func (a *[App](#App)) SetPalette(palette [IPalette](#IPalette))
```
####
func (*App) [SetSubWidget](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L364) [¶](#App.SetSubWidget)
```
func (a *[App](#App)) SetSubWidget(widget [IWidget](#IWidget), app [IApp](#IApp))
```
####
func (*App) [SimpleMainLoop](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L600) [¶](#App.SimpleMainLoop)
```
func (a *[App](#App)) SimpleMainLoop()
```
SimpleMainLoop will run your application using a default unhandled input function that will terminate your application on q/Q, ctrl-c and escape.
####
func (*App) [StartTCellEvents](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L574) [¶](#App.StartTCellEvents)
```
func (a *[App](#App)) StartTCellEvents(quit <-chan [Unit](#Unit), wg *[sync](/sync).[WaitGroup](/sync#WaitGroup))
```
StartTCellEvents starts a goroutine that listens for events from TCell. The PollEvent function will block until TCell has something to report - when something arrives, it is written to the tcellEvents channel. The function is provided with a quit channel which is consulted for an event that will terminate this goroutine.
####
func (*App) [StopTCellEvents](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L592) [¶](#App.StopTCellEvents)
```
func (a *[App](#App)) StopTCellEvents(quit chan<- [Unit](#Unit), wg *[sync](/sync).[WaitGroup](/sync#WaitGroup))
```
StopTCellEvents will cause TCell to generate an interrupt event; an event is posted to the quit channel first to stop the TCell event goroutine.
####
func (*App) [SubWidget](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L371) [¶](#App.SubWidget)
```
func (a *[App](#App)) SubWidget() [IWidget](#IWidget)
```
####
func (*App) [Sync](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L707) [¶](#App.Sync)
```
func (a *[App](#App)) Sync()
```
Sync defers immediately to tcell's Screen's Sync() function - it is for updating every screen cell in the event something corrupts the screen (e.g. ssh -v logging)
####
func (*App) [TerminalSize](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L400) [¶](#App.TerminalSize)
```
func (a *[App](#App)) TerminalSize() (x, y [int](/builtin#int))
```
TerminalSize returns the terminal's size.
####
func (*App) [UnregisterMenu](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L760) [¶](#App.UnregisterMenu)
```
func (a *[App](#App)) UnregisterMenu(menu [IMenuCompatible](#IMenuCompatible)) [bool](/builtin#bool)
```
UnregisterMenu will remove a menu from the widget hierarchy. If it's not found,
false is returned.
####
type [AppArgs](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L115) [¶](#AppArgs)
```
type AppArgs struct {
Screen [tcell](/github.com/gdamore/tcell/v2).[Screen](/github.com/gdamore/tcell/v2#Screen)
View [IWidget](#IWidget)
Palette [IPalette](#IPalette)
EnableMouseMotion [bool](/builtin#bool)
EnableBracketedPaste [bool](/builtin#bool)
Log [log](/github.com/sirupsen/logrus).[StdLogger](/github.com/sirupsen/logrus#StdLogger)
DontActivate [bool](/builtin#bool)
Tty [string](/builtin#string)
}
```
AppArgs is a helper struct, providing arguments for the initialization of App.
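A minimal sketch of initializing an App with a palette; the palette entry name "banner" and the view widget are illustrative:
```
palette := gowid.Palette{
	"banner": gowid.MakePaletteEntry(gowid.ColorWhite, gowid.ColorDarkBlue),
}
app, err := gowid.NewApp(gowid.AppArgs{
	View:    view, // some root IWidget constructed elsewhere
	Palette: &palette,
})
if err != nil {
	panic(err)
}
app.SimpleMainLoop()
```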
####
type [AppRunner](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L617) [¶](#AppRunner)
```
type AppRunner struct {
// contains filtered or unexported fields
}
```
####
func (*AppRunner) [Start](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L632) [¶](#AppRunner.Start)
```
func (st *[AppRunner](#AppRunner)) Start()
```
####
func (*AppRunner) [Stop](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L637) [¶](#AppRunner.Stop)
```
func (st *[AppRunner](#AppRunner)) Stop()
```
####
type [BackgroundColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1662) [¶](#BackgroundColor)
```
type BackgroundColor struct {
[IColor](#IColor)
}
```
BackgroundColor is an ICellStyler that expresses a specific background color and no preference for foreground color or style.
####
func [MakeBackground](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1668) [¶](#MakeBackground)
```
func MakeBackground(c [IColor](#IColor)) [BackgroundColor](#BackgroundColor)
```
####
func (BackgroundColor) [GetStyle](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1673) [¶](#BackgroundColor.GetStyle)
```
func (a [BackgroundColor](#BackgroundColor)) GetStyle(prov [IRenderContext](#IRenderContext)) (x [IColor](#IColor), y [IColor](#IColor), z [StyleAttrs](#StyleAttrs))
```
GetStyle implements ICellStyler.
####
type [Callback](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L40) [¶](#Callback)
```
type Callback struct {
Name interface{}
[CallbackFunction](#CallbackFunction)
}
```
Callback is a simple implementation of ICallback.
####
func (Callback) [ID](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L53) [¶](#Callback.ID)
```
func (f [Callback](#Callback)) ID() interface{}
```
####
type [CallbackFunction](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L33) [¶](#CallbackFunction)
```
type CallbackFunction func(args ...interface{})
```
####
func (CallbackFunction) [Call](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L45) [¶](#CallbackFunction.Call)
```
func (f [CallbackFunction](#CallbackFunction)) Call(args ...interface{})
```
####
type [CallbackID](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L35) [¶](#CallbackID)
```
type CallbackID struct {
Name interface{}
}
```
####
func (CallbackID) [ID](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L49) [¶](#CallbackID.ID)
```
func (f [CallbackID](#CallbackID)) ID() interface{}
```
####
type [Callbacks](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L57) [¶](#Callbacks)
```
type Callbacks struct {
[sync](/sync).[Mutex](/sync#Mutex)
// contains filtered or unexported fields
}
```
####
func [NewCallbacks](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L68) [¶](#NewCallbacks)
```
func NewCallbacks() *[Callbacks](#Callbacks)
```
####
func (*Callbacks) [AddCallback](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L106) [¶](#Callbacks.AddCallback)
```
func (c *[Callbacks](#Callbacks)) AddCallback(name interface{}, cb [ICallback](#ICallback))
```
####
func (*Callbacks) [CopyOfCallbacks](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L82) [¶](#Callbacks.CopyOfCallbacks)
```
func (c *[Callbacks](#Callbacks)) CopyOfCallbacks(name interface{}) ([][ICallback](#ICallback), [bool](/builtin#bool))
```
CopyOfCallbacks is used when callbacks are run - they are copied so that any callers modifying the callbacks themselves can do so safely with the modifications taking effect after all callbacks are run. Can be called with a nil receiver if the widget's callback object has not been initialized and e.g. RunWidgetCallbacks is called.
####
func (*Callbacks) [RemoveCallback](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L114) [¶](#Callbacks.RemoveCallback)
```
func (c *[Callbacks](#Callbacks)) RemoveCallback(name interface{}, cb [IIdentity](#IIdentity)) [bool](/builtin#bool)
```
####
func (*Callbacks) [RunCallbacks](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L96) [¶](#Callbacks.RunCallbacks)
```
func (c *[Callbacks](#Callbacks)) RunCallbacks(name interface{}, args ...interface{})
```
####
type [Canvas](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L259) [¶](#Canvas)
```
type Canvas struct {
Lines [][][Cell](#Cell) // inner array is a line
Marks *map[[string](/builtin#string)][CanvasPos](#CanvasPos)
// contains filtered or unexported fields
}
```
Canvas is a simple implementation of ICanvas, and is returned by the Render() function of all the current widgets. It represents the canvas by a 2-dimensional array of Cells -
no tricks or attempts to optimize this yet! The canvas also stores a map of string identifiers to positions - for example, the cursor position is tracked this way, and the menu widget keeps track of where it should render a "dropdown" using canvas marks. Most Canvas APIs expect that each line has the same length.
####
func [NewCanvas](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L267) [¶](#NewCanvas)
```
func NewCanvas() *[Canvas](#Canvas)
```
NewCanvas returns an initialized Canvas struct. Its size is 0 columns and 0 rows.
####
func [NewCanvasOfSize](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L290) [¶](#NewCanvasOfSize)
```
func NewCanvasOfSize(cols, rows [int](/builtin#int)) *[Canvas](#Canvas)
```
NewCanvasOfSize returns a canvas struct of size cols x rows, where each Cell is default-initialized (i.e. empty).
####
func [NewCanvasOfSizeExt](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L296) [¶](#NewCanvasOfSizeExt)
```
func NewCanvasOfSizeExt(cols, rows [int](/builtin#int), fill [Cell](#Cell)) *[Canvas](#Canvas)
```
NewCanvasOfSizeExt returns a canvas struct of size cols x rows, where each Cell is initialized by copying the fill argument.
####
func [NewCanvasWithLines](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L278) [¶](#NewCanvasWithLines)
```
func NewCanvasWithLines(lines [][][Cell](#Cell)) *[Canvas](#Canvas)
```
NewCanvasWithLines allocates a canvas struct and sets its contents to the 2-d array provided as an argument.
####
func (*Canvas) [AlignRight](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L794) [¶](#Canvas.AlignRight)
```
func (c *[Canvas](#Canvas)) AlignRight()
```
AlignRight will extend each row of Cells in the receiver Canvas with an empty Cell in order to ensure all rows are the same length. Note that the Canvas will not increase in width as a result.
####
func (*Canvas) [AlignRightWith](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L776) [¶](#Canvas.AlignRightWith)
```
func (c *[Canvas](#Canvas)) AlignRightWith(cell [Cell](#Cell))
```
AlignRightWith will extend each row of Cells in the receiver Canvas with the supplied Cell in order to ensure all rows are the same length. Note that the Canvas will not increase in width as a result.
####
func (*Canvas) [AppendBelow](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L604) [¶](#Canvas.AppendBelow)
```
func (c *[Canvas](#Canvas)) AppendBelow(c2 [IAppendCanvas](#IAppendCanvas), doCursor [bool](/builtin#bool), makeCopy [bool](/builtin#bool))
```
AppendBelow appends the supplied Canvas to the "bottom" of the receiver Canvas. If doCursor is true and the supplied Canvas has an enabled cursor, it is applied to the received Canvas, with a suitable Y offset. If makeCopy is true then the supplied Canvas is copied; if false, and the supplied Canvas is capable of giving up ownership of its data structures, then they are moved to the receiver Canvas.
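A small sketch of stacking canvases, assuming *Canvas satisfies IAppendCanvas as it is used elsewhere in the package:
```
top := gowid.NewCanvasOfSize(10, 1)
bottom := gowid.NewCanvasOfSize(10, 2)
// Stack bottom beneath top, ignoring any cursor and without forcing a copy.
top.AppendBelow(bottom, false, false)
fmt.Println(top.BoxRows()) // 3
```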
####
func (*Canvas) [AppendLine](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L525) [¶](#Canvas.AppendLine)
```
func (c *[Canvas](#Canvas)) AppendLine(line [][Cell](#Cell), makeCopy [bool](/builtin#bool))
```
AppendLine will append the array of Cells provided to the bottom of the receiver Canvas. If the makeCopy argument is true, a copy is made of the provided Cell array; otherwise, a slice is taken and used directly, meaning the Canvas will hold a reference to the underlying array.
####
func (*Canvas) [AppendRight](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L699) [¶](#Canvas.AppendRight)
```
func (c *[Canvas](#Canvas)) AppendRight(c2 [IMergeCanvas](#IMergeCanvas), useCursor [bool](/builtin#bool))
```
AppendRight appends the supplied Canvas to the right of the receiver Canvas. It assumes both Canvases have the same number of rows. If useCursor is true and the supplied Canvas has an enabled cursor, then it is applied with a suitable X offset applied.
####
func (*Canvas) [BoxColumns](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L361) [¶](#Canvas.BoxColumns)
```
func (c *[Canvas](#Canvas)) BoxColumns() [int](/builtin#int)
```
BoxColumns helps Canvas conform to IRenderBox.
####
func (*Canvas) [BoxRows](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L366) [¶](#Canvas.BoxRows)
```
func (c *[Canvas](#Canvas)) BoxRows() [int](/builtin#int)
```
BoxRows helps Canvas conform to IRenderBox.
####
func (*Canvas) [CellAt](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L503) [¶](#Canvas.CellAt)
```
func (c *[Canvas](#Canvas)) CellAt(col, row [int](/builtin#int)) [Cell](#Cell)
```
CellAt returns the Cell at the Canvas position provided. Note that the function assumes the caller has ensured the position is not out of bounds.
####
func (*Canvas) [ComputeCurrentMaxColumn](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L376) [¶](#Canvas.ComputeCurrentMaxColumn)
```
func (c *[Canvas](#Canvas)) ComputeCurrentMaxColumn() [int](/builtin#int)
```
ComputeCurrentMaxColumn walks the 2-d array of Cells to determine the length of the longest line. This is used by certain APIs that manipulate the canvas.
####
func (*Canvas) [CursorCoords](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L436) [¶](#Canvas.CursorCoords)
```
func (c *[Canvas](#Canvas)) CursorCoords() [CanvasPos](#CanvasPos)
```
CursorCoords returns a pair of ints representing the current cursor coordinates. Note that the caller must be sure the Canvas's cursor is enabled.
####
func (*Canvas) [CursorEnabled](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L426) [¶](#Canvas.CursorEnabled)
```
func (c *[Canvas](#Canvas)) CursorEnabled() [bool](/builtin#bool)
```
CursorEnabled returns true if the cursor is enabled in this canvas, false otherwise.
####
func (*Canvas) [Duplicate](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L318) [¶](#Canvas.Duplicate)
```
func (c *[Canvas](#Canvas)) Duplicate() [ICanvas](#ICanvas)
```
Duplicate returns a deep copy of the receiver canvas.
####
func (*Canvas) [ExtendLeft](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L582) [¶](#Canvas.ExtendLeft)
```
func (c *[Canvas](#Canvas)) ExtendLeft(cells [][Cell](#Cell))
```
ExtendLeft prepends to each line of the receiver Canvas the array of Cells provided as an argument.
####
func (*Canvas) [ExtendRight](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L566) [¶](#Canvas.ExtendRight)
```
func (c *[Canvas](#Canvas)) ExtendRight(cells [][Cell](#Cell))
```
ExtendRight appends to each line of the receiver Canvas the array of Cells provided as an argument.
####
func (*Canvas) [GetMark](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L472) [¶](#Canvas.GetMark)
```
func (c *[Canvas](#Canvas)) GetMark(name [string](/builtin#string)) ([CanvasPos](#CanvasPos), [bool](/builtin#bool))
```
GetMark returns the position and presence/absence of the specified string identifier in the Canvas.
####
func (*Canvas) [ImplementsWidgetDimension](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L371) [¶](#Canvas.ImplementsWidgetDimension)
```
func (c *[Canvas](#Canvas)) ImplementsWidgetDimension()
```
ImplementsWidgetDimension helps Canvas conform to IWidgetDimension.
####
func (*Canvas) [Line](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L353) [¶](#Canvas.Line)
```
func (c *[Canvas](#Canvas)) Line(y [int](/builtin#int), cp [LineCopy](#LineCopy)) [LineResult](#LineResult)
```
Line provides access to the lines of the canvas. The LineCopy argument determines what the Line() function should allocate if it needs to make a copy of the line. The returned LineResult reports whether the line was copied.
####
func (*Canvas) [MergeUnder](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L691) [¶](#Canvas.MergeUnder)
```
func (c *[Canvas](#Canvas)) MergeUnder(c2 [IMergeCanvas](#IMergeCanvas), leftOffset, topOffset [int](/builtin#int), bottomGetsCursor [bool](/builtin#bool))
```
MergeUnder merges the supplied Canvas "under" the receiver Canvas, meaning the receiver Canvas's Cells' settings are given priority.
####
func (*Canvas) [MergeWithFunc](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L660) [¶](#Canvas.MergeWithFunc)
```
func (c *[Canvas](#Canvas)) MergeWithFunc(c2 [IMergeCanvas](#IMergeCanvas), leftOffset, topOffset [int](/builtin#int), fn [CellMergeFunc](#CellMergeFunc), bottomGetsCursor [bool](/builtin#bool))
```
MergeWithFunc merges the supplied Canvas with the receiver canvas, where the receiver canvas is considered to start at column leftOffset and at row topOffset, therefore translated some distance from the top-left, and the receiver Canvas is the one modified. A function argument is supplied which specifies how Cells are merged, one by one e.g. which style takes effect,
which rune, and so on.
####
func (*Canvas) [RangeOverMarks](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L490) [¶](#Canvas.RangeOverMarks)
```
func (c *[Canvas](#Canvas)) RangeOverMarks(f func(key [string](/builtin#string), value [CanvasPos](#CanvasPos)) [bool](/builtin#bool))
```
RangeOverMarks applies the supplied function to each mark and position in the received Canvas. If the function returns false, the loop is terminated.
####
func (*Canvas) [RemoveMark](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L482) [¶](#Canvas.RemoveMark)
```
func (c *[Canvas](#Canvas)) RemoveMark(name [string](/builtin#string))
```
RemoveMark removes a mark from the Canvas.
####
func (*Canvas) [SetCellAt](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L510) [¶](#Canvas.SetCellAt)
```
func (c *[Canvas](#Canvas)) SetCellAt(col, row [int](/builtin#int), cell [Cell](#Cell))
```
SetCellAt sets the Canvas Cell at the position provided. Note that the function assumes the caller has ensured the position is not out of bounds.
####
func (*Canvas) [SetCursorCoords](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L451) [¶](#Canvas.SetCursorCoords)
```
func (c *[Canvas](#Canvas)) SetCursorCoords(x, y [int](/builtin#int))
```
SetCursorCoords will set the Canvas's cursor coordinates. The special input of (-1,-1)
will disable the cursor.
####
func (*Canvas) [SetLineAt](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L516) [¶](#Canvas.SetLineAt)
```
func (c *[Canvas](#Canvas)) SetLineAt(row [int](/builtin#int), line [][Cell](#Cell))
```
SetLineAt sets a line of the Canvas at the given y position. The function assumes a line of the correct width has been provided.
####
func (*Canvas) [SetMark](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L462) [¶](#Canvas.SetMark)
```
func (c *[Canvas](#Canvas)) SetMark(name [string](/builtin#string), x, y [int](/builtin#int))
```
SetMark allows the caller to store a string identifier at a particular position in the Canvas. The menu widget uses this feature to keep track of where it should "open", acting as an overlay over the widgets below.
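A small sketch of setting and then retrieving a mark:
```
c := gowid.NewCanvasOfSize(10, 3)
c.SetMark("dropdown", 4, 1)
if pos, ok := c.GetMark("dropdown"); ok {
	fmt.Println(pos.X, pos.Y) // 4 1
}
```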
####
func (*Canvas) [String](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L545) [¶](#Canvas.String)
```
func (c *[Canvas](#Canvas)) String() [string](/builtin#string)
```
String lets Canvas conform to fmt.Stringer.
####
func (*Canvas) [TrimLeft](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L741) [¶](#Canvas.TrimLeft)
```
func (c *[Canvas](#Canvas)) TrimLeft(colsToHave [int](/builtin#int))
```
TrimLeft removes columns from the left of the receiver Canvas until there is the specified number left.
####
func (*Canvas) [TrimRight](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L730) [¶](#Canvas.TrimRight)
```
func (c *[Canvas](#Canvas)) TrimRight(colsToHave [int](/builtin#int))
```
TrimRight removes columns from the right of the receiver Canvas until there is the specified number left.
####
func (*Canvas) [Truncate](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L635) [¶](#Canvas.Truncate)
```
func (c *[Canvas](#Canvas)) Truncate(above, below [int](/builtin#int))
```
Truncate removes "above" lines from above the receiver Canvas, and
"below" lines from below.
####
func (*Canvas) [Write](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L387) [¶](#Canvas.Write)
```
func (c *[Canvas](#Canvas)) Write(p [][byte](/builtin#byte)) (n [int](/builtin#int), err [error](/builtin#error))
```
Write lets Canvas conform to io.Writer. Since each Canvas Cell holds a rune, the byte array argument is interpreted as the UTF-8 encoding of a sequence of runes.
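A small sketch: because Canvas is an io.Writer, the fmt package can write into it directly.
```
c := gowid.NewCanvasOfSize(20, 1)
// The bytes are decoded as UTF-8 and written rune-by-rune into the cells.
fmt.Fprint(c, "hello, 世界")
```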
####
type [CanvasPos](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L172) [¶](#CanvasPos)
```
type CanvasPos struct {
X, Y [int](/builtin#int)
}
```
CanvasPos is a convenience struct to represent the coordinates of a position on a canvas.
####
func (CanvasPos) [PlusX](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L176) [¶](#CanvasPos.PlusX)
```
func (c [CanvasPos](#CanvasPos)) PlusX(n [int](/builtin#int)) [CanvasPos](#CanvasPos)
```
####
func (CanvasPos) [PlusY](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L180) [¶](#CanvasPos.PlusY)
```
func (c [CanvasPos](#CanvasPos)) PlusY(n [int](/builtin#int)) [CanvasPos](#CanvasPos)
```
####
type [CanvasSizeWrong](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L186) [¶](#CanvasSizeWrong)
```
type CanvasSizeWrong struct {
Requested [IRenderSize](#IRenderSize)
Actual [IRenderBox](#IRenderBox)
}
```
####
func (CanvasSizeWrong) [Error](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L193) [¶](#CanvasSizeWrong.Error)
```
func (e [CanvasSizeWrong](#CanvasSizeWrong)) Error() [string](/builtin#string)
```
####
type [Cell](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L14) [¶](#Cell)
```
type Cell struct {
// contains filtered or unexported fields
}
```
Cell represents a single element of terminal output. The empty value is a blank cell with default colors, style, and a 'blank' rune. It is closely tied to TCell's underlying cell representation - colors are TCell-specific, so are translated from anything more general before a Cell is instantiated.
####
func [CellFromRune](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L148) [¶](#CellFromRune)
```
func CellFromRune(r [rune](/builtin#rune)) [Cell](#Cell)
```
CellFromRune returns a Cell with the supplied rune and with default coloring and styling.
####
func [CellsFromString](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L154) [¶](#CellsFromString)
```
func CellsFromString(s [string](/builtin#string)) [][Cell](#Cell)
```
CellsFromString is a utility function to turn a string into an array of Cells. Note that each Cell has no color or style set.
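A short sketch:
```
cells := gowid.CellsFromString("ok")
// len(cells) == 2; each Cell carries only its rune, with no color or style set
first := cells[0].Rune() // 'o'
_ = first
```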
####
func [EmptyLine](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L161) [¶](#EmptyLine)
```
func EmptyLine(length [int](/builtin#int)) [][Cell](#Cell)
```
EmptyLine provides a ready-allocated source of empty cells. Of course this is to be treated as read-only.
####
func [MakeCell](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L26) [¶](#MakeCell)
```
func MakeCell(codePoint [rune](/builtin#rune), fg [TCellColor](#TCellColor), bg [TCellColor](#TCellColor), Attr [StyleAttrs](#StyleAttrs)) [Cell](#Cell)
```
MakeCell returns a Cell initialized with the supplied rune (the character to display), foreground color, background color and style attributes. Each color can specify "default", meaning whatever the terminal default foreground/background is, or "none", meaning no preference, allowing it to be overridden when laid on top of another Cell during the render process.
####
func (Cell) [BackgroundColor](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L96) [¶](#Cell.BackgroundColor)
```
func (c [Cell](#Cell)) BackgroundColor() [TCellColor](#TCellColor)
```
BackgroundColor returns the background color of the receiver Cell.
####
func (Cell) [ForegroundColor](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L101) [¶](#Cell.ForegroundColor)
```
func (c [Cell](#Cell)) ForegroundColor() [TCellColor](#TCellColor)
```
ForegroundColor returns the foreground color of the receiver Cell.
####
func (Cell) [GetDisplayAttrs](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L64) [¶](#Cell.GetDisplayAttrs)
```
func (c [Cell](#Cell)) GetDisplayAttrs() (x [TCellColor](#TCellColor), y [TCellColor](#TCellColor), z [StyleAttrs](#StyleAttrs))
```
GetDisplayAttrs returns the receiver Cell's foreground and background color and styling.
####
func (Cell) [HasRune](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L74) [¶](#Cell.HasRune)
```
func (c [Cell](#Cell)) HasRune() [bool](/builtin#bool)
```
HasRune returns true if the Cell actively specifies a rune to display; otherwise false, meaning it is "empty", and a Cell layered underneath it will have its rune displayed.
####
func (Cell) [MergeDisplayAttrsUnder](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L49) [¶](#Cell.MergeDisplayAttrsUnder)
```
func (c [Cell](#Cell)) MergeDisplayAttrsUnder(upper [Cell](#Cell)) [Cell](#Cell)
```
MergeDisplayAttrsUnder returns a Cell representing the receiver Cell with the argument Cell's color and styling applied, if they are explicitly set.
####
func (Cell) [MergeUnder](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L39) [¶](#Cell.MergeUnder)
```
func (c [Cell](#Cell)) MergeUnder(upper [Cell](#Cell)) [Cell](#Cell)
```
MergeUnder returns a Cell representing the receiver merged "underneath" the Cell argument provided. This means the argument's rune value will be used unless it is "empty", and the cell's color and styling come from the argument's value in a similar fashion.
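A sketch showing that an empty upper Cell lets the lower Cell's rune show through:
```
lower := gowid.CellFromRune('a')
upper := gowid.Cell{} // zero value: no rune, default colors and style
merged := lower.MergeUnder(upper)
// merged.Rune() == 'a' because the upper cell does not specify a rune
_ = merged
```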
####
func (Cell) [Rune](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L80) [¶](#Cell.Rune)
```
func (c [Cell](#Cell)) Rune() [rune](/builtin#rune)
```
Rune will return a rune that can be displayed, if this Cell is being rendered in some fashion. If the Cell is empty, then a space rune is returned.
####
func (Cell) [Style](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L106) [¶](#Cell.Style)
```
func (c [Cell](#Cell)) Style() [StyleAttrs](#StyleAttrs)
```
Style returns the style of the receiver Cell.
####
func (Cell) [WithBackgroundColor](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L121) [¶](#Cell.WithBackgroundColor)
```
func (c [Cell](#Cell)) WithBackgroundColor(a [TCellColor](#TCellColor)) [Cell](#Cell)
```
WithBackgroundColor returns a Cell equal to the receiver Cell but that will render with the supplied background color instead. Note that this color can be set to "none" by passing the value gowid.ColorNone, meaning allow Cells layered underneath to determine the background color.
####
func (Cell) [WithForegroundColor](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L130) [¶](#Cell.WithForegroundColor)
```
func (c [Cell](#Cell)) WithForegroundColor(a [TCellColor](#TCellColor)) [Cell](#Cell)
```
WithForegroundColor returns a Cell equal to the receiver Cell but that will render with the supplied foreground color instead. Note that this color can be set to "none" by passing the value gowid.ColorNone, meaning allow Cells layered underneath to determine the foreground color.
####
func (Cell) [WithNoRune](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L112) [¶](#Cell.WithNoRune)
```
func (c [Cell](#Cell)) WithNoRune() [Cell](#Cell)
```
WithNoRune returns a Cell equal to the receiver Cell but that will render no rune instead, i.e. it is "empty".
####
func (Cell) [WithRune](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L90) [¶](#Cell.WithRune)
```
func (c [Cell](#Cell)) WithRune(r [rune](/builtin#rune)) [Cell](#Cell)
```
WithRune returns a Cell equal to the receiver Cell but that will render the supplied rune instead.
####
func (Cell) [WithStyle](https://github.com/gcla/gowid/blob/v1.4.0/cell.go#L139) [¶](#Cell.WithStyle)
```
func (c [Cell](#Cell)) WithStyle(attr [StyleAttrs](#StyleAttrs)) [Cell](#Cell)
```
WithStyle returns a Cell equal to the receiver Cell but that will render with the supplied style (e.g. underline) instead. Note that this style can be set to "none" by passing the value gowid.AttrNone, meaning allow Cells layered underneath to determine the style.
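The With* methods all return modified copies, so they can be chained without mutating the receiver. A sketch:
```
c := gowid.CellFromRune('x')
d := c.WithRune('y').WithBackgroundColor(gowid.ColorNone)
// c still renders 'x'; d renders 'y' and defers its background to cells below
_ = d
```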
####
type [CellMergeFunc](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L653) [¶](#CellMergeFunc)
```
type CellMergeFunc func(lower, upper [Cell](#Cell)) [Cell](#Cell)
```
####
type [CellRangeFunc](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1117) [¶](#CellRangeFunc)
```
type CellRangeFunc func(cell [Cell](#Cell)) [Cell](#Cell)
```
CellRangeFunc is an adaptor for a simple function to implement ICellProcessor.
####
func (CellRangeFunc) [ProcessCell](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1120) [¶](#CellRangeFunc.ProcessCell)
```
func (f [CellRangeFunc](#CellRangeFunc)) ProcessCell(cell [Cell](#Cell)) [Cell](#Cell)
```
ProcessCell hands over processing to the adapted function.
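A sketch of adapting a plain function so it satisfies ICellProcessor (uses the standard unicode package):
```
upper := gowid.CellRangeFunc(func(c gowid.Cell) gowid.Cell {
	return c.WithRune(unicode.ToUpper(c.Rune()))
})
out := upper.ProcessCell(gowid.CellFromRune('a')) // out.Rune() == 'A'
_ = out
```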
####
type [ClickCB](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L12) [¶](#ClickCB)
```
type ClickCB struct{}
```
####
type [ClickCallbacks](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1054) [¶](#ClickCallbacks)
```
type ClickCallbacks struct {
CB **[Callbacks](#Callbacks)
}
```
ClickCallbacks is a convenience struct for embedding in a widget, providing methods to add and remove callbacks that are executed when the widget is "clicked".
####
func (*ClickCallbacks) [OnClick](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1058) [¶](#ClickCallbacks.OnClick)
```
func (w *[ClickCallbacks](#ClickCallbacks)) OnClick(f [IWidgetChangedCallback](#IWidgetChangedCallback))
```
####
func (*ClickCallbacks) [RemoveOnClick](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1065) [¶](#ClickCallbacks.RemoveOnClick)
```
func (w *[ClickCallbacks](#ClickCallbacks)) RemoveOnClick(f [IIdentity](#IIdentity))
```
####
type [ClickTargets](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L152) [¶](#ClickTargets)
```
type ClickTargets struct {
// contains filtered or unexported fields
}
```
ClickTargets is used by the App to keep track of which widgets have been clicked. This allows the application to determine if a widget has been
"selected" which may be best determined across two calls to UserInput - click and release.
####
func [MakeClickTargets](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L156) [¶](#MakeClickTargets)
```
func MakeClickTargets() [ClickTargets](#ClickTargets)
```
####
func (ClickTargets) [ClickTarget](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L180) [¶](#ClickTargets.ClickTarget)
```
func (t [ClickTargets](#ClickTargets)) ClickTarget(f func([tcell](/github.com/gdamore/tcell/v2).[ButtonMask](/github.com/gdamore/tcell/v2#ButtonMask), [IIdentityWidget](#IIdentityWidget)))
```
####
func (ClickTargets) [DeleteClickTargets](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L188) [¶](#ClickTargets.DeleteClickTargets)
```
func (t [ClickTargets](#ClickTargets)) DeleteClickTargets(k [tcell](/github.com/gdamore/tcell/v2).[ButtonMask](/github.com/gdamore/tcell/v2#ButtonMask))
```
####
func (ClickTargets) [SetClickTarget](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L168) [¶](#ClickTargets.SetClickTarget)
```
func (t [ClickTargets](#ClickTargets)) SetClickTarget(k [tcell](/github.com/gdamore/tcell/v2).[ButtonMask](/github.com/gdamore/tcell/v2#ButtonMask), w [IIdentityWidget](#IIdentityWidget)) [bool](/builtin#bool)
```
SetClickTarget expects a Widget that provides an ID() function. Most widgets that can be clicked on can just use the default (&w). But if a widget might be recreated between the click down and release, and the widget under focus at the time of the release provides the same ID()
(even if not the same object), then it can be given the click.
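A sketch of the click/release bookkeeping, assuming `w` is an IIdentityWidget; tcell.Button1 is tcell's left mouse button:
```
ct := gowid.MakeClickTargets()
ct.SetClickTarget(tcell.Button1, w) // on mouse-down over w
// later, when the button is released:
ct.ClickTarget(func(b tcell.ButtonMask, clicked gowid.IIdentityWidget) {
	// activate clicked if it is still the widget under the cursor
})
ct.DeleteClickTargets(tcell.Button1)
```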
####
type [Color](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1024) [¶](#Color)
```
type Color struct {
[IColor](#IColor)
Id [string](/builtin#string)
}
```
Color satisfies IColor, embeds an IColor, and allows a gowid Color to be constructed from a string alone. Each of the more specific color types is tried in turn with the string until one succeeds.
####
func [MakeColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1061) [¶](#MakeColor)
```
func MakeColor(s [string](/builtin#string)) [Color](#Color)
```
####
func [MakeColorSafe](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1038) [¶](#MakeColorSafe)
```
func MakeColorSafe(s [string](/builtin#string)) ([Color](#Color), [error](/builtin#error))
```
MakeColorSafe returns a Color struct specified by the string argument, in a do-what-I-mean fashion - it tries the Color struct maker functions in a pre-determined order until one successfully initializes a Color, or until all fail, in which case an error is returned. The order tried is TCellColor, RGBColor, GrayColor, UrwidColor.
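A sketch; the gray-scale syntax shown is documented for GrayColor below, and the other color syntaxes are tried in the order listed above:
```
col, err := gowid.MakeColorSafe("g70")
if err != nil {
	// none of TCellColor, RGBColor, GrayColor or UrwidColor accepted the string
}
_ = col
```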
####
func (Color) [String](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1029) [¶](#Color.String)
```
func (c [Color](#Color)) String() [string](/builtin#string)
```
####
type [ColorByMode](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1071) [¶](#ColorByMode)
```
type ColorByMode struct {
Colors map[[ColorMode](#ColorMode)][IColor](#IColor) // Indexed by ColorMode
}
```
####
func [MakeColorByMode](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1077) [¶](#MakeColorByMode)
```
func MakeColorByMode(cols map[[ColorMode](#ColorMode)][IColor](#IColor)) [ColorByMode](#ColorByMode)
```
####
func [MakeColorByModeSafe](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1085) [¶](#MakeColorByModeSafe)
```
func MakeColorByModeSafe(cols map[[ColorMode](#ColorMode)][IColor](#IColor)) ([ColorByMode](#ColorByMode), [error](/builtin#error))
```
####
func (ColorByMode) [ToTCellColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1089) [¶](#ColorByMode.ToTCellColor)
```
func (c [ColorByMode](#ColorByMode)) ToTCellColor(mode [ColorMode](#ColorMode)) ([TCellColor](#TCellColor), [bool](/builtin#bool))
```
####
type [ColorInverter](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1511) [¶](#ColorInverter)
```
type ColorInverter struct {
[ICellStyler](#ICellStyler)
}
```
ColorInverter implements ICellStyler, and simply swaps foreground and background colors.
####
func (ColorInverter) [GetStyle](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1515) [¶](#ColorInverter.GetStyle)
```
func (c [ColorInverter](#ColorInverter)) GetStyle(prov [IRenderContext](#IRenderContext)) (x [IColor](#IColor), y [IColor](#IColor), z [StyleAttrs](#StyleAttrs))
```
####
type [ColorMode](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L101) [¶](#ColorMode)
```
type ColorMode [int](/builtin#int)
```
ColorMode represents the color capability of a terminal.
####
func (ColorMode) [String](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L123) [¶](#ColorMode.String)
```
func (c [ColorMode](#ColorMode)) String() [string](/builtin#string)
```
####
type [ColorModeMismatch](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L961) [¶](#ColorModeMismatch)
```
type ColorModeMismatch struct {
Color [IColor](#IColor)
Mode [ColorMode](#ColorMode)
}
```
####
func (ColorModeMismatch) [Error](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L968) [¶](#ColorModeMismatch.Error)
```
func (e [ColorModeMismatch](#ColorModeMismatch)) Error() [string](/builtin#string)
```
####
type [ContainerWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L316) [¶](#ContainerWidget)
```
type ContainerWidget struct {
[IWidget](#IWidget)
D [IWidgetDimension](#IWidgetDimension)
}
```
ContainerWidget is a simple implementation that conforms to IContainerWidget. It can be used to pass widgets to containers like pile.Widget and columns.Widget.
####
func (ContainerWidget) [Dimension](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L321) [¶](#ContainerWidget.Dimension)
```
func (ww [ContainerWidget](#ContainerWidget)) Dimension() [IWidgetDimension](#IWidgetDimension)
```
####
func (*ContainerWidget) [SetDimension](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L325) [¶](#ContainerWidget.SetDimension)
```
func (ww *[ContainerWidget](#ContainerWidget)) SetDimension(d [IWidgetDimension](#IWidgetDimension))
```
####
func (*ContainerWidget) [SetSubWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L333) [¶](#ContainerWidget.SetSubWidget)
```
func (w *[ContainerWidget](#ContainerWidget)) SetSubWidget(wi [IWidget](#IWidget), app [IApp](#IApp))
```
####
func (*ContainerWidget) [String](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L337) [¶](#ContainerWidget.String)
```
func (w *[ContainerWidget](#ContainerWidget)) String() [string](/builtin#string)
```
####
func (*ContainerWidget) [SubWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L329) [¶](#ContainerWidget.SubWidget)
```
func (w *[ContainerWidget](#ContainerWidget)) SubWidget() [IWidget](#IWidget)
```
####
type [CopyModeClipsEvent](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L441) [¶](#CopyModeClipsEvent)
```
type CopyModeClipsEvent struct {
Action [ICopyModeClips](#ICopyModeClips)
}
```
####
func (CopyModeClipsEvent) [When](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L445) [¶](#CopyModeClipsEvent.When)
```
func (c [CopyModeClipsEvent](#CopyModeClipsEvent)) When() [time](/time).[Time](/time#Time)
```
####
type [CopyModeClipsFn](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L435) [¶](#CopyModeClipsFn)
```
type CopyModeClipsFn func([][ICopyResult](#ICopyResult))
```
####
func (CopyModeClipsFn) [Collect](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L437) [¶](#CopyModeClipsFn.Collect)
```
func (f [CopyModeClipsFn](#CopyModeClipsFn)) Collect(clips [][ICopyResult](#ICopyResult))
```
####
type [CopyModeEvent](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L425) [¶](#CopyModeEvent)
```
type CopyModeEvent struct{}
```
####
func (CopyModeEvent) [When](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L427) [¶](#CopyModeEvent.When)
```
func (c [CopyModeEvent](#CopyModeEvent)) When() [time](/time).[Time](/time#Time)
```
####
type [CopyResult](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L644) [¶](#CopyResult)
```
type CopyResult struct {
Name [string](/builtin#string)
Val [string](/builtin#string)
}
```
####
func (CopyResult) [ClipName](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L651) [¶](#CopyResult.ClipName)
```
func (c [CopyResult](#CopyResult)) ClipName() [string](/builtin#string)
```
####
func (CopyResult) [ClipValue](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L654) [¶](#CopyResult.ClipValue)
```
func (c [CopyResult](#CopyResult)) ClipValue() [string](/builtin#string)
```
####
type [DefaultColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1497) [¶](#DefaultColor)
```
type DefaultColor struct{}
```
DefaultColor implements IColor and means use whatever the default terminal color is. This is different to NoColor, which expresses no preference.
####
func (DefaultColor) [String](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1504) [¶](#DefaultColor.String)
```
func (r [DefaultColor](#DefaultColor)) String() [string](/builtin#string)
```
####
func (DefaultColor) [ToTCellColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1500) [¶](#DefaultColor.ToTCellColor)
```
func (r [DefaultColor](#DefaultColor)) ToTCellColor(mode [ColorMode](#ColorMode)) ([TCellColor](#TCellColor), [bool](/builtin#bool))
```
ToTCellColor converts DefaultColor to TCellColor. This lets DefaultColor conform to the IColor interface.
####
type [DimensionError](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L265) [¶](#DimensionError)
```
type DimensionError struct {
Size [IRenderSize](#IRenderSize)
Dim [IWidgetDimension](#IWidgetDimension)
Row [int](/builtin#int)
}
```
####
func (DimensionError) [Error](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L273) [¶](#DimensionError.Error)
```
func (e [DimensionError](#DimensionError)) Error() [string](/builtin#string)
```
####
type [DimensionsCB](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L16) [¶](#DimensionsCB)
```
type DimensionsCB struct{}
```
####
type [Direction](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L15) [¶](#Direction)
```
type Direction [int](/builtin#int)
```
####
type [EmptyLineTooLong](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L149) [¶](#EmptyLineTooLong)
```
type EmptyLineTooLong struct {
Requested [int](/builtin#int)
}
```
####
func (EmptyLineTooLong) [Error](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L155) [¶](#EmptyLineTooLong.Error)
```
func (e [EmptyLineTooLong](#EmptyLineTooLong)) Error() [string](/builtin#string)
```
####
type [EmptyPalette](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1583) [¶](#EmptyPalette)
```
type EmptyPalette struct{}
```
EmptyPalette implements ICellStyler and returns no preference for any colors or styling.
####
func [MakeEmptyPalette](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1587) [¶](#MakeEmptyPalette)
```
func MakeEmptyPalette() [EmptyPalette](#EmptyPalette)
```
####
func (EmptyPalette) [GetStyle](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1592) [¶](#EmptyPalette.GetStyle)
```
func (a [EmptyPalette](#EmptyPalette)) GetStyle(prov [IRenderContext](#IRenderContext)) (x [IColor](#IColor), y [IColor](#IColor), z [StyleAttrs](#StyleAttrs))
```
GetStyle implements ICellStyler.
####
type [FocusCB](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L17) [¶](#FocusCB)
```
type FocusCB struct{}
```
####
type [FocusCallbacks](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1092) [¶](#FocusCallbacks)
```
type FocusCallbacks struct {
CB **[Callbacks](#Callbacks)
}
```
FocusCallbacks is a convenience struct for embedding in a widget, providing methods to add and remove callbacks that are executed when the widget's focus widget changes.
####
func (*FocusCallbacks) [OnFocusChanged](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1096) [¶](#FocusCallbacks.OnFocusChanged)
```
func (w *[FocusCallbacks](#FocusCallbacks)) OnFocusChanged(f [IWidgetChangedCallback](#IWidgetChangedCallback))
```
####
func (*FocusCallbacks) [RemoveOnFocusChanged](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1103) [¶](#FocusCallbacks.RemoveOnFocusChanged)
```
func (w *[FocusCallbacks](#FocusCallbacks)) RemoveOnFocusChanged(f [IIdentity](#IIdentity))
```
####
type [FocusPathResult](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1703) [¶](#FocusPathResult)
```
type FocusPathResult struct {
Succeeded [bool](/builtin#bool)
FailedLevel [int](/builtin#int)
}
```
####
func [SetFocusPath](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1717) [¶](#SetFocusPath)
```
func SetFocusPath(w [IWidget](#IWidget), path []interface{}, app [IApp](#IApp)) [FocusPathResult](#FocusPathResult)
```
SetFocusPath takes an array of focus positions, and applies them down the widget hierarchy starting at the supplied widget, w. If not all positions can be applied, the result's Succeeded field is set to false, and the FailedLevel field provides the index in the array of paths that could not be applied.
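A sketch, assuming `w` is the root IWidget and `app` is the running IApp:
```
res := gowid.SetFocusPath(w, []interface{}{1, 0}, app)
if !res.Succeeded {
	// res.FailedLevel is the index in the path that could not be applied
}
```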
####
func (FocusPathResult) [Error](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1708) [¶](#FocusPathResult.Error)
```
func (f [FocusPathResult](#FocusPathResult)) Error() [string](/builtin#string)
```
####
type [ForegroundColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1640) [¶](#ForegroundColor)
```
type ForegroundColor struct {
[IColor](#IColor)
}
```
ForegroundColor is an ICellStyler that expresses a specific foreground color and no preference for background color or style.
####
func [MakeForeground](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1646) [¶](#MakeForeground)
```
func MakeForeground(c [IColor](#IColor)) [ForegroundColor](#ForegroundColor)
```
####
func (ForegroundColor) [GetStyle](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1651) [¶](#ForegroundColor.GetStyle)
```
func (a [ForegroundColor](#ForegroundColor)) GetStyle(prov [IRenderContext](#IRenderContext)) (x [IColor](#IColor), y [IColor](#IColor), z [StyleAttrs](#StyleAttrs))
```
GetStyle implements ICellStyler.
####
type [GrayColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1328) [¶](#GrayColor)
```
type GrayColor struct {
Val [int](/builtin#int)
}
```
GrayColor is an IColor that represents a greyscale specified by the same syntax as urwid - <http://urwid.org/manual/displayattributes.html>
and search for "gray scale entries". Strings may be of the form "g3",
"g100" or "g#a1", "g#ff" if hexadecimal is preferred. These index the grayscale color cube.
####
func [MakeGrayColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1359) [¶](#MakeGrayColor)
```
func MakeGrayColor(val [string](/builtin#string)) [GrayColor](#GrayColor)
```
MakeGrayColor returns an initialized GrayColor provided with a string input like "g50" or "g#ab". If the input is invalid, the function panics.
####
func [MakeGrayColorSafe](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1338) [¶](#MakeGrayColorSafe)
```
func MakeGrayColorSafe(val [string](/builtin#string)) ([GrayColor](#GrayColor), [error](/builtin#error))
```
MakeGrayColorSafe returns an initialized GrayColor provided with a string input like "g50" or "g#ab". If the input is invalid, an error is returned.
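A sketch contrasting the two constructors:
```
g1 := gowid.MakeGrayColor("g50")           // panics if the string is invalid
g2, err := gowid.MakeGrayColorSafe("g#ab") // returns an error instead
_, _, _ = g1, g2, err
```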
####
func (GrayColor) [String](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1332) [¶](#GrayColor.String)
```
func (g [GrayColor](#GrayColor)) String() [string](/builtin#string)
```
####
func (GrayColor) [ToTCellColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1394) [¶](#GrayColor.ToTCellColor)
```
func (s [GrayColor](#GrayColor)) ToTCellColor(mode [ColorMode](#ColorMode)) ([TCellColor](#TCellColor), [bool](/builtin#bool))
```
ToTCellColor converts the receiver GrayColor to a TCellColor, ready for rendering to a tcell screen. This lets GrayColor conform to IColor.
####
type [HAlignCB](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L19) [¶](#HAlignCB)
```
type HAlignCB struct{}
```
####
type [HAlignLeft](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L743) [¶](#HAlignLeft)
```
type HAlignLeft struct {
Margin [int](/builtin#int)
MarginRight [int](/builtin#int)
}
```
####
func (HAlignLeft) [ImplementsHAlignment](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L750) [¶](#HAlignLeft.ImplementsHAlignment)
```
func (h [HAlignLeft](#HAlignLeft)) ImplementsHAlignment()
```
####
type [HAlignMiddle](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L742) [¶](#HAlignMiddle)
```
type HAlignMiddle struct{}
```
####
func (HAlignMiddle) [ImplementsHAlignment](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L749) [¶](#HAlignMiddle.ImplementsHAlignment)
```
func (h [HAlignMiddle](#HAlignMiddle)) ImplementsHAlignment()
```
####
type [HAlignRight](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L741) [¶](#HAlignRight)
```
type HAlignRight struct{}
```
####
func (HAlignRight) [ImplementsHAlignment](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L748) [¶](#HAlignRight.ImplementsHAlignment)
```
func (h [HAlignRight](#HAlignRight)) ImplementsHAlignment()
```
####
type [HeightCB](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L20) [¶](#HeightCB)
```
type HeightCB struct{}
```
####
type [IAfterRenderEvent](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L772) [¶](#IAfterRenderEvent)
```
type IAfterRenderEvent interface {
RunThenRenderEvent([IApp](#IApp))
}
```
IAfterRenderEvent is implemented by clients that wish to run a function on the gowid rendering goroutine, directly after the widget hierarchy is rendered. This allows the client to be sure that there is no race condition with the widget rendering code.
####
type [IApp](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L58) [¶](#IApp)
```
type IApp interface {
[IRenderContext](#IRenderContext)
[IGetScreen](#IGetScreen)
[ISettableComposite](#ISettableComposite)
Quit() // Terminate the running gowid app + main loop soon
Redraw() // Issue a redraw of the terminal soon
Sync() // From tcell's screen - refresh every screen cell e.g. if screen becomes corrupted
SetColorMode(mode [ColorMode](#ColorMode)) // Change the terminal's color mode - 256, 16, mono, etc
Run(f [IAfterRenderEvent](#IAfterRenderEvent)) [error](/builtin#error) // Send a function to run on the widget rendering goroutine
SetClickTarget(k [tcell](/github.com/gdamore/tcell/v2).[ButtonMask](/github.com/gdamore/tcell/v2#ButtonMask), w [IIdentityWidget](#IIdentityWidget)) [bool](/builtin#bool) // When a mouse is clicked on a widget, track that widget. So...
ClickTarget(func([tcell](/github.com/gdamore/tcell/v2).[ButtonMask](/github.com/gdamore/tcell/v2#ButtonMask), [IIdentityWidget](#IIdentityWidget))) // when the button is released, we can activate the widget if we are still "over" it
GetMouseState() [MouseState](#MouseState) // Which buttons are currently clicked
GetLastMouseState() [MouseState](#MouseState) // Which buttons were clicked before current event
RegisterMenu(menu [IMenuCompatible](#IMenuCompatible)) // Required for an app to display an overlaying menu
UnregisterMenu(menu [IMenuCompatible](#IMenuCompatible)) [bool](/builtin#bool) // Returns false if the menu is not found in the hierarchy
InCopyMode(...[bool](/builtin#bool)) [bool](/builtin#bool) // A getter/setter - to set the app into copy mode. Widgets might render differently as a result
CopyModeClaimedAt(...[int](/builtin#int)) [int](/builtin#int) // the level that claims copy, 0 means deepest should claim
CopyModeClaimedBy(...[IIdentity](#IIdentity)) [IIdentity](#IIdentity) // the level that claims copy, 0 means deepest should claim
RefreshCopyMode() // Give widgets another chance to display copy options (after the user perhaps adjusted the scope of a copy selection)
Clips() [][ICopyResult](#ICopyResult) // If in copy-mode, the app will descend the widget hierarchy with a special user input, gathering options for copying data
CopyLevel(...[int](/builtin#int)) [int](/builtin#int) // level we're at as we descend
}
```
IApp is the interface of the application passed to every widget during Render or UserInput.
It provides several features:
- a function to terminate the application
- access to the state of the mouse
- access to the underlying tcell screen
- access to an application-specific logger
- functions to get and set the root widget of the widget hierarchy
- a method to keep track of which widgets were last "clicked"
####
type [IAppendBlankLines](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L897) [¶](#IAppendBlankLines)
```
type IAppendBlankLines interface {
BoxColumns() [int](/builtin#int)
AppendBelow(c [IAppendCanvas](#IAppendCanvas), doCursor [bool](/builtin#bool), makeCopy [bool](/builtin#bool))
}
```
####
type [IAppendCanvas](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L41) [¶](#IAppendCanvas)
```
type IAppendCanvas interface {
[IRenderBox](#IRenderBox)
[ICanvasLineReader](#ICanvasLineReader)
[ICanvasMarkIterator](#ICanvasMarkIterator)
}
```
####
type [IBox](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L58) [¶](#IBox)
```
type IBox interface {
BoxColumns() [int](/builtin#int)
BoxRows() [int](/builtin#int)
}
```
####
type [ICallback](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L28) [¶](#ICallback)
```
type ICallback interface {
[IIdentity](#IIdentity)
Call(args ...interface{})
}
```
ICallback represents any object that can provide a way to be compared to others,
and that can be called with an arbitrary number of arguments returning no result.
The comparison is expected to be used by having the callback object provide a name to identify the callback operation e.g. "buttonclicked", so that it can later be removed.
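A minimal sketch of an ICallback whose ID names the operation so it can later be removed:
```
type clickedCB struct{}

func (clickedCB) ID() interface{}          { return "buttonclicked" }
func (clickedCB) Call(args ...interface{}) { /* react to the event */ }
```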
####
type [ICallbackRunner](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L911) [¶](#ICallbackRunner)
```
type ICallbackRunner interface {
RunWidgetCallbacks(name interface{}, app [IApp](#IApp), w [IWidget](#IWidget))
}
```
####
type [ICallbacks](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L62) [¶](#ICallbacks)
```
type ICallbacks interface {
RunCallbacks(name interface{}, args ...interface{})
AddCallback(name interface{}, cb [ICallback](#ICallback))
RemoveCallback(name interface{}, cb [IIdentity](#IIdentity)) [bool](/builtin#bool)
}
```
####
type [ICanvas](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L67) [¶](#ICanvas)
```
type ICanvas interface {
Duplicate() [ICanvas](#ICanvas)
MergeUnder(c [IMergeCanvas](#IMergeCanvas), leftOffset, topOffset [int](/builtin#int), bottomGetsCursor [bool](/builtin#bool))
AppendBelow(c [IAppendCanvas](#IAppendCanvas), doCursor [bool](/builtin#bool), makeCopy [bool](/builtin#bool))
AppendRight(c [IMergeCanvas](#IMergeCanvas), useCursor [bool](/builtin#bool))
SetCellAt(col, row [int](/builtin#int), c [Cell](#Cell))
SetLineAt(row [int](/builtin#int), line [][Cell](#Cell))
Truncate(above, below [int](/builtin#int))
ExtendRight(cells [][Cell](#Cell))
ExtendLeft(cells [][Cell](#Cell))
TrimRight(cols [int](/builtin#int))
TrimLeft(cols [int](/builtin#int))
SetCursorCoords(col, row [int](/builtin#int))
SetMark(name [string](/builtin#string), col, row [int](/builtin#int))
GetMark(name [string](/builtin#string)) ([CanvasPos](#CanvasPos), [bool](/builtin#bool))
RemoveMark(name [string](/builtin#string))
[ICanvasMarkIterator](#ICanvasMarkIterator)
[ICanvasCellReader](#ICanvasCellReader)
[IDrawCanvas](#IDrawCanvas)
[fmt](/fmt).[Stringer](/fmt#Stringer)
}
```
ICanvas is the interface of any object which can generate a 2-dimensional array of Cells that are intended to be rendered on a terminal. This interface is pretty awful - cluttered and inconsistent and subject to cleanup... Note though that this interface is not here as the minimum requirement for providing arguments to a function or module - instead it's supposed to be an API surface for widgets so includes features that I am trying to guess will be needed, or that widgets already need.
####
type [ICanvasCellReader](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L37) [¶](#ICanvasCellReader)
```
type ICanvasCellReader interface {
CellAt(col, row [int](/builtin#int)) [Cell](#Cell)
}
```
ICanvasCellReader can provide a Cell given a row and a column.
####
type [ICanvasLineReader](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L25) [¶](#ICanvasLineReader)
```
type ICanvasLineReader interface {
Line([int](/builtin#int), [LineCopy](#LineCopy)) [LineResult](#LineResult)
}
```
ICanvasLineReader can provide a particular line of Cells, at the specified y offset. The result may or may not be a copy of the actual Cells, and is determined by whether the user requested a copy and/or the capability of the ICanvasLineReader
(maybe it has to provide a copy).
####
type [ICanvasMarkIterator](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L32) [¶](#ICanvasMarkIterator)
```
type ICanvasMarkIterator interface {
RangeOverMarks(f func(key [string](/builtin#string), value [CanvasPos](#CanvasPos)) [bool](/builtin#bool))
}
```
ICanvasMarkIterator will call the supplied function argument with the name and position of every mark set on the canvas. If the function returns true, the loop is terminated early.
####
type [ICellProcessor](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1112) [¶](#ICellProcessor)
```
type ICellProcessor interface {
ProcessCell(cell [Cell](#Cell)) [Cell](#Cell)
}
```
ICellProcessor is a general interface used by several gowid types for processing a range of Cell types. For example, a canvas provides a function to range over its contents, each cell being handed to an ICellProcessor.
####
type [ICellStyler](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L989) [¶](#ICellStyler)
```
type ICellStyler interface {
GetStyle([IRenderContext](#IRenderContext)) ([IColor](#IColor), [IColor](#IColor), [StyleAttrs](#StyleAttrs))
}
```
ICellStyler is an analog to urwid's AttrSpec (<http://urwid.org/reference/attrspec.html>). When provided an IRenderContext (specifically the color mode in which to be rendered), the GetStyle() function will return foreground, background and style values with which a cell should be rendered. The IRenderContext argument provides access to the global palette, so an ICellStyler implementation can look up palette entries by name.
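A sketch of a fixed-color ICellStyler; NoColor (mentioned under DefaultColor above) is assumed to be an exported empty struct expressing no color preference:
```
type redText struct{}

func (redText) GetStyle(gowid.IRenderContext) (gowid.IColor, gowid.IColor, gowid.StyleAttrs) {
	return gowid.MakeColor("red"), gowid.NoColor{}, gowid.StyleAttrs{} // NoColor assumed
}
```
The ready-made ForegroundColor type below, built with MakeForeground, provides the same behaviour.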
####
type [IChangeFocus](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1623) [¶](#IChangeFocus)
```
type IChangeFocus interface {
ChangeFocus(dir [Direction](#Direction), wrap [bool](/builtin#bool), app [IApp](#IApp)) [bool](/builtin#bool)
}
```
####
type [IClickTracker](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L579) [¶](#IClickTracker)
```
type IClickTracker interface {
SetClickPending([bool](/builtin#bool))
}
```
IClickTracker is implemented by any type that can track the state of whether it was clicked. This is trivial, and may just be a boolean flag. It's intended for widgets that want to change their look when a mouse button is clicked when they are in focus, but before the button is released - to indicate that the widget is about to be activated. Of course if the user moves the cursor off the widget then releases the mouse button, the widget will not be activated.
####
type [IClickable](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L561) [¶](#IClickable)
```
type IClickable interface {
Click(app [IApp](#IApp))
}
```
IClickable is implemented by any type that implements a Click()
method, intended to be run in response to a user interaction with the type such as left mouse click or hitting enter.
####
type [IClickableWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L589) [¶](#IClickableWidget)
```
type IClickableWidget interface {
[IWidget](#IWidget)
[IClickable](#IClickable)
}
```
IClickableWidget is implemented by any widget that implements a Click()
method, intended to be run in response to a user interaction with the widget such as left mouse click or hitting enter. A widget implementing Click() and ID() may be able to run UserInputCheckedWidget() for its UserInput() implementation.
####
type [IClipboard](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L658) [¶](#IClipboard)
```
type IClipboard interface {
Clips(app [IApp](#IApp)) [][ICopyResult](#ICopyResult)
}
```
####
type [IClipboardSelected](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L666) [¶](#IClipboardSelected)
```
type IClipboardSelected interface {
AlterWidget(w [IWidget](#IWidget), app [IApp](#IApp)) [IWidget](#IWidget)
}
```
IClipboardSelected is implemented by widgets that support changing their look when they have been "selected" in some application-level copy-mode,
the idea being to provide the user with the information that this widget's contents will be copied.
####
type [IColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L997) [¶](#IColor)
```
type IColor interface {
ToTCellColor(mode [ColorMode](#ColorMode)) ([TCellColor](#TCellColor), [bool](/builtin#bool))
}
```
IColor is implemented by any object that can turn itself into a TCellColor, meaning a color with which a cell can be rendered. The display mode (e.g. 256 colors) is provided. If no TCellColor is available, the second argument should be set to false e.g. no color can be found given a particular string name.
####
type [IColorMode](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L30) [¶](#IColorMode)
```
type IColorMode interface {
GetColorMode() [ColorMode](#ColorMode)
}
```
IColorMode provides access to a ColorMode value which represents the current mode of the terminal e.g. 24-bit color, 256-color, monochrome.
####
type [IColumns](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L23) [¶](#IColumns)
```
type IColumns interface {
Columns() [int](/builtin#int)
}
```
####
type [IComposite](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L455) [¶](#IComposite)
```
type IComposite interface {
SubWidget() [IWidget](#IWidget)
}
```
IComposite is an interface for anything that has a concept of a single
"inner" widget. This applies to certain widgets themselves
(e.g. ButtonWidget) and also to the App object which holds the top-level view.
####
type [ICompositeMultiple](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L493) [¶](#ICompositeMultiple)
```
type ICompositeMultiple interface {
SubWidgets() [][IWidget](#IWidget)
}
```
ICompositeMultiple is an interface for widget containers that have multiple children and that support specifying how the children are laid out relative to each other.
####
type [ICompositeMultipleDimensions](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L506) [¶](#ICompositeMultipleDimensions)
```
type ICompositeMultipleDimensions interface {
[ICompositeMultiple](#ICompositeMultiple)
Dimensions() [][IWidgetDimension](#IWidgetDimension)
}
```
ICompositeMultipleDimensions is an interface for collections of widget dimensions,
used in laying out some container widgets.
####
type [ICompositeMultipleFocus](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1618) [¶](#ICompositeMultipleFocus)
```
type ICompositeMultipleFocus interface {
[IFocus](#IFocus)
[ICompositeMultiple](#ICompositeMultiple)
}
```
####
type [ICompositeMultipleWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L537) [¶](#ICompositeMultipleWidget)
```
type ICompositeMultipleWidget interface {
[IWidget](#IWidget)
[ICompositeMultipleDimensions](#ICompositeMultipleDimensions)
[IFocus](#IFocus)
// SubWidgetSize should return the IRenderSize value that will be used to render
// an inner widget given the size used to render the outer widget and an
// IWidgetDimension (such as units, weight, etc)
SubWidgetSize(size [IRenderSize](#IRenderSize), val [int](/builtin#int), sub [IWidget](#IWidget), dim [IWidgetDimension](#IWidgetDimension)) [IRenderSize](#IRenderSize)
// RenderSubWidgets should return an array of canvases representing each inner
// widget, rendered in the context of the containing widget with the supplied
// size argument.
RenderSubWidgets(size [IRenderSize](#IRenderSize), focus [Selector](#Selector), focusIdx [int](/builtin#int), app [IApp](#IApp)) [][ICanvas](#ICanvas)
// RenderedSubWidgetsSizes should return a bounding box for each inner widget
// when the containing widget is rendered with the provided size. Note that this
// is not the same as rendering each inner widget separately, because the
// container context might result in size adjustments e.g. adjusting the
// height of inner widgets to make sure they're aligned vertically.
RenderedSubWidgetsSizes(size [IRenderSize](#IRenderSize), focus [Selector](#Selector), focusIdx [int](/builtin#int), app [IApp](#IApp)) [][IRenderBox](#IRenderBox)
}
```
ICompositeMultipleWidget is a widget that implements ICompositeMultiple. The widget must support computing the render-time size of any of its children and setting focus.
####
type [ICompositeWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L483) [¶](#ICompositeWidget)
```
type ICompositeWidget interface {
[IWidget](#IWidget)
[IComposite](#IComposite)
[ISubWidgetSize](#ISubWidgetSize)
}
```
ICompositeWidget is an interface implemented by widgets that contain one subwidget. Further implemented methods could make it an IButtonWidget for example, which then means the RenderButton() function can be exploited to implement Render(). If you make a new Button by embedding ButtonWidget, you may be able to implement Render() by simply calling RenderButton().
####
type [IContainerWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L304) [¶](#IContainerWidget)
```
type IContainerWidget interface {
[IWidget](#IWidget)
[IComposite](#IComposite)
Dimension() [IWidgetDimension](#IWidgetDimension)
SetDimension([IWidgetDimension](#IWidgetDimension))
}
```
IContainerWidget is the type of an object that contains a widget and a render object that determines how it is rendered within a container of widgets. Note that it itself is an IWidget.
####
type [ICopyModeClips](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L431) [¶](#ICopyModeClips)
```
type ICopyModeClips interface {
Collect([][ICopyResult](#ICopyResult))
}
```
####
type [ICopyModeWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1741) [¶](#ICopyModeWidget)
```
type ICopyModeWidget interface {
[IComposite](#IComposite)
[IIdentity](#IIdentity)
[IClipboard](#IClipboard)
CopyModeLevels() [int](/builtin#int)
}
```
####
type [ICopyResult](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L639) [¶](#ICopyResult)
```
type ICopyResult interface {
ClipName() [string](/builtin#string)
ClipValue() [string](/builtin#string)
}
```
####
type [IDrawCanvas](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L53) [¶](#IDrawCanvas)
```
type IDrawCanvas interface {
[IRenderBox](#IRenderBox)
[ICanvasLineReader](#ICanvasLineReader)
CursorEnabled() [bool](/builtin#bool)
CursorCoords() [CanvasPos](#CanvasPos)
}
```
####
type [IFindNextSelectable](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L529) [¶](#IFindNextSelectable)
```
type IFindNextSelectable interface {
FindNextSelectable(dir [Direction](#Direction), wrap [bool](/builtin#bool)) ([int](/builtin#int), [bool](/builtin#bool))
}
```
IFindNextSelectable is for any object that can iterate to its next or previous object
####
type [IFocus](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L522) [¶](#IFocus)
```
type IFocus interface {
Focus() [int](/builtin#int)
SetFocus(app [IApp](#IApp), i [int](/builtin#int))
}
```
IFocus is a container widget concept that describes which widget will be the target of keyboard input.
####
type [IFocusSelectable](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1613) [¶](#IFocusSelectable)
```
type IFocusSelectable interface {
[IFocus](#IFocus)
[IFindNextSelectable](#IFindNextSelectable)
}
```
####
type [IGetFocus](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1657) [¶](#IGetFocus)
```
type IGetFocus interface {
Focus() [int](/builtin#int)
}
```
####
type [IGetScreen](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L24) [¶](#IGetScreen)
```
type IGetScreen interface {
GetScreen() [tcell](/github.com/gdamore/tcell/v2).[Screen](/github.com/gdamore/tcell/v2#Screen)
}
```
IGetScreen provides access to a tcell.Screen object e.g. for rendering a canvas to the terminal.
####
type [IHAlignment](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L737) [¶](#IHAlignment)
```
type IHAlignment interface {
ImplementsHAlignment()
}
```
####
type [IIdentity](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L446) [¶](#IIdentity)
```
type IIdentity interface {
ID() interface{}
}
```
IIdentity is used for widgets that support being a click target - so it is possible to link the widget that is the target of MouseReleased with the one that was the target of MouseLeft/Right/Middle when they might not necessarily be the same object (i.e. rebuilt widget hierarchy in between). Also used to name callbacks so they can be removed (since function objects can't be compared)
####
type [IIdentityWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L608) [¶](#IIdentityWidget)
```
type IIdentityWidget interface {
[IWidget](#IWidget)
[IIdentity](#IIdentity)
}
```
IIdentityWidget is implemented by any widget that provides an ID()
function that identifies itself and allows itself to be compared against other IIdentity implementers. This is intended to be used to check whether or not the widget that was in focus when a mouse click was issued is the same widget in focus when the mouse is released. If so then the widget was "clicked". This allows gowid to run the action on release rather than on click, which is more forgiving of mistaken clicks. The widget in focus on release may be logically the same widget as the one clicked, but possibly a different object, if the widget hierarchy was somehow rebuilt in response to the first click - so to receive the click event, make sure the newly built widget has the same ID() as the original (e.g. a serialized representation of a position in a ListWalker)
####
type [IKey](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1129) [¶](#IKey)
```
type IKey interface {
Rune() [rune](/builtin#rune)
Key() [tcell](/github.com/gdamore/tcell/v2).[Key](/github.com/gdamore/tcell/v2#Key)
Modifiers() [tcell](/github.com/gdamore/tcell/v2).[ModMask](/github.com/gdamore/tcell/v2#ModMask)
}
```
IKey represents a keypress. It's a subset of tcell.EventKey because it doesn't capture the time of the keypress. It can be used by widgets to customize what keypresses they respond to.
####
type [IKeyPress](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L569) [¶](#IKeyPress)
```
type IKeyPress interface {
KeyPress(key [IKey](#IKey), app [IApp](#IApp))
}
```
IKeyPress is implemented by any type that implements a KeyPress()
method, intended to be run in response to a user interaction with the type such as hitting the escape key.
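A sketch of a KeyPress implementation that reacts only to the Enter key; `myWidget` is a hypothetical widget type, and tcell.KeyEnter is a tcell constant:
```
func (w *myWidget) KeyPress(k gowid.IKey, app gowid.IApp) {
	if k.Key() == tcell.KeyEnter {
		// treat Enter like a click on this widget
	}
}
```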
####
type [IMenuCompatible](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L632) [¶](#IMenuCompatible)
```
type IMenuCompatible interface {
[IWidget](#IWidget)
[ISettableComposite](#ISettableComposite)
}
```
IMenuCompatible is implemented by any widget that can set a subwidget.
It's used by widgets like menus that need to inject themselves into the widget hierarchy close to the root (to be rendered over the main
"view") i.e. the current root is made a child of the new menu widget,
which becomes the new root.
####
type [IMergeCanvas](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L47) [¶](#IMergeCanvas)
```
type IMergeCanvas interface {
[IRenderBox](#IRenderBox)
[ICanvasCellReader](#ICanvasCellReader)
[ICanvasMarkIterator](#ICanvasMarkIterator)
}
```
####
type [IPalette](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L38) [¶](#IPalette)
```
type IPalette interface {
CellStyler(name [string](/builtin#string)) ([ICellStyler](#ICellStyler), [bool](/builtin#bool))
RangeOverPalette(f func(key [string](/builtin#string), value [ICellStyler](#ICellStyler)) [bool](/builtin#bool))
}
```
IPalette provides application "palette" information - it can look up a Cell styling interface by name (e.g. "main text" -> (black, white, underline))
and it can let clients apply a function to each member of the palette (e.g.
in order to construct a new modified palette).
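A sketch of building a palette. The Palette map type is assumed from the wider library, while MakeForeground and MakeColor are documented on this page:
```
palette := gowid.Palette{ // Palette type assumed to implement IPalette
	"banner": gowid.MakeForeground(gowid.MakeColor("white")),
	"body":   gowid.MakeForeground(gowid.MakeColor("yellow")),
}
styler, ok := palette.CellStyler("banner") // look a styler up by name
_, _ = styler, ok
```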
####
type [IPreferedPosition](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L622) [¶](#IPreferedPosition)
```
type IPreferedPosition interface {
GetPreferedPosition() [gwutil](/github.com/gcla/[email protected]/gwutil).[IntOption](/github.com/gcla/[email protected]/gwutil#IntOption)
SetPreferedPosition(col [int](/builtin#int), app [IApp](#IApp))
}
```
IPreferedPosition is implemented by any widget that supports a preferred column or row (position in a dimension), meaning it understands what subwidget is at the current dimensional coordinate, and can move its focus widget to a new position. This is modeled on Urwid's get_pref_col()
feature, which tries to provide a sensible switch of focus widget when moving the cursor vertically around the screen - instead of having it hop left and right depending on which widget happens to be in focus at the current y coordinate.
####
type [IRangeOverCanvas](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L333) [¶](#IRangeOverCanvas)
```
type IRangeOverCanvas interface {
[IRenderBox](#IRenderBox)
[ICanvasCellReader](#ICanvasCellReader)
SetCellAt(col, row [int](/builtin#int), c [Cell](#Cell))
}
```
####
type [IRenderBox](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L63) [¶](#IRenderBox)
```
type IRenderBox interface {
[IWidgetDimension](#IWidgetDimension)
[IBox](#IBox)
}
```
####
func [RenderSize](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L806) [¶](#RenderSize)
```
func RenderSize(w [IWidget](#IWidget), size [IRenderSize](#IRenderSize), focus [Selector](#Selector), app [IApp](#IApp)) [IRenderBox](#IRenderBox)
```
RenderSize currently passes control through to the widget's RenderSize method. Having this function allows for easier instrumentation of the RenderSize path. RenderSize is intended to compute the size of the canvas that will be generated when the widget is rendered. Some parent widgets need this value from their children, and it might be possible to compute it much more cheaply than rendering the widget in order to determine the canvas size only.
####
type [IRenderContext](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L44) [¶](#IRenderContext)
```
type IRenderContext interface {
[IPalette](#IPalette)
[IColorMode](#IColorMode)
}
```
IRenderContext provides palette and color mode information.
####
type [IRenderFixed](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L43) [¶](#IRenderFixed)
```
type IRenderFixed interface {
[IWidgetDimension](#IWidgetDimension)
Fixed() // dummy
}
```
####
type [IRenderFlow](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L53) [¶](#IRenderFlow)
```
type IRenderFlow interface {
[IWidgetDimension](#IWidgetDimension)
Flow() // dummy
}
```
####
type [IRenderFlowWith](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L48) [¶](#IRenderFlowWith)
```
type IRenderFlowWith interface {
[IWidgetDimension](#IWidgetDimension)
FlowColumns() [int](/builtin#int)
}
```
####
type [IRenderMax](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L86) [¶](#IRenderMax)
```
type IRenderMax interface {
MaxHeight() // dummy
}
```
Used in widgets laid out side-by-side - intended to have the effect that these widgets are rendered last and provided a height that corresponds to the max of the height of those widgets already rendered.
####
type [IRenderMaxUnits](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L94) [¶](#IRenderMaxUnits)
```
type IRenderMaxUnits interface {
MaxUnits() [int](/builtin#int)
}
```
Used in widgets laid out side-by-side - intended to limit the width of a widget which may otherwise be specified to be dimensioned in relation to the width available.
This can let the layout algorithm give more space (e.g. maximized terminal) to widgets that can use it by constraining those that don't need it.
####
type [IRenderRelative](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L73) [¶](#IRenderRelative)
```
type IRenderRelative interface {
[IWidgetDimension](#IWidgetDimension)
Relative() [float64](/builtin#float64)
}
```
####
type [IRenderSize](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L31) [¶](#IRenderSize)
```
type IRenderSize interface{}
```
IRenderSize is the type of objects that can specify how a widget is to be rendered.
This is the empty interface, and only serves as a placeholder at the moment. In practice, actual rendering sizes will be determined by an IFlowDimension, an IBoxDimension or an IFixedDimension.
####
func [ComputeHorizontalSubSize](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1329) [¶](#ComputeHorizontalSubSize)
```
func ComputeHorizontalSubSize(size [IRenderSize](#IRenderSize), d [IWidgetDimension](#IWidgetDimension)) ([IRenderSize](#IRenderSize), [error](/builtin#error))
```
ComputeHorizontalSubSize is used to determine the size with which a child widget should be rendered given the parent's render size, and an IWidgetDimension. The function will make adjustments to the size's number of columns i.e. in the horizontal dimension, and as such is used by hpadding and columns. For example the function can transform a RenderBox to a narrower RenderBox if the IWidgetDimension specifies a RenderWithUnits{} - so it allows widgets like columns and hpadding to force widgets to be of a certain width, or to have their width be in a certain ratio to other widgets.
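A sketch. RenderBox with C and R fields is used in the documentation above; the U field name on RenderWithUnits is an assumption:
```
sub, err := gowid.ComputeHorizontalSubSize(
	gowid.RenderBox{C: 20, R: 5},
	gowid.RenderWithUnits{U: 10}, // field name U assumed
)
// on success, sub describes a box 10 columns wide and 5 rows high
_, _ = sub, err
```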
####
func [ComputeHorizontalSubSizeUnsafe](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1312) [¶](#ComputeHorizontalSubSizeUnsafe)
```
func ComputeHorizontalSubSizeUnsafe(size [IRenderSize](#IRenderSize), d [IWidgetDimension](#IWidgetDimension)) [IRenderSize](#IRenderSize)
```
ComputeHorizontalSubSizeUnsafe calls ComputeHorizontalSubSize but returns only a single value - the IRenderSize. If there is an error the function will panic.
####
func [ComputeSubSize](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1405) [¶](#ComputeSubSize)
```
func ComputeSubSize(size [IRenderSize](#IRenderSize), w [IWidgetDimension](#IWidgetDimension), h [IWidgetDimension](#IWidgetDimension)) ([IRenderSize](#IRenderSize), [error](/builtin#error))
```
TODO - doc
####
func [ComputeSubSizeUnsafe](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1396) [¶](#ComputeSubSizeUnsafe)
```
func ComputeSubSizeUnsafe(size [IRenderSize](#IRenderSize), w [IWidgetDimension](#IWidgetDimension), h [IWidgetDimension](#IWidgetDimension)) [IRenderSize](#IRenderSize)
```
####
func [ComputeVerticalSubSize](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1244) [¶](#ComputeVerticalSubSize)
```
func ComputeVerticalSubSize(size [IRenderSize](#IRenderSize), d [IWidgetDimension](#IWidgetDimension), maxCol [int](/builtin#int), advRow [int](/builtin#int)) ([IRenderSize](#IRenderSize), [error](/builtin#error))
```
ComputeVerticalSubSize is used to determine the size with which a child widget should be rendered given the parent's render size, and an IWidgetDimension. The function will make adjustments to the size's number of rows i.e. in the vertical dimension, and as such is used by vpadding and pile. For example, if the parent render size is RenderBox{C: 20, R: 5} and the IWidgetDimension argument is RenderFlow{}, the function will return RenderFlowWith{C: 20}, i.e. it will transform a RenderBox to a RenderFlow of the same width. Another example is to transform a RenderBox to a shorter RenderBox if the IWidgetDimension specifies a RenderWithUnits{} - so it allows widgets like pile and vpadding to force widgets to be of a certain height, or to have their height be in a certain ratio to other widgets.
####
func [ComputeVerticalSubSizeUnsafe](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1224) [¶](#ComputeVerticalSubSizeUnsafe)
```
func ComputeVerticalSubSizeUnsafe(size [IRenderSize](#IRenderSize), d [IWidgetDimension](#IWidgetDimension), maxCol [int](/builtin#int), advRow [int](/builtin#int)) [IRenderSize](#IRenderSize)
```
ComputeVerticalSubSizeUnsafe calls ComputeVerticalSubSize but returns only a single value - the IRenderSize. If there is an error the function will panic.
####
func [SubWidgetSize](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L816) [¶](#SubWidgetSize)
```
func SubWidgetSize(w [ICompositeWidget](#ICompositeWidget), size [IRenderSize](#IRenderSize), focus [Selector](#Selector), app [IApp](#IApp)) [IRenderSize](#IRenderSize)
```
SubWidgetSize currently passes control through to the widget's SubWidgetSize method. Having this function allows for easier instrumentation of the SubWidgetSize path. The function should compute the size that it will itself use to render its child widget; for example, a framing widget rendered with IRenderBox might return a RenderBox value that is 2 units smaller in both height and width.
####
type [IRenderWithUnits](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L78) [¶](#IRenderWithUnits)
```
type IRenderWithUnits interface {
[IWidgetDimension](#IWidgetDimension)
Units() [int](/builtin#int)
}
```
####
type [IRenderWithWeight](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L68) [¶](#IRenderWithWeight)
```
type IRenderWithWeight interface {
[IWidgetDimension](#IWidgetDimension)
Weight() [int](/builtin#int)
}
```
####
type [IRightSizeCanvas](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L213) [¶](#IRightSizeCanvas)
```
type IRightSizeCanvas interface {
[IRenderBox](#IRenderBox)
ExtendRight(cells [][Cell](#Cell))
TrimRight(cols [int](/builtin#int))
Truncate(above, below [int](/builtin#int))
AppendBelow(c [IAppendCanvas](#IAppendCanvas), doCursor [bool](/builtin#bool), makeCopy [bool](/builtin#bool))
}
```
####
type [IRows](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L19) [¶](#IRows)
```
type IRows interface {
Rows() [int](/builtin#int)
}
```
####
type [ISelectChild](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L402) [¶](#ISelectChild)
```
type ISelectChild interface {
SelectChild([Selector](#Selector)) [bool](/builtin#bool) // Whether or not this widget will set focus.Selected for its selected child
}
```
ISelectChild is implemented by any type that controls whether or not it will set focus.Selected on its currently "selected" child. For example, a columns widget will have a notion of a child widget that will take focus.
The user may want to render that widget in a way that highlights the selected child, even when the columns widget itself does not have focus. The columns widget will set focus.Selected on Render() and UserInput() calls depending on the result of SelectChild() - if focus.Selected is set, then a styling widget can change the look of the widget appropriately.
####
type [ISettableComposite](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L464) [¶](#ISettableComposite)
```
type ISettableComposite interface {
[IComposite](#IComposite)
SetSubWidget([IWidget](#IWidget), [IApp](#IApp))
}
```
ISettableComposite is an interface for anything that has a concept of a single settable "inner" widget. This applies to certain widgets themselves (e.g. ButtonWidget) and also to the App object, which holds the top-level view.
####
type [ISettableDimensions](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L515) [¶](#ISettableDimensions)
```
type ISettableDimensions interface {
SetDimensions([][IWidgetDimension](#IWidgetDimension), [IApp](#IApp))
}
```
ISettableDimensions is implemented by types that maintain a collection of dimensions - to be used by containers that use these dimensions to layout their children widgets.
####
type [ISettableSubWidgets](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L499) [¶](#ISettableSubWidgets)
```
type ISettableSubWidgets interface {
SetSubWidgets([][IWidget](#IWidget), [IApp](#IApp))
}
```
ISettableSubWidgets is implemented by a type that maintains a collection of child widgets (like pile or columns) and that allows them to be changed.
####
type [ISubWidgetSize](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L472) [¶](#ISubWidgetSize)
```
type ISubWidgetSize interface {
SubWidgetSize(size [IRenderSize](#IRenderSize), focus [Selector](#Selector), app [IApp](#IApp)) [IRenderSize](#IRenderSize)
}
```
ISubWidgetSize returns the size argument that should be provided to render the inner widget based on the size argument provided to the containing widget.
####
type [IUnhandledInput](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L128) [¶](#IUnhandledInput)
```
type IUnhandledInput interface {
UnhandledInput(app [IApp](#IApp), ev interface{}) [bool](/builtin#bool)
}
```
IUnhandledInput is used as a handler for application user input that is not handled by any widget in the widget hierarchy.
####
type [IVAlignment](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L752) [¶](#IVAlignment)
```
type IVAlignment interface {
ImplementsVAlignment()
}
```
####
type [IWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L432) [¶](#IWidget)
```
type IWidget interface {
Render(size [IRenderSize](#IRenderSize), focus [Selector](#Selector), app [IApp](#IApp)) [ICanvas](#ICanvas)
RenderSize(size [IRenderSize](#IRenderSize), focus [Selector](#Selector), app [IApp](#IApp)) [IRenderBox](#IRenderBox)
UserInput(ev interface{}, size [IRenderSize](#IRenderSize), focus [Selector](#Selector), app [IApp](#IApp)) [bool](/builtin#bool)
Selectable() [bool](/builtin#bool)
}
```
IWidget is the interface of any object acting as a gowid widget.
Render() is provided a size (cols, maybe rows), whether or not the widget is in focus, and a context (palette, etc). It must return an object conforming to gowid's ICanvas, which is a representation of what can be displayed in the terminal.
RenderSize() is used by clients that need to know only how big the widget will be when rendered. It is expected to be cheaper to compute in some cases than Render(), but a fallback is to run Render() then return the size of the canvas.
Selectable() should return true if this widget is designed for interaction e.g. a Button would return true, but a Text widget would return false. Note that, like urwid, returning false does not guarantee the widget will never have focus - it might be given focus if there is no other option (no other selectable widgets in the container, for example).
UserInput() is provided the TCell event (mouse or keyboard action), the size spec that would be given to Render(), whether or not the widget has focus, and access to the application, useful for effecting changes like changing colors, running a function, or quitting. The render size is needed because the widget might have to pass the event down to children widgets, and the correct one may depend on the coordinates of a mouse click relative to the dimensions of the widget itself.
####
func [CopyWidgets](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1783) [¶](#CopyWidgets)
```
func CopyWidgets(w [][IWidget](#IWidget)) [][IWidget](#IWidget)
```
CopyWidgets is a trivial utility to return a copy of the array of widgets supplied.
Note that this is not a deep copy! The array is different, but the IWidgets are not.
####
func [FindInHierarchy](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1590) [¶](#FindInHierarchy)
```
func FindInHierarchy(w [IWidget](#IWidget), includeMe [bool](/builtin#bool), pred [WidgetPredicate](#WidgetPredicate)) [IWidget](#IWidget)
```
FindInHierarchy starts at w, and applies the supplied predicate function; if it returns true, w is returned. If not, then the hierarchy is descended. If w has a child widget, then the predicate is applied to that child. If w has a set of children with a concept of one with focus, the predicate is applied to the child in focus. This repeats until a suitable widget is found, or the hierarchy terminates.
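For example, a predicate that finds the first selectable widget along the focus path might look like the following sketch, assuming the gowid import:
```
// firstSelectable walks the focus path from root and returns the first widget
// that reports itself as selectable, or nil if none is found.
func firstSelectable(root gowid.IWidget) gowid.IWidget {
	return gowid.FindInHierarchy(root, true, func(w gowid.IWidget) bool {
		return w.Selectable()
	})
}
```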
####
type [IWidgetChangedCallback](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L921) [¶](#IWidgetChangedCallback)
```
type IWidgetChangedCallback interface {
[IIdentity](#IIdentity)
Changed(app [IApp](#IApp), widget [IWidget](#IWidget), data ...interface{})
}
```
IWidgetChangedCallback defines the types that can be used as callbacks that are issued when widget properties change. It expects a function Changed() that is called with the current app and the widget that is issuing the callback. It also expects to conform to IIdentity, so that one callback instance can be compared to another - this is to allow callbacks to be removed correctly, if that is required.
####
type [IWidgetDimension](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L39) [¶](#IWidgetDimension)
```
type IWidgetDimension interface {
ImplementsWidgetDimension() // This exists as a marker so that IWidgetDimension is not empty, meaning satisfied by any struct.
}
```
Widgets that are used in containers such as Pile or Columns must implement this interface. It specifies how each subwidget of the container should be rendered.
####
type [InvalidColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L972) [¶](#InvalidColor)
```
type InvalidColor struct {
Color interface{}
}
```
####
func (InvalidColor) [Error](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L978) [¶](#InvalidColor.Error)
```
func (e [InvalidColor](#InvalidColor)) Error() [string](/builtin#string)
```
####
type [InvalidTypeToCompare](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L29) [¶](#InvalidTypeToCompare)
```
type InvalidTypeToCompare struct {
LHS interface{}
RHS interface{}
}
```
####
func (InvalidTypeToCompare) [Error](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L36) [¶](#InvalidTypeToCompare.Error)
```
func (e [InvalidTypeToCompare](#InvalidTypeToCompare)) Error() [string](/builtin#string)
```
####
type [IsSelectable](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L716) [¶](#IsSelectable)
```
type IsSelectable struct{}
```
IsSelectable is a convenience struct that can be embedded in widgets. It provides a function that simply returns true in response to the call to Selectable().
####
func (*IsSelectable) [Selectable](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L718) [¶](#IsSelectable.Selectable)
```
func (r *[IsSelectable](#IsSelectable)) Selectable() [bool](/builtin#bool)
```
####
type [Key](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1148) [¶](#Key)
```
type Key struct {
// contains filtered or unexported fields
}
```
Key is a trivial representation of a keypress, a subset of tcell.Key. Key implements IKey. This exists as a convenience to widgets looking to customize keypress responses.
####
func [MakeKey](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1154) [¶](#MakeKey)
```
func MakeKey(ch [rune](/builtin#rune)) [Key](#Key)
```
####
func [MakeKeyExt](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1158) [¶](#MakeKeyExt)
```
func MakeKeyExt(key [tcell](/github.com/gdamore/tcell/v2).[Key](/github.com/gdamore/tcell/v2#Key)) [Key](#Key)
```
####
func [MakeKeyExt2](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1162) [¶](#MakeKeyExt2)
```
func MakeKeyExt2(mod [tcell](/github.com/gdamore/tcell/v2).[ModMask](/github.com/gdamore/tcell/v2#ModMask), key [tcell](/github.com/gdamore/tcell/v2).[Key](/github.com/gdamore/tcell/v2#Key), ch [rune](/builtin#rune)) [Key](#Key)
```
####
func (Key) [Key](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1174) [¶](#Key.Key)
```
func (k [Key](#Key)) Key() [tcell](/github.com/gdamore/tcell/v2).[Key](/github.com/gdamore/tcell/v2#Key)
```
####
func (Key) [Modifiers](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1178) [¶](#Key.Modifiers)
```
func (k [Key](#Key)) Modifiers() [tcell](/github.com/gdamore/tcell/v2).[ModMask](/github.com/gdamore/tcell/v2#ModMask)
```
####
func (Key) [Rune](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1170) [¶](#Key.Rune)
```
func (k [Key](#Key)) Rune() [rune](/builtin#rune)
```
####
func (Key) [String](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1183) [¶](#Key.String)
```
func (k [Key](#Key)) String() [string](/builtin#string)
```
Stolen from tcell, but omit the Rune[...]
####
type [KeyPressCB](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L13) [¶](#KeyPressCB)
```
type KeyPressCB struct{}
```
####
type [KeyPressCallbacks](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1073) [¶](#KeyPressCallbacks)
```
type KeyPressCallbacks struct {
CB **[Callbacks](#Callbacks)
}
```
KeyPressCallbacks is a convenience struct for embedding in a widget, providing methods to add and remove callbacks that are executed when the widget receives a keypress.
####
func (*KeyPressCallbacks) [OnKeyPress](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1077) [¶](#KeyPressCallbacks.OnKeyPress)
```
func (w *[KeyPressCallbacks](#KeyPressCallbacks)) OnKeyPress(f [IWidgetChangedCallback](#IWidgetChangedCallback))
```
####
func (*KeyPressCallbacks) [RemoveOnKeyPress](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1084) [¶](#KeyPressCallbacks.RemoveOnKeyPress)
```
func (w *[KeyPressCallbacks](#KeyPressCallbacks)) RemoveOnKeyPress(f [IIdentity](#IIdentity))
```
####
type [KeyValueError](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L42) [¶](#KeyValueError)
```
type KeyValueError struct {
Base [error](/builtin#error)
KeyVals map[[string](/builtin#string)]interface{}
}
```
####
func [WithKVs](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L66) [¶](#WithKVs)
```
func WithKVs(err [error](/builtin#error), kvs map[[string](/builtin#string)]interface{}) [KeyValueError](#KeyValueError)
```
####
func (KeyValueError) [Cause](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L58) [¶](#KeyValueError.Cause)
```
func (e [KeyValueError](#KeyValueError)) Cause() [error](/builtin#error)
```
####
func (KeyValueError) [Error](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L50) [¶](#KeyValueError.Error)
```
func (e [KeyValueError](#KeyValueError)) Error() [string](/builtin#string)
```
####
func (KeyValueError) [Unwrap](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L62) [¶](#KeyValueError.Unwrap)
added in v1.2.0
```
func (e [KeyValueError](#KeyValueError)) Unwrap() [error](/builtin#error)
```
####
type [LineCanvas](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L115) [¶](#LineCanvas)
```
type LineCanvas [][Cell](#Cell)
```
LineCanvas exists to make an array of Cells conform to some interfaces, specifically IRenderBox (it has a width of len(.) and a height of 1), IAppendCanvas, to allow an array of Cells to be passed to the canvas function AppendLine(), and ICanvasLineReader so that an array of Cells can act as a line returned from a canvas.
####
func (LineCanvas) [BoxColumns](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L118) [¶](#LineCanvas.BoxColumns)
```
func (c [LineCanvas](#LineCanvas)) BoxColumns() [int](/builtin#int)
```
BoxColumns lets LineCanvas conform to IRenderBox
####
func (LineCanvas) [BoxRows](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L123) [¶](#LineCanvas.BoxRows)
```
func (c [LineCanvas](#LineCanvas)) BoxRows() [int](/builtin#int)
```
BoxRows lets LineCanvas conform to IRenderBox
####
func (LineCanvas) [ImplementsWidgetDimension](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L128) [¶](#LineCanvas.ImplementsWidgetDimension)
```
func (c [LineCanvas](#LineCanvas)) ImplementsWidgetDimension()
```
ImplementsWidgetDimension lets LineCanvas conform to IWidgetDimension
####
func (LineCanvas) [Line](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L131) [¶](#LineCanvas.Line)
```
func (c [LineCanvas](#LineCanvas)) Line(y [int](/builtin#int), cp [LineCopy](#LineCopy)) [LineResult](#LineResult)
```
Line lets LineCanvas conform to ICanvasLineReader
####
func (LineCanvas) [RangeOverMarks](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L139) [¶](#LineCanvas.RangeOverMarks)
```
func (c [LineCanvas](#LineCanvas)) RangeOverMarks(f func(key [string](/builtin#string), value [CanvasPos](#CanvasPos)) [bool](/builtin#bool))
```
RangeOverMarks lets LineCanvas conform to ICanvasMarkIterator
####
type [LineCopy](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L104) [¶](#LineCopy)
```
type LineCopy struct {
Len [int](/builtin#int)
Cap [int](/builtin#int)
}
```
LineCopy is an argument provided to some Canvas APIs, like Line(). It tells the function how to allocate the backing array for a line if the line it returns must be a copy. Typically the API will return a type that indicates whether the result is a copy or not. Since the caller may receive a copy, it can help to indicate the allocation details like length and capacity in case the caller intends to extend the line returned for some other use.
####
type [LineResult](https://github.com/gcla/gowid/blob/v1.4.0/canvas.go#L93) [¶](#LineResult)
```
type LineResult struct {
Line [][Cell](#Cell)
Copied [bool](/builtin#bool)
}
```
LineResult is returned by some Canvas Line-accessing APIs. If the Canvas can return a line without copying it, the Copied field will be false, and the caller is expected to make a copy if necessary (or risk modifying the original).
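A sketch of how a caller might honor the Copied flag, assuming the gowid import and that ICanvasLineReader exposes the Line method shown for LineCanvas below:
```
// ownedLine returns a line the caller may safely modify: it reuses the result
// if it was already a copy, and takes a copy otherwise.
func ownedLine(c gowid.ICanvasLineReader, y int) []gowid.Cell {
	res := c.Line(y, gowid.LineCopy{})
	if res.Copied {
		return res.Line
	}
	return append([]gowid.Cell(nil), res.Line...)
}
```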
####
type [LogField](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L420) [¶](#LogField)
```
type LogField struct {
Name [string](/builtin#string)
Val interface{}
}
```
####
type [MouseState](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L201) [¶](#MouseState)
```
type MouseState struct {
MouseLeftClicked [bool](/builtin#bool)
MouseMiddleClicked [bool](/builtin#bool)
MouseRightClicked [bool](/builtin#bool)
}
```
####
func (MouseState) [LeftIsClicked](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L219) [¶](#MouseState.LeftIsClicked)
```
func (m [MouseState](#MouseState)) LeftIsClicked() [bool](/builtin#bool)
```
####
func (MouseState) [MiddleIsClicked](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L223) [¶](#MouseState.MiddleIsClicked)
```
func (m [MouseState](#MouseState)) MiddleIsClicked() [bool](/builtin#bool)
```
####
func (MouseState) [NoButtonClicked](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L215) [¶](#MouseState.NoButtonClicked)
```
func (m [MouseState](#MouseState)) NoButtonClicked() [bool](/builtin#bool)
```
####
func (MouseState) [RightIsClicked](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L227) [¶](#MouseState.RightIsClicked)
```
func (m [MouseState](#MouseState)) RightIsClicked() [bool](/builtin#bool)
```
####
func (MouseState) [String](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L207) [¶](#MouseState.String)
```
func (m [MouseState](#MouseState)) String() [string](/builtin#string)
```
####
type [NoColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1482) [¶](#NoColor)
```
type NoColor struct{}
```
NoColor implements IColor, and represents "no color preference", distinct from the default terminal color, white, black, etc. This means that if a NoColor is rendered over another color, the color underneath will be displayed.
####
func (NoColor) [String](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1489) [¶](#NoColor.String)
```
func (r [NoColor](#NoColor)) String() [string](/builtin#string)
```
####
func (NoColor) [ToTCellColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1485) [¶](#NoColor.ToTCellColor)
```
func (r [NoColor](#NoColor)) ToTCellColor(mode [ColorMode](#ColorMode)) ([TCellColor](#TCellColor), [bool](/builtin#bool))
```
ToTCellColor converts NoColor to TCellColor. This lets NoColor conform to the IColor interface.
####
type [NotSelectable](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L705) [¶](#NotSelectable)
```
type NotSelectable struct{}
```
NotSelectable is a convenience struct that can be embedded in widgets. It provides a function that simply returns false in response to the call to Selectable().
####
func (*NotSelectable) [Selectable](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L707) [¶](#NotSelectable.Selectable)
```
func (r *[NotSelectable](#NotSelectable)) Selectable() [bool](/builtin#bool)
```
####
type [Palette](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1706) [¶](#Palette)
```
type Palette map[[string](/builtin#string)][ICellStyler](#ICellStyler)
```
Palette implements IPalette and is a trivial implementation of a type that can store cell stylers and provide access to them via iteration.
####
func (Palette) [CellStyler](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1711) [¶](#Palette.CellStyler)
```
func (m [Palette](#Palette)) CellStyler(name [string](/builtin#string)) ([ICellStyler](#ICellStyler), [bool](/builtin#bool))
```
CellStyler will return an ICellStyler by name, if it exists.
####
func (Palette) [RangeOverPalette](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1718) [¶](#Palette.RangeOverPalette)
```
func (m [Palette](#Palette)) RangeOverPalette(f func(k [string](/builtin#string), v [ICellStyler](#ICellStyler)) [bool](/builtin#bool))
```
RangeOverPalette applies the supplied function to each member of the palette. If the function returns false, the loop terminates early.
####
type [PaletteEntry](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1526) [¶](#PaletteEntry)
```
type PaletteEntry struct {
FG [IColor](#IColor)
BG [IColor](#IColor)
Style [StyleAttrs](#StyleAttrs)
}
```
PaletteEntry is typically used by a gowid application to represent a set of color and style preferences for use by different application widgets e.g. black text on a white background with text underlined. PaletteEntry implements the ICellStyler interface meaning it can provide a triple of foreground and background IColor, and a StyleAttrs struct.
####
func [MakePaletteEntry](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1541) [¶](#MakePaletteEntry)
```
func MakePaletteEntry(fg, bg [IColor](#IColor)) [PaletteEntry](#PaletteEntry)
```
MakePaletteEntry stores the two IColor parameters provided, and has no style preference.
####
func [MakeStyledPaletteEntry](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1536) [¶](#MakeStyledPaletteEntry)
```
func MakeStyledPaletteEntry(fg, bg [IColor](#IColor), style [StyleAttrs](#StyleAttrs)) [PaletteEntry](#PaletteEntry)
```
MakeStyledPaletteEntry simply stores the three parameters provided - a foreground and background IColor, and a StyleAttrs struct.
####
func (PaletteEntry) [GetStyle](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1546) [¶](#PaletteEntry.GetStyle)
```
func (a [PaletteEntry](#PaletteEntry)) GetStyle(prov [IRenderContext](#IRenderContext)) (x [IColor](#IColor), y [IColor](#IColor), z [StyleAttrs](#StyleAttrs))
```
GetStyle returns the individual colors and style attributes.
####
type [PaletteRef](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1557) [¶](#PaletteRef)
```
type PaletteRef struct {
Name [string](/builtin#string)
}
```
PaletteRef is intended to represent a PaletteEntry, looked up by name. The ICellStyler API GetStyle() provides an IRenderContext and should return two colors and style attributes. PaletteRef provides these by looking up the IRenderContext with the name (string) provided to it at initialization.
####
func [MakePaletteRef](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1565) [¶](#MakePaletteRef)
```
func MakePaletteRef(name [string](/builtin#string)) [PaletteRef](#PaletteRef)
```
MakePaletteRef returns a PaletteRef struct storing the (string) name of the PaletteEntry which will be looked up in the IRenderContext.
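Putting the pieces together, a minimal sketch (the entry name and colors are chosen for illustration; NewUrwidColor is documented further below): register an entry in a Palette, then refer to it by name via a PaletteRef.
```
// A palette with one illustrative entry, and a reference to it by name.
var examplePalette = gowid.Palette{
	"banner": gowid.MakePaletteEntry(gowid.MakeRGBColor("#fff"), gowid.NewUrwidColor("dark blue")),
}

var bannerStyle = gowid.MakePaletteRef("banner")
```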
####
func (PaletteRef) [GetStyle](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1570) [¶](#PaletteRef.GetStyle)
```
func (a [PaletteRef](#PaletteRef)) GetStyle(prov [IRenderContext](#IRenderContext)) (x [IColor](#IColor), y [IColor](#IColor), z [StyleAttrs](#StyleAttrs))
```
GetStyle returns the two colors and a style, looked up in the IRenderContext by name.
####
type [PrettyModMask](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L101) [¶](#PrettyModMask)
added in v1.2.0
```
type PrettyModMask [tcell](/github.com/gdamore/tcell/v2).[ModMask](/github.com/gdamore/tcell/v2#ModMask)
```
####
func (PrettyModMask) [String](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L103) [¶](#PrettyModMask.String)
added in v1.2.0
```
func (p [PrettyModMask](#PrettyModMask)) String() [string](/builtin#string)
```
####
type [PrettyTcellKey](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L125) [¶](#PrettyTcellKey)
added in v1.2.0
```
type PrettyTcellKey [tcell](/github.com/gdamore/tcell/v2).[EventKey](/github.com/gdamore/tcell/v2#EventKey)
```
####
func (*PrettyTcellKey) [String](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L127) [¶](#PrettyTcellKey.String)
added in v1.2.0
```
func (p *[PrettyTcellKey](#PrettyTcellKey)) String() [string](/builtin#string)
```
####
type [RGBColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1105) [¶](#RGBColor)
```
type RGBColor struct {
Red, Green, Blue [int](/builtin#int)
}
```
RGBColor allows for use of colors specified as three components, each with values from 0x0 to 0xf. Note that an RGBColor should render as closely as possible to what the components specify, regardless of the color mode of the terminal - 24-bit color, 256-color, 88-color. Gowid constructs a color cube, just like urwid, and for each color mode has a lookup table that maps the rgb values to the color cube value closest to the intended color. Note that RGBColor is not supported in 16-color, 8-color or monochrome.
####
func [MakeRGBColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1114) [¶](#MakeRGBColor)
```
func MakeRGBColor(s [string](/builtin#string)) [RGBColor](#RGBColor)
```
MakeRGBColor constructs an RGBColor from a string e.g. "#f00" is red. Note that MakeRGBColorSafe should be used unless you are sure the string provided is valid (otherwise there will be a panic).
####
func [MakeRGBColorExt](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1163) [¶](#MakeRGBColorExt)
```
func MakeRGBColorExt(r, g, b [int](/builtin#int)) [RGBColor](#RGBColor)
```
MakeRGBColorExt builds an RGBColor from the red, green and blue components provided as integers. If the values are out of range, the function will panic.
####
func [MakeRGBColorExtSafe](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1153) [¶](#MakeRGBColorExtSafe)
```
func MakeRGBColorExtSafe(r, g, b [int](/builtin#int)) ([RGBColor](#RGBColor), [error](/builtin#error))
```
MakeRGBColorExtSafe builds an RGBColor from the red, green and blue components provided as integers. If the values are out of range, an error is returned.
####
func [MakeRGBColorSafe](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1128) [¶](#MakeRGBColorSafe)
```
func MakeRGBColorSafe(s [string](/builtin#string)) ([RGBColor](#RGBColor), [error](/builtin#error))
```
MakeRGBColorSafe does the same as MakeRGBColor except will return an error if provided with invalid input.
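A brief sketch contrasting the constructors above (the color values are arbitrary), assuming the gowid import:
```
func rgbExamples() {
	red := gowid.MakeRGBColor("#f00")            // panics on invalid input
	green, err := gowid.MakeRGBColorSafe("#0f0") // returns an error instead
	blue := gowid.MakeRGBColorExt(0x0, 0x0, 0xf) // built from components in 0x0-0xf
	_, _, _, _ = red, green, blue, err
}
```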
####
func (RGBColor) [RGBA](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1173) [¶](#RGBColor.RGBA)
added in v1.1.0
```
func (rgb [RGBColor](#RGBColor)) RGBA() (r, g, b, a [uint32](/builtin#uint32))
```
Implements golang standard library's color.Color
####
func (RGBColor) [String](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1122) [¶](#RGBColor.String)
```
func (r [RGBColor](#RGBColor)) String() [string](/builtin#string)
```
####
func (RGBColor) [ToTCellColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1206) [¶](#RGBColor.ToTCellColor)
```
func (r [RGBColor](#RGBColor)) ToTCellColor(mode [ColorMode](#ColorMode)) ([TCellColor](#TCellColor), [bool](/builtin#bool))
```
ToTCellColor converts an RGBColor to a TCellColor, suitable for rendering to the screen with tcell. It lets RGBColor conform to IColor.
####
type [RejectUserInput](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L694) [¶](#RejectUserInput)
```
type RejectUserInput struct{}
```
RejectUserInput is a convenience struct that can be embedded in widgets that don't accept any user input.
####
func (RejectUserInput) [UserInput](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L696) [¶](#RejectUserInput.UserInput)
```
func (r [RejectUserInput](#RejectUserInput)) UserInput(ev interface{}, size [IRenderSize](#IRenderSize), focus [Selector](#Selector), app [IApp](#IApp)) [bool](/builtin#bool)
```
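As a sketch of how these convenience structs are meant to be embedded (the type below is illustrative and omits Render and RenderSize, so it does not yet satisfy IWidget):
```
// staticWidget ignores all user input and is never selectable, simply by
// embedding gowid's RejectUserInput and NotSelectable convenience structs.
type staticWidget struct {
	gowid.RejectUserInput
	gowid.NotSelectable
}
```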
####
type [RenderBox](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L130) [¶](#RenderBox)
```
type RenderBox struct {
C [int](/builtin#int)
R [int](/builtin#int)
}
```
RenderBox is an object passed to a widget's Render function that specifies that it should be rendered with a set number of columns and rows.
####
func [CalculateRenderSizeFallback](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L774) [¶](#CalculateRenderSizeFallback)
```
func CalculateRenderSizeFallback(w [IWidget](#IWidget), size [IRenderSize](#IRenderSize), focus [Selector](#Selector), app [IApp](#IApp)) [RenderBox](#RenderBox)
```
CalculateRenderSizeFallback can be used by widgets that cannot easily compute a value for RenderSize without actually rendering the widget and measuring the bounding box.
It assumes that if an IRenderBox size is provided, then the widget's canvas when rendered will be that large, and simply returns the box. If an IRenderFlow is provided, then the widget is rendered, and the bounding box is returned.
####
func [MakeRenderBox](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L135) [¶](#MakeRenderBox)
```
func MakeRenderBox(columns, rows [int](/builtin#int)) [RenderBox](#RenderBox)
```
####
func (RenderBox) [BoxColumns](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L139) [¶](#RenderBox.BoxColumns)
```
func (r [RenderBox](#RenderBox)) BoxColumns() [int](/builtin#int)
```
####
func (RenderBox) [BoxRows](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L143) [¶](#RenderBox.BoxRows)
```
func (r [RenderBox](#RenderBox)) BoxRows() [int](/builtin#int)
```
####
func (RenderBox) [Columns](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L148) [¶](#RenderBox.Columns)
```
func (r [RenderBox](#RenderBox)) Columns() [int](/builtin#int)
```
For IColumns
####
func (RenderBox) [ImplementsWidgetDimension](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L157) [¶](#RenderBox.ImplementsWidgetDimension)
```
func (r [RenderBox](#RenderBox)) ImplementsWidgetDimension()
```
####
func (RenderBox) [Rows](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L153) [¶](#RenderBox.Rows)
```
func (r [RenderBox](#RenderBox)) Rows() [int](/builtin#int)
```
For IRows
####
func (RenderBox) [String](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L159) [¶](#RenderBox.String)
```
func (r [RenderBox](#RenderBox)) String() [string](/builtin#string)
```
####
type [RenderFixed](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L167) [¶](#RenderFixed)
```
type RenderFixed struct{}
```
RenderFixed is an object passed to a widget's Render function that specifies that the widget itself will determine its own size.
####
func [MakeRenderFixed](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L169) [¶](#MakeRenderFixed)
```
func MakeRenderFixed() [RenderFixed](#RenderFixed)
```
####
func (RenderFixed) [Fixed](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L177) [¶](#RenderFixed.Fixed)
```
func (f [RenderFixed](#RenderFixed)) Fixed()
```
####
func (RenderFixed) [ImplementsWidgetDimension](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L179) [¶](#RenderFixed.ImplementsWidgetDimension)
```
func (r [RenderFixed](#RenderFixed)) ImplementsWidgetDimension()
```
####
func (RenderFixed) [String](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L173) [¶](#RenderFixed.String)
```
func (f [RenderFixed](#RenderFixed)) String() [string](/builtin#string)
```
####
type [RenderFlow](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L204) [¶](#RenderFlow)
```
type RenderFlow struct{}
```
RenderFlow is used by widgets that embed an inner widget, like hpadding.Widget. It directs the outer widget how it should render the inner widget. If the outer widget is rendered in box mode, the inner widget should be rendered in flow mode, using the box's number of columns. If the outer widget is rendered in flow mode, the inner widget should be rendered in flow mode with the same number of columns.
####
func (RenderFlow) [Flow](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L206) [¶](#RenderFlow.Flow)
```
func (s [RenderFlow](#RenderFlow)) Flow()
```
####
func (RenderFlow) [ImplementsWidgetDimension](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L212) [¶](#RenderFlow.ImplementsWidgetDimension)
```
func (r [RenderFlow](#RenderFlow)) ImplementsWidgetDimension()
```
####
func (RenderFlow) [String](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L208) [¶](#RenderFlow.String)
```
func (f [RenderFlow](#RenderFlow)) String() [string](/builtin#string)
```
####
type [RenderFlowWith](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L103) [¶](#RenderFlowWith)
```
type RenderFlowWith struct {
C [int](/builtin#int)
}
```
RenderFlowWith is an object passed to a widget's Render function that specifies that it should be rendered with a set number of columns, but using as many rows as the widget itself determines it needs.
####
func [MakeRenderFlow](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L107) [¶](#MakeRenderFlow)
```
func MakeRenderFlow(columns [int](/builtin#int)) [RenderFlowWith](#RenderFlowWith)
```
####
func (RenderFlowWith) [Columns](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L116) [¶](#RenderFlowWith.Columns)
```
func (r [RenderFlowWith](#RenderFlowWith)) Columns() [int](/builtin#int)
```
For IColumns
####
func (RenderFlowWith) [FlowColumns](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L111) [¶](#RenderFlowWith.FlowColumns)
```
func (r [RenderFlowWith](#RenderFlowWith)) FlowColumns() [int](/builtin#int)
```
####
func (RenderFlowWith) [ImplementsWidgetDimension](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L124) [¶](#RenderFlowWith.ImplementsWidgetDimension)
```
func (r [RenderFlowWith](#RenderFlowWith)) ImplementsWidgetDimension()
```
####
func (RenderFlowWith) [String](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L120) [¶](#RenderFlowWith.String)
```
func (r [RenderFlowWith](#RenderFlowWith)) String() [string](/builtin#string)
```
####
type [RenderMax](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L220) [¶](#RenderMax)
```
type RenderMax struct{}
```
RenderMax is used in widgets laid out side-by-side - it's intended to have the effect that these widgets are rendered last and provided a height/width that corresponds to the max of the height/width of those widgets already rendered.
####
func (RenderMax) [MaxHeight](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L222) [¶](#RenderMax.MaxHeight)
```
func (s [RenderMax](#RenderMax)) MaxHeight()
```
####
func (RenderMax) [String](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L224) [¶](#RenderMax.String)
```
func (f [RenderMax](#RenderMax)) String() [string](/builtin#string)
```
####
type [RenderWithRatio](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L249) [¶](#RenderWithRatio)
```
type RenderWithRatio struct {
R [float64](/builtin#float64)
}
```
RenderWithRatio is used by widgets within a container; the R field expresses the fraction of the container's size to give to the widget when rendering.
####
func (RenderWithRatio) [ImplementsWidgetDimension](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L261) [¶](#RenderWithRatio.ImplementsWidgetDimension)
```
func (r [RenderWithRatio](#RenderWithRatio)) ImplementsWidgetDimension()
```
####
func (RenderWithRatio) [Relative](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L253) [¶](#RenderWithRatio.Relative)
```
func (f [RenderWithRatio](#RenderWithRatio)) Relative() [float64](/builtin#float64)
```
####
func (RenderWithRatio) [String](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L257) [¶](#RenderWithRatio.String)
```
func (f [RenderWithRatio](#RenderWithRatio)) String() [string](/builtin#string)
```
####
type [RenderWithUnits](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L232) [¶](#RenderWithUnits)
```
type RenderWithUnits struct {
U [int](/builtin#int)
}
```
RenderWithUnits is used by widgets within a container. It specifies the number of columns or rows to use when rendering.
####
func (RenderWithUnits) [ImplementsWidgetDimension](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L244) [¶](#RenderWithUnits.ImplementsWidgetDimension)
```
func (r [RenderWithUnits](#RenderWithUnits)) ImplementsWidgetDimension()
```
####
func (RenderWithUnits) [String](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L240) [¶](#RenderWithUnits.String)
```
func (f [RenderWithUnits](#RenderWithUnits)) String() [string](/builtin#string)
```
####
func (RenderWithUnits) [Units](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L236) [¶](#RenderWithUnits.Units)
```
func (f [RenderWithUnits](#RenderWithUnits)) Units() [int](/builtin#int)
```
####
type [RenderWithWeight](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L183) [¶](#RenderWithWeight)
```
type RenderWithWeight struct {
W [int](/builtin#int)
}
```
####
func (RenderWithWeight) [ImplementsWidgetDimension](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L195) [¶](#RenderWithWeight.ImplementsWidgetDimension)
```
func (r [RenderWithWeight](#RenderWithWeight)) ImplementsWidgetDimension()
```
####
func (RenderWithWeight) [String](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L187) [¶](#RenderWithWeight.String)
```
func (f [RenderWithWeight](#RenderWithWeight)) String() [string](/builtin#string)
```
####
func (RenderWithWeight) [Weight](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L191) [¶](#RenderWithWeight.Weight)
```
func (f [RenderWithWeight](#RenderWithWeight)) Weight() [int](/builtin#int)
```
####
type [RunFunction](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L766) [¶](#RunFunction)
```
type RunFunction func([IApp](#IApp))
```
####
func (RunFunction) [RunThenRenderEvent](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L778) [¶](#RunFunction.RunThenRenderEvent)
```
func (f [RunFunction](#RunFunction)) RunThenRenderEvent(app [IApp](#IApp))
```
RunThenRenderEvent lets the receiver RunFunction implement IOnRenderEvent. This lets a regular function be executed on the same goroutine as the rendering code.
####
type [Selector](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L346) [¶](#Selector)
```
type Selector struct {
Focus [bool](/builtin#bool)
Selected [bool](/builtin#bool)
}
```
Selector has three valid states - neither Selected nor Focus, Selected but not Focus, and both Selected and Focus.
####
func (Selector) [And](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L381) [¶](#Selector.And)
```
func (s [Selector](#Selector)) And(cond [bool](/builtin#bool)) [Selector](#Selector)
```
And returns a Selector with both Selected and Focus set dependent on the supplied condition AND the receiver. Used to propagate Selected and Focus state to sub widgets for input and rendering.
####
func (Selector) [SelectIf](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L371) [¶](#Selector.SelectIf)
```
func (s [Selector](#Selector)) SelectIf(cond [bool](/builtin#bool)) [Selector](#Selector)
```
SelectIf returns a Selector with the Selected field set dependent on the supplied condition only. The Focus field is set based on the supplied condition AND the receiver's Focus field. Used by composite widgets with multiple children to allow children to change their state dependent on whether they are selected but independent of whether the widget is currently in focus.
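A sketch of how a container might use SelectIf when dispatching to its i'th child; the parameter names are illustrative.
```
// childSelector computes the Selector passed to child i: the child is marked
// Selected if it is the container's current child, and keeps Focus only if the
// container itself has focus.
func childSelector(parent gowid.Selector, i, current int) gowid.Selector {
	return parent.SelectIf(i == current)
}
```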
####
func (Selector) [String](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L388) [¶](#Selector.String)
```
func (s [Selector](#Selector)) String() [string](/builtin#string)
```
####
type [StyleAttrs](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L33) [¶](#StyleAttrs)
```
type StyleAttrs struct {
OnOff [tcell](/github.com/gdamore/tcell/v2).[AttrMask](/github.com/gdamore/tcell/v2#AttrMask) // If the specific bit in Set is 1, then the specific bit on OnOff says whether the style is on or off
Set [tcell](/github.com/gdamore/tcell/v2).[AttrMask](/github.com/gdamore/tcell/v2#AttrMask) // If the specific bit in Set is 0, then no style preference is declared (e.g. for underline)
}
```
StyleAttrs allows the user to represent a set of styles, either affirmatively set (on) or unset (off), with the rest of the styles being unspecified, meaning they can be determined by styles layered "underneath".
####
func (StyleAttrs) [MergeUnder](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L83) [¶](#StyleAttrs.MergeUnder)
```
func (a [StyleAttrs](#StyleAttrs)) MergeUnder(upper [StyleAttrs](#StyleAttrs)) [StyleAttrs](#StyleAttrs)
```
MergeUnder merges cell styles. E.g. if a is {underline, underline}, and upper is {!bold, bold}, that means a declares that it should be rendered with underline and doesn't care about other styles; and upper declares it should NOT be rendered bold, and doesn't declare about other styles. When merged, the result is {underline|!bold, underline|bold}.
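A sketch of that example using tcell's attribute masks, assuming the gowid and `github.com/gdamore/tcell/v2` imports:
```
// lower asks for underline only; upper forbids bold only; the merge requests
// underline and explicitly not-bold, leaving other attributes unspecified.
func mergeExample() gowid.StyleAttrs {
	lower := gowid.StyleAttrs{OnOff: tcell.AttrUnderline, Set: tcell.AttrUnderline}
	upper := gowid.StyleAttrs{OnOff: 0, Set: tcell.AttrBold}
	return lower.MergeUnder(upper)
}
```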
####
type [StyleMod](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1601) [¶](#StyleMod)
```
type StyleMod struct {
Cur [ICellStyler](#ICellStyler)
Mod [ICellStyler](#ICellStyler)
}
```
StyleMod implements ICellStyler. It returns colors and styles from its Cur field unless they are overridden by settings in its Mod field. This provides a way for a layering of ICellStylers.
####
func [MakeStyleMod](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1610) [¶](#MakeStyleMod)
```
func MakeStyleMod(cur, mod [ICellStyler](#ICellStyler)) [StyleMod](#StyleMod)
```
MakeStyleMod returns a StyleMod, which implements ICellStyler; it stores two ICellStylers, one to layer on top of the other.
####
func (StyleMod) [GetStyle](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1616) [¶](#StyleMod.GetStyle)
```
func (a [StyleMod](#StyleMod)) GetStyle(prov [IRenderContext](#IRenderContext)) (x [IColor](#IColor), y [IColor](#IColor), z [StyleAttrs](#StyleAttrs))
```
GetStyle returns the IColors and StyleAttrs from the Mod ICellStyler if they express an affirmative preference, otherwise defers to the values from the Cur ICellStyler.
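A sketch combining StyleMod with MakeStyledAs (documented just below): take whatever the "body" palette entry resolves to, but force underline on top of it. The entry name is illustrative and the gowid and tcell/v2 imports are assumed.
```
func underlinedBody() gowid.StyleMod {
	return gowid.MakeStyleMod(
		gowid.MakePaletteRef("body"),
		gowid.MakeStyledAs(gowid.StyleAttrs{OnOff: tcell.AttrUnderline, Set: tcell.AttrUnderline}),
	)
}
```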
####
type [StyledAs](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1684) [¶](#StyledAs)
```
type StyledAs struct {
[StyleAttrs](#StyleAttrs)
}
```
StyledAs is an ICellStyler that expresses a specific text style and no preference for foreground and background color.
####
func [MakeStyledAs](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1690) [¶](#MakeStyledAs)
```
func MakeStyledAs(s [StyleAttrs](#StyleAttrs)) [StyledAs](#StyledAs)
```
####
func (StyledAs) [GetStyle](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1695) [¶](#StyledAs.GetStyle)
```
func (a [StyledAs](#StyledAs)) GetStyle(prov [IRenderContext](#IRenderContext)) (x [IColor](#IColor), y [IColor](#IColor), z [StyleAttrs](#StyleAttrs))
```
GetStyle implements ICellStyler.
####
type [SubWidgetCB](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L14) [¶](#SubWidgetCB)
```
type SubWidgetCB struct{}
```
####
type [SubWidgetCallbacks](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1016) [¶](#SubWidgetCallbacks)
```
type SubWidgetCallbacks struct {
CB **[Callbacks](#Callbacks)
}
```
SubWidgetCallbacks is a convenience struct for embedding in a widget, providing methods to add and remove callbacks that are executed when the widget's child is modified.
####
func (*SubWidgetCallbacks) [OnSetSubWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1020) [¶](#SubWidgetCallbacks.OnSetSubWidget)
```
func (w *[SubWidgetCallbacks](#SubWidgetCallbacks)) OnSetSubWidget(f [IWidgetChangedCallback](#IWidgetChangedCallback))
```
####
func (*SubWidgetCallbacks) [RemoveOnSetSubWidget](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1027) [¶](#SubWidgetCallbacks.RemoveOnSetSubWidget)
```
func (w *[SubWidgetCallbacks](#SubWidgetCallbacks)) RemoveOnSetSubWidget(f [IIdentity](#IIdentity))
```
####
type [SubWidgetsCB](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L15) [¶](#SubWidgetsCB)
```
type SubWidgetsCB struct{}
```
####
type [SubWidgetsCallbacks](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1035) [¶](#SubWidgetsCallbacks)
```
type SubWidgetsCallbacks struct {
CB **[Callbacks](#Callbacks)
}
```
SubWidgetsCallbacks is a convenience struct for embedding in a widget, providing methods to add and remove callbacks that are executed when the widget's children are modified.
####
func (*SubWidgetsCallbacks) [OnSetSubWidgets](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1039) [¶](#SubWidgetsCallbacks.OnSetSubWidgets)
```
func (w *[SubWidgetsCallbacks](#SubWidgetsCallbacks)) OnSetSubWidgets(f [IWidgetChangedCallback](#IWidgetChangedCallback))
```
####
func (*SubWidgetsCallbacks) [RemoveOnSetSubWidgets](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1046) [¶](#SubWidgetsCallbacks.RemoveOnSetSubWidgets)
```
func (w *[SubWidgetsCallbacks](#SubWidgetsCallbacks)) RemoveOnSetSubWidgets(f [IIdentity](#IIdentity))
```
####
type [TCellColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1417) [¶](#TCellColor)
```
type TCellColor struct {
// contains filtered or unexported fields
}
```
TCellColor is an IColor using tcell's color primitives. If you are not porting from urwid or translating from urwid, this is the simplest approach to using color. Gowid's layering approach means that the empty value for a color should mean "no color preference" - so we want the zero value to mean that. A tcell.Color of 0 means "default color". So gowid coopts nil to mean "no color preference".
####
func [IColorToTCell](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1731) [¶](#IColorToTCell)
```
func IColorToTCell(color [IColor](#IColor), def [TCellColor](#TCellColor), mode [ColorMode](#ColorMode)) [TCellColor](#TCellColor)
```
IColorToTCell is a utility function that will convert an IColor to a TCellColor in preparation for passing to tcell to render; if the conversion fails, a default TCellColor is returned (provided to the function via a parameter)
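A sketch of the conversion, assuming the gowid import and that gowid exposes a ColorMode constant such as Mode256Colors (the mode name here is an assumption, not taken from the text above):
```
// Convert an RGB color for rendering, falling back to "no color" if the
// conversion is not possible in the chosen color mode.
func toScreenColor() gowid.TCellColor {
	return gowid.IColorToTCell(gowid.MakeRGBColor("#08f"), gowid.MakeTCellNoColor(), gowid.Mode256Colors)
}
```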
####
func [MakeTCellColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1430) [¶](#MakeTCellColor)
```
func MakeTCellColor(val [string](/builtin#string)) ([TCellColor](#TCellColor), [error](/builtin#error))
```
MakeTCellColor returns an initialized TCellColor given a string input like "yellow". The names that can be used are provided here: <https://github.com/gdamore/tcell/blob/master/color.go#L821>.
####
func [MakeTCellColorExt](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1444) [¶](#MakeTCellColorExt)
```
func MakeTCellColorExt(val [tcell](/github.com/gdamore/tcell/v2).[Color](/github.com/gdamore/tcell/v2#Color)) [TCellColor](#TCellColor)
```
MakeTCellColorExt returns an initialized TCellColor given a tcell.Color input. The values that can be used are provided here: <https://github.com/gdamore/tcell/blob/master/color.go#L41>.
####
func [MakeTCellNoColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1450) [¶](#MakeTCellNoColor)
```
func MakeTCellNoColor() [TCellColor](#TCellColor)
```
MakeTCellNoColor returns an initialized TCellColor that represents "no color" - meaning if another color is rendered "under" this one, then the color underneath will be displayed.
####
func (TCellColor) [String](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1455) [¶](#TCellColor.String)
```
func (r [TCellColor](#TCellColor)) String() [string](/builtin#string)
```
String implements Stringer for '%v' support.
####
func (TCellColor) [ToTCell](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1465) [¶](#TCellColor.ToTCell)
```
func (r [TCellColor](#TCellColor)) ToTCell() [tcell](/github.com/gdamore/tcell/v2).[Color](/github.com/gdamore/tcell/v2#Color)
```
ToTCell converts a TCellColor back to a tcell.Color for passing to tcell APIs.
####
func (TCellColor) [ToTCellColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1473) [¶](#TCellColor.ToTCellColor)
```
func (r [TCellColor](#TCellColor)) ToTCellColor(mode [ColorMode](#ColorMode)) ([TCellColor](#TCellColor), [bool](/builtin#bool))
```
ToTCellColor is a no-op, and exists so that TCellColor conforms to the IColor interface.
####
type [UnhandledInputFunc](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L134) [¶](#UnhandledInputFunc)
```
type UnhandledInputFunc func(app [IApp](#IApp), ev interface{}) [bool](/builtin#bool)
```
UnhandledInputFunc satisfies IUnhandledInput, allowing use of a simple function for handling input not claimed by any widget.
```
var IgnoreUnhandledInput [UnhandledInputFunc](#UnhandledInputFunc) = func(app [IApp](#IApp), ev interface{}) [bool](/builtin#bool) {
return [false](/builtin#false)
}
```
IgnoreUnhandledInput is a helper function for main loops that don't need to deal with handling input that the widgets haven't claimed.
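A sketch of a custom handler that quits when 'q' is pressed; the app.Quit() call is assumed to be available on gowid's IApp, and the gowid and tcell/v2 imports are assumed.
```
// quitOnQ claims the event and quits the application when the 'q' rune is
// pressed; all other unhandled events are left unclaimed.
var quitOnQ gowid.UnhandledInputFunc = func(app gowid.IApp, ev interface{}) bool {
	if k, ok := ev.(*tcell.EventKey); ok && k.Rune() == 'q' {
		app.Quit() // assumed IApp method
		return true
	}
	return false
}
```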
####
func (UnhandledInputFunc) [UnhandledInput](https://github.com/gcla/gowid/blob/v1.4.0/app.go#L136) [¶](#UnhandledInputFunc.UnhandledInput)
```
func (f [UnhandledInputFunc](#UnhandledInputFunc)) UnhandledInput(app [IApp](#IApp), ev interface{}) [bool](/builtin#bool)
```
####
type [Unit](https://github.com/gcla/gowid/blob/v1.4.0/utils.go#L25) [¶](#Unit)
```
type Unit struct{}
```
Unit is a one-valued type used to send a message over a channel.
####
type [UrwidColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1239) [¶](#UrwidColor)
```
type UrwidColor struct {
Id [string](/builtin#string)
// contains filtered or unexported fields
}
```
UrwidColor is a gowid Color implementing IColor which allows urwid color names to be used (<http://urwid.org/manual/displayattributes.html#foreground-and-background-settings>) e.g. "dark blue", "light gray".
####
func [NewUrwidColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1262) [¶](#NewUrwidColor)
```
func NewUrwidColor(val [string](/builtin#string)) *[UrwidColor](#UrwidColor)
```
NewUrwidColor returns a pointer to an UrwidColor struct and builds the UrwidColor from a string argument e.g. "yellow"; this function will panic if there is an error during initialization (use NewUrwidColorSafe to receive an error instead).
####
func [NewUrwidColorSafe](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1250) [¶](#NewUrwidColorSafe)
```
func NewUrwidColorSafe(val [string](/builtin#string)) (*[UrwidColor](#UrwidColor), [error](/builtin#error))
```
NewUrwidColorSafe returns a pointer to an UrwidColor struct and builds the UrwidColor from a string argument e.g. "yellow". Note that in urwid proper (python), a color can also specify a style, like "yellow, underline". UrwidColor does not support specifying styles in that manner.
####
func (UrwidColor) [String](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1271) [¶](#UrwidColor.String)
```
func (r [UrwidColor](#UrwidColor)) String() [string](/builtin#string)
```
####
func (*UrwidColor) [ToTCellColor](https://github.com/gcla/gowid/blob/v1.4.0/decoration.go#L1277) [¶](#UrwidColor.ToTCellColor)
```
func (s *[UrwidColor](#UrwidColor)) ToTCellColor(mode [ColorMode](#ColorMode)) ([TCellColor](#TCellColor), [bool](/builtin#bool))
```
ToTCellColor converts the receiver UrwidColor to a TCellColor, ready for rendering to a tcell screen. This lets UrwidColor conform to IColor.
####
type [VAlignBottom](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L755) [¶](#VAlignBottom)
```
type VAlignBottom struct {
Margin [int](/builtin#int)
}
```
####
func (VAlignBottom) [ImplementsVAlignment](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L763) [¶](#VAlignBottom.ImplementsVAlignment)
```
func (v [VAlignBottom](#VAlignBottom)) ImplementsVAlignment()
```
####
type [VAlignCB](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L18) [¶](#VAlignCB)
```
type VAlignCB struct{}
```
####
type [VAlignMiddle](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L758) [¶](#VAlignMiddle)
```
type VAlignMiddle struct{}
```
####
func (VAlignMiddle) [ImplementsVAlignment](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L764) [¶](#VAlignMiddle.ImplementsVAlignment)
```
func (v [VAlignMiddle](#VAlignMiddle)) ImplementsVAlignment()
```
####
type [VAlignTop](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L759) [¶](#VAlignTop)
```
type VAlignTop struct {
Margin [int](/builtin#int)
}
```
####
func (VAlignTop) [ImplementsVAlignment](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L765) [¶](#VAlignTop.ImplementsVAlignment)
```
func (v [VAlignTop](#VAlignTop)) ImplementsVAlignment()
```
####
type [WidgetCallback](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L943) [¶](#WidgetCallback)
```
type WidgetCallback struct {
Name interface{}
[WidgetChangedFunction](#WidgetChangedFunction)
}
```
WidgetCallback is a simple struct with a name field for IIdentity and that embeds a WidgetChangedFunction to be issued as a callback when a widget property changes.
####
func [MakeWidgetCallback](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L948) [¶](#MakeWidgetCallback)
```
func MakeWidgetCallback(name interface{}, fn [WidgetChangedFunction](#WidgetChangedFunction)) [WidgetCallback](#WidgetCallback)
```
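A brief sketch of constructing a callback with MakeWidgetCallback; the name "mycb" is arbitrary and exists so the callback can later be removed by its identity. The gowid import is assumed.
```
var exampleCB = gowid.MakeWidgetCallback("mycb", func(app gowid.IApp, w gowid.IWidget) {
	// react to the widget change here
})
```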
####
func (WidgetCallback) [ID](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L955) [¶](#WidgetCallback.ID)
```
func (f [WidgetCallback](#WidgetCallback)) ID() interface{}
```
####
type [WidgetCallbackExt](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L962) [¶](#WidgetCallbackExt)
added in v1.2.0
```
type WidgetCallbackExt struct {
Name interface{}
[WidgetChangedFunctionExt](#WidgetChangedFunctionExt)
}
```
WidgetCallbackExt is a simple struct with a name field for IIdentity and that embeds a WidgetChangedFunctionExt to be issued as a callback when a widget property changes.
####
func [MakeWidgetCallbackExt](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L967) [¶](#MakeWidgetCallbackExt)
added in v1.2.0
```
func MakeWidgetCallbackExt(name interface{}, fn [WidgetChangedFunctionExt](#WidgetChangedFunctionExt)) [WidgetCallbackExt](#WidgetCallbackExt)
```
####
func (WidgetCallbackExt) [ID](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L974) [¶](#WidgetCallbackExt.ID)
added in v1.2.0
```
func (f [WidgetCallbackExt](#WidgetCallbackExt)) ID() interface{}
```
####
type [WidgetChangedFunction](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L928) [¶](#WidgetChangedFunction)
```
type WidgetChangedFunction func(app [IApp](#IApp), widget [IWidget](#IWidget))
```
WidgetChangedFunction meets the IWidgetChangedCallback interface, for simpler usage.
####
func (WidgetChangedFunction) [Changed](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L930) [¶](#WidgetChangedFunction.Changed)
```
func (f [WidgetChangedFunction](#WidgetChangedFunction)) Changed(app [IApp](#IApp), widget [IWidget](#IWidget), data ...interface{})
```
####
type [WidgetChangedFunctionExt](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L934) [¶](#WidgetChangedFunctionExt)
added in v1.2.0
```
type WidgetChangedFunctionExt func(app [IApp](#IApp), widget [IWidget](#IWidget), data ...interface{})
```
####
func (WidgetChangedFunctionExt) [Changed](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L936) [¶](#WidgetChangedFunctionExt.Changed)
added in v1.2.0
```
func (f [WidgetChangedFunctionExt](#WidgetChangedFunctionExt)) Changed(app [IApp](#IApp), widget [IWidget](#IWidget), data ...interface{})
```
####
type [WidgetPredicate](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L1583) [¶](#WidgetPredicate)
```
type WidgetPredicate func(w [IWidget](#IWidget)) [bool](/builtin#bool)
```
####
type [WidgetSizeError](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L283) [¶](#WidgetSizeError)
```
type WidgetSizeError struct {
Widget interface{}
Size [IRenderSize](#IRenderSize)
Required [string](/builtin#string) // in case I only need an interface - not sure how to capture it and not concrete type
}
```
####
func (WidgetSizeError) [Error](https://github.com/gcla/gowid/blob/v1.4.0/support.go#L291) [¶](#WidgetSizeError.Error)
```
func (e [WidgetSizeError](#WidgetSizeError)) Error() [string](/builtin#string)
```
####
type [WidthCB](https://github.com/gcla/gowid/blob/v1.4.0/callbacks.go#L21) [¶](#WidthCB)
```
type WidthCB struct{}
``` |
@times-components/date-publication | npm | JavaScript | [Date Publication](#date-publication)
===
This package takes a required date as a string, and converts it into a formatted component. A consumer may pass in an optional publication string to display alongside the date, and this must have a value of either `SUNDAYTIMES` or
`TIMES`.
[Contributing](#contributing)
---
Please read [CONTRIBUTING.md](https://github.com/newsuk/times-components/blob/HEAD/CONTRIBUTING.md) before contributing to this package
[Running the code](#running-the-code)
---
Please see our main [README.md](https://github.com/newsuk/times-components/blob/HEAD/README.md) to get the project running locally
[Development](#development)
---
The code can be formatted and linted in accordance with the agreed standards.
```
yarn fmt
yarn lint
```
[Testing](#testing)
---
This package uses [yarn](https://yarnpkg.com) (latest) to run unit tests on each platform with [jest](https://facebook.github.io/jest/).
```
yarn test:web
```
Visit the official storybook to see our available date publication templates.
### Keywords
* react
* web
* date-publication
* component |
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner | go | Go | README
[¶](#section-readme)
---
### Azure Edge Order Partner Module for Go
[![PkgGoDev](https://pkg.go.dev/badge/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner)
The `armedgeorderpartner` module provides operations for working with Azure Edge Order Partner.
[Source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner)
### Getting started
#### Prerequisites
* an [Azure subscription](https://azure.microsoft.com/free/)
* Go 1.18 or above (You could download and install the latest version of Go from [here](https://go.dev/doc/install). It will replace the existing Go on your machine. If you want to install multiple Go versions on the same machine, you could refer this [doc](https://go.dev/doc/manage-install).)
#### Install the package
This project uses [Go modules](https://github.com/golang/go/wiki/Modules) for versioning and dependency management.
Install the Azure Edge Order Partner module:
```
go get github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner
```
#### Authorization
When creating a client, you will need to provide a credential for authenticating with Azure Edge Order Partner. The `azidentity` module provides facilities for various ways of authenticating with Azure including client/secret, certificate, managed identity, and more.
```
cred, err := azidentity.NewDefaultAzureCredential(nil)
```
For more information on authentication, please see the documentation for `azidentity` at [pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity).
#### Client Factory
The Azure Edge Order Partner module consists of one or more clients. We provide a client factory that can be used to create any client in this module.
```
clientFactory, err := armedgeorderpartner.NewClientFactory(<subscription ID>, cred, nil)
```
You can use `ClientOptions` in package `github.com/Azure/azure-sdk-for-go/sdk/azcore/arm` to set the endpoint to connect with public and sovereign clouds as well as Azure Stack. For more information, please see the documentation for `azcore` at [pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore).
```
options := arm.ClientOptions {
ClientOptions: azcore.ClientOptions {
Cloud: cloud.AzureChina,
},
}
clientFactory, err := armedgeorderpartner.NewClientFactory(<subscription ID>, cred, &options)
```
#### Clients
A client groups a set of related APIs, providing access to its functionality. Create one or more clients to access the APIs you require using the client factory.
```
client := clientFactory.NewAPISClient()
```
#### Provide Feedback
If you encounter bugs or have suggestions, please
[open an issue](https://github.com/Azure/azure-sdk-for-go/issues) and assign the `Edge Order Partner` label.
### Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution.
For details, visit <https://cla.microsoft.com>.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label,
comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information, see the
[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [<EMAIL>](mailto:<EMAIL>) with any additional questions or comments.
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [type APISClient](#APISClient)
* + [func NewAPISClient(subscriptionID string, credential azcore.TokenCredential, ...) (*APISClient, error)](#NewAPISClient)
* + [func (client *APISClient) BeginManageInventoryMetadata(ctx context.Context, familyIdentifier string, location string, ...) (*runtime.Poller[APISClientManageInventoryMetadataResponse], error)](#APISClient.BeginManageInventoryMetadata)
+ [func (client *APISClient) ManageLink(ctx context.Context, familyIdentifier string, location string, ...) (APISClientManageLinkResponse, error)](#APISClient.ManageLink)
+ [func (client *APISClient) NewListOperationsPartnerPager(options *APISClientListOperationsPartnerOptions) *runtime.Pager[APISClientListOperationsPartnerResponse]](#APISClient.NewListOperationsPartnerPager)
+ [func (client *APISClient) NewSearchInventoriesPager(searchInventoriesRequest SearchInventoriesRequest, ...) *runtime.Pager[APISClientSearchInventoriesResponse]](#APISClient.NewSearchInventoriesPager)
* [type APISClientBeginManageInventoryMetadataOptions](#APISClientBeginManageInventoryMetadataOptions)
* [type APISClientListOperationsPartnerOptions](#APISClientListOperationsPartnerOptions)
* [type APISClientListOperationsPartnerResponse](#APISClientListOperationsPartnerResponse)
* [type APISClientManageInventoryMetadataResponse](#APISClientManageInventoryMetadataResponse)
* [type APISClientManageLinkOptions](#APISClientManageLinkOptions)
* [type APISClientManageLinkResponse](#APISClientManageLinkResponse)
* [type APISClientSearchInventoriesOptions](#APISClientSearchInventoriesOptions)
* [type APISClientSearchInventoriesResponse](#APISClientSearchInventoriesResponse)
* [type ActionType](#ActionType)
* + [func PossibleActionTypeValues() []ActionType](#PossibleActionTypeValues)
* [type AdditionalErrorInfo](#AdditionalErrorInfo)
* + [func (a AdditionalErrorInfo) MarshalJSON() ([]byte, error)](#AdditionalErrorInfo.MarshalJSON)
+ [func (a *AdditionalErrorInfo) UnmarshalJSON(data []byte) error](#AdditionalErrorInfo.UnmarshalJSON)
* [type AdditionalInventoryDetails](#AdditionalInventoryDetails)
* + [func (a AdditionalInventoryDetails) MarshalJSON() ([]byte, error)](#AdditionalInventoryDetails.MarshalJSON)
+ [func (a *AdditionalInventoryDetails) UnmarshalJSON(data []byte) error](#AdditionalInventoryDetails.UnmarshalJSON)
* [type AdditionalOrderItemDetails](#AdditionalOrderItemDetails)
* + [func (a AdditionalOrderItemDetails) MarshalJSON() ([]byte, error)](#AdditionalOrderItemDetails.MarshalJSON)
+ [func (a *AdditionalOrderItemDetails) UnmarshalJSON(data []byte) error](#AdditionalOrderItemDetails.UnmarshalJSON)
* [type BillingDetails](#BillingDetails)
* + [func (b BillingDetails) MarshalJSON() ([]byte, error)](#BillingDetails.MarshalJSON)
+ [func (b *BillingDetails) UnmarshalJSON(data []byte) error](#BillingDetails.UnmarshalJSON)
* [type ClientFactory](#ClientFactory)
* + [func NewClientFactory(subscriptionID string, credential azcore.TokenCredential, ...) (*ClientFactory, error)](#NewClientFactory)
* + [func (c *ClientFactory) NewAPISClient() *APISClient](#ClientFactory.NewAPISClient)
* [type ConfigurationData](#ConfigurationData)
* + [func (c ConfigurationData) MarshalJSON() ([]byte, error)](#ConfigurationData.MarshalJSON)
+ [func (c *ConfigurationData) UnmarshalJSON(data []byte) error](#ConfigurationData.UnmarshalJSON)
* [type ConfigurationDetails](#ConfigurationDetails)
* + [func (c ConfigurationDetails) MarshalJSON() ([]byte, error)](#ConfigurationDetails.MarshalJSON)
+ [func (c *ConfigurationDetails) UnmarshalJSON(data []byte) error](#ConfigurationDetails.UnmarshalJSON)
* [type ConfigurationOnDevice](#ConfigurationOnDevice)
* + [func (c ConfigurationOnDevice) MarshalJSON() ([]byte, error)](#ConfigurationOnDevice.MarshalJSON)
+ [func (c *ConfigurationOnDevice) UnmarshalJSON(data []byte) error](#ConfigurationOnDevice.UnmarshalJSON)
* [type ErrorAdditionalInfo](#ErrorAdditionalInfo)
* + [func (e ErrorAdditionalInfo) MarshalJSON() ([]byte, error)](#ErrorAdditionalInfo.MarshalJSON)
+ [func (e *ErrorAdditionalInfo) UnmarshalJSON(data []byte) error](#ErrorAdditionalInfo.UnmarshalJSON)
* [type ErrorDetail](#ErrorDetail)
* + [func (e ErrorDetail) MarshalJSON() ([]byte, error)](#ErrorDetail.MarshalJSON)
+ [func (e *ErrorDetail) UnmarshalJSON(data []byte) error](#ErrorDetail.UnmarshalJSON)
* [type ErrorResponse](#ErrorResponse)
* + [func (e ErrorResponse) MarshalJSON() ([]byte, error)](#ErrorResponse.MarshalJSON)
+ [func (e *ErrorResponse) UnmarshalJSON(data []byte) error](#ErrorResponse.UnmarshalJSON)
* [type InventoryAdditionalDetails](#InventoryAdditionalDetails)
* + [func (i InventoryAdditionalDetails) MarshalJSON() ([]byte, error)](#InventoryAdditionalDetails.MarshalJSON)
+ [func (i *InventoryAdditionalDetails) UnmarshalJSON(data []byte) error](#InventoryAdditionalDetails.UnmarshalJSON)
* [type InventoryData](#InventoryData)
* + [func (i InventoryData) MarshalJSON() ([]byte, error)](#InventoryData.MarshalJSON)
+ [func (i *InventoryData) UnmarshalJSON(data []byte) error](#InventoryData.UnmarshalJSON)
* [type InventoryProperties](#InventoryProperties)
* + [func (i InventoryProperties) MarshalJSON() ([]byte, error)](#InventoryProperties.MarshalJSON)
+ [func (i *InventoryProperties) UnmarshalJSON(data []byte) error](#InventoryProperties.UnmarshalJSON)
* [type ManageInventoryMetadataRequest](#ManageInventoryMetadataRequest)
* + [func (m ManageInventoryMetadataRequest) MarshalJSON() ([]byte, error)](#ManageInventoryMetadataRequest.MarshalJSON)
+ [func (m *ManageInventoryMetadataRequest) UnmarshalJSON(data []byte) error](#ManageInventoryMetadataRequest.UnmarshalJSON)
* [type ManageLinkOperation](#ManageLinkOperation)
* + [func PossibleManageLinkOperationValues() []ManageLinkOperation](#PossibleManageLinkOperationValues)
* [type ManageLinkRequest](#ManageLinkRequest)
* + [func (m ManageLinkRequest) MarshalJSON() ([]byte, error)](#ManageLinkRequest.MarshalJSON)
+ [func (m *ManageLinkRequest) UnmarshalJSON(data []byte) error](#ManageLinkRequest.UnmarshalJSON)
* [type ManagementResourceData](#ManagementResourceData)
* + [func (m ManagementResourceData) MarshalJSON() ([]byte, error)](#ManagementResourceData.MarshalJSON)
+ [func (m *ManagementResourceData) UnmarshalJSON(data []byte) error](#ManagementResourceData.UnmarshalJSON)
* [type Operation](#Operation)
* + [func (o Operation) MarshalJSON() ([]byte, error)](#Operation.MarshalJSON)
+ [func (o *Operation) UnmarshalJSON(data []byte) error](#Operation.UnmarshalJSON)
* [type OperationDisplay](#OperationDisplay)
* + [func (o OperationDisplay) MarshalJSON() ([]byte, error)](#OperationDisplay.MarshalJSON)
+ [func (o *OperationDisplay) UnmarshalJSON(data []byte) error](#OperationDisplay.UnmarshalJSON)
* [type OperationListResult](#OperationListResult)
* + [func (o OperationListResult) MarshalJSON() ([]byte, error)](#OperationListResult.MarshalJSON)
+ [func (o *OperationListResult) UnmarshalJSON(data []byte) error](#OperationListResult.UnmarshalJSON)
* [type OrderItemData](#OrderItemData)
* + [func (o OrderItemData) MarshalJSON() ([]byte, error)](#OrderItemData.MarshalJSON)
+ [func (o *OrderItemData) UnmarshalJSON(data []byte) error](#OrderItemData.UnmarshalJSON)
* [type OrderItemType](#OrderItemType)
* + [func PossibleOrderItemTypeValues() []OrderItemType](#PossibleOrderItemTypeValues)
* [type Origin](#Origin)
* + [func PossibleOriginValues() []Origin](#PossibleOriginValues)
* [type PartnerInventory](#PartnerInventory)
* + [func (p PartnerInventory) MarshalJSON() ([]byte, error)](#PartnerInventory.MarshalJSON)
+ [func (p *PartnerInventory) UnmarshalJSON(data []byte) error](#PartnerInventory.UnmarshalJSON)
* [type PartnerInventoryList](#PartnerInventoryList)
* + [func (p PartnerInventoryList) MarshalJSON() ([]byte, error)](#PartnerInventoryList.MarshalJSON)
+ [func (p *PartnerInventoryList) UnmarshalJSON(data []byte) error](#PartnerInventoryList.UnmarshalJSON)
* [type SearchInventoriesRequest](#SearchInventoriesRequest)
* + [func (s SearchInventoriesRequest) MarshalJSON() ([]byte, error)](#SearchInventoriesRequest.MarshalJSON)
+ [func (s *SearchInventoriesRequest) UnmarshalJSON(data []byte) error](#SearchInventoriesRequest.UnmarshalJSON)
* [type SpecificationDetails](#SpecificationDetails)
* + [func (s SpecificationDetails) MarshalJSON() ([]byte, error)](#SpecificationDetails.MarshalJSON)
+ [func (s *SpecificationDetails) UnmarshalJSON(data []byte) error](#SpecificationDetails.UnmarshalJSON)
* [type StageDetails](#StageDetails)
* + [func (s StageDetails) MarshalJSON() ([]byte, error)](#StageDetails.MarshalJSON)
+ [func (s *StageDetails) UnmarshalJSON(data []byte) error](#StageDetails.UnmarshalJSON)
* [type StageName](#StageName)
* + [func PossibleStageNameValues() []StageName](#PossibleStageNameValues)
* [type StageStatus](#StageStatus)
* + [func PossibleStageStatusValues() []StageStatus](#PossibleStageStatusValues)
* [type SubscriptionDetails](#SubscriptionDetails)
* + [func (s SubscriptionDetails) MarshalJSON() ([]byte, error)](#SubscriptionDetails.MarshalJSON)
+ [func (s *SubscriptionDetails) UnmarshalJSON(data []byte) error](#SubscriptionDetails.UnmarshalJSON)
#### Examples [¶](#pkg-examples)
* [APISClient.BeginManageInventoryMetadata](#example-APISClient.BeginManageInventoryMetadata)
* [APISClient.ManageLink](#example-APISClient.ManageLink)
* [APISClient.NewListOperationsPartnerPager](#example-APISClient.NewListOperationsPartnerPager)
* [APISClient.NewSearchInventoriesPager (SearchInventories)](#example-APISClient.NewSearchInventoriesPager-SearchInventories)
* [APISClient.NewSearchInventoriesPager (SearchInventoriesDetails)](#example-APISClient.NewSearchInventoriesPager-SearchInventoriesDetails)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [APISClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/apis_client.go#L26) [¶](#APISClient)
added in v0.2.0
```
type APISClient struct {
// contains filtered or unexported fields
}
```
APISClient contains the methods for the EdgeOrderPartnerAPIS group.
Don't use this type directly, use NewAPISClient() instead.
####
func [NewAPISClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/apis_client.go#L35) [¶](#NewAPISClient)
added in v0.2.0
```
func NewAPISClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[APISClient](#APISClient), [error](/builtin#error))
```
NewAPISClient creates a new instance of APISClient with the specified values.
* subscriptionID - The ID of the target subscription.
* credential - used to authorize requests. Usually a credential from azidentity.
* options - pass nil to accept the default values.
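For cases where the client factory is not needed, the client can also be constructed directly. A minimal sketch, assuming `DefaultAzureCredential` and a placeholder subscription ID (both chosen here only for illustration):
```
package main
import (
	"log"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner"
)
func main() {
	// Any azcore.TokenCredential works; DefaultAzureCredential is used as an example.
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	// Construct the client directly; pass nil to accept the default client options.
	client, err := armedgeorderpartner.NewAPISClient("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	_ = client // use the client to call APIs, e.g. client.NewListOperationsPartnerPager(nil)
}
```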
####
func (*APISClient) [BeginManageInventoryMetadata](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/apis_client.go#L113) [¶](#APISClient.BeginManageInventoryMetadata)
added in v0.2.0
```
func (client *[APISClient](#APISClient)) BeginManageInventoryMetadata(ctx [context](/context).[Context](/context#Context), familyIdentifier [string](/builtin#string), location [string](/builtin#string), serialNumber [string](/builtin#string), manageInventoryMetadataRequest [ManageInventoryMetadataRequest](#ManageInventoryMetadataRequest), options *[APISClientBeginManageInventoryMetadataOptions](#APISClientBeginManageInventoryMetadataOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[APISClientManageInventoryMetadataResponse](#APISClientManageInventoryMetadataResponse)], [error](/builtin#error))
```
BeginManageInventoryMetadata - API for updating inventory metadata and inventory configuration. If the operation fails it returns an *azcore.ResponseError type.
Generated from API version 2020-12-01-preview
* familyIdentifier - Unique identifier for the product family
* location - The location of the resource
* serialNumber - The serial number of the device
* manageInventoryMetadataRequest - Updates inventory metadata and inventory configuration
* options - APISClientBeginManageInventoryMetadataOptions contains the optional parameters for the APISClient.BeginManageInventoryMetadata method.
Example [¶](#example-APISClient.BeginManageInventoryMetadata)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/edgeorderpartner/resource-manager/Microsoft.EdgeOrderPartner/preview/2020-12-01-preview/examples/ManageInventoryMetadata.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armedgeorderpartner.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
poller, err := clientFactory.NewAPISClient().BeginManageInventoryMetadata(ctx, "AzureStackEdge", "westus", "SerialNumber1", armedgeorderpartner.ManageInventoryMetadataRequest{
ConfigurationOnDevice: &armedgeorderpartner.ConfigurationOnDevice{
ConfigurationIdentifier: to.Ptr("EdgeP_High"),
},
InventoryMetadata: to.Ptr("InventoryMetadata"),
}, nil)
if err != nil {
log.Fatalf("failed to finish the request: %v", err)
}
_, err = poller.PollUntilDone(ctx, nil)
if err != nil {
log.Fatalf("failed to pull the result: %v", err)
}
}
```
```
Output:
```
####
func (*APISClient) [ManageLink](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/apis_client.go#L183) [¶](#APISClient.ManageLink)
added in v0.2.0
```
func (client *[APISClient](#APISClient)) ManageLink(ctx [context](/context).[Context](/context#Context), familyIdentifier [string](/builtin#string), location [string](/builtin#string), serialNumber [string](/builtin#string), manageLinkRequest [ManageLinkRequest](#ManageLinkRequest), options *[APISClientManageLinkOptions](#APISClientManageLinkOptions)) ([APISClientManageLinkResponse](#APISClientManageLinkResponse), [error](/builtin#error))
```
ManageLink - API for linking a management resource with inventory. If the operation fails it returns an *azcore.ResponseError type.
Generated from API version 2020-12-01-preview
* familyIdentifier - Unique identifier for the product family
* location - The location of the resource
* serialNumber - The serial number of the device
* manageLinkRequest - Links the management resource to the inventory
* options - APISClientManageLinkOptions contains the optional parameters for the APISClient.ManageLink method.
Example [¶](#example-APISClient.ManageLink)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/edgeorderpartner/resource-manager/Microsoft.EdgeOrderPartner/preview/2020-12-01-preview/examples/ManageLink.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armedgeorderpartner.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
_, err = clientFactory.NewAPISClient().ManageLink(ctx, "AzureStackEdge", "westus", "SerialNumber1", armedgeorderpartner.ManageLinkRequest{
ManagementResourceArmID: to.Ptr("/subscriptions/c783ea86-c85c-4175-b76d-3992656af50d/resourceGroups/EdgeTestRG/providers/Microsoft.DataBoxEdge/DataBoxEdgeDevices/TestEdgeDeviceName1"),
Operation: to.Ptr(armedgeorderpartner.ManageLinkOperationLink),
TenantID: to.Ptr("a783ea86-c85c-4175-b76d-3992656af50d"),
}, nil)
if err != nil {
log.Fatalf("failed to finish the request: %v", err)
}
}
```
```
Output:
```
####
func (*APISClient) [NewListOperationsPartnerPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/apis_client.go#L52) [¶](#APISClient.NewListOperationsPartnerPager)
added in v0.4.0
```
func (client *[APISClient](#APISClient)) NewListOperationsPartnerPager(options *[APISClientListOperationsPartnerOptions](#APISClientListOperationsPartnerOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[APISClientListOperationsPartnerResponse](#APISClientListOperationsPartnerResponse)]
```
NewListOperationsPartnerPager - This method gets all the operations that are exposed to the customer.
Generated from API version 2020-12-01-preview
* options - APISClientListOperationsPartnerOptions contains the optional parameters for the APISClient.NewListOperationsPartnerPager method.
Example [¶](#example-APISClient.NewListOperationsPartnerPager)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/edgeorderpartner/resource-manager/Microsoft.EdgeOrderPartner/preview/2020-12-01-preview/examples/ListOperationsPartner.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armedgeorderpartner.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
pager := clientFactory.NewAPISClient().NewListOperationsPartnerPager(nil)
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
log.Fatalf("failed to advance page: %v", err)
}
for _, v := range page.Value {
// You could use page here. We use blank identifier for just demo purposes.
_ = v
}
// If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// page.OperationListResult = armedgeorderpartner.OperationListResult{
// Value: []*armedgeorderpartner.Operation{
// {
// Name: to.Ptr("Microsoft.EdgeOrderPartner/operations/read"),
// Display: &armedgeorderpartner.OperationDisplay{
// Description: to.Ptr("List or get the Operations"),
// Operation: to.Ptr("List or Get Operations"),
// Provider: to.Ptr("Edge Ordering"),
// Resource: to.Ptr("Operations"),
// },
// IsDataAction: to.Ptr(false),
// Origin: to.Ptr(armedgeorderpartner.OriginUser),
// },
// {
// Name: to.Ptr("Microsoft.EdgeOrderPartner/searchInventories/action"),
// Display: &armedgeorderpartner.OperationDisplay{
// Provider: to.Ptr("Edge Ordering"),
// Resource: to.Ptr("ArmApiRes_Microsoft.EdgeOrderPartner"),
// },
// IsDataAction: to.Ptr(true),
// Origin: to.Ptr(armedgeorderpartner.OriginUser),
// },
// {
// Name: to.Ptr("Microsoft.EdgeOrderPartner/locations/productFamilies/inventories/manageLink/action"),
// Display: &armedgeorderpartner.OperationDisplay{
// Provider: to.Ptr("Edge Ordering"),
// Resource: to.Ptr("ArmApiRes_inventories"),
// },
// IsDataAction: to.Ptr(true),
// Origin: to.Ptr(armedgeorderpartner.OriginUser),
// },
// {
// Name: to.Ptr("Microsoft.EdgeOrderPartner/locations/productFamilies/inventories/manageInventoryMetadata/action"),
// Display: &armedgeorderpartner.OperationDisplay{
// Provider: to.Ptr("Edge Ordering"),
// Resource: to.Ptr("ArmApiRes_inventories"),
// },
// IsDataAction: to.Ptr(true),
// Origin: to.Ptr(armedgeorderpartner.OriginUser),
// }},
// }
}
}
```
```
Output:
```
####
func (*APISClient) [NewSearchInventoriesPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/apis_client.go#L234) [¶](#APISClient.NewSearchInventoriesPager)
added in v0.4.0
```
func (client *[APISClient](#APISClient)) NewSearchInventoriesPager(searchInventoriesRequest [SearchInventoriesRequest](#SearchInventoriesRequest), options *[APISClientSearchInventoriesOptions](#APISClientSearchInventoriesOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[APISClientSearchInventoriesResponse](#APISClientSearchInventoriesResponse)]
```
NewSearchInventoriesPager - API for searching inventories
Generated from API version 2020-12-01-preview
* searchInventoriesRequest - Searches inventories with the given filters and returns in the form of a list
* options - APISClientSearchInventoriesOptions contains the optional parameters for the APISClient.NewSearchInventoriesPager method.
Example (SearchInventories) [¶](#example-APISClient.NewSearchInventoriesPager-SearchInventories)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/edgeorderpartner/resource-manager/Microsoft.EdgeOrderPartner/preview/2020-12-01-preview/examples/SearchInventories.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armedgeorderpartner.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
pager := clientFactory.NewAPISClient().NewSearchInventoriesPager(armedgeorderpartner.SearchInventoriesRequest{
FamilyIdentifier: to.Ptr("AzureStackEdge"),
SerialNumber: to.Ptr("SerialNumber1"),
}, nil)
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
log.Fatalf("failed to advance page: %v", err)
}
for _, v := range page.Value {
// You could use page here. We use blank identifier for just demo purposes.
_ = v
}
// If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// page.PartnerInventoryList = armedgeorderpartner.PartnerInventoryList{
// Value: []*armedgeorderpartner.PartnerInventory{
// {
// Properties: &armedgeorderpartner.InventoryProperties{
// Configuration: &armedgeorderpartner.ConfigurationData{
// ConfigurationIdentifier: to.Ptr("EdgeP_Base"),
// ConfigurationIdentifierOnDevice: to.Ptr("EdgeP_High"),
// FamilyIdentifier: to.Ptr("AzureStackEdge"),
// ProductIdentifier: to.Ptr("AzureStackEdgeProGPU"),
// ProductLineIdentifier: to.Ptr("AzureStackEdgePL"),
// },
// Inventory: &armedgeorderpartner.InventoryData{
// Location: to.Ptr("Rack"),
// RegistrationAllowed: to.Ptr(true),
// Status: to.Ptr("Healthy"),
// },
// Location: to.Ptr("westus"),
// ManagementResource: &armedgeorderpartner.ManagementResourceData{
// ArmID: to.Ptr("/subscriptions/c783ea86-c85c-4175-b76d-3992656af50d/resourceGroups/EdgeTestRG/providers/Microsoft.DataBoxEdge/DataBoxEdgeDevices/TestEdgeDeviceName1"),
// TenantID: to.Ptr("a783ea86-c85c-4175-b76d-3992656af50d"),
// },
// OrderItem: &armedgeorderpartner.OrderItemData{
// ArmID: to.Ptr("/subscriptions/b783ea86-c85c-4175-b76d-3992656af50d/resourceGroups/TestRG/providers/Microsoft.EdgeOrder/orders/TestOrderName1"),
// OrderItemType: to.Ptr(armedgeorderpartner.OrderItemTypeRental),
// },
// SerialNumber: to.Ptr("SerialNumber1"),
// },
// }},
// }
}
}
```
```
Output:
```
Example (SearchInventoriesDetails) [¶](#example-APISClient.NewSearchInventoriesPager-SearchInventoriesDetails)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/edgeorderpartner/resource-manager/Microsoft.EdgeOrderPartner/preview/2020-12-01-preview/examples/SearchInventoriesDetails.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armedgeorderpartner.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
pager := clientFactory.NewAPISClient().NewSearchInventoriesPager(armedgeorderpartner.SearchInventoriesRequest{
FamilyIdentifier: to.Ptr("AzureStackEdge"),
SerialNumber: to.Ptr("SerialNumber1"),
}, nil)
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
log.Fatalf("failed to advance page: %v", err)
}
for _, v := range page.Value {
// You could use page here. We use blank identifier for just demo purposes.
_ = v
}
// If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// page.PartnerInventoryList = armedgeorderpartner.PartnerInventoryList{
// Value: []*armedgeorderpartner.PartnerInventory{
// {
// Properties: &armedgeorderpartner.InventoryProperties{
// Configuration: &armedgeorderpartner.ConfigurationData{
// ConfigurationIdentifier: to.Ptr("EdgeP_Base"),
// ConfigurationIdentifierOnDevice: to.Ptr("EdgeP_High"),
// FamilyIdentifier: to.Ptr("AzureStackEdge"),
// ProductIdentifier: to.Ptr("AzureStackEdgeProGPU"),
// ProductLineIdentifier: to.Ptr("AzureStackEdgePL"),
// },
// Inventory: &armedgeorderpartner.InventoryData{
// Location: to.Ptr("Rack"),
// RegistrationAllowed: to.Ptr(true),
// Status: to.Ptr("Healthy"),
// },
// Location: to.Ptr("westus"),
// ManagementResource: &armedgeorderpartner.ManagementResourceData{
// ArmID: to.Ptr("/subscriptions/c783ea86-c85c-4175-b76d-3992656af50d/resourceGroups/EdgeTestRG/providers/Microsoft.DataBoxEdge/DataBoxEdgeDevices/TestEdgeDeviceName1"),
// TenantID: to.Ptr("a783ea86-c85c-4175-b76d-3992656af50d"),
// },
// OrderItem: &armedgeorderpartner.OrderItemData{
// ArmID: to.Ptr("/subscriptions/b783ea86-c85c-4175-b76d-3992656af50d/resourceGroups/TestRG/providers/Microsoft.EdgeOrder/orders/TestOrderName1"),
// OrderItemType: to.Ptr(armedgeorderpartner.OrderItemTypeRental),
// },
// SerialNumber: to.Ptr("SerialNumber1"),
// Details: &armedgeorderpartner.InventoryAdditionalDetails{
// Billing: &armedgeorderpartner.BillingDetails{
// BillingType: to.Ptr("Pav2"),
// Status: to.Ptr("InProgress"),
// },
// Configuration: &armedgeorderpartner.ConfigurationDetails{
// Specifications: []*armedgeorderpartner.SpecificationDetails{
// {
// Name: to.Ptr("Cores"),
// Value: to.Ptr("24"),
// },
// {
// Name: to.Ptr("Memory"),
// Value: to.Ptr("128 GB"),
// },
// {
// Name: to.Ptr("Storage"),
// Value: to.Ptr("~8 TB"),
// }},
// },
// Inventory: &armedgeorderpartner.AdditionalInventoryDetails{
// AdditionalData: map[string]*string{
// "ManuacturingYear": to.Ptr("2020"),
// "SourceCountry": to.Ptr("USA"),
// },
// },
// InventoryMetadata: to.Ptr("This is currently in Japan"),
// InventorySecrets: map[string]*string{
// "PublicCert": to.Ptr("<PublicCert>"),
// },
// OrderItem: &armedgeorderpartner.AdditionalOrderItemDetails{
// Status: &armedgeorderpartner.StageDetails{
// DisplayName: to.Ptr("Delivered - Succeeded"),
// StageName: to.Ptr(armedgeorderpartner.StageNameDelivered),
// StageStatus: to.Ptr(armedgeorderpartner.StageStatusSucceeded),
// StartTime: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2020-08-07T10:50:36.3341513+05:30"); return t}()),
// },
// Subscription: &armedgeorderpartner.SubscriptionDetails{
// ID: to.Ptr("b783ea86-c85c-4175-b76d-3992656af50d"),
// QuotaID: to.Ptr("Internal_2014-09-01"),
// State: to.Ptr("Registered"),
// },
// },
// },
// },
// }},
// }
}
}
```
```
Output:
```
####
type [APISClientBeginManageInventoryMetadataOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L16) [¶](#APISClientBeginManageInventoryMetadataOptions)
added in v0.2.0
```
type APISClientBeginManageInventoryMetadataOptions struct {
// Resumes the LRO from the provided token.
ResumeToken [string](/builtin#string)
}
```
APISClientBeginManageInventoryMetadataOptions contains the optional parameters for the APISClient.BeginManageInventoryMetadata method.
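A rough sketch of how ResumeToken might be used to rehydrate the long-running operation later, assuming the same request values as the BeginManageInventoryMetadata example above; the token is obtained from the poller returned by the original call:
```
package main
import (
	"context"
	"log"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner"
)
func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armedgeorderpartner.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	// Request values are illustrative and mirror the earlier example.
	req := armedgeorderpartner.ManageInventoryMetadataRequest{
		ConfigurationOnDevice: &armedgeorderpartner.ConfigurationOnDevice{
			ConfigurationIdentifier: to.Ptr("EdgeP_High"),
		},
	}
	// Start the LRO and capture a resume token before the process exits.
	poller, err := clientFactory.NewAPISClient().BeginManageInventoryMetadata(ctx, "AzureStackEdge", "westus", "SerialNumber1", req, nil)
	if err != nil {
		log.Fatalf("failed to start the operation: %v", err)
	}
	token, err := poller.ResumeToken()
	if err != nil {
		log.Fatalf("failed to get a resume token: %v", err)
	}
	// Later (for example in another process), resume polling from the saved token.
	resumed, err := clientFactory.NewAPISClient().BeginManageInventoryMetadata(ctx, "AzureStackEdge", "westus", "SerialNumber1", req,
		&armedgeorderpartner.APISClientBeginManageInventoryMetadataOptions{ResumeToken: token})
	if err != nil {
		log.Fatalf("failed to resume the operation: %v", err)
	}
	if _, err = resumed.PollUntilDone(ctx, nil); err != nil {
		log.Fatalf("failed to poll the result: %v", err)
	}
}
```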
####
type [APISClientListOperationsPartnerOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L23) [¶](#APISClientListOperationsPartnerOptions)
added in v0.2.0
```
type APISClientListOperationsPartnerOptions struct {
}
```
APISClientListOperationsPartnerOptions contains the optional parameters for the APISClient.NewListOperationsPartnerPager method.
####
type [APISClientListOperationsPartnerResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/response_types.go#L13) [¶](#APISClientListOperationsPartnerResponse)
added in v0.2.0
```
type APISClientListOperationsPartnerResponse struct {
[OperationListResult](#OperationListResult)
}
```
APISClientListOperationsPartnerResponse contains the response from method APISClient.NewListOperationsPartnerPager.
####
type [APISClientManageInventoryMetadataResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/response_types.go#L18) [¶](#APISClientManageInventoryMetadataResponse)
added in v0.2.0
```
type APISClientManageInventoryMetadataResponse struct {
}
```
APISClientManageInventoryMetadataResponse contains the response from method APISClient.BeginManageInventoryMetadata.
####
type [APISClientManageLinkOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L28) [¶](#APISClientManageLinkOptions)
added in v0.2.0
```
type APISClientManageLinkOptions struct {
}
```
APISClientManageLinkOptions contains the optional parameters for the APISClient.ManageLink method.
####
type [APISClientManageLinkResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/response_types.go#L23) [¶](#APISClientManageLinkResponse)
added in v0.2.0
```
type APISClientManageLinkResponse struct {
}
```
APISClientManageLinkResponse contains the response from method APISClient.ManageLink.
####
type [APISClientSearchInventoriesOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L33) [¶](#APISClientSearchInventoriesOptions)
added in v0.2.0
```
type APISClientSearchInventoriesOptions struct {
}
```
APISClientSearchInventoriesOptions contains the optional parameters for the APISClient.NewSearchInventoriesPager method.
####
type [APISClientSearchInventoriesResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/response_types.go#L28) [¶](#APISClientSearchInventoriesResponse)
added in v0.2.0
```
type APISClientSearchInventoriesResponse struct {
[PartnerInventoryList](#PartnerInventoryList)
}
```
APISClientSearchInventoriesResponse contains the response from method APISClient.NewSearchInventoriesPager.
####
type [ActionType](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/constants.go#L18) [¶](#ActionType)
```
type ActionType [string](/builtin#string)
```
ActionType - Enum. Indicates the action type. "Internal" refers to actions that are for internal only APIs.
```
const (
ActionTypeInternal [ActionType](#ActionType) = "Internal"
)
```
####
func [PossibleActionTypeValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/constants.go#L25) [¶](#PossibleActionTypeValues)
```
func PossibleActionTypeValues() [][ActionType](#ActionType)
```
PossibleActionTypeValues returns the possible values for the ActionType const type.
####
type [AdditionalErrorInfo](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L37) [¶](#AdditionalErrorInfo)
```
type AdditionalErrorInfo struct {
// Anything
Info [any](/builtin#any)
Type *[string](/builtin#string)
}
```
####
func (AdditionalErrorInfo) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L20) [¶](#AdditionalErrorInfo.MarshalJSON)
added in v0.6.0
```
func (a [AdditionalErrorInfo](#AdditionalErrorInfo)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type AdditionalErrorInfo.
####
func (*AdditionalErrorInfo) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L28) [¶](#AdditionalErrorInfo.UnmarshalJSON)
added in v0.6.0
```
func (a *[AdditionalErrorInfo](#AdditionalErrorInfo)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type AdditionalErrorInfo.
####
type [AdditionalInventoryDetails](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L44) [¶](#AdditionalInventoryDetails)
```
type AdditionalInventoryDetails struct {
// READ-ONLY; Additional Data
AdditionalData map[[string](/builtin#string)]*[string](/builtin#string)
}
```
AdditionalInventoryDetails - Contains additional data about inventory in dictionary format
####
func (AdditionalInventoryDetails) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L51) [¶](#AdditionalInventoryDetails.MarshalJSON)
```
func (a [AdditionalInventoryDetails](#AdditionalInventoryDetails)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type AdditionalInventoryDetails.
####
func (*AdditionalInventoryDetails) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L58) [¶](#AdditionalInventoryDetails.UnmarshalJSON)
added in v0.6.0
```
func (a *[AdditionalInventoryDetails](#AdditionalInventoryDetails)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type AdditionalInventoryDetails.
####
type [AdditionalOrderItemDetails](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L50) [¶](#AdditionalOrderItemDetails)
```
type AdditionalOrderItemDetails struct {
// READ-ONLY; Order item status
Status *[StageDetails](#StageDetails)
// READ-ONLY; Subscription details
Subscription *[SubscriptionDetails](#SubscriptionDetails)
}
```
AdditionalOrderItemDetails - Contains additional order item details
####
func (AdditionalOrderItemDetails) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L78) [¶](#AdditionalOrderItemDetails.MarshalJSON)
added in v0.6.0
```
func (a [AdditionalOrderItemDetails](#AdditionalOrderItemDetails)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type AdditionalOrderItemDetails.
####
func (*AdditionalOrderItemDetails) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L86) [¶](#AdditionalOrderItemDetails.UnmarshalJSON)
added in v0.6.0
```
func (a *[AdditionalOrderItemDetails](#AdditionalOrderItemDetails)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type AdditionalOrderItemDetails.
####
type [BillingDetails](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L59) [¶](#BillingDetails)
```
type BillingDetails struct {
// READ-ONLY; Billing type for the inventory
BillingType *[string](/builtin#string)
// READ-ONLY; Billing status for the inventory
Status *[string](/builtin#string)
}
```
BillingDetails - Contains billing details for the inventory
####
func (BillingDetails) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L109) [¶](#BillingDetails.MarshalJSON)
added in v0.6.0
```
func (b [BillingDetails](#BillingDetails)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type BillingDetails.
####
func (*BillingDetails) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L117) [¶](#BillingDetails.UnmarshalJSON)
added in v0.6.0
```
func (b *[BillingDetails](#BillingDetails)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type BillingDetails.
####
type [ClientFactory](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/client_factory.go#L19) [¶](#ClientFactory)
added in v0.6.0
```
type ClientFactory struct {
// contains filtered or unexported fields
}
```
ClientFactory is a client factory used to create any client in this module.
Don't use this type directly, use NewClientFactory instead.
####
func [NewClientFactory](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/client_factory.go#L30) [¶](#NewClientFactory)
added in v0.6.0
```
func NewClientFactory(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[ClientFactory](#ClientFactory), [error](/builtin#error))
```
NewClientFactory creates a new instance of ClientFactory with the specified values.
The parameter values will be propagated to any client created from this factory.
* subscriptionID - The ID of the target subscription.
* credential - used to authorize requests. Usually a credential from azidentity.
* options - pass nil to accept the default values.
####
func (*ClientFactory) [NewAPISClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/client_factory.go#L41) [¶](#ClientFactory.NewAPISClient)
added in v0.6.0
```
func (c *[ClientFactory](#ClientFactory)) NewAPISClient() *[APISClient](#APISClient)
```
####
type [ConfigurationData](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L68) [¶](#ConfigurationData)
```
type ConfigurationData struct {
// READ-ONLY; Configuration identifier of inventory
ConfigurationIdentifier *[string](/builtin#string)
// READ-ONLY; Configuration identifier on device - this is used in case of any mismatch between actual configuration on inventory
// and configuration stored in service
ConfigurationIdentifierOnDevice *[string](/builtin#string)
// READ-ONLY; Family identifier of inventory
FamilyIdentifier *[string](/builtin#string)
// READ-ONLY; Product identifier of inventory
ProductIdentifier *[string](/builtin#string)
// READ-ONLY; Product Line identifier of inventory
ProductLineIdentifier *[string](/builtin#string)
}
```
ConfigurationData - Contains information about inventory configuration
####
func (ConfigurationData) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L140) [¶](#ConfigurationData.MarshalJSON)
added in v0.6.0
```
func (c [ConfigurationData](#ConfigurationData)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type ConfigurationData.
####
func (*ConfigurationData) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L151) [¶](#ConfigurationData.UnmarshalJSON)
added in v0.6.0
```
func (c *[ConfigurationData](#ConfigurationData)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type ConfigurationData.
####
type [ConfigurationDetails](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L87) [¶](#ConfigurationDetails)
```
type ConfigurationDetails struct {
// READ-ONLY; Collection of specification details about the inventory
Specifications []*[SpecificationDetails](#SpecificationDetails)
}
```
ConfigurationDetails - Contains additional configuration details about inventory
####
func (ConfigurationDetails) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L183) [¶](#ConfigurationDetails.MarshalJSON)
```
func (c [ConfigurationDetails](#ConfigurationDetails)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type ConfigurationDetails.
####
func (*ConfigurationDetails) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L190) [¶](#ConfigurationDetails.UnmarshalJSON)
added in v0.6.0
```
func (c *[ConfigurationDetails](#ConfigurationDetails)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type ConfigurationDetails.
####
type [ConfigurationOnDevice](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L93) [¶](#ConfigurationOnDevice)
```
type ConfigurationOnDevice struct {
// REQUIRED; Configuration identifier on device
ConfigurationIdentifier *[string](/builtin#string)
}
```
ConfigurationOnDevice - Configuration parameters for ManageInventoryMetadata call
####
func (ConfigurationOnDevice) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L210) [¶](#ConfigurationOnDevice.MarshalJSON)
added in v0.6.0
```
func (c [ConfigurationOnDevice](#ConfigurationOnDevice)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type ConfigurationOnDevice.
####
func (*ConfigurationOnDevice) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L217) [¶](#ConfigurationOnDevice.UnmarshalJSON)
added in v0.6.0
```
func (c *[ConfigurationOnDevice](#ConfigurationOnDevice)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type ConfigurationOnDevice.
####
type [ErrorAdditionalInfo](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L99) [¶](#ErrorAdditionalInfo)
```
type ErrorAdditionalInfo struct {
// READ-ONLY; The additional info.
Info [any](/builtin#any)
// READ-ONLY; The additional info type.
Type *[string](/builtin#string)
}
```
ErrorAdditionalInfo - The resource management error additional info.
####
func (ErrorAdditionalInfo) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L237) [¶](#ErrorAdditionalInfo.MarshalJSON)
added in v0.6.0
```
func (e [ErrorAdditionalInfo](#ErrorAdditionalInfo)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type ErrorAdditionalInfo.
####
func (*ErrorAdditionalInfo) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L245) [¶](#ErrorAdditionalInfo.UnmarshalJSON)
added in v0.6.0
```
func (e *[ErrorAdditionalInfo](#ErrorAdditionalInfo)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type ErrorAdditionalInfo.
####
type [ErrorDetail](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L108) [¶](#ErrorDetail)
```
type ErrorDetail struct {
// READ-ONLY; The error additional info.
AdditionalInfo []*[ErrorAdditionalInfo](#ErrorAdditionalInfo)
// READ-ONLY; The error code.
Code *[string](/builtin#string)
// READ-ONLY; The error details.
Details []*[ErrorDetail](#ErrorDetail)
// READ-ONLY; The error message.
Message *[string](/builtin#string)
// READ-ONLY; The error target.
Target *[string](/builtin#string)
}
```
ErrorDetail - The error detail.
####
func (ErrorDetail) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L268) [¶](#ErrorDetail.MarshalJSON)
```
func (e [ErrorDetail](#ErrorDetail)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type ErrorDetail.
####
func (*ErrorDetail) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L279) [¶](#ErrorDetail.UnmarshalJSON)
added in v0.6.0
```
func (e *[ErrorDetail](#ErrorDetail)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type ErrorDetail.
####
type [ErrorResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L127) [¶](#ErrorResponse)
```
type ErrorResponse struct {
// The error object.
Error *[ErrorDetail](#ErrorDetail)
}
```
ErrorResponse - Common error response for all Azure Resource Manager APIs to return error details for failed operations.
(This also follows the OData error response format.)
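Client methods surface failed operations as an *azcore.ResponseError that carries this error body. A minimal sketch of inspecting such a failure, assuming a client has already been created (the pager call is only illustrative):
```
package main
import (
	"context"
	"errors"
	"log"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner"
)
func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	client, err := armedgeorderpartner.NewAPISClient("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	pager := client.NewListOperationsPartnerPager(nil)
	for pager.More() {
		_, err := pager.NextPage(context.Background())
		if err != nil {
			// Unwrap the ARM error payload carried by the HTTP response.
			var respErr *azcore.ResponseError
			if errors.As(err, &respErr) {
				log.Printf("request failed: code=%s status=%d", respErr.ErrorCode, respErr.StatusCode)
			}
			log.Fatal(err)
		}
	}
}
```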
####
func (ErrorResponse) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L311) [¶](#ErrorResponse.MarshalJSON)
added in v0.6.0
```
func (e [ErrorResponse](#ErrorResponse)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type ErrorResponse.
####
func (*ErrorResponse) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L318) [¶](#ErrorResponse.UnmarshalJSON)
added in v0.6.0
```
func (e *[ErrorResponse](#ErrorResponse)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type ErrorResponse.
####
type [InventoryAdditionalDetails](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L133) [¶](#InventoryAdditionalDetails)
```
type InventoryAdditionalDetails struct {
// Represents additional details about the order item
OrderItem *[AdditionalOrderItemDetails](#AdditionalOrderItemDetails)
// READ-ONLY; Represents additional details about billing for the inventory
Billing *[BillingDetails](#BillingDetails)
// READ-ONLY; Represents additional details about the configuration
Configuration *[ConfigurationDetails](#ConfigurationDetails)
// READ-ONLY; Represents additional data about the inventory
Inventory *[AdditionalInventoryDetails](#AdditionalInventoryDetails)
// READ-ONLY; Contains inventory metadata
InventoryMetadata *[string](/builtin#string)
// READ-ONLY; Represents secrets on the inventory
InventorySecrets map[[string](/builtin#string)]*[string](/builtin#string)
}
```
InventoryAdditionalDetails - Represents additional details about the partner inventory
####
func (InventoryAdditionalDetails) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L338) [¶](#InventoryAdditionalDetails.MarshalJSON)
```
func (i [InventoryAdditionalDetails](#InventoryAdditionalDetails)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type InventoryAdditionalDetails.
####
func (*InventoryAdditionalDetails) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L350) [¶](#InventoryAdditionalDetails.UnmarshalJSON)
added in v0.6.0
```
func (i *[InventoryAdditionalDetails](#InventoryAdditionalDetails)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type InventoryAdditionalDetails.
####
type [InventoryData](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L154) [¶](#InventoryData)
```
type InventoryData struct {
// READ-ONLY; Inventory location
Location *[string](/builtin#string)
// READ-ONLY; Boolean flag to indicate if registration is allowed
RegistrationAllowed *[bool](/builtin#bool)
// READ-ONLY; Inventory status
Status *[string](/builtin#string)
}
```
InventoryData - Contains basic information about inventory
####
func (InventoryData) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L385) [¶](#InventoryData.MarshalJSON)
added in v0.6.0
```
func (i [InventoryData](#InventoryData)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type InventoryData.
####
func (*InventoryData) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L394) [¶](#InventoryData.UnmarshalJSON)
added in v0.6.0
```
func (i *[InventoryData](#InventoryData)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type InventoryData.
####
type [InventoryProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L166) [¶](#InventoryProperties)
```
type InventoryProperties struct {
// READ-ONLY; Represents basic configuration data.
Configuration *[ConfigurationData](#ConfigurationData)
// READ-ONLY; Represents additional details of inventory
Details *[InventoryAdditionalDetails](#InventoryAdditionalDetails)
// READ-ONLY; Represents basic inventory data.
Inventory *[InventoryData](#InventoryData)
// READ-ONLY; Location of inventory
Location *[string](/builtin#string)
// READ-ONLY; Represents management resource data associated with inventory.
ManagementResource *[ManagementResourceData](#ManagementResourceData)
// READ-ONLY; Represents basic order item data.
OrderItem *[OrderItemData](#OrderItemData)
// READ-ONLY; Serial number of the device.
SerialNumber *[string](/builtin#string)
}
```
InventoryProperties - Represents inventory properties
####
func (InventoryProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L420) [¶](#InventoryProperties.MarshalJSON)
added in v0.6.0
```
func (i [InventoryProperties](#InventoryProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type InventoryProperties.
####
func (*InventoryProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L433) [¶](#InventoryProperties.UnmarshalJSON)
added in v0.6.0
```
func (i *[InventoryProperties](#InventoryProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type InventoryProperties.
####
type [ManageInventoryMetadataRequest](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L190) [¶](#ManageInventoryMetadataRequest)
```
type ManageInventoryMetadataRequest struct {
// REQUIRED; Inventory metadata to be updated
InventoryMetadata *[string](/builtin#string)
// Inventory configuration to be updated
ConfigurationOnDevice *[ConfigurationOnDevice](#ConfigurationOnDevice)
}
```
ManageInventoryMetadataRequest - Request body for ManageInventoryMetadata call
####
func (ManageInventoryMetadataRequest) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L471) [¶](#ManageInventoryMetadataRequest.MarshalJSON)
added in v0.6.0
```
func (m [ManageInventoryMetadataRequest](#ManageInventoryMetadataRequest)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type ManageInventoryMetadataRequest.
####
func (*ManageInventoryMetadataRequest) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L479) [¶](#ManageInventoryMetadataRequest.UnmarshalJSON)
added in v0.6.0
```
func (m *[ManageInventoryMetadataRequest](#ManageInventoryMetadataRequest)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type ManageInventoryMetadataRequest.
####
type [ManageLinkOperation](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/constants.go#L32) [¶](#ManageLinkOperation)
```
type ManageLinkOperation [string](/builtin#string)
```
ManageLinkOperation - Operation to be performed - Link, Unlink, Relink
```
const (
// ManageLinkOperationLink - Link.
ManageLinkOperationLink [ManageLinkOperation](#ManageLinkOperation) = "Link"
// ManageLinkOperationUnlink - Unlink.
ManageLinkOperationUnlink [ManageLinkOperation](#ManageLinkOperation) = "Unlink"
// ManageLinkOperationRelink - Relink.
ManageLinkOperationRelink [ManageLinkOperation](#ManageLinkOperation) = "Relink"
)
```
####
func [PossibleManageLinkOperationValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/constants.go#L44) [¶](#PossibleManageLinkOperationValues)
```
func PossibleManageLinkOperationValues() [][ManageLinkOperation](#ManageLinkOperation)
```
PossibleManageLinkOperationValues returns the possible values for the ManageLinkOperation const type.
####
type [ManageLinkRequest](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L199) [¶](#ManageLinkRequest)
```
type ManageLinkRequest struct {
// REQUIRED; Arm Id of the management resource to which inventory is to be linked. For unlink operation, enter empty string
ManagementResourceArmID *[string](/builtin#string)
// REQUIRED; Operation to be performed - Link, Unlink, Relink
Operation *[ManageLinkOperation](#ManageLinkOperation)
// REQUIRED; Tenant ID of management resource associated with inventory
TenantID *[string](/builtin#string)
}
```
ManageLinkRequest - Request body for ManageLink call
####
func (ManageLinkRequest) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L502) [¶](#ManageLinkRequest.MarshalJSON)
added in v0.6.0
```
func (m [ManageLinkRequest](#ManageLinkRequest)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type ManageLinkRequest.
####
func (*ManageLinkRequest) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L511) [¶](#ManageLinkRequest.UnmarshalJSON)
added in v0.6.0
```
func (m *[ManageLinkRequest](#ManageLinkRequest)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type ManageLinkRequest.
####
type [ManagementResourceData](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L211) [¶](#ManagementResourceData)
```
type ManagementResourceData struct {
// READ-ONLY; Arm ID of management resource associated with inventory
ArmID *[string](/builtin#string)
// READ-ONLY; Tenant ID of management resource associated with inventory
TenantID *[string](/builtin#string)
}
```
ManagementResourceData - Contains information about management resource
####
func (ManagementResourceData) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L537) [¶](#ManagementResourceData.MarshalJSON)
added in v0.6.0
```
func (m [ManagementResourceData](#ManagementResourceData)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type ManagementResourceData.
####
func (*ManagementResourceData) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L545) [¶](#ManagementResourceData.UnmarshalJSON)
added in v0.6.0
```
func (m *[ManagementResourceData](#ManagementResourceData)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type ManagementResourceData.
####
type [Operation](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L220) [¶](#Operation)
```
type Operation struct {
// Localized display information for this particular operation.
Display *[OperationDisplay](#OperationDisplay)
// READ-ONLY; Enum. Indicates the action type. "Internal" refers to actions that are for internal only APIs.
ActionType *[ActionType](#ActionType)
// READ-ONLY; Whether the operation applies to data-plane. This is "true" for data-plane operations and "false" for ARM/control-plane
// operations.
IsDataAction *[bool](/builtin#bool)
// READ-ONLY; The name of the operation, as per Resource-Based Access Control (RBAC). Examples: "Microsoft.Compute/virtualMachines/write",
// "Microsoft.Compute/virtualMachines/capture/action"
Name *[string](/builtin#string)
// READ-ONLY; The intended executor of the operation; as in Resource Based Access Control (RBAC) and audit logs UX. Default
// value is "user,system"
Origin *[Origin](#Origin)
}
```
Operation - Details of a REST API operation, returned from the Resource Provider Operations API
####
func (Operation) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L568) [¶](#Operation.MarshalJSON)
added in v0.6.0
```
func (o [Operation](#Operation)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type Operation.
####
func (*Operation) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L579) [¶](#Operation.UnmarshalJSON)
added in v0.6.0
```
func (o *[Operation](#Operation)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type Operation.
####
type [OperationDisplay](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L241) [¶](#OperationDisplay)
```
type OperationDisplay struct {
// READ-ONLY; The short, localized friendly description of the operation; suitable for tool tips and detailed views.
Description *[string](/builtin#string)
// READ-ONLY; The concise, localized friendly name for the operation; suitable for dropdowns. E.g. "Create or Update Virtual
// Machine", "Restart Virtual Machine".
Operation *[string](/builtin#string)
// READ-ONLY; The localized friendly form of the resource provider name, e.g. "Microsoft Monitoring Insights" or "Microsoft
// Compute".
Provider *[string](/builtin#string)
// READ-ONLY; The localized friendly name of the resource type related to this operation. E.g. "Virtual Machines" or "Job
// Schedule Collections".
Resource *[string](/builtin#string)
}
```
OperationDisplay - Localized display information for this particular operation.
####
func (OperationDisplay) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L611) [¶](#OperationDisplay.MarshalJSON)
added in v0.6.0
```
func (o [OperationDisplay](#OperationDisplay)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type OperationDisplay.
####
func (*OperationDisplay) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L621) [¶](#OperationDisplay.UnmarshalJSON)
added in v0.6.0
```
func (o *[OperationDisplay](#OperationDisplay)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type OperationDisplay.
####
type [OperationListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L260) [¶](#OperationListResult)
```
type OperationListResult struct {
// READ-ONLY; URL to get the next set of operation list results (if there are any).
NextLink *[string](/builtin#string)
// READ-ONLY; List of operations supported by the resource provider
Value []*[Operation](#Operation)
}
```
OperationListResult - A list of REST API operations supported by an Azure Resource Provider. It contains a URL link to get the next set of results.
####
func (OperationListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L650) [¶](#OperationListResult.MarshalJSON)
```
func (o [OperationListResult](#OperationListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type OperationListResult.
####
func (*OperationListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L658) [¶](#OperationListResult.UnmarshalJSON)
added in v0.6.0
```
func (o *[OperationListResult](#OperationListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type OperationListResult.
####
type [OrderItemData](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L269) [¶](#OrderItemData)
```
type OrderItemData struct {
// READ-ONLY; Arm ID of order item
ArmID *[string](/builtin#string)
// READ-ONLY; Order item type - purchase or rental
OrderItemType *[OrderItemType](#OrderItemType)
}
```
OrderItemData - Contains information about the order item to which inventory belongs
####
func (OrderItemData) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L681) [¶](#OrderItemData.MarshalJSON)
added in v0.6.0
```
func (o [OrderItemData](#OrderItemData)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type OrderItemData.
####
func (*OrderItemData) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L689) [¶](#OrderItemData.UnmarshalJSON)
added in v0.6.0
```
func (o *[OrderItemData](#OrderItemData)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type OrderItemData.
####
type [OrderItemType](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/constants.go#L53) [¶](#OrderItemType)
```
type OrderItemType [string](/builtin#string)
```
OrderItemType - Order item type - purchase or rental
```
const (
// OrderItemTypePurchase - Purchase OrderItem.
OrderItemTypePurchase [OrderItemType](#OrderItemType) = "Purchase"
// OrderItemTypeRental - Rental OrderItem.
OrderItemTypeRental [OrderItemType](#OrderItemType) = "Rental"
)
```
####
func [PossibleOrderItemTypeValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/constants.go#L63) [¶](#PossibleOrderItemTypeValues)
```
func PossibleOrderItemTypeValues() [][OrderItemType](#OrderItemType)
```
PossibleOrderItemTypeValues returns the possible values for the OrderItemType const type.
####
type [Origin](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/constants.go#L72) [¶](#Origin)
```
type Origin [string](/builtin#string)
```
Origin - The intended executor of the operation; as in Resource Based Access Control (RBAC) and audit logs UX. Default value is "user,system"
```
const (
OriginSystem [Origin](#Origin) = "system"
OriginUser [Origin](#Origin) = "user"
OriginUserSystem [Origin](#Origin) = "user,system"
)
```
####
func [PossibleOriginValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/constants.go#L81) [¶](#PossibleOriginValues)
```
func PossibleOriginValues() [][Origin](#Origin)
```
PossibleOriginValues returns the possible values for the Origin const type.
####
type [PartnerInventory](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L278) [¶](#PartnerInventory)
```
type PartnerInventory struct {
// READ-ONLY; Inventory properties
Properties *[InventoryProperties](#InventoryProperties)
}
```
PartnerInventory - Represents partner inventory contract
####
func (PartnerInventory) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L712) [¶](#PartnerInventory.MarshalJSON)
added in v0.6.0
```
func (p [PartnerInventory](#PartnerInventory)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type PartnerInventory.
####
func (*PartnerInventory) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L719) [¶](#PartnerInventory.UnmarshalJSON)
added in v0.6.0
```
func (p *[PartnerInventory](#PartnerInventory)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type PartnerInventory.
####
type [PartnerInventoryList](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L284) [¶](#PartnerInventoryList)
```
type PartnerInventoryList struct {
// Link for the next set of partner inventories.
NextLink *[string](/builtin#string)
// READ-ONLY; List of partner inventories
Value []*[PartnerInventory](#PartnerInventory)
}
```
PartnerInventoryList - Represents the list of partner inventories
####
func (PartnerInventoryList) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L739) [¶](#PartnerInventoryList.MarshalJSON)
```
func (p [PartnerInventoryList](#PartnerInventoryList)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type PartnerInventoryList.
####
func (*PartnerInventoryList) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L747) [¶](#PartnerInventoryList.UnmarshalJSON)
added in v0.6.0
```
func (p *[PartnerInventoryList](#PartnerInventoryList)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type PartnerInventoryList.
####
type [SearchInventoriesRequest](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L293) [¶](#SearchInventoriesRequest)
```
type SearchInventoriesRequest struct {
// REQUIRED; Family identifier for inventory
FamilyIdentifier *[string](/builtin#string)
// REQUIRED; Serial number of the inventory
SerialNumber *[string](/builtin#string)
}
```
SearchInventoriesRequest - Request body for SearchInventories call
####
func (SearchInventoriesRequest) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L770) [¶](#SearchInventoriesRequest.MarshalJSON)
added in v0.6.0
```
func (s [SearchInventoriesRequest](#SearchInventoriesRequest)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type SearchInventoriesRequest.
####
func (*SearchInventoriesRequest) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L778) [¶](#SearchInventoriesRequest.UnmarshalJSON)
added in v0.6.0
```
func (s *[SearchInventoriesRequest](#SearchInventoriesRequest)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type SearchInventoriesRequest.
####
type [SpecificationDetails](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L302) [¶](#SpecificationDetails)
```
type SpecificationDetails struct {
// READ-ONLY; Name of the specification property
Name *[string](/builtin#string)
// READ-ONLY; Value of the specification property
Value *[string](/builtin#string)
}
```
SpecificationDetails - Specification details for the inventory
####
func (SpecificationDetails) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L801) [¶](#SpecificationDetails.MarshalJSON)
added in v0.6.0
```
func (s [SpecificationDetails](#SpecificationDetails)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type SpecificationDetails.
####
func (*SpecificationDetails) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L809) [¶](#SpecificationDetails.UnmarshalJSON)
added in v0.6.0
```
func (s *[SpecificationDetails](#SpecificationDetails)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type SpecificationDetails.
####
type [StageDetails](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L311) [¶](#StageDetails)
```
type StageDetails struct {
// READ-ONLY; Display name of the resource stage.
DisplayName *[string](/builtin#string)
// READ-ONLY; Stage name
StageName *[StageName](#StageName)
// READ-ONLY; Stage status.
StageStatus *[StageStatus](#StageStatus)
// READ-ONLY; Stage start time
StartTime *[time](/time).[Time](/time#Time)
}
```
StageDetails - Resource stage details.
####
func (StageDetails) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L832) [¶](#StageDetails.MarshalJSON)
```
func (s [StageDetails](#StageDetails)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type StageDetails.
####
func (*StageDetails) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L842) [¶](#StageDetails.UnmarshalJSON)
```
func (s *[StageDetails](#StageDetails)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type StageDetails.
####
type [StageName](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/constants.go#L90) [¶](#StageName)
```
type StageName [string](/builtin#string)
```
StageName - Stage name
```
const (
// StageNameDeviceOrdered - An order has been created.
StageNameDeviceOrdered [StageName](#StageName) = "DeviceOrdered"
// StageNameDevicePrepared - A device has been prepared for the order.
StageNameDevicePrepared [StageName](#StageName) = "DevicePrepared"
// StageNamePickedUp - Device has been picked up from user and in transit to Azure datacenter.
StageNamePickedUp [StageName](#StageName) = "PickedUp"
// StageNameAtAzureDC - Device has been received at Azure datacenter from the user.
StageNameAtAzureDC [StageName](#StageName) = "AtAzureDC"
// StageNameDataCopy - Data copy from the device at Azure datacenter.
StageNameDataCopy [StageName](#StageName) = "DataCopy"
// StageNameCompleted - Order has completed.
StageNameCompleted [StageName](#StageName) = "Completed"
// StageNameCompletedWithErrors - Order has completed with errors.
StageNameCompletedWithErrors [StageName](#StageName) = "CompletedWithErrors"
// StageNameCancelled - Order has been cancelled.
StageNameCancelled [StageName](#StageName) = "Cancelled"
// StageNameAborted - Order has been aborted.
StageNameAborted [StageName](#StageName) = "Aborted"
StageNameCurrent [StageName](#StageName) = "Current"
// StageNameCompletedWithWarnings - Order has completed with warnings.
StageNameCompletedWithWarnings [StageName](#StageName) = "CompletedWithWarnings"
// StageNameReadyToDispatchFromAzureDC - Device is ready to be handed to customer from Azure DC.
StageNameReadyToDispatchFromAzureDC [StageName](#StageName) = "ReadyToDispatchFromAzureDC"
// StageNameReadyToReceiveAtAzureDC - Device can be dropped off at Azure DC.
StageNameReadyToReceiveAtAzureDC [StageName](#StageName) = "ReadyToReceiveAtAzureDC"
// StageNamePlaced - Currently in draft mode and can still be cancelled
StageNamePlaced [StageName](#StageName) = "Placed"
// StageNameInReview - Order is currently in draft mode and can still be cancelled
StageNameInReview [StageName](#StageName) = "InReview"
// StageNameConfirmed - Order is confirmed
StageNameConfirmed [StageName](#StageName) = "Confirmed"
// StageNameReadyForDispatch - Order is ready for dispatch
StageNameReadyForDispatch [StageName](#StageName) = "ReadyForDispatch"
// StageNameShipped - Order is in transit to customer
StageNameShipped [StageName](#StageName) = "Shipped"
// StageNameDelivered - Order is delivered to customer
StageNameDelivered [StageName](#StageName) = "Delivered"
// StageNameInUse - Order is in use at customer site
StageNameInUse [StageName](#StageName) = "InUse"
)
```
####
func [PossibleStageNameValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/constants.go#L135) [¶](#PossibleStageNameValues)
```
func PossibleStageNameValues() [][StageName](#StageName)
```
PossibleStageNameValues returns the possible values for the StageName const type.
####
type [StageStatus](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/constants.go#L161) [¶](#StageStatus)
```
type StageStatus [string](/builtin#string)
```
StageStatus - Stage status.
```
const (
// StageStatusNone - No status available yet.
StageStatusNone [StageStatus](#StageStatus) = "None"
// StageStatusInProgress - Stage is in progress.
StageStatusInProgress [StageStatus](#StageStatus) = "InProgress"
// StageStatusSucceeded - Stage has succeeded.
StageStatusSucceeded [StageStatus](#StageStatus) = "Succeeded"
// StageStatusFailed - Stage has failed.
StageStatusFailed [StageStatus](#StageStatus) = "Failed"
// StageStatusCancelled - Stage has been cancelled.
StageStatusCancelled [StageStatus](#StageStatus) = "Cancelled"
// StageStatusCancelling - Stage is cancelling.
StageStatusCancelling [StageStatus](#StageStatus) = "Cancelling"
)
```
####
func [PossibleStageStatusValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/constants.go#L179) [¶](#PossibleStageStatusValues)
```
func PossibleStageStatusValues() [][StageStatus](#StageStatus)
```
PossibleStageStatusValues returns the possible values for the StageStatus const type.
####
type [SubscriptionDetails](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models.go#L326) [¶](#SubscriptionDetails)
```
type SubscriptionDetails struct {
// READ-ONLY; Subscription Id
ID *[string](/builtin#string)
// READ-ONLY; Subscription QuotaId
QuotaID *[string](/builtin#string)
// READ-ONLY; Subscription State
State *[string](/builtin#string)
}
```
SubscriptionDetails - Contains subscription details
####
func (SubscriptionDetails) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L871) [¶](#SubscriptionDetails.MarshalJSON)
added in v0.6.0
```
func (s [SubscriptionDetails](#SubscriptionDetails)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type SubscriptionDetails.
####
func (*SubscriptionDetails) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/v0.6.1/sdk/resourcemanager/edgeorderpartner/armedgeorderpartner/models_serde.go#L880) [¶](#SubscriptionDetails.UnmarshalJSON)
added in v0.6.0
```
func (s *[SubscriptionDetails](#SubscriptionDetails)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type SubscriptionDetails. |
s2dv | cran | R | Package ‘s2dv’
June 4, 2023
Title A Set of Common Tools for Seasonal to Decadal Verification
Version 1.4.1
Description The advanced version of package 's2dverification'. It is
intended for 'seasonal to decadal' (s2d) climate forecast verification, but
it can also be used in other kinds of forecasts or general climate analysis.
This package is specially designed for the comparison between the experimental
and observational datasets. The functionality of the included functions covers
from data retrieval, data post-processing, skill scores against observation,
to visualization. Compared to 's2dverification', 's2dv' is more compatible
with the package 'startR', able to use multiple cores for computation and
handle multi-dimensional arrays with a higher flexibility. The CDO version used
in development is 1.9.8.
Depends R (>= 3.6.0)
Imports abind, bigmemory, graphics, grDevices, maps, mapproj, methods,
parallel, ClimProjDiags, stats, plyr, ncdf4, NbClust,
multiApply (>= 2.1.1), SpecsVerification (>= 0.5.0), easyNCDF,
easyVerification
Suggests testthat
License GPL-3
URL https://earth.bsc.es/gitlab/es/s2dv/
BugReports https://earth.bsc.es/gitlab/es/s2dv/-/issues
LazyData true
SystemRequirements cdo
Encoding UTF-8
RoxygenNote 7.2.0
NeedsCompilation no
Author BSC-CNS [aut, cph],
<NAME> [aut, cre],
<NAME> [aut],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-06-04 12:20:02 UTC
R topics documented:
AbsBiasS... 4
AC... 5
AM... 8
AnimateMa... 10
An... 13
Ano_CrossVali... 14
Bia... 16
BrierScor... 17
CDORema... 19
Cli... 24
clim.palett... 26
Cluste... 27
ColorBa... 29
Composit... 33
ConfigApplyMatchingEntrie... 35
ConfigEditDefinitio... 36
ConfigEditEntr... 37
ConfigFileOpe... 40
ConfigShowSimilarEntrie... 43
ConfigShowTabl... 45
Consist_Tren... 46
Cor... 48
CRP... 51
CRPS... 52
DiffCor... 54
En... 56
EO... 57
EuroAtlanticT... 59
Filte... 60
GMS... 61
GSA... 64
Histo2Hindcas... 66
InsertDi... 67
LeapYea... 68
Loa... 69
MeanDim... 84
NA... 85
Persistenc... 87
Plot2VarsVsLTim... 89
PlotAC... 92
PlotAn... 94
PlotBoxWhiske... 96
PlotCli... 98
PlotEquiMa... 100
PlotLayou... 107
PlotMatri... 112
PlotSectio... 114
PlotStereoMa... 115
PlotVsLTim... 120
ProbBin... 123
ProjectFiel... 124
RandomWalkTes... 126
RatioPredictableComponent... 128
RatioRM... 129
RatioSDRM... 130
Regressio... 132
REO... 134
Reorde... 135
ResidualCor... 136
RM... 138
RMSS... 140
ROCS... 142
RP... 143
RPS... 145
sampleDepthDat... 147
sampleMa... 148
sampleTimeSerie... 150
Seaso... 151
SignalNoiseRati... 152
Smoothin... 153
Spectru... 154
SPO... 155
Sprea... 157
StatSeasAtlHur... 159
ToyMode... 160
TP... 163
Tren... 165
UltimateBrie... 167
AbsBiasSS Compute the Absolute Mean Bias Skill Score
Description
The Absolute Mean Bias Skill Score is based on the Absolute Mean Error (Wilks, 2011) between the
ensemble mean forecast and the observations. It measures the accuracy of the forecast in comparison
with a reference forecast to assess whether the forecast presents an improvement or a worsening
with respect to that reference. The Absolute Mean Bias Skill Score ranges between minus infinity and 1.
Positive values indicate that the forecast has higher skill than the reference forecast, while negative
values indicate that it has lower skill. Examples of reference forecasts are the climatological forecast
(average of the observations), a previous model version, or another model. It is computed as
AbsBiasSS = 1 - AbsBias_exp / AbsBias_ref. The statistical significance is obtained based on a
Random Walk test at the 95% confidence level (DelSole and Tippett, 2016). If there is more than one
dataset, the result will be computed for each pair of exp and obs data.
Usage
AbsBiasSS(
exp,
obs,
ref = NULL,
time_dim = "sdate",
memb_dim = NULL,
dat_dim = NULL,
na.rm = FALSE,
ncores = NULL
)
Arguments
exp A named numerical array of the forecast with at least time dimension.
obs A named numerical array of the observation with at least time dimension. The
dimensions must be the same as ’exp’ except ’memb_dim’ and ’dat_dim’.
ref A named numerical array of the reference forecast data with at least time
dimension. The dimensions must be the same as 'exp' except 'memb_dim' and
'dat_dim'. If there is only one reference dataset, it should not have dataset
dimension. If there is a corresponding reference for each experiment, the dataset
dimension must have the same length as in 'exp'. If 'ref' is NULL, the
climatological forecast is used as reference forecast. The default value is NULL.
time_dim A character string indicating the name of the time dimension. The default value
is ’sdate’.
memb_dim A character string indicating the name of the member dimension to compute the
ensemble mean; it should be set to NULL if the parameter ’exp’ and ’ref’ are
already the ensemble mean. The default value is NULL.
dat_dim A character string indicating the name of dataset dimension. The length of this
dimension can be different between ’exp’ and ’obs’. The default value is NULL.
na.rm A logical value indicating if NAs should be removed (TRUE) or kept (FALSE)
for computation. The default value is FALSE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
$biasSS A numerical array of BiasSS with dimensions nexp, nobs and the rest of the
dimensions of 'exp' except 'time_dim' and 'memb_dim'.
$sign A logical array of the statistical significance of the BiasSS with the same
dimensions as $biasSS. nexp is the number of experiment (i.e., 'dat_dim' in exp), and
nobs is the number of observation (i.e., ’dat_dim’ in obs). If dat_dim is NULL,
nexp and nobs are omitted.
References
Wilks, 2011; https://doi.org/10.1016/B978-0-12-385022-5.00008-7
DelSole and Tippett, 2016; https://doi.org/10.1175/MWR-D-15-0218.1
Examples
exp <- array(rnorm(1000), dim = c(dat = 1, lat = 3, lon = 5, member = 10, sdate = 50))
ref <- array(rnorm(1000), dim = c(dat = 1, lat = 3, lon = 5, member = 10, sdate = 50))
obs <- array(rnorm(1000), dim = c(dat = 1, lat = 3, lon = 5, sdate = 50))
biasSS1 <- AbsBiasSS(exp = exp, obs = obs, ref = ref, memb_dim = 'member')
biasSS2 <- AbsBiasSS(exp = exp, obs = obs, ref = NULL, memb_dim = 'member')
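# As a rough cross-check of the definition AbsBiasSS = 1 - AbsBias_exp / AbsBias_ref,
# the score at a single grid point can be reproduced by hand, assuming AbsBias is the
# time mean of the absolute difference between the ensemble mean and the observations
# (a minimal sketch, no missing values assumed; indexing follows the arrays defined above):
exp_em <- apply(exp[1, 1, 1, , ], 2, mean) # ensemble mean per start date
ref_em <- apply(ref[1, 1, 1, , ], 2, mean)
obs_ts <- obs[1, 1, 1, ]
abs_bias_exp <- mean(abs(exp_em - obs_ts))
abs_bias_ref <- mean(abs(ref_em - obs_ts))
1 - abs_bias_exp / abs_bias_ref # should be close to biasSS1$biasSS[1, 1, 1]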
ACC Compute the spatial anomaly correlation coefficient between the fore-
cast and corresponding observation
Description
Calculate the spatial anomaly correlation coefficient (ACC) for the ensemble mean of each model
and the corresponding references over a spatial domain. It can return a forecast time series if the
data contain a forecast time dimension, and also the ACC mean over one dimension, e.g., the start
date dimension. The domain of interest can be specified by providing the list of longitudes/latitudes
of the data together with the corners of the domain: lonlatbox = c(lonmin, lonmax, latmin, latmax).
The data will be adjusted to have a spatial mean of zero, then area weighting is applied. The formula
is referenced from Wilks (2011; section 7.6.4; https://doi.org/10.1016/B978-0-12-385022-5.00008-7).
Usage
ACC(
exp,
obs,
dat_dim = "dataset",
lat_dim = "lat",
lon_dim = "lon",
space_dim = c("lat", "lon"),
avg_dim = "sdate",
memb_dim = "member",
lat = NULL,
lon = NULL,
lonlatbox = NULL,
conf = TRUE,
conftype = "parametric",
conf.lev = 0.95,
pval = TRUE,
ncores = NULL
)
Arguments
exp A numeric array of experimental anomalies with named dimensions. The
dimensions must include at least 'lat_dim' and 'lon_dim'.
obs A numeric array of observational anomalies with named dimensions. The
dimensions should be the same as 'exp' except the lengths of 'dat_dim' and 'memb_dim'.
dat_dim A character string indicating the name of dataset (nobs/nexp) dimension. The
default value is ’dataset’. If there is no dataset dimension, set NULL.
lat_dim A character string indicating the name of the latitude dimension of ’exp’ and
’obs’ along which ACC is computed. The default value is ’lat’.
lon_dim A character string indicating the name of the longitude dimension of ’exp’ and
’obs’ along which ACC is computed. The default value is ’lon’.
space_dim A character string vector of 2 indicating the name of the latitude and
longitude dimensions (in order) along which ACC is computed. The default value is
c('lat', 'lon'). This argument has been deprecated. Use 'lat_dim' and 'lon_dim'
instead.
avg_dim A character string indicating the name of the dimension to be averaged, which
is usually the time dimension. If no need to calculate mean ACC, set as NULL.
The default value is ’sdate’.
memb_dim A character string indicating the name of the member dimension. If the data are
not ensemble ones, set as NULL. The default value is ’member’.
lat A vector of the latitudes of the exp/obs grids. It is used for area weighting and
when the domain of interest 'lonlatbox' is specified.
lon A vector of the longitudes of the exp/obs grids. Only required when the domain
of interest 'lonlatbox' is specified. The default value is NULL.
lonlatbox A numeric vector of 4 indicating the corners of the domain of interest: c(lonmin,
lonmax, latmin, latmax). The default value is NULL and the whole data will be
used.
conf A logical value indicating whether to retrieve the confidence intervals or not.
The default value is TRUE.
conftype A character string of "parametric" or "bootstrap". "parametric" provides a
confidence interval for the ACC computed by a Fisher transformation and a
significance level for the ACC from a one-sided student-T distribution. "bootstrap"
provides a confidence interval for the ACC and MACC computed from
bootstrapping on the members with 100 drawings with replacement. To guarantee
the statistical robustness of the result, make sure that your experiment and
observation always have the same number of members. "bootstrap" requires
'memb_dim' to have a value. The default value is 'parametric'.
conf.lev A numeric indicating the confidence level for the regression computation. The
default value is 0.95.
pval A logical value indicating whether to compute the p-value or not. The default
value is TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing the numeric arrays:
acc The ACC with the dimensions c(nexp, nobs, the rest of the dimension except
lat_dim, lon_dim and memb_dim). nexp is the number of experiment (i.e.,
dat_dim in exp), and nobs is the number of observation (i.e., dat_dim in obs). If
dat_dim is NULL, nexp and nobs are omitted.
conf.lower (if conftype = "parametric") or acc_conf.lower (if conftype = "bootstrap")
The lower confidence interval of ACC with the same dimensions as ACC. Only
present if conf = TRUE.
conf.upper (if conftype = "parametric") or acc_conf.upper (if conftype = "bootstrap")
The upper confidence interval of ACC with the same dimensions as ACC. Only
present if conf = TRUE.
p.val The p-value with the same dimensions as ACC. Only present if pval = TRUE and
conftype = "parametric".
macc The mean anomaly correlation coefficient with dimensions c(nexp, nobs, the
rest of the dimensions except lat_dim, lon_dim, memb_dim, and avg_dim). Only
present if 'avg_dim' is not NULL. If dat_dim is NULL, nexp and nobs are omitted.
macc_conf.lower
The lower confidence interval of MACC with the same dimensions as MACC.
Only present if conftype = "bootstrap".
macc_conf.upper
The upper confidence interval of MACC with the same dimensions as MACC.
Only present if conftype = "bootstrap".
References
Joliffe and Stephenson (2012). Forecast Verification: A Practitioner's Guide in Atmospheric
Science. Wiley-Blackwell; Wilks (2011; section 7.6.4;
https://doi.org/10.1016/B978-0-12-385022-5.00008-7).
Examples
sampleData$mod <- Season(sampleData$mod, monini = 11, moninf = 12, monsup = 2)
sampleData$obs <- Season(sampleData$obs, monini = 11, moninf = 12, monsup = 2)
clim <- Clim(sampleData$mod, sampleData$obs)
ano_exp <- Ano(sampleData$mod, clim$clim_exp)
ano_obs <- Ano(sampleData$obs, clim$clim_obs)
acc <- ACC(ano_exp, ano_obs, lat = sampleData$lat)
acc_bootstrap <- ACC(ano_exp, ano_obs, conftype = 'bootstrap', lat = sampleData$lat)
# Combine acc results for PlotACC
res <- array(c(acc$conf.lower, acc$acc, acc$conf.upper, acc$p.val),
dim = c(dim(acc$acc), 4))
res_bootstrap <- array(c(acc$acc_conf.lower, acc$acc, acc$acc_conf.upper, acc$p.val),
dim = c(dim(acc$acc), 4))
PlotACC(res, startDates)
PlotACC(res_bootstrap, startDates)
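# The computation can also be restricted to a sub-domain through 'lonlatbox';
# a sketch of such a call, assuming the sample grid covers the chosen box:
acc_box <- ACC(ano_exp, ano_obs, lat = sampleData$lat, lon = sampleData$lon,
               lonlatbox = c(-10, 30, 30, 45))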
AMV Compute the Atlantic Multidecadal Variability (AMV) index
Description
The Atlantic Multidecadal Variability (AMV), also known as Atlantic Multidecadal Oscillation
(AMO), is a mode of natural variability of the sea surface temperatures (SST) over the North
Atlantic Ocean on multi-decadal time scales. The AMV index is computed as the difference of
weighted-averaged SST anomalies over the North Atlantic region (0ºN-60ºN, 280ºE-360ºE) and the
weighted-averaged SST anomalies over 60ºS-60ºN, 0ºE-360ºE (Trenberth & Dennis, 2005;
Doblas-Reyes et al., 2013). If different members and/or datasets are provided, the climatology
(used to calculate the anomalies) is computed individually for all of them.
Usage
AMV(
data,
data_lats,
data_lons,
type,
lat_dim = "lat",
lon_dim = "lon",
mask = NULL,
monini = 11,
fmonth_dim = "fmonth",
sdate_dim = "sdate",
indices_for_clim = NULL,
year_dim = "year",
month_dim = "month",
na.rm = TRUE,
ncores = NULL
)
Arguments
data A numerical array to be used for the index computation with, at least, the
dimensions: 1) latitude, longitude, start date and forecast month (in case of decadal
predictions), 2) latitude, longitude, year and month (in case of historical
simulations or observations). This data has to be provided, at least, over the whole
region needed to compute the index.
data_lats A numeric vector indicating the latitudes of the data.
data_lons A numeric vector indicating the longitudes of the data.
type A character string indicating the type of data (’dcpp’ for decadal predictions,
’hist’ for historical simulations, or ’obs’ for observations or reanalyses).
lat_dim A character string of the name of the latitude dimension. The default value is
’lat’.
lon_dim A character string of the name of the longitude dimension. The default value is
’lon’.
mask An array of a mask (with 0’s in the grid points that have to be masked) or NULL
(i.e., no mask is used). This parameter allows to remove the values over land
in case the dataset is a combination of surface air temperature over land and
sea surface temperature over the ocean. Also, it can be used to mask those grid
points that are missing in the observational dataset for a fair comparison between
the forecast system and the reference dataset. The default value is NULL.
monini An integer indicating the month in which the forecast system is initialized. Only
used when parameter ’type’ is ’dcpp’. The default value is 11, i.e., initialized in
November.
fmonth_dim A character string indicating the name of the forecast month dimension. Only
used if parameter ’type’ is ’dcpp’. The default value is ’fmonth’.
sdate_dim A character string indicating the name of the start date dimension. Only used if
parameter ’type’ is ’dcpp’. The default value is ’sdate’.
indices_for_clim
A numeric vector of the indices of the years to compute the climatology for
calculating the anomalies, or NULL so the climatology is calculated over the whole
period. If the data are already anomalies, set it to FALSE. The default value is
NULL.
In case parameter 'type' is 'dcpp', 'indices_for_clim' must be relative to the
first forecast year, and the climatology is automatically computed over the
common calendar period for the different forecast years.
year_dim A character string indicating the name of the year dimension. The default value
is 'year'. Only used if parameter 'type' is 'hist' or 'obs'.
month_dim A character string indicating the name of the month dimension. The default
value is ’month’. Only used if parameter ’type’ is ’hist’ or ’obs’.
na.rm A logical value indicating whether to remove NA values. The default value is
TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numerical array with the AMV index with the same dimensions as data except the lat_dim,
lon_dim and fmonth_dim (month_dim) in case of decadal predictions (historical simulations or
observations). In case of decadal predictions, a new dimension ’fyear’ is added.
Examples
## Observations or reanalyses
obs <- array(1:100, dim = c(year = 5, lat = 19, lon = 37, month = 12))
lat <- seq(-90, 90, 10)
lon <- seq(0, 360, 10)
index_obs <- AMV(data = obs, data_lats = lat, data_lons = lon, type = 'obs')
## Historical simulations
hist <- array(1:100, dim = c(year = 5, lat = 19, lon = 37, month = 12, member = 5))
lat <- seq(-90, 90, 10)
lon <- seq(0, 360, 10)
index_hist <- AMV(data = hist, data_lats = lat, data_lons = lon, type = 'hist')
## Decadal predictions
dcpp <- array(1:100, dim = c(sdate = 5, lat = 19, lon = 37, fmonth = 24, member = 5))
lat <- seq(-90, 90, 10)
lon <- seq(0, 360, 10)
index_dcpp <- AMV(data = dcpp, data_lats = lat, data_lons = lon, type = 'dcpp', monini = 1)
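## Masking grid points
# The 'mask' argument takes an array with 0's at the grid points to exclude; a minimal
# sketch with a synthetic mask (illustration only, not a real land-sea mask), assuming
# the mask shares the latitude and longitude dimensions of 'data':
mask <- array(1, dim = c(lat = 19, lon = 37))
mask[1:3, ] <- 0 # e.g., exclude the southernmost latitudes
index_obs_masked <- AMV(data = obs, data_lats = lat, data_lons = lon,
                        type = 'obs', mask = mask)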
AnimateMap Animate Maps of Forecast/Observed Values or Scores Over Forecast
Time
Description
Create animations of maps in an equi-rectangular or stereographic projection, showing the
anomalies, the climatologies, the mean InterQuartile Range, Maximum-Minimum, Standard Deviation,
Median Absolute Deviation, the trends, the RMSE, the correlation or the RMSSS, between modelled
and observed data along the forecast time (lead-time) for all input experiments and input
observational datasets.
Usage
AnimateMap(
var,
lon,
lat,
toptitle = rep("", 11),
sizetit = 1,
units = "",
monini = 1,
freq = 12,
msk95lev = FALSE,
brks = NULL,
cols = NULL,
filled.continents = FALSE,
lonmin = 0,
lonmax = 360,
latmin = -90,
latmax = 90,
intlon = 20,
intlat = 30,
drawleg = TRUE,
subsampleg = 1,
colNA = "white",
equi = TRUE,
fileout = c("output1_animvsltime.gif", "output2_animvsltime.gif",
"output3_animvsltime.gif"),
...
)
Arguments
var Matrix of dimensions (nltime, nlat, nlon) or (nexp/nmod, nltime, nlat, nlon)
or (nexp/nmod, 3/4, nltime, nlat, nlon) or (nexp/nmod, nobs, 3/4, nltime, nlat,
nlon).
lon Vector containing longtitudes (degrees).
lat Vector containing latitudes (degrees).
toptitle c('', '', ...) array of main title for each animation, optional. If RMS, RMSSS,
correlations: first exp with successive obs, then second exp with successive obs,
etc ...
sizetit Multiplicative factor to increase title size, optional.
units Units, optional.
monini Starting month between 1 and 12. Default = 1.
freq 1 = yearly, 12 = monthly, 4 = seasonal ...
msk95lev TRUE/FALSE grid points with dots if 95% significance level reached. Default
= FALSE.
brks Limits of colour levels, optional. For example: seq(min(var), max(var), (max(var)
- min(var)) / 10).
cols Vector of colours of length(brks) - 1, optional.
filled.continents
Continents filled in grey (TRUE) or represented by a black line (FALSE). Default
= TRUE. Filling unavailable if crossing Greenwich and equi = TRUE. Filling
unavailable if square = FALSE and equi = TRUE.
lonmin Westward limit of the domain to plot (> 0 or < 0). Default : 0 degrees.
lonmax Eastward limit of the domain to plot (> 0 or < 0). lonmax > lonmin. Default :
360 degrees.
latmin Southward limit of the domain to plot. Default : -90 degrees.
latmax Northward limit of the domain to plot. Default : 90 degrees.
intlon Interval between longitude ticks on x-axis. Default = 20 degrees.
intlat Interval between latitude ticks on y-axis for equi = TRUE or between latitude
circles for equi = FALSE. Default = 30 degrees.
drawleg Draw a colorbar. Can be FALSE only if square = FALSE or equi = FALSE.
Default = TRUE.
subsampleg Subsampling factor of the interval between ticks on the colorbar. Default = 1 =
every colour level.
colNA Color used to represent NA. Default = ’white’.
equi TRUE/FALSE == cylindrical equidistant/stereographic projection. Default: TRUE.
fileout c('', '', ...) array of output file name for each animation. If RMS, RMSSS,
correlations: first exp with successive obs, then second exp with successive
obs, etc ...
... Arguments to be passed to the method. Only accepts the following graphical
parameters:
adj ann ask bty cex cex.axis cex.lab cex.main cex.sub cin col.axis col.lab col.main
col.sub cra crt csi cxy err family fg fig font font.axis font.lab font.main font.sub
las lheight ljoin lmitre lty lwd mai mar mex mfcol mfrow mfg mgp mkh oma
omd omi page pch plt pty smo srt tck tcl usr xaxp xaxs xaxt xlog xpd yaxp yaxs
yaxt ylbias ylog.
For more information about the parameters see ‘par‘.
Details
Examples of input:
1. Outputs from clim (exp, obs, memb = FALSE): (nmod, nltime, nlat, nlon) or (nobs, nltime,
nlat, nlon)
2. Model output from load/ano/smoothing: (nmod, nmemb, sdate, nltime, nlat, nlon) then passed
through spread(var, posdim = 2, narm = TRUE) & mean1dim(var, posdim = 3, narm = TRUE)
or through trend(mean1dim(var, 2), posTR = 2): (nmod, 3, nltime, nlat, nlon) animates the average
along start dates of IQR/MaxMin/SD/MAD across members or the trends of the ensemble mean
computed across the start dates.
3. model and observed output from load/ano/smoothing: (nmod, nmemb, sdate, nltime, nlat,
nlon) & (nobs, nmemb, sdate, nltime, nlat, nlon) then averaged along members mean1dim(var_exp/var_obs,
posdim = 2): (nmod, sdate, nltime, nlat, nlon) (nobs, sdate, nltime, nlat, nlon) then passed
through corr(exp, obs, posloop = 1, poscor = 2) or RMS(exp, obs, posloop = 1, posRMS = 2):
(nmod, nobs, 3, nltime, nlat, nlon) animates correlations or RMS between each exp & each
obs against leadtime.
Examples
# See ?Load for explanations on the first part of this example
## Not run:
data_path <- system.file('sample_data', package = 's2dv')
expA <- list(name = 'experiment', path = file.path(data_path,
'model/$EXP_NAME$/$STORE_FREQ$_mean/$VAR_NAME$_3hourly',
'$VAR_NAME$_$START_DATE$.nc'))
obsX <- list(name = 'observation', path = file.path(data_path,
'$OBS_NAME$/$STORE_FREQ$_mean/$VAR_NAME$',
'$VAR_NAME$_$YEAR$$MONTH$.nc'))
# Now we are ready to use Load().
startDates <- c('19851101', '19901101', '19951101', '20001101', '20051101')
sampleData <- Load('tos', list(expA), list(obsX), startDates,
output = 'lonlat', latmin = 27, latmax = 48,
lonmin = -12, lonmax = 40)
## End(Not run)
clim <- Clim(sampleData$mod, sampleData$obs, memb = FALSE)
## Not run:
AnimateMap(clim$clim_exp, sampleData$lon, sampleData$lat,
toptitle = "climatology of decadal prediction", sizetit = 1,
units = "degree", brks = seq(270, 300, 3), monini = 11, freq = 12,
msk95lev = FALSE, filled.continents = TRUE, intlon = 10, intlat = 10,
fileout = 'clim_dec.gif')
## End(Not run)
# More examples in s2dverification but are deleted for now
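# Colour levels can also be derived from the data themselves, following the hint in the
# 'brks' description (a sketch; the palette choice below is arbitrary):
brks <- seq(min(clim$clim_exp, na.rm = TRUE), max(clim$clim_exp, na.rm = TRUE),
            length.out = 11)
cols <- rev(grDevices::heat.colors(length(brks) - 1))
## Not run:
AnimateMap(clim$clim_exp, sampleData$lon, sampleData$lat, brks = brks, cols = cols,
           monini = 11, freq = 12, fileout = 'clim_dec_custom.gif')
## End(Not run)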
Ano Compute forecast or observation anomalies
Description
This function computes anomalies from a multidimensional data array and a climatology array.
Usage
Ano(data, clim, ncores = NULL)
Arguments
data A numeric array with named dimensions, representing the model or observational
data from which the anomalies are to be computed. It should contain all the dimensions
in parameter 'clim', and it can have additional dimensions.
clim A numeric array with named dimensions, representing the climatologies to be
subtracted from parameter 'data'. It can be generated by Clim(). Its dimensions
should all be present in parameter 'data' with the same lengths.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
An array with the same dimensions as parameter 'data'.
Examples
# Load sample data as in Load() example:
example(Load)
clim <- Clim(sampleData$mod, sampleData$obs)
ano_exp <- Ano(sampleData$mod, clim$clim_exp)
ano_obs <- Ano(sampleData$obs, clim$clim_obs)
## Not run:
PlotAno(ano_exp, ano_obs, startDates,
toptitle = 'Anomaly', ytitle = c('K', 'K', 'K'),
legends = 'ERSST', biglab = FALSE)
## End(Not run)
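# Ano() works on any pair of named arrays whose dimensions are consistent, not only on
# Load() output; a minimal sketch with synthetic data:
dat <- array(rnorm(60), dim = c(member = 3, sdate = 4, ftime = 5))
clm <- array(rnorm(20), dim = c(sdate = 4, ftime = 5))
ano <- Ano(dat, clm)
dim(ano) # same dimensions as 'dat'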
Ano_CrossValid Compute anomalies in cross-validation mode
Description
Compute the anomalies from the arrays of the experimental and observational data output by sub-
tracting the climatologies computed with a leave-one-out cross validation technique and a per-pair
method (Garcia-Serrano and Doblas-Reyes, CD, 2012). Per-pair climatology means that only the
start dates covered by the whole experimental/observational datasets will be used. In other words,
the start dates which do not have complete values along the ’dat_dim’ dimension of both ’exp’ and
’obs’ are excluded when computing the climatologies.
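A minimal sketch of the leave-one-out idea, assuming a single dataset and member so that each
start date’s climatology is the mean over the remaining start dates (illustrative toy data only, not
the package code):
exp_series <- c(2.1, 1.9, 2.4, 2.0, 2.2)                  # one value per start date
clim_loo <- sapply(seq_along(exp_series),
                   function(i) mean(exp_series[-i]))      # climatology excluding start date i
ano_loo <- exp_series - clim_loo                          # cross-validated anomalies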
Usage
Ano_CrossValid(
exp,
obs,
time_dim = "sdate",
dat_dim = c("dataset", "member"),
memb_dim = "member",
memb = TRUE,
ncores = NULL
)
Arguments
exp A named numeric array of experimental data, with at least dimensions ’time_dim’
and ’dat_dim’.
obs A named numeric array of observational data, same dimensions as parameter
’exp’ except along ’dat_dim’.
time_dim A character string indicating the name of the time dimension. The default value
is ’sdate’.
dat_dim A character vector indicating the name of the dataset and member dimensions.
When calculating the climatology, if data at one startdate (i.e., ’time_dim’) is not
complete along ’dat_dim’, this startdate along ’dat_dim’ will be discarded. If
there is no dataset dimension, it can be NULL. The default value is "c(’dataset’,
’member’)".
memb_dim A character string indicating the name of the member dimension. Only used
when parameter ’memb’ is FALSE. It must be one element in ’dat_dim’. The
default value is ’member’.
memb A logical value indicating whether to subtract the climatology based on the in-
dividual members (TRUE) or the ensemble mean over all members (FALSE)
when calculating the anomalies. The default value is TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list of 2:
$exp A numeric array with the same dimensions as ’exp’. The dimension order may
change.
$obs A numeric array with the same dimensions as ’obs’. The dimension order may
change.
Examples
# Load sample data as in Load() example:
example(Load)
anomalies <- Ano_CrossValid(sampleData$mod, sampleData$obs)
## Not run:
PlotAno(anomalies$exp, anomalies$obs, startDates,
toptitle = paste('anomalies'), ytitle = c('K', 'K', 'K'),
legends = 'ERSST', biglab = FALSE)
## End(Not run)
Bias Compute the Mean Bias
Description
The Mean Bias or Mean Error (Wilks, 2011) is defined as the mean difference between the ensemble
mean forecast and the observations. It is a deterministic metric. Positive values indicate that the
forecasts are on average too high and negative values indicate that the forecasts are on average too
low. It can also compute the Absolute Mean Bias or the bias without the temporal mean. If there is
more than one dataset, the result will be computed for each pair of exp and obs data.
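As a hedged illustration of the definition (hypothetical toy data, not the package internals), the
mean bias at one grid point is the temporal mean of the ensemble-mean forecast minus the
observation:
set.seed(1)
fcst <- array(rnorm(50, mean = 0.3), dim = c(member = 5, sdate = 10))  # toy forecasts
obsv <- rnorm(10)                                                      # toy observations
ens_mean <- apply(fcst, 2, mean)     # ensemble mean per start date
mean_bias <- mean(ens_mean - obsv)   # positive: forecasts too high on average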
Usage
Bias(
exp,
obs,
time_dim = "sdate",
memb_dim = NULL,
dat_dim = NULL,
na.rm = FALSE,
absolute = FALSE,
time_mean = TRUE,
ncores = NULL
)
Arguments
exp A named numerical array of the forecast with at least time dimension.
obs A named numerical array of the observation with at least time dimension. The
dimensions must be the same as ’exp’ except ’memb_dim’ and ’dat_dim’.
time_dim A character string indicating the name of the time dimension. The default value
is ’sdate’.
memb_dim A character string indicating the name of the member dimension to compute the
ensemble mean; it should be set to NULL if the parameter ’exp’ is already the
ensemble mean. The default value is NULL.
dat_dim A character string indicating the name of dataset dimension. The length of this
dimension can be different between ’exp’ and ’obs’. The default value is NULL.
na.rm A logical value indicating if NAs should be removed (TRUE) or kept (FALSE)
for computation. The default value is FALSE.
absolute A logical value indicating whether to compute the absolute bias. The default
value is FALSE.
time_mean A logical value indicating whether to compute the temporal mean of the bias.
The default value is TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numerical array of bias with dimensions c(nexp, nobs, the rest dimensions of ’exp’ except
’time_dim’ (if time_mean = TRUE) and ’memb_dim’). nexp is the number of experiments (i.e., ’dat_dim’
in exp), and nobs is the number of observations (i.e., ’dat_dim’ in obs). If dat_dim is NULL, nexp
and nobs are omitted.
References
Wilks, 2011; https://doi.org/10.1016/B978-0-12-385022-5.00008-7
Examples
exp <- array(rnorm(1000), dim = c(dat = 1, lat = 3, lon = 5, member = 10, sdate = 50))
obs <- array(rnorm(1000), dim = c(dat = 1, lat = 3, lon = 5, sdate = 50))
bias <- Bias(exp = exp, obs = obs, memb_dim = 'member')
BrierScore Compute Brier score, its decomposition, and Brier skill score
Description
Compute the Brier score (BS) and the components of its standard decomposition with the two within-
bin components described in Stephenson et al., (2008). It also returns the bias-corrected decompo-
sition of the BS (Ferro and Fricker, 2012). BSS has the climatology as the reference forecast.
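For orientation, the (undecomposed) Brier score is the mean squared difference between the
forecast probability and the binary outcome; a one-line toy sketch, not the package internals:
p <- runif(20)                  # toy forecast probabilities
o <- rbinom(20, 1, p)           # toy binary observations
bs_by_hand <- mean((p - o)^2)   # the BS as defined; cf. BrierScore(p, o)$bs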
Usage
BrierScore(
exp,
obs,
thresholds = seq(0.1, 0.9, 0.1),
time_dim = "sdate",
dat_dim = NULL,
memb_dim = NULL,
ncores = NULL
)
Arguments
exp A vector or a numeric array with named dimensions. It should contain the pre-
dicted probabilities, within the range [0, 1], if memb_dim doesn’t exist. If it
has memb_dim, the values should be 0 or 1, and the predicted probabilities will
be computed by the ensemble mean. The dimensions must at least have ’time_dim’.
obs A numeric array with named dimensions of the binary observations (0 or 1). The
dimensions must be the same as ’exp’ except memb_dim, which is optional. If
it has ’memb_dim’, then the length must be 1. The length of ’dat_dim’ can be
different from that of ’exp’ if it exists.
thresholds A numeric vector used to bin the forecasts. The default value is seq(0.1, 0.9,
0.1), which means that the bins are [0, 0.1), [0.1, 0.2), ... [0.9, 1].
time_dim A character string indicating the name of dimension along which Brier score is
computed. The default value is ’sdate’.
dat_dim A character string indicating the name of dataset dimension in ’exp’ and ’obs’.
The length of this dimension can be different between ’exp’ and ’obs’. The
default value is NULL.
memb_dim A character string of the name of the member dimension in ’exp’ (and ’obs’,
optional). The function will do the ensemble mean over this dimension. If there
is no member dimension, set NULL. The default value is NULL.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list that contains:
$rel standard reliability
$res standard resolution
$unc standard uncertainty
$bs Brier score
$bs_check_res rel - res + unc
$bss_res (res - rel) / unc
$gres generalized resolution
$bs_check_gres rel - gres + unc
$bss_gres (gres - rel) / unc
$rel_bias_corrected
bias-corrected rel
$gres_bias_corrected
bias-corrected gres
$unc_bias_corrected
bias-corrected unc
$bss_bias_corrected
(gres_bias_corrected - rel_bias_corrected) / unc_bias_corrected
$nk number of forecasts in each bin
$fkbar average probability of each bin
$okbar relative frequency that the observed event occurred
The data type and dimensions of the items depend on if the input ’exp’ and ’obs’ are:
(a) Vectors
(b) Arrays with ’dat_dim’ specified
(c) Arrays with no ’dat_dim’ specified
Items ’rel’, ’res’, ’unc’, ’bs’, ’bs_check_res’, ’bss_res’, ’gres’, ’bs_check_gres’, ’bss_gres’, ’rel_bias_corrected’,
’gres_bias_corrected’, ’unc_bias_corrected’, and ’bss_bias_corrected’ are (a) a number (b) an ar-
ray with dimensions c(nexp, nobs, all the rest dimensions in ’exp’ and ’obs’ except ’time_dim’ and
’memb_dim’) (c) an array with dimensions of ’exp’ and ’obs’ except ’time_dim’ and ’memb_dim’
Items ’nk’, ’fkbar’, and ’okbar’ are (a) a vector whose length is the number of bins determined by ’thresholds’
(b) an array with dimensions c(nexp, nobs, no. of bins, all the rest dimensions in ’exp’ and ’obs’ ex-
cept ’time_dim’ and ’memb_dim’) (c) an array with dimensions c(no. of bin, all the rest dimensions
in ’exp’ and ’obs’ except ’time_dim’ and ’memb_dim’)
References
Wilks (2006) Statistical Methods in the Atmospheric Sciences.
Stephenson et al. (2008). Two extra components in the Brier score decomposition. Weather and
Forecasting, 23: 752-757.
Ferro and Fricker (2012). A bias-corrected decomposition of the BS. Quarterly Journal of the Royal
Meteorological Society, DOI: 10.1002/qj.1924.
Examples
# Inputs are vectors
exp <- runif(10)
obs <- round(exp)
x <- BrierScore(exp, obs)
# Inputs are arrays
example(Load)
clim <- Clim(sampleData$mod, sampleData$obs)
ano_exp <- Ano(sampleData$mod, clim$clim_exp)
ano_obs <- Ano(sampleData$obs, clim$clim_obs)
bins_ano_exp <- ProbBins(ano_exp, thr = c(1/3, 2/3))
bins_ano_obs <- ProbBins(ano_obs, thr = c(1/3, 2/3))
res <- BrierScore(bins_ano_exp, MeanDims(bins_ano_obs, 'member'), memb_dim = 'member')
CDORemap Interpolate arrays with longitude and latitude dimensions using CDO
Description
This function takes as inputs a multidimensional array (optional), a vector or matrix of longitudes, a
vector or matrix of latitudes, a destination grid specification, and the name of a method to be used to
interpolate (one of those available in the ’remap’ utility in CDO). The interpolated array is returned
(if provided) together with the new longitudes and latitudes.
CDORemap() permutes by default the dimensions of the input array (if needed), splits it in chunks
(CDO can work with data arrays of up to 4 dimensions), generates a file with the data of each chunk,
interpolates it with CDO, reads it back into R and merges it into a result array. If no input array is
provided, the longitude and latitude vectors will be transformed only. If the array is already on the
desired destination grid, no transformation is performed (this behaviour works only for lonlat and
gaussian grids).
Any metadata attached to the input data array, longitudes or latitudes will be preserved or accord-
ingly modified.
Usage
CDORemap(
data_array = NULL,
lons,
lats,
grid,
method,
avoid_writes = TRUE,
crop = TRUE,
force_remap = FALSE,
write_dir = tempdir()
)
Arguments
data_array Multidimensional numeric array to be interpolated. If provided, it must have
at least a longitude and a latitude dimension, identified by the array dimen-
sion names. The names for these dimensions must be one of those recognized by
s2dv (can be checked with s2dv:::.KnownLonNames() and s2dv:::.KnownLatNames()).
lons Numeric vector or array of longitudes of the centers of the grid cells. Its size
must match the size of the longitude/latitude dimensions of the input array.
lats Numeric vector or array of latitudes of the centers of the grid cells. Its size must
match the size of the longitude/latitude dimensions of the input array.
grid Character string specifying either a name of a target grid (recognized by CDO;
e.g.: ’r256x128’, ’t106grid’) or a path to another NetCDF file which to read the
target grid from (a single grid must be defined in such file).
method Character string specifying an interpolation method (recognized by CDO; e.g.:
’con’, ’bil’, ’bic’, ’dis’, ’con2’, ’laf’, ’nn’). The following long names are also
supported: ’conservative’, ’bilinear’, ’bicubic’ and ’distance-weighted’.
avoid_writes The step of permutation is needed when the input array has more than 3 dimen-
sions and neither the longitude nor the latitude dimension is in the right-most
position (CDO would not accept it without permuting previously). This step,
executed by default when needed, can be avoided at the price of writing more
intermediate files (which is usually inconvenient) by setting the parameter
avoid_writes = FALSE.
crop Whether to crop the data after interpolation with ’cdo sellonlatbox’ (TRUE) or
to extend interpolated data to the whole world as CDO does by default (FALSE).
The default value is TRUE.
• If crop = TRUE, the longitude and latitude borders to be cropped at are taken
as the limits of the cells at the borders (not the values of ’lons’ and ’lats’,
which are perceived as cell centers), i.e., the resulting array will contain data
that covers the same area as the input array. This is equivalent to specifying
crop = 'preserve', i.e., preserving area. Notice that the longitude range
of returning array will follow the original data ’lons’ instead of the target
grid ’grid’.
• If crop = FALSE, the returning array is not cropped, i.e., a global domain,
and the longitude range will be the same as the target grid ’grid’.
• If crop = 'tight', the borders to be cropped at are taken as the minimum
and maximum cell centers in ’lons’ and ’lats’, i.e., the area covered by the
resulting array may be smaller if interpolating from a coarse grid to a fine
grid.
• The parameter ’crop’ also accepts a numeric vector of customized borders
to be cropped at:
c(western border, eastern border, southern border, northern border).
force_remap Whether to force remapping, even if the input data array is already on the target
grid.
write_dir Path to the directory where to create the intermediate files for CDO to work. By
default, the R session temporary directory is used (tempdir()).
Value
A list with the following components:
’data_array’ The interpolated data array (if an input array is provided at all, NULL other-
wise).
’lons’ The longitudes of the data on the destination grid.
’lats’ The latitudes of the data on the destination grid.
Examples
## Not run:
# Interpolating only vectors of longitudes and latitudes
lon <- seq(0, 360 - 360/50, length.out = 50)
lat <- seq(-90, 90, length.out = 25)
tas2 <- CDORemap(NULL, lon, lat, 't170grid', 'bil', TRUE)
# Minimal array interpolation
tas <- array(1:50, dim = c(25, 50))
names(dim(tas)) <- c('lat', 'lon')
lon <- seq(0, 360 - 360/50, length.out = 50)
lat <- seq(-90, 90, length.out = 25)
tas2 <- CDORemap(tas, lon, lat, 't170grid', 'bil', TRUE)
# Metadata can be attached to the inputs. It will be preserved and
# accordingly modified.
tas <- array(1:50, dim = c(25, 50))
names(dim(tas)) <- c('lat', 'lon')
lon <- seq(0, 360 - 360/50, length.out = 50)
metadata <- list(lon = list(units = 'degrees_east'))
attr(lon, 'variables') <- metadata
lat <- seq(-90, 90, length.out = 25)
metadata <- list(lat = list(units = 'degrees_north'))
attr(lat, 'variables') <- metadata
metadata <- list(tas = list(dim = list(lat = list(len = 25,
vals = lat),
lon = list(len = 50,
vals = lon)
)))
attr(tas, 'variables') <- metadata
tas2 <- CDORemap(tas, lon, lat, 't170grid', 'bil', TRUE)
# Arrays of any number of dimensions in any order can be provided.
num_lats <- 25
num_lons <- 50
tas <- array(1:(10*num_lats*10*num_lons*10),
dim = c(10, num_lats, 10, num_lons, 10))
names(dim(tas)) <- c('a', 'lat', 'b', 'lon', 'c')
lon <- seq(0, 360 - 360/num_lons, length.out = num_lons)
metadata <- list(lon = list(units = 'degrees_east'))
attr(lon, 'variables') <- metadata
lat <- seq(-90, 90, length.out = num_lats)
metadata <- list(lat = list(units = 'degrees_north'))
attr(lat, 'variables') <- metadata
metadata <- list(tas = list(dim = list(a = list(),
lat = list(len = num_lats,
vals = lat),
b = list(),
lon = list(len = num_lons,
vals = lon),
c = list()
)))
attr(tas, 'variables') <- metadata
tas2 <- CDORemap(tas, lon, lat, 't17grid', 'bil', TRUE)
# The step of permutation can be avoided but more intermediate file writes
# will be performed.
tas2 <- CDORemap(tas, lon, lat, 't17grid', 'bil', FALSE)
# If the provided array has the longitude or latitude dimension in the
# right-most position, the same number of file writes will be performed,
# even if avoid_writes = FALSE.
num_lats <- 25
num_lons <- 50
tas <- array(1:(10*num_lats*10*num_lons*10),
dim = c(10, num_lats, 10, num_lons))
names(dim(tas)) <- c('a', 'lat', 'b', 'lon')
lon <- seq(0, 360 - 360/num_lons, length.out = num_lons)
metadata <- list(lon = list(units = 'degrees_east'))
attr(lon, 'variables') <- metadata
lat <- seq(-90, 90, length.out = num_lats)
metadata <- list(lat = list(units = 'degrees_north'))
attr(lat, 'variables') <- metadata
metadata <- list(tas = list(dim = list(a = list(),
lat = list(len = num_lats,
vals = lat),
b = list(),
lon = list(len = num_lons,
vals = lon)
)))
attr(tas, 'variables') <- metadata
tas2 <- CDORemap(tas, lon, lat, 't17grid', 'bil', TRUE)
tas2 <- CDORemap(tas, lon, lat, 't17grid', 'bil', FALSE)
# An example of an interpolation from and onto a rectangular regular grid
num_lats <- 25
num_lons <- 50
tas <- array(1:(1*num_lats*num_lons), dim = c(num_lats, num_lons))
names(dim(tas)) <- c('y', 'x')
lon <- array(seq(0, 360 - 360/num_lons, length.out = num_lons),
dim = c(num_lons, num_lats))
metadata <- list(lon = list(units = 'degrees_east'))
names(dim(lon)) <- c('x', 'y')
attr(lon, 'variables') <- metadata
lat <- t(array(seq(-90, 90, length.out = num_lats),
dim = c(num_lats, num_lons)))
metadata <- list(lat = list(units = 'degrees_north'))
names(dim(lat)) <- c('x', 'y')
attr(lat, 'variables') <- metadata
tas2 <- CDORemap(tas, lon, lat, 'r100x50', 'bil')
# An example of an interpolation from an irregular grid onto a gaussian grid
num_lats <- 25
num_lons <- 50
tas <- array(1:(10*num_lats*10*num_lons*10),
dim = c(10, num_lats, 10, num_lons))
names(dim(tas)) <- c('a', 'j', 'b', 'i')
lon <- array(seq(0, 360 - 360/num_lons, length.out = num_lons),
dim = c(num_lons, num_lats))
metadata <- list(lon = list(units = 'degrees_east'))
names(dim(lon)) <- c('i', 'j')
attr(lon, 'variables') <- metadata
lat <- t(array(seq(-90, 90, length.out = num_lats),
dim = c(num_lats, num_lons)))
metadata <- list(lat = list(units = 'degrees_north'))
names(dim(lat)) <- c('i', 'j')
attr(lat, 'variables') <- metadata
tas2 <- CDORemap(tas, lon, lat, 't17grid', 'bil')
# Again, the dimensions can be in any order
num_lats <- 25
num_lons <- 50
tas <- array(1:(10*num_lats*10*num_lons),
dim = c(10, num_lats, 10, num_lons))
names(dim(tas)) <- c('a', 'j', 'b', 'i')
lon <- array(seq(0, 360 - 360/num_lons, length.out = num_lons),
dim = c(num_lons, num_lats))
names(dim(lon)) <- c('i', 'j')
lat <- t(array(seq(-90, 90, length.out = num_lats),
dim = c(num_lats, num_lons)))
names(dim(lat)) <- c('i', 'j')
tas2 <- CDORemap(tas, lon, lat, 't17grid', 'bil')
tas2 <- CDORemap(tas, lon, lat, 't17grid', 'bil', FALSE)
# It is possible to specify an external NetCDF file as target grid reference
tas2 <- CDORemap(tas, lon, lat, 'external_file.nc', 'bil')
## End(Not run)
Clim Compute Bias Corrected Climatologies
Description
This function computes per-pair climatologies for the experimental and observational data using
one of the following methods:
1. per-pair method (Garcia-Serrano and Doblas-Reyes, CD, 2012 https://doi.org/10.1007/s00382-
012-1413-1)
2. Kharin method (Kharin et al, GRL, 2012 https://doi.org/10.1029/2012GL052647)
3. Fuckar method (Fuckar et al, GRL, 2014 https://doi.org/10.1002/2014GL060815)
Per-pair climatology means that only the start dates covered by the whole experimental/observational
datasets will be used. In other words, the start dates which are not fully available along the ’dat_dim’
dimension of both ’exp’ and ’obs’ are excluded when computing the climatologies. Kharin
method is the linear trend bias correction method, and Fuckar method is the initial condition bias
correction method. The two methods both do the per-pair correction beforehand.
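A hedged sketch of the per-pair (’clim’) principle, simplified to a single dataset and member with
toy values (not the package implementation): only start dates available in both ’exp’ and ’obs’
enter the climatological means.
exp_vals <- c(1.2, 1.5, NA, 1.3, 1.4)          # toy experiment, one value per start date
obs_vals <- c(1.0, 1.1, 1.2, NA, 1.1)          # toy observation, same start dates
common <- complete.cases(exp_vals, obs_vals)   # start dates complete in both
clim_exp <- mean(exp_vals[common])
clim_obs <- mean(obs_vals[common])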
Usage
Clim(
exp,
obs,
time_dim = "sdate",
dat_dim = c("dataset", "member"),
method = "clim",
ftime_dim = "ftime",
memb = TRUE,
memb_dim = "member",
na.rm = TRUE,
ncores = NULL
)
Arguments
exp A named numeric array of experimental data with at least dimension ’time_dim’.
obs A named numeric array of observational data that has the same dimension as
’exp’ except ’dat_dim’.
time_dim A character string indicating the name of dimension along which the climatolo-
gies are computed. The default value is ’sdate’.
dat_dim A character vector indicating the name of the dataset and member dimensions.
If data at one startdate (i.e., ’time_dim’) are not complete along ’dat_dim’, this
startdate along ’dat_dim’ will be discarded. If there is no dataset dimension, it
can be NULL, however, it will be more efficient to simply use mean() to do the
calculation. The default value is "c(’dataset’, ’member’)".
method A character string indicating the method to be used. The options include ’clim’
(per-pair method), ’kharin’ (Kharin method), and ’NDV’ (Fuckar method). The
default value is ’clim’.
ftime_dim A character string indicating the name of forecast time dimension. Only used
when method = ’NDV’. The default value is ’ftime’.
memb A logical value indicating whether to retain the ’memb_dim’ dimension (TRUE)
or to take the ensemble mean over ’memb_dim’ (FALSE). The default value is TRUE.
memb_dim A character string indicating the name of the member dimension. Only used
when parameter ’memb’ is FALSE. It must be one element in ’dat_dim’. The
default value is ’member’.
na.rm A logical value indicating whether to remove NA values along ’time_dim’ when
calculating climatology (TRUE) or return NA if there is NA along ’time_dim’
(FALSE). The default value is TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list of 2:
$clim_exp A numeric array with the same dimensions as parameter ’exp’ but dimension
’time_dim’ is moved to the first position. If parameter ’method’ is ’clim’, di-
mension ’time_dim’ is removed. If parameter ’memb’ is FALSE, dimension
’memb_dim’ is also removed.
$clim_obs A numeric array with the same dimensions as parameter ’obs’ except dimension
’time_dim’ is removed. If parameter ’memb’ is FALSE, dimension ’memb_dim’
is also removed.
Examples
# Load sample data as in Load() example:
example(Load)
clim <- Clim(sampleData$mod, sampleData$obs)
clim2 <- Clim(sampleData$mod, sampleData$obs, method = 'kharin', memb = FALSE)
## Not run:
PlotClim(clim$clim_exp, clim$clim_obs,
toptitle = paste('sea surface temperature climatologies'),
ytitle = 'K', monini = 11, listexp = c('CMIP5 IC3'),
listobs = c('ERSST'), biglab = FALSE)
## End(Not run)
clim.palette Generate Climate Color Palettes
Description
Generates a colorblind friendly color palette with color ranges useful in climate temperature variable
plotting.
Usage
clim.palette(palette = "bluered")
clim.colors(n, palette = "bluered")
Arguments
palette Which type of palette to generate: from blue through white to red (’bluered’),
from red through white to blue (’redblue’), from yellow through orange to red
(’yellowred’), from red through orange to yellow (’redyellow’), from purple through
white to orange (’purpleorange’), and from orange through white to purple (’or-
angepurple’).
n Number of colors to generate.
Examples
lims <- seq(-1, 1, length.out = 21)
ColorBar(lims, color_fun = clim.palette('redyellow'))
cols <- clim.colors(20)
ColorBar(lims, cols)
Cluster K-means Clustering
Description
Compute cluster centers and their time series of occurrences, with the K-means clustering method
using Euclidean distance, of an array of input data with any number of dimensions that at least
contain time_dim. Specifically, it partitions the array along the time axis into K groups or clusters,
in which each space vector/array belongs to (i.e., is a member of) the cluster with the nearest center
or centroid. This function is a wrapper of kmeans() and relies on the NbClust package (Charrad et
al., 2014 JSS) to determine the optimal number of clusters used for K-means clustering if it is not
provided by users.
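Conceptually, for a two-dimensional input (time x space) and a known K, the core computation
reduces to a base kmeans() call; a rough sketch under that assumption (toy data; the full function
additionally handles weights and the NbClust selection of K):
dat <- matrix(rnorm(200 * 4), nrow = 200)   # 200 time steps, 4 spatial points
km <- kmeans(dat, centers = 3)              # Euclidean K-means, as wrapped by Cluster()
head(km$cluster)                            # cluster occurrence along time
km$centers                                  # cluster centres in space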
Usage
Cluster(
data,
weights = NULL,
time_dim = "sdate",
space_dim = NULL,
nclusters = NULL,
index = "sdindex",
ncores = NULL
)
Arguments
data A numeric array with named dimensions that at least have ’time_dim’ cor-
responding to time and ’space_dim’ (optional) corresponding to either area-
averages over a series of domains or the grid points for any spatial grid structure.
weights A numeric array with named dimension of multiplicative weights based on the
areas covering each domain/region or grid-cell of ’data’. The dimensions must
be equal to the ’space_dim’ in ’data’. The default value is NULL which means
no weighting is applied.
time_dim A character string indicating the name of time dimension in ’data’. The default
value is ’sdate’.
space_dim A character vector indicating the names of spatial dimensions in ’data’. The
default value is NULL.
nclusters A positive integer K that must be bigger than 1 indicating the number of clusters
to be computed, or K initial cluster centers to be used in the method. The default
value is NULL, which means that the number of clusters will be determined by
NbClust(). The parameter ’index’ therefore needs to be specified for NbClust()
to find the optimal number of clusters to be used for K-means clustering calcu-
lation.
index A character string of the validity index from NbClust package that can be used
to determine optimal K if K is not specified with ’nclusters’. The default value
is ’sdindex’ (Halkidi et al. 2001, JIIS). Other indices available in NbClust are
"kl", "ch", "hartigan", "ccc", "scott", "marriot", "trcovw", "tracew", "friedman",
"rubin", "cindex", "db", "silhouette", "duda", "pseudot2", "beale", "ratkowsky",
"ball", "ptbiserial", "gap", "frey", "mcclain", "gamma", "gplus", "tau", "dunn",
"hubert", "sdindex", and "sdbw". One can also use all of them with the op-
tion ’alllong’ or almost all indices except gap, gamma, gplus and tau with ’all’,
when the optimal number of clusters K is determined by the majority rule (the
maximum of histogram of the results of all indices with finite solutions). Use
of some indices on a big and/or unstructured dataset can be computationally
intense and/or could lead to numerical singularity.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing:
$cluster An integer array of the occurrence of a cluster along time, i.e., which cluster
each time step is allocated to. The dimensions are the same as ’data’ without
’space_dim’.
$centers A numeric array of cluster centres or centroids (e.g. [1:K, 1:spatial degrees
of freedom]). The rest dimensions are same as ’data’ except ’time_dim’ and
’space_dim’.
$totss A numeric array of the total sum of squares. The dimensions are same as ’data’
except ’time_dim’ and ’space_dim’.
$withinss A numeric array of within-cluster sum of squares, one component per cluster.
The first dimension is the number of clusters, and the rest dimensions are the same as
’data’ except ’time_dim’ and ’space_dim’.
$tot.withinss A numeric array of the total within-cluster sum of squares, i.e., sum(withinss).
The dimensions are same as ’data’ except ’time_dim’ and ’space_dim’.
$betweenss A numeric array of the between-cluster sum of squares, i.e. totss-tot.withinss.
The dimensions are same as ’data’ except ’time_dim’ and ’space_dim’.
$size A numeric array of the number of points in each cluster. The first dimension is the
number of clusters, and the rest dimensions are the same as ’data’ except ’time_dim’
and ’space_dim’.
$iter A numeric array of the number of (outer) iterations. The dimensions are same
as ’data’ except ’time_dim’ and ’space_dim’.
$ifault A numeric array of an indicator of a possible algorithm problem. The dimen-
sions are same as ’data’ except ’time_dim’ and ’space_dim’.
References
Wilks, 2011, Statistical Methods in the Atmospheric Sciences, 3rd ed., Elsevier, pp. 676.
Examples
# Generating synthetic data
a1 <- array(dim = c(200, 4))
mean1 <- 0
sd1 <- 0.3
c0 <- seq(1, 200)
c1 <- sort(sample(x = 1:200, size = sample(x = 50:150, size = 1), replace = FALSE))
x1 <- c(1, 1, 1, 1)
for (i1 in c1) {
a1[i1, ] <- x1 + rnorm(4, mean = mean1, sd = sd1)
}
c1p5 <- c0[!(c0 %in% c1)]
c2 <- c1p5[seq(1, length(c1p5), 2)]
x2 <- c(2, 2, 4, 4)
for (i2 in c2) {
a1[i2, ] <- x2 + rnorm(4, mean = mean1, sd = sd1)
}
c3 <- c1p5[seq(2, length(c1p5), 2)]
x3 <- c(3, 3, 1, 1)
for (i3 in c3) {
a1[i3, ] <- x3 + rnorm(4, mean = mean1, sd = sd1)
}
# Computing the clusters
names(dim(a1)) <- c('sdate', 'space')
res1 <- Cluster(data = a1, weights = array(1, dim = dim(a1)[2]), nclusters = 3)
res2 <- Cluster(data = a1, weights = array(1, dim = dim(a1)[2]))
ColorBar Draws a Color Bar
Description
Generates a color bar to use as a colouring function for map plots and optionally draws it (horizon-
tally or vertically) to be added to map multipanels or plots. It is possible to draw triangles at the
ends of the colour bar to represent values that go beyond the range of interest. A number of options
are provided to adjust the colours and the position and size of the components. The drawn colour bar
spans a whole figure region and is compatible with figure layouts.
The generated colour bar consists of a set of breaks that define the length(brks) - 1 intervals to
classify each of the values in each of the grid cells of a two-dimensional field. The corresponding
grid cell of a given value of the field will be coloured according to the interval it belongs to.
The only mandatory parameters are ’var_limits’ or ’brks’ (in its second format, see below).
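As an illustrative sketch of how breaks classify values (not the internal plotting code), each field
value falls into one of the length(brks) - 1 intervals and receives the matching colour:
brks <- seq(-1, 1, 0.5)                          # 5 breaks define 4 intervals
cols <- clim.colors(length(brks) - 1)            # one colour per interval
vals <- c(-0.9, -0.2, 0.1, 0.7)
idx <- findInterval(vals, brks, rightmost.closed = TRUE)
cols[idx]                                        # colour assigned to each value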
Usage
ColorBar(
brks = NULL,
cols = NULL,
vertical = TRUE,
subsampleg = NULL,
bar_limits = NULL,
var_limits = NULL,
triangle_ends = NULL,
col_inf = NULL,
col_sup = NULL,
color_fun = clim.palette(),
plot = TRUE,
draw_ticks = TRUE,
draw_separators = FALSE,
triangle_ends_scale = 1,
extra_labels = NULL,
title = NULL,
title_scale = 1,
label_scale = 1,
tick_scale = 1,
extra_margin = rep(0, 4),
label_digits = 4,
...
)
Arguments
brks Can be provided in two formats:
• A single value with the number of breaks to be generated automatically,
between the minimum and maximum specified in ’var_limits’ (both inclu-
sive). Hence the parameter ’var_limits’ is mandatory if ’brks’ is provided
with this format. If ’bar_limits’ is additionally provided, values only be-
tween ’bar_limits’ will be generated. The higher the value of ’brks’, the
smoother the plot will look.
• A vector with the actual values of the desired breaks. Values will be re-
ordered by force to ascending order. If provided in this format, no other
parameters are required to generate/plot the colour bar.
This parameter is optional if ’var_limits’ is specified. If ’brks’ not specified but
’cols’ is specified, it will take as value length(cols) + 1. If ’cols’ is not specified
either, ’brks’ will take 21 as value.
cols Vector of length(brks) - 1 valid colour identifiers, for each interval defined by
the breaks. This parameter is optional and will be filled in with a vector of
length(brks) - 1 colours generated with the function provided in ’color_fun’
(clim.colors by default).
’cols’ can have one additional colour at the beginning and/or at the end with
the aim to colour field values beyond the range of interest represented in the
colour bar. If any of these extra colours is provided, parameter ’triangle_ends’
becomes mandatory in order to disambiguate which of the ends the colours have
been provided for.
vertical TRUE/FALSE for vertical/horizontal colour bar (disregarded if plot = FALSE).
subsampleg The first of each subsampleg breaks will be ticked on the colorbar. Takes by
default an approximation of a value that yields a readable tick arrangement (ex-
treme breaks always ticked). If set to 0 or lower, no labels are drawn. See the
code of the function for details or use ’extra_labels’ for customized tick arrange-
ments.
bar_limits Vector of two numeric values with the extremes of the range of values repre-
sented in the colour bar. If ’var_limits’ go beyond this interval, the drawing of
triangle extremes is triggered at the corresponding sides, painted in ’col_inf’
and ’col_sup’. Either of them can be set as NA and will then take as value the
corresponding extreme in ’var_limits’ (hence a triangle end won’t be triggered
for these sides). Takes as default the extremes of ’brks’ if available, else the
same values as ’var_limits’.
var_limits Vector of two numeric values with the minimum and maximum values of the
field to represent. These are used to know whether to draw triangle ends at the
extremes of the colour bar and what colour to fill them in with. If not specified,
take the same value as the extremes of ’brks’. Hence the parameter ’brks’ is
mandatory if ’var_limits’ is not specified.
triangle_ends Vector of two logical elements, indicating whether to force the drawing of tri-
angle ends at each of the extremes of the colour bar. This choice is automat-
ically made from the provided ’brks’, ’bar_limits’, ’var_limits’, ’col_inf’ and
’col_sup’, but the behaviour can be manually forced to draw or not to draw the
triangle ends with this parameter. If ’cols’ is provided, ’col_inf’ and ’col_sup’
will take priority over ’triangle_ends’ when deciding whether to draw the trian-
gle ends or not.
col_inf Colour to fill the inferior triangle end with. Useful if specifying colours man-
ually with parameter ’cols’, to specify the colour and to trigger the drawing of
the lower extreme triangle, or if ’cols’ is not specified, to replace the colour
automatically generated by ColorBar().
col_sup Colour to fill the superior triangle end with. Useful if specifying colours man-
ually with parameter ’cols’, to specify the colour and to trigger the drawing of
the upper extreme triangle, or if ’cols’ is not specified, to replace the colour
automatically generated by ColorBar().
color_fun Function to generate the colours of the color bar. Must take an integer and
must return as many colours. The returned colour vector can have the attribute
’na_color’, with a colour to draw NA values. This parameter is set by default to
clim.palette().
plot Logical value indicating whether to only compute its breaks and colours (FALSE)
or to also draw it on the current device (TRUE).
draw_ticks Whether to draw ticks for the labels along the colour bar (TRUE) or not (FALSE).
TRUE by default. Disregarded if ’plot = FALSE’.
draw_separators
Whether to draw black lines at the borders of each of the colour rectangles of
the colour bar (TRUE) or not (FALSE). FALSE by default. Disregarded if ’plot
= FALSE’.
triangle_ends_scale
Scale factor for the drawn triangle ends of the colour bar, if drawn at all. Takes
1 by default (rectangle triangle proportional to the thickness of the colour bar).
Disregarded if ’plot = FALSE’.
extra_labels Numeric vector of extra labels to draw along axis of the colour bar. The number
of provided decimals will be conserved. Disregarded if ’plot = FALSE’.
title Title to draw on top of the colour bar, most commonly with the units of the
represented field in the neighbour figures. Empty by default.
title_scale Scale factor for the ’title’ of the colour bar. Takes 1 by default.
label_scale Scale factor for the labels of the colour bar. Takes 1 by default.
tick_scale Scale factor for the length of the ticks of the labels along the colour bar. Takes
1 by default.
extra_margin Extra margins to be added around the colour bar, in the format c(y1, x1, y2, x2).
The units are margin lines. Takes rep(0, 4) by default.
label_digits Number of significant digits to be displayed in the labels of the colour bar, usu-
ally to avoid too many decimal digits overflowing the figure region. This does
not have effect over the labels provided in ’extra_labels’. Takes 4 by default.
... Arguments to be passed to the method. Only accepts the following graphical
parameters:
adj ann ask bg bty cex.lab cex.main cex.sub cin col.axis col.lab col.main col.sub
cra crt csi cxy err family fg fig fin font font.axis font.lab font.main font.sub lend
lheight ljoin lmitre lty lwd mai mex mfcol mfrow mfg mkh oma omd omi page
pch pin plt pty smo srt tck tcl usr xaxp xaxs xaxt xlog xpd yaxp yaxs yaxt ylbias
ylog.
For more information about the parameters see ‘par‘.
Value
brks Breaks used for splitting the range in intervals.
cols Colours generated for each of the length(brks) - 1 intervals. Always of length
length(brks) - 1.
col_inf Colour used to draw the lower triangle end in the colour bar (NULL if not drawn
at all).
col_sup Colour used to draw the upper triangle end in the colour bar (NULL if not drawn
at all).
Examples
cols <- c("dodgerblue4", "dodgerblue1", "forestgreen", "yellowgreen", "white",
"white", "yellow", "orange", "red", "saddlebrown")
lims <- seq(-1, 1, 0.2)
ColorBar(lims, cols)
Composite Compute composites
Description
Composite a multi-dimensional array which contains two spatial and one temporal dimensions, e.g.,
(lon, lat, time), according to the indices of mode/cluster occurrences in time. The p-value by t-test
is also computed.
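In essence, the composite for cluster k is the time average of the field over the time steps where
’occ’ equals k; a minimal sketch of that idea with toy data (the function additionally returns the
t-test p-value):
dat <- array(rnorm(4 * 5 * 12), dim = c(lon = 4, lat = 5, time = 12))
occ <- rep(1:3, times = 4)                           # toy cluster occurrences along time
comp_k2 <- apply(dat[, , occ == 2], c(1, 2), mean)   # composite of cluster 2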
Usage
Composite(
data,
occ,
time_dim = "time",
space_dim = c("lon", "lat"),
lag = 0,
eno = FALSE,
K = NULL,
fileout = NULL,
ncores = NULL
)
Arguments
data A numeric array containing two spatial and one temporal dimensions.
occ A vector of the occurrence time series of mode(s)/cluster(s). The length should
be the same as the temporal dimension in ’data’. (*1) When one wants to com-
posite all modes, e.g., all K = 3 clusters then for example occurrences could look
like: 1 1 2 3 2 3 1 3 3 2 3 2 2 3 2. (*2) Otherwise for compositing only the 2nd
mode or cluster of the above example occurrences should look like 0 0 1 0 1 0 0
0 0 1 0 1 1 0 1.
time_dim A character string indicating the name of the temporal dimension in ’data’. The
default value is ’time’.
space_dim A character vector indicating the names of the spatial dimensions in ’data’. The
default value is c(’lon’, ’lat’).
lag An integer indicating the lag time step. E.g., for lag = 2, +2 occurrences will be
used (i.e., shifted 2 time steps forward). The default value is 0.
eno A logical value indicating whether to use the effective sample size (TRUE) or
the total sample size (FALSE) for the number of degrees of freedom. The default
value is FALSE.
K A numeric value indicating the maximum number of composites. The default
value is NULL, which means the maximum value provided in ’occ’ is used.
fileout A character string indicating the name of the .sav output file. The default value
is NULL, which means not to save the output.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing:
$composite A numeric array of the spatial dimensions and new dimension ’K’ first, followed
by the same dimensions as parameter ’data’. The length of dimension ’K’ is
parameter ’K’.
$p.val A numeric array with the same dimension as $composite. It is the p-value of the
composites obtained through a t-test that accounts for the serial dependence of
the data.
Examples
blank <- array(0, dim = c(20, 10, 30))
x1 <- blank
t1 <- blank
f1 <- blank
for (i in 1:20) {
x1[i, , ] <- i
}
for (i in 1:30) {
t1[, , i] <- i
}
# This is a 2D propagating sine wave example, where we use the f1(lon, lat, time)
# wave field. Compositing (like using a stroboscopic light) at different
# time steps can lead to modification or cancellation of the wave pattern.
for (i in 1:20) {
for (j in 1:30) {
f1[i, , j] <- 3 * sin(2 * pi * x1[i, , j] / 5. - 2 * pi * t1[i, , j] / 6.)
}
}
names(dim(f1)) <- c('lon', 'lat', 'time')
occ <- rep(0, 30)
occ[c(2, 5, 8, 11, 14, 17, 20, 23)] <- 1
res <- Composite(data = f1, occ = occ)
filled.contour(res$composite[, , 1])
occ <- rep(0, 30)
occ[c(3, 9, 15, 21)] <- 1
res <- Composite(data = f1, occ = occ)
filled.contour(res$composite[, , 1])
# Example with one missing composite in occ:
data <- 1:(4 * 5 * 6)
dim(data) <- c(lon = 4, lat = 5, case = 6)
occ <- c(1, 1, 2, 2, 3, 3)
res <- Composite(data, occ, time_dim = 'case', K = 4)
ConfigApplyMatchingEntries
Apply Matching Entries To Dataset Name And Variable Name To Find
Related Info
Description
Given a pair of dataset name and variable name, this function applies all the matching entries
found in the corresponding configuration table to work out the dataset main path, file path, actual
name of the variable inside the NetCDF files, ...
Usage
ConfigApplyMatchingEntries(
configuration,
var,
exp = NULL,
obs = NULL,
show_entries = FALSE,
show_result = TRUE
)
Arguments
configuration Configuration object obtained from ConfigFileOpen() or ConfigFileCreate().
var Name of the variable to load. Will be interpreted as a string, regular expressions
do not apply here. Examples: ’tas’ or ’tasmax_q90’.
exp Set of experimental dataset identifiers. Will be interpreted as strings; regu-
lar expressions do not apply here. Can be NULL (not to check in experimental
dataset tables), and takes NULL by default. Examples: c(’EnsEcmwfSeas’, ’En-
sUkmoSeas’), c(’i00k’).
obs Set of observational dataset identifiers. Will be interpreted as strings; reg-
ular expressions do not apply here. Can be NULL (not to check in observa-
tional dataset tables), and takes NULL by default. Examples: c(’GLORYS’,
’ERAint’), c(’NCEP’).
show_entries Flag to stipulate whether to show the found matching entries for all datasets and
variable name.
show_result Flag to stipulate whether to show the result of applying all the matching entries
(dataset main path, file path, ...).
Value
A list with the information resulting of applying the matching entries is returned.
See Also
ConfigApplyMatchingEntries, ConfigEditDefinition, ConfigEditEntry, ConfigFileOpen, ConfigShowSim-
ilarEntries, ConfigShowTable
Examples
# Create an empty configuration file
config_file <- paste0(tempdir(), "/example.conf")
s2dv::ConfigFileCreate(config_file, confirm = FALSE)
# Open it into a configuration object
configuration <- ConfigFileOpen(config_file)
# Add an entry at the bottom of 4th level of file-per-startdate experiments
# table which will associate the experiment "ExampleExperiment2" and variable
# "ExampleVariable" to some information about its location.
configuration <- ConfigAddEntry(configuration, "experiments",
"last", "ExampleExperiment2", "ExampleVariable",
"/path/to/ExampleExperiment2/",
"ExampleVariable/ExampleVariable_$START_DATE$.nc")
# Edit entry to generalize for any variable. Changing variable needs .
configuration <- ConfigEditEntry(configuration, "experiments", 1,
var_name = ".*",
file_path = "$VAR_NAME$/$VAR_NAME$_$START_DATE$.nc")
# Now apply matching entries for variable and experiment name and show the
# result
match_info <- ConfigApplyMatchingEntries(configuration, 'tas',
exp = c('ExampleExperiment2'), show_result = TRUE)
ConfigEditDefinition Add Modify Or Remove Variable Definitions In Configuration
Description
These functions help in adding, modifying or removing variable definitions in a configuration ob-
ject obtained with ConfigFileOpen or ConfigFileCreate. ConfigEditDefinition() will add the
definition if not existing.
Usage
ConfigEditDefinition(configuration, name, value, confirm = TRUE)
ConfigRemoveDefinition(configuration, name)
Arguments
configuration Configuration object obtained with ConfigFileOpen() or ConfigFileCreate().
name Name of the variable to add/modify/remove.
value Value to associate to the variable.
confirm Flag to stipulate whether to ask for confirmation if the variable is being modified.
Takes by default TRUE.
Value
A modified configuration object is returned.
See Also
[ConfigApplyMatchingEntries()], [ConfigEditDefinition()], [ConfigEditEntry()], [ConfigFileOpen()],
[ConfigShowSimilarEntries()], [ConfigShowTable()].
Examples
# Create an empty configuration file
config_file <- paste0(tempdir(), "/example.conf")
ConfigFileCreate(config_file, confirm = FALSE)
# Open it into a configuration object
configuration <- ConfigFileOpen(config_file)
# Add an entry at the bottom of 4th level of file-per-startdate experiments
# table which will associate the experiment "ExampleExperiment2" and variable
# "ExampleVariable" to some information about its location.
configuration <- ConfigAddEntry(configuration, "experiments",
"last", "ExampleExperiment2", "ExampleVariable",
"/path/to/ExampleExperiment2/",
"ExampleVariable/ExampleVariable_$START_DATE$.nc")
# Edit entry to generalize for any variable. Changing variable needs .
configuration <- ConfigEditEntry(configuration, "experiments", 1,
var_name = ".*",
file_path = "$VAR_NAME$/$VAR_NAME$_$START_DATE$.nc")
# Now apply matching entries for variable and experiment name and show the
# result
match_info <- ConfigApplyMatchingEntries(configuration, 'tas',
exp = c('ExampleExperiment2'), show_result = TRUE)
ConfigEditEntry Add, Remove Or Edit Entries In The Configuration
Description
ConfigAddEntry(), ConfigEditEntry() and ConfigRemoveEntry() are functions to manage entries in
a configuration object created with ConfigFileOpen().
Before adding an entry, make sure the defaults don’t already do what you want (ConfigShowDefi-
nitions(), ConfigShowTable()).
Before adding an entry, make sure it doesn’t override and spoil what other entries do (ConfigShowTable(),
ConfigFileOpen()).
Before adding an entry, make sure there aren’t other entries that already do what you want (Con-
figShowSimilarEntries()).
Usage
ConfigEditEntry(
configuration,
dataset_type,
position,
dataset_name = NULL,
var_name = NULL,
main_path = NULL,
file_path = NULL,
nc_var_name = NULL,
suffix = NULL,
varmin = NULL,
varmax = NULL
)
ConfigAddEntry(
configuration,
dataset_type,
position = "last",
dataset_name = ".*",
var_name = ".*",
main_path = "*",
file_path = "*",
nc_var_name = "*",
suffix = "*",
varmin = "*",
varmax = "*"
)
ConfigRemoveEntry(
configuration,
dataset_type,
dataset_name = NULL,
var_name = NULL,
position = NULL
)
Arguments
configuration Configuration object obtained via ConfigFileOpen() or ConfigFileCreate() that
will be modified accordingly.
dataset_type Whether to modify a table of experimental datasets or a table of observational
datasets. Can take values ’experiments’ or ’observations’ respectively.
position ’position’ tells the index in the table of the entry to edit or remove. Use Con-
figShowTable() to see the index of the entry. In ConfigAddEntry() it can also
take the value "last" (default), that will put the entry at the end of the corre-
sponding level, or "first" at the beginning. See ?ConfigFileOpen for more infor-
mation. If ’dataset_name’ and ’var_name’ are specified this argument is ignored
in ConfigRemoveEntry().
dataset_name, var_name, main_path, file_path, nc_var_name, suffix, varmin, varmax
These parameters tell the dataset name, variable name, main path, ..., of the en-
try to add, edit or remove.
’dataset_name’ and ’var_name’ can take as a value a POSIX 1003.2 regular ex-
pression (see ?ConfigFileOpen).
Other parameters can take as a value a shell globbing expression (see ?Config-
FileOpen).
’dataset_name’ and ’var_name’ take by default the regular expression ’.*’ (match
any dataset and variable name), and the others take by default ’*’ (associate to
the pair ’dataset_name’ and ’var_name’ all the defined default values. In this
case ’*’ has a special behaviour, it won’t be used as a shell globbing expression.
See ?ConfigFileOpen and ?ConfigShowDefinitions).
’varmin’ and ’varmax’ must be character strings.
To define these values, you can use defined variables via $VARIABLE_NAME$
or other entry attributes via $ATTRIBUTE_NAME$. See ?ConfigFileOpen for
more information.
Value
The function returns an accordingly modified configuration object. To apply the changes in the
configuration file it must be saved using ConfigFileSave().
See Also
ConfigApplyMatchingEntries, ConfigEditDefinition, ConfigEditEntry, ConfigFileOpen, ConfigShowSim-
ilarEntries, ConfigShowTable
Examples
# Create an empty configuration file
config_file <- paste0(tempdir(), "/example.conf")
ConfigFileCreate(config_file, confirm = FALSE)
# Open it into a configuration object
configuration <- ConfigFileOpen(config_file)
# Add an entry at the bottom of 4th level of file-per-startdate experiments
# table which will associate the experiment "ExampleExperiment" and variable
# "ExampleVariable" to some information about its location.
configuration <- ConfigAddEntry(configuration, "experiments",
"last", "ExampleExperiment", "ExampleVariable",
"/path/to/ExampleExperiment/",
"ExampleVariable/ExampleVariable_$START_DATE$.nc")
# Add another entry
configuration <- ConfigAddEntry(configuration, "experiments",
"last", "ExampleExperiment2", "ExampleVariable",
"/path/to/ExampleExperiment2/",
"ExampleVariable/ExampleVariable_$START_DATE$.nc")
# Edit second entry to generalize for any variable. Changing variable needs .
configuration <- ConfigEditEntry(configuration, "experiments", 2,
var_name = ".*",
file_path = "$VAR_NAME$/$VAR_NAME$_$START_DATE$.nc")
# Remove first entry
configuration <- ConfigRemoveEntry(configuration, "experiments",
"ExampleExperiment", "ExampleVariable")
# Show results
ConfigShowTable(configuration, "experiments")
# Save the configuration
ConfigFileSave(configuration, config_file, confirm = FALSE)
ConfigFileOpen Functions To Create Open And Save Configuration File
Description
These functions help in creating, opening and saving configuration files.
Usage
ConfigFileOpen(file_path, silent = FALSE, stop = FALSE)
ConfigFileCreate(file_path, confirm = TRUE)
ConfigFileSave(configuration, file_path, confirm = TRUE)
Arguments
file_path Path to the configuration file to create/open/save.
silent Flag to activate or deactivate verbose mode. Defaults to FALSE (verbose mode
on).
stop TRUE/FALSE whether to raise an error if not all the mandatory default variables
are defined in the configuration file.
confirm Flag to stipulate whether to ask for confirmation when saving a configuration
file that already exists.
Defaults to TRUE (confirmation asked).
configuration Configuration object to save in a file.
Details
ConfigFileOpen() loads all the data contained in the configuration file specified as parameter ’file_path’.
Returns a configuration object with the variables needed for the configuration file mechanism to
work. This function is called from inside the Load() function to load the configuration file specified
in ’configfile’.
ConfigFileCreate() creates an empty configuration file and saves it to the specified path. It may
be opened later with ConfigFileOpen() to be edited. Some default values are set when creating a
file with this function, you can check these with ConfigShowDefinitions().
ConfigFileSave() saves a configuration object into a file, which may then be used from Load().
Two examples of configuration files can be found inside the ’inst/config/’ folder in the package:
• BSC.conf: configuration file used at BSC-CNS. Contains location data on several datasets and
variables.
• template.conf: very simple configuration file intended to be used as pattern when starting from
scratch.
How the configuration file works:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It contains one list and two tables.
Each of these have a header that starts with ’!!’. These are key lines and should not be removed or
reordered.
Lines starting with ’#’ and blank lines will be ignored. The list should contain variable definitions
and default value definitions.
The first table contains information about experiments.
The second table contains information about observations.
Each table entry is a list of comma-separated elements.
The two first are part of a key that is associated to a value formed by the other elements.
The key elements are a dataset identifier and a variable name.
The value elements are the dataset main path, dataset file path, the variable name inside the .nc file,
a default suffix (explained below), and minimum and maximum values beyond which loaded data
is deactivated.
Given a dataset name and a variable name, a full path is obtained concatenating the main path and
the file path.
Also the nc variable name, the suffixes and the limit values are obtained.
Any of the elements in the keys can contain regular expressions[1] that will cause matching for sets
of dataset names or variable names.
The dataset path and file path can contain shell globbing expressions[2] that will cause matching
for sets of paths when fetching the file in the full path.
The full path can point to an OPeNDAP URL.
Any of the elements in the value can contain variables that will be replaced to an associated string.
Variables can be defined only in the list at the top of the file.
The pattern of a variable definition is
VARIABLE_NAME = VARIABLE_VALUE
and can be accessed from within the table values or from within the variable values as
$VARIABLE_NAME$
For example:
FILE_NAME = tos.nc
!!table of experiments
ecmwf, tos, /path/to/dataset/, $FILE_NAME$
There are some reserved variables that will offer information about the store frequency, the current
startdate Load() is fetching, etc:
$VAR_NAME$, $START_DATE$, $STORE_FREQ$, $MEMBER_NUMBER$
for experiments only: $EXP_NAME$
for observations only: $OBS_NAME$, $YEAR$, $MONTH$, $DAY$
Additionally, from an element in an entry value you can access the other elements of the entry as:
$EXP_MAIN_PATH$, $EXP_FILE_PATH$,
$VAR_NAME$, $SUFFIX$, $VAR_MIN$, $VAR_MAX$
The variable $SUFFIX$ is useful because it can be used to take part in the main or file path. For
example: ’/path/to$SUFFIX$/dataset/’.
It will be replaced by the value in the column that corresponds to the suffix unless the user specifies
a different suffix via the parameter ’suffixexp’ or ’suffixobs’.
This way the user is able to load two variables with the same name in the same dataset but with
slight modifications, with a suffix anywhere in the path to the data that advices of this slight modi-
fication.
The entries in a table will be grouped in 4 levels of specificity:
1. General entries:
- the key dataset name and variable name are both a regular expression matching any sequence
of characters (.*) that will cause matching for any pair of dataset and variable names
Example: .*, .*, /dataset/main/path/, file/path, nc_var_name, suffix, var_min, var_max
2. Dataset entries:
- the key variable name matches any sequence of characters
Example: ecmwf, .*, /dataset/main/path/, file/path, nc_var_name, suffix, var_min, var_max
3. Variable entries:
- the key dataset name matches any sequence of characters
Example: .*, tos, /dataset/main/path/, file/path, nc_var_name, suffix, var_min, var_max
4. Specific entries:
- both key values are specified
Example: ecmwf, tos, /dataset/main/path/, file/path, nc_var_name, suffix, var_min, var_max
Given a pair of dataset name and variable name for which we want to know the full path, all the
rules that match will be applied from more general to more specific.
If there is more than one entry per group that match a given key pair, these will be applied in the
order of appearance in the configuration file (top to bottom).
An asterisk (*) in any value element will be interpreted as ’leave it as is or take the default value if
yet not defined’.
The default values are defined in the following reserved variables:
$DEFAULT_EXP_MAIN_PATH$, $DEFAULT_EXP_FILE_PATH$, $DEFAULT_NC_VAR_NAME$,
$DEFAULT_OBS_MAIN_PATH$, $DEFAULT_OBS_FILE_PATH$, $DEFAULT_SUFFIX$, $DE-
FAULT_VAR_MIN$, $DEFAULT_VAR_MAX$,
$DEFAULT_DIM_NAME_LATITUDES$, $DEFAULT_DIM_NAME_LONGITUDES$,
$DEFAULT_DIM_NAME_MEMBERS$
Trailing asterisks in an entry are not mandatory. For example
ecmwf, .*, /dataset/main/path/, *, *, *, *, *
will have the same effect as
ecmwf, .*, /dataset/main/path/
A double quote only (") in any key or value element will be interpreted as ’fill in with the same
value as the entry above’.
Value
ConfigFileOpen() returns a configuration object with all the information for the configuration file
mechanism to work.
ConfigFileSave() returns TRUE if the file has been saved and FALSE otherwise.
ConfigFileCreate() returns nothing.
References
[1] https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html
[2] https://tldp.org/LDP/abs/html/globbingref.html
See Also
ConfigApplyMatchingEntries, ConfigEditDefinition, ConfigEditEntry, ConfigFileOpen, ConfigShowSim-
ilarEntries, ConfigShowTable
Examples
# Create an empty configuration file
config_file <- paste0(tempdir(), "/example.conf")
ConfigFileCreate(config_file, confirm = FALSE)
# Open it into a configuration object
configuration <- ConfigFileOpen(config_file)
# Add an entry at the bottom of 4th level of file-per-startdate experiments
# table which will associate the experiment "ExampleExperiment2" and variable
# "ExampleVariable" to some information about its location.
configuration <- ConfigAddEntry(configuration, "experiments",
"last", "ExampleExperiment2", "ExampleVariable",
"/path/to/ExampleExperiment2/",
"ExampleVariable/ExampleVariable_$START_DATE$.nc")
# Edit entry to generalize for any variable. Changing variable needs .
configuration <- ConfigEditEntry(configuration, "experiments", 1,
var_name = ".*",
file_path = "$VAR_NAME$/$VAR_NAME$_$START_DATE$.nc")
# Now apply matching entries for variable and experiment name and show the
# result
match_info <- ConfigApplyMatchingEntries(configuration, 'tas',
exp = c('ExampleExperiment2'), show_result = TRUE)
# Finally save the configuration file.
ConfigFileSave(configuration, config_file, confirm = FALSE)
ConfigShowSimilarEntries
Find Similar Entries In Tables Of Datasets
Description
These functions help in finding similar entries in tables of supported datasets by comparing all
entries with some given information.
This is useful when dealing with complex configuration files and you are not sure whether certain
variables or datasets are already supported.
At least one field must be provided in ConfigShowSimilarEntries(). Other fields can be unspecified
and won’t be taken into account. If more than one field is provided, sameness is averaged over all
provided fields and entries are sorted from higher average to lower.
Usage
ConfigShowSimilarEntries(
configuration,
dataset_name = NULL,
var_name = NULL,
main_path = NULL,
file_path = NULL,
nc_var_name = NULL,
suffix = NULL,
varmin = NULL,
varmax = NULL,
n_results = 10
)
Arguments
configuration Configuration object obtained either from ConfigFileCreate() or ConfigFileOpen().
dataset_name Optional dataset name to look for similars of.
var_name Optional variable name to look for similars of.
main_path Optional main path to look for similars of.
file_path Optional file path to look for similars of.
nc_var_name Optional variable name inside NetCDF file to look for similars of.
suffix Optional suffix to look for similars of.
varmin Optional variable minimum to look for similars of.
varmax Optional variable maximum to look for similars of.
n_results Only the top ’n_results’ most similar results will be shown. Defaults to 10 in
ConfigShowSimilarEntries() and to 5 in ConfigShowSimilarVars().
Details
Similarity is calculated with string distances as specified by <NAME> in [1].
Value
These functions return information about the found matches.
References
[1] <NAME>, string similarity: http://www.catalysoft.com/articles/StrikeAMatch.html
See Also
ConfigApplyMatchingEntries, ConfigEditDefinition, ConfigEditEntry, ConfigFileOpen, ConfigShowSimilarEntries, ConfigShowTable
Examples
# Create an empty configuration file
config_file <- paste0(tempdir(), "/example.conf")
ConfigFileCreate(config_file, confirm = FALSE)
# Open it into a configuration object
configuration <- ConfigFileOpen(config_file)
# Add an entry at the bottom of 4th level of file-per-startdate experiments
# table which will associate the experiment "ExampleExperiment2" and variable
# "ExampleVariable" to some information about its location.
configuration <- ConfigAddEntry(configuration, "experiments", "last",
"ExampleExperiment2", "ExampleVariable",
"/path/to/ExampleExperiment2/",
"ExampleVariable/ExampleVariable_$START_DATE$.nc")
# Edit the entry to generalize it for any variable. Changing the variable also
# requires generalizing the file path.
configuration <- ConfigEditEntry(configuration, "experiments", 1,
var_name = "Var.*",
file_path = "$VAR_NAME$/$VAR_NAME$_$START_DATE$.nc")
# Look for similar entries
ConfigShowSimilarEntries(configuration, dataset_name = "Exper",
var_name = "Vari")
ConfigShowTable Show Configuration Tables And Definitions
Description
These functions show the tables of supported datasets and definitions in a configuration object
obtained via ConfigFileCreate() or ConfigFileOpen().
Usage
ConfigShowTable(configuration, dataset_type, line_numbers = NULL)
ConfigShowDefinitions(configuration)
Arguments
configuration Configuration object obtained from ConfigFileCreate() or ConfigFileOpen().
dataset_type In ConfigShowTable(), ’dataset_type’ tells whether the table to show is of ex-
perimental datasets or of observational datasets. Can take values ’experiments’
or ’observations’.
line_numbers ’line_numbers’ is an optional vector of numbers as long as the number of entries
in the specified table. Intended for internal use.
Value
These functions return nothing.
See Also
[ConfigApplyMatchingEntries()], [ConfigEditDefinition()], [ConfigEditEntry()], [ConfigFileOpen()],
[ConfigShowSimilarEntries()], [ConfigShowTable()].
Examples
# Create an empty configuration file
config_file <- paste0(tempdir(), "/example.conf")
ConfigFileCreate(config_file, confirm = FALSE)
# Open it into a configuration object
configuration <- ConfigFileOpen(config_file)
# Add an entry at the bottom of 4th level of file-per-startdate experiments
# table which will associate the experiment "ExampleExperiment2" and variable
# "ExampleVariable" to some information about its location.
configuration <- ConfigAddEntry(configuration, "experiments", "last",
"ExampleExperiment2", "ExampleVariable",
"/path/to/ExampleExperiment2/",
"ExampleVariable/ExampleVariable_$START_DATE$.nc")
# Edit the entry to generalize it for any variable. Changing the variable also
# requires generalizing the file path.
configuration <- ConfigEditEntry(configuration, "experiments", 1,
var_name = ".*",
file_path = "$VAR_NAME$/$VAR_NAME$_$START_DATE$.nc")
# Show tables, lists and definitions
ConfigShowTable(configuration, 'experiments')
ConfigShowDefinitions(configuration)
Consist_Trend Compute trend using only model data for which observations are
available
Description
Compute the linear trend for a time series by least-squares fitting together with the associated error
interval for both the observational and model data. The 95% confidence interval and the detrended
observational and model data are also provided.
The function does not compute the ensemble mean, so if the input data have a member dimension,
the ensemble mean needs to be computed beforehand.
Usage
Consist_Trend(
exp,
obs,
dat_dim = "dataset",
time_dim = "sdate",
interval = 1,
ncores = NULL
)
Arguments
exp A named numeric array of experimental data, with at least two dimensions
’time_dim’ and ’dat_dim’.
obs A named numeric array of observational data, same dimensions as parameter
’exp’ except along ’dat_dim’.
dat_dim A character string indicating the name of the dataset dimensions. If data at some
point of ’time_dim’ are not complete along ’dat_dim’ in both ’exp’ and ’obs’,
this point in all ’dat_dim’ will be discarded. The default value is ’dataset’.
time_dim A character string indicating the name of dimension along which the trend is
computed. The default value is ’sdate’.
interval A positive numeric indicating the unit length between two points along ’time_dim’
dimension. The default value is 1.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing:
$trend A numeric array of the trend coefficients of model and observational data with
dimensions c(stats = 2, nexp + nobs, the rest dimensions of ’exp’ and ’obs’
except time_dim), where ’nexp’ is the length of ’dat_dim’ in ’exp’ and ’nobs’ is
the length of ’dat_dim’ in ’obs’. The ’stats’ dimension contains the intercept and
the slope.
$conf.lower A numeric array of the lower limit of 95% confidence interval with dimensions
same as $trend. The ’stats’ dimension contains the lower confidence level of the
intercept and the slope.
$conf.upper A numeric array of the upper limit of 95% confidence interval with dimensions
same as $trend. The ’stats’ dimension contains the upper confidence level of the
intercept and the slope.
$detrended_exp A numeric array of the detrended model data with the same dimensions as ’exp’.
$detrended_obs A numeric array of the detrended observational data with the same dimensions
as ’obs’.
Examples
# Load sample data as in Load() example:
example(Load)
clim <- Clim(sampleData$mod, sampleData$obs)
ano_exp <- Ano(sampleData$mod, clim$clim_exp)
ano_obs <- Ano(sampleData$obs, clim$clim_obs)
runmean_months <- 12
smooth_ano_exp <- Smoothing(ano_exp, runmeanlen = runmean_months)
smooth_ano_obs <- Smoothing(ano_obs, runmeanlen = runmean_months)
dim_to_mean <- 'member' # average along members
years_between_startdates <- 5
trend <- Consist_Trend(MeanDims(smooth_ano_exp, dim_to_mean, na.rm = TRUE),
MeanDims(smooth_ano_obs, dim_to_mean, na.rm = TRUE),
interval = years_between_startdates)
#Bind data for plotting
trend_bind <- abind::abind(trend$conf.lower[2, , ], trend$trend[2, , ],
trend$conf.upper[2, , ], trend$trend[1, , ], along = 0)
trend_bind <- Reorder(trend_bind, c(2, 1, 3))
PlotVsLTime(trend_bind, toptitle = "trend", ytitle = "K/(5 years)",
monini = 11, limits = c(-0.8, 0.8), listexp = c('CMIP5 IC3'),
listobs = c('ERSST'), biglab = FALSE, hlines = c(0))
PlotAno(InsertDim(trend$detrended_exp, 2, 1), InsertDim(trend$detrended_obs, 2, 1),
startDates, "Detrended tos anomalies", ytitle = 'K',
legends = 'ERSST', biglab = FALSE)
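As an informal sketch of what the per-series fit amounts to (the function additionally handles the
dataset dimension and the consistency check between ’exp’ and ’obs’), the trend and its 95% interval
for a single series come from an ordinary least-squares fit:
# Hypothetical single time series with 'interval' = 1 between points
time <- 1:10
y <- 0.2 * time + rnorm(10, sd = 0.3)
fit <- lm(y ~ time)
coef(fit)                   # intercept and slope, as in the 'stats' dimension
confint(fit, level = 0.95)  # compare with $conf.lower and $conf.upper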
Corr Compute the correlation coefficient between an array of forecasts and
their corresponding observations
Description
Calculate the correlation coefficient (Pearson, Kendall or Spearman) for an array of forecasts and an
array of observations. The correlations are computed along ’time_dim’, which usually refers to the
start date dimension. If ’comp_dim’ is given, the correlations are computed only if obs along the
comp_dim dimension are complete between limits[1] and limits[2], i.e., there is no NA between
limits[1] and limits[2]. This option can be activated if the user wants to account only for the
forecasts for which the corresponding observations are available at all lead times.
The confidence interval is computed by the Fisher transformation and the significance level relies
on a one-sided Student’s t-distribution.
The function can calculate the ensemble mean before computing the correlation by specifying
’memb_dim’ and setting ’memb = FALSE’. If the ensemble mean is not calculated, the correlation
will be calculated for each member. If there is only one dataset for exp and obs, you can simply use
cor() to compute the correlation.
Usage
Corr(
exp,
obs,
time_dim = "sdate",
dat_dim = "dataset",
comp_dim = NULL,
limits = NULL,
method = "pearson",
memb_dim = NULL,
memb = TRUE,
pval = TRUE,
conf = TRUE,
sign = FALSE,
alpha = 0.05,
conf.lev = NULL,
ncores = NULL
)
Arguments
exp A named numeric array of experimental data, with at least dimension ’time_dim’.
obs A named numeric array of observational data, same dimensions as parameter
’exp’ except along ’dat_dim’ and ’memb_dim’.
time_dim A character string indicating the name of dimension along which the correlations
are computed. The default value is ’sdate’.
dat_dim A character string indicating the name of dataset (nobs/nexp) dimension. The
default value is ’dataset’. If there is no dataset dimension, set NULL.
comp_dim A character string indicating the name of dimension along which obs is taken
into account only if it is complete. The default value is NULL.
limits A vector of two integers indicating the range along comp_dim to be completed.
The default is c(1, length(comp_dim dimension)).
method A character string indicating the type of correlation: ’pearson’, ’spearman’, or
’kendall’. The default value is ’pearson’.
memb_dim A character string indicating the name of the member dimension. It must be one
dimension in ’exp’ and ’obs’. If there is no member dimension, set NULL. The
default value is NULL.
memb A logical value indicating whether to keep the ’memb_dim’ dimension (TRUE) or
to compute the ensemble mean over ’memb_dim’ (FALSE). Only functional when
’memb_dim’ is not NULL. The default value is TRUE.
pval A logical value indicating whether to return or not the p-value of the test Ho:
Corr = 0. The default value is TRUE.
conf A logical value indicating whether to return or not the confidence intervals. The
default value is TRUE.
sign A logical value indicating whether to retrieve the statistical significance of the
test Ho: Corr = 0 based on ’alpha’. The default value is FALSE.
alpha A numeric indicating the significance level for the statistical significance test.
The default value is 0.05.
conf.lev Deprecated. Use alpha now instead. alpha = 1 - conf.lev.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing the numeric arrays with dimension:
c(nexp, nobs, exp_memb, obs_memb, all other dimensions of exp except time_dim and memb_dim).
nexp is the number of experiments (i.e., ’dat_dim’ in exp), and nobs is the number of observations
(i.e., ’dat_dim’ in obs). If dat_dim is NULL, nexp and nobs are omitted. exp_memb is the number
of members in the experiment (i.e., ’memb_dim’ in exp) and obs_memb is the number of members in
the observation (i.e., ’memb_dim’ in obs). If memb = FALSE, exp_memb and obs_memb are omitted.
$corr The correlation coefficient.
$p.val The p-value. Only present if pval = TRUE.
$conf.lower The lower confidence interval. Only present if conf = TRUE.
$conf.upper The upper confidence interval. Only present if conf = TRUE.
$sign The statistical significance. Only present if sign = TRUE.
Examples
# Case 1: Load sample data as in Load() example:
example(Load)
clim <- Clim(sampleData$mod, sampleData$obs)
ano_exp <- Ano(sampleData$mod, clim$clim_exp)
ano_obs <- Ano(sampleData$obs, clim$clim_obs)
runmean_months <- 12
# Smooth along lead-times
smooth_ano_exp <- Smoothing(ano_exp, runmeanlen = runmean_months)
smooth_ano_obs <- Smoothing(ano_obs, runmeanlen = runmean_months)
required_complete_row <- 3 # Discard start dates which contain any NA lead-times
leadtimes_per_startdate <- 60
corr <- Corr(MeanDims(smooth_ano_exp, 'member'),
MeanDims(smooth_ano_obs, 'member'),
comp_dim = 'ftime',
limits = c(ceiling((runmean_months + 1) / 2),
leadtimes_per_startdate - floor(runmean_months / 2)))
# Case 2: Keep member dimension
corr <- Corr(smooth_ano_exp, smooth_ano_obs, memb_dim = 'member')
# ensemble mean
corr <- Corr(smooth_ano_exp, smooth_ano_obs, memb_dim = 'member', memb = FALSE)
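As a rough illustration of the Fisher-transformation confidence interval mentioned in the Description
(a sketch with an assumed correlation and effective sample size, not the exact internal code):
r <- 0.6; n <- 30; alpha <- 0.05               # hypothetical values
z <- atanh(r)                                  # Fisher z-transform
tanh(z + c(-1, 1) * qnorm(1 - alpha / 2) / sqrt(n - 3))  # approximate CI for r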
CRPS Compute the Continuous Ranked Probability Score
Description
The Continuous Ranked Probability Score (CRPS; Wilks, 2011) is the continuous version of the
Ranked Probability Score (RPS; Wilks, 2011). It is a skill metric to evaluate the full distribution
of probabilistic forecasts. It has a negative orientation (i.e., the higher the forecast quality, the smaller
the CRPS) and it rewards forecasts whose probability mass is concentrated around the observed value. In
case of a deterministic forecast, the CRPS is reduced to the mean absolute error. It has the same
units as the data. The function is based on enscrps_cpp from SpecsVerification. If there is more
than one dataset, CRPS will be computed for each pair of exp and obs data.
Usage
CRPS(
exp,
obs,
time_dim = "sdate",
memb_dim = "member",
dat_dim = NULL,
Fair = FALSE,
ncores = NULL
)
Arguments
exp A named numerical array of the forecast with at least time dimension.
obs A named numerical array of the observation with at least time dimension. The
dimensions must be the same as ’exp’ except ’memb_dim’ and ’dat_dim’.
time_dim A character string indicating the name of the time dimension. The default value
is ’sdate’.
memb_dim A character string indicating the name of the member dimension to compute the
probabilities of the forecast. The default value is ’member’.
dat_dim A character string indicating the name of dataset dimension. The length of this
dimension can be different between ’exp’ and ’obs’. The default value is NULL.
Fair A logical indicating whether to compute the FairCRPS (the potential CRPS that
the forecast would have with an infinite ensemble size). The default value is
FALSE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numerical array of CRPS with dimensions c(nexp, nobs, the rest dimensions of ’exp’ except
’time_dim’ and ’memb_dim’ dimensions). nexp is the number of experiments (i.e., dat_dim in exp),
and nobs is the number of observations (i.e., dat_dim in obs). If dat_dim is NULL, nexp and nobs
are omitted.
References
Wilks, 2011; https://doi.org/10.1016/B978-0-12-385022-5.00008-7
Examples
exp <- array(rnorm(1000), dim = c(lat = 3, lon = 2, member = 10, sdate = 50))
obs <- array(rnorm(1000), dim = c(lat = 3, lon = 2, sdate = 50))
res <- CRPS(exp = exp, obs = obs)
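As an informal sketch (the function itself relies on SpecsVerification), the widely used ensemble
estimator of the CRPS for a single forecast-observation pair can be written as:
# mean absolute error of the members minus half the mean absolute
# pairwise difference between members
crps_one <- function(ens, y) {
  mean(abs(ens - y)) - 0.5 * mean(abs(outer(ens, ens, "-")))
}
crps_one(rnorm(10), 0.3)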
CRPSS Compute the Continuous Ranked Probability Skill Score
Description
The Continuous Ranked Probability Skill Score (CRPSS; Wilks, 2011) is the skill score based on
the Continuous Ranked Probability Score (CRPS; Wilks, 2011). It can be used to assess whether
a forecast presents an improvement or worsening with respect to a reference forecast. The CRPSS
ranges between minus infinite and 1. If the CRPSS is positive, it indicates that the forecast has
higher skill than the reference forecast, while a negative value means that it has a lower skill.
Examples of reference forecasts are the climatological forecast (same probabilities for all categories
for all time steps), persistence, a previous model version, or another model. It is computed as
CRPSS = 1 - CRPS_exp / CRPS_ref. The statistical significance is obtained based on a Random
Walk test at the 95% confidence level (DelSole and Tippett, 2016).
Usage
CRPSS(
exp,
obs,
ref = NULL,
time_dim = "sdate",
memb_dim = "member",
dat_dim = NULL,
Fair = FALSE,
ncores = NULL
)
Arguments
exp A named numerical array of the forecast with at least time dimension.
obs A named numerical array of the observation with at least time dimension. The
dimensions must be the same as ’exp’ except ’memb_dim’ and ’dat_dim’.
ref A named numerical array of the reference forecast data with at least time and
member dimension. The dimensions must be the same as ’exp’ except ’memb_dim’
and ’dat_dim’. If there is only one reference dataset, it should not have a dataset
dimension. If there is a corresponding reference for each experiment, the dataset
dimension must have the same length as in ’exp’. If ’ref’ is NULL, the climatological
forecast is used as the reference forecast. The default value is NULL.
time_dim A character string indicating the name of the time dimension. The default value
is ’sdate’.
memb_dim A character string indicating the name of the member dimension to compute
the probabilities of the forecast and the reference forecast. The default value is
’member’.
dat_dim A character string indicating the name of dataset dimension. The length of this
dimension can be different between ’exp’ and ’obs’. The default value is NULL.
Fair A logical indicating whether to compute the FairCRPSS (the potential CRPSS
that the forecast would have with an infinite ensemble size). The default value
is FALSE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
$crpss A numerical array of the CRPSS with dimensions c(nexp, nobs, the rest dimen-
sions of ’exp’ except ’time_dim’ and ’memb_dim’ dimensions). nexp is the
number of experiments (i.e., dat_dim in exp), and nobs is the number of observations
(i.e., dat_dim in obs). If ’dat_dim’ is NULL, nexp and nobs are omitted.
$sign A logical array of the statistical significance of the CRPSS with the same di-
mensions as $crpss.
References
Wilks, 2011; https://doi.org/10.1016/B978-0-12-385022-5.00008-7
DelSole and Tippett, 2016; https://doi.org/10.1175/MWR-D-15-0218.1
Examples
exp <- array(rnorm(1000), dim = c(lat = 3, lon = 2, member = 10, sdate = 50))
obs <- array(rnorm(1000), dim = c(lat = 3, lon = 2, sdate = 50))
ref <- array(rnorm(1000), dim = c(lat = 3, lon = 2, member = 10, sdate = 50))
res <- CRPSS(exp = exp, obs = obs) ## climatology as reference forecast
res <- CRPSS(exp = exp, obs = obs, ref = ref) ## ref as reference forecast
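A rough cross-check of the definition CRPSS = 1 - CRPS_exp / CRPS_ref, reusing exp, obs and ref
from the example above (the exact aggregation inside CRPSS() may differ slightly):
crps_exp <- CRPS(exp = exp, obs = obs)
crps_ref <- CRPS(exp = ref, obs = obs)
1 - crps_exp / crps_ref   # should be close to CRPSS(exp, obs, ref = ref)$crpss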
DiffCorr Compute the correlation difference and its significance
Description
Compute the correlation difference between two deterministic forecasts. Positive values of the
correlation difference indicate that the forecast is more skillful than the reference forecast, while
negative values mean that the reference forecast is more skillful. The statistical significance of the
correlation differences is computed with a one-sided or two-sided test for equality of dependent
correlation coefficients (Steiger, 1980; Siegert et al., 2017) using effective degrees of freedom to
account for the autocorrelation of the time series (Zwiers and von Storch, 1995).
Usage
DiffCorr(
exp,
obs,
ref,
N.eff = NA,
time_dim = "sdate",
memb_dim = NULL,
method = "pearson",
alpha = NULL,
handle.na = "return.na",
test.type = "two-sided",
ncores = NULL
)
Arguments
exp A named numerical array of the forecast data with at least time dimension.
obs A named numerical array with the observations with at least time dimension.
The dimensions must be the same as "exp" except ’memb_dim’.
ref A named numerical array of the reference forecast data with at least time dimen-
sion. The dimensions must be the same as "exp" except ’memb_dim’.
N.eff Effective sample size to be used in the statistical significance test. It can be NA
(in which case it is computed with s2dv:::.Eno), a numeric (which is used for all
cases), or an array with the same dimensions as "obs" except "time_dim" (for a
particular N.eff to be used for each case). The default value is NA.
time_dim A character string indicating the name of the time dimension. The default value
is ’sdate’.
memb_dim A character string indicating the name of the member dimension to compute the
ensemble mean of the forecast and reference forecast. If it is NULL (default),
the ensemble mean should be provided directly to the function.
method A character string indicating the correlation coefficient to be computed ("pear-
son" or "spearman"). The default value is "pearson".
alpha A numeric of the significance level to be used in the statistical significance test.
If it is a numeric, "sign" will be returned. If NULL, the p-value will be returned
instead. The default value is NULL.
handle.na A character string indicating how to handle missing values. If "return.na", NAs
will be returned for the cases that contain at least one NA in "exp", "ref", or
"obs". If "only.complete.triplets", only the time steps with no missing values in
all "exp", "ref", and "obs" will be used. If "na.fail", an error will arise if any of
"exp", "ref", or "obs" contains any NA. The default value is "return.na".
test.type A character string indicating the type of significance test. It can be "two-sided"
(to assess whether the skill of "exp" and "ref" are significantly different) or "one-
sided" (to assess whether the skill of "exp" is significantly higher than that of
"ref") following Steiger (1980). The default value is "two-sided".
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list with:
$diff.corr A numerical array of the correlation differences with the same dimensions as the
input arrays except "time_dim" (and "memb_dim" if provided).
$sign A logical array of the statistical significance of the correlation differences with
the same dimensions as the input arrays except "time_dim" (and "memb_dim"
if provided). Returned only if "alpha" is a numeric.
$p.val A numeric array of the p-values with the same dimensions as the input arrays
except "time_dim" (and "memb_dim" if provided). Returned only if "alpha" is
NULL.
References
Steiger, 1980; https://content.apa.org/doi/10.1037/0033-2909.87.2.245
Siegert et al., 2017; https://doi.org/10.1175/MWR-D-16-0037.1
Zwiers and von Storch, 1995; https://doi.org/10.1175/1520-0442(1995)008<0336:TSCIAI>2.0.CO;2
Examples
exp <- array(rnorm(1000), dim = c(lat = 3, lon = 2, member = 10, sdate = 50))
obs <- array(rnorm(1000), dim = c(lat = 3, lon = 2, sdate = 50))
ref <- array(rnorm(1000), dim = c(lat = 3, lon = 2, member = 10, sdate = 50))
res_two.sided_sign <- DiffCorr(exp, obs, ref, memb_dim = 'member',
test.type = 'two-sided', alpha = 0.05)
res_one.sided_pval <- DiffCorr(exp, obs, ref, memb_dim = 'member',
test.type = 'one-sided', alpha = NULL)
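As a hand check of what $diff.corr represents, reusing exp, obs and ref from the example above
and ignoring the significance test, the ensemble-mean correlation difference at one grid point can be
computed directly:
exp_em <- MeanDims(exp, 'member')   # ensemble means along 'member'
ref_em <- MeanDims(ref, 'member')
cor(exp_em[1, 1, ], obs[1, 1, ]) - cor(ref_em[1, 1, ], obs[1, 1, ])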
Eno Compute effective sample size with classical method
Description
Compute the number of effective samples along one dimension of an array. This effective number
of independent observations can be used in statistical/inference tests.
The calculation is based on the eno function from rclim.txt by <NAME>.
Usage
Eno(data, time_dim = "sdate", na.action = na.pass, ncores = NULL)
Arguments
data A numeric array with named dimensions.
time_dim A character string indicating the dimension along which to compute the effective
sample size. The default value is ’sdate’.
na.action A function. It can be na.pass (missing values are allowed) or na.fail (no missing
values are allowed). See details in stats::acf(). The default value is na.pass.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
An array with the same dimension as parameter ’data’ except the time_dim dimension, which is
removed after the computation. The array indicates the number of effective samples along time_dim.
Examples
set.seed(1)
data <- array(rnorm(800), dim = c(dataset = 1, member = 2, sdate = 4,
ftime = 4, lat = 10, lon = 10))
na <- floor(runif(40, min = 1, max = 800))
data[na] <- NA
res <- Eno(data)
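For intuition only: for a pure AR(1) series with lag-1 autocorrelation r, a common rule of thumb
(not necessarily the exact estimator used by Eno()) is that the effective sample size is roughly
n * (1 - r) / (1 + r):
n <- 100; r <- 0.6   # hypothetical length and lag-1 autocorrelation
n * (1 - r) / (1 + r)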
EOF Area-weighted empirical orthogonal function analysis using SVD
Description
Perform an area-weighted EOF analysis using singular value decomposition (SVD) based on a
covariance matrix or a correlation matrix if parameter ’corr’ is set to TRUE.
Usage
EOF(
ano,
lat,
lon,
time_dim = "sdate",
space_dim = c("lat", "lon"),
neofs = 15,
corr = FALSE,
ncores = NULL
)
Arguments
ano A numerical array of anomalies with named dimensions to calculate EOF. The
dimensions must have at least ’time_dim’ and ’space_dim’. NAs could exist but
it should be consistent along time_dim. That is, if one grid point has NAs, all
the time steps at this point should be NAs.
lat A vector of the latitudes of ’ano’.
lon A vector of the longitudes of ’ano’.
time_dim A character string indicating the name of the time dimension of ’ano’. The
default value is ’sdate’.
space_dim A vector of two character strings. The first is the dimension name of latitude of
’ano’ and the second is the dimension name of longitude of ’ano’. The default
value is c(’lat’, ’lon’).
neofs A positive integer of the modes to be kept. The default value is 15. If time
length or the product of the length of space_dim is smaller than neofs, neofs
will be changed to the minimum of the three values.
corr A logical value indicating whether to base on a correlation (TRUE) or on a
covariance matrix (FALSE). The default value is FALSE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing:
EOFs An array of EOF patterns normalized to 1 (unitless) with dimensions (number
of modes, rest of the dimensions of ’ano’ except ’time_dim’). Multiplying EOFs
by PCs gives the original reconstructed field.
PCs An array of principal components with the units of the original field to the power
of 2, with dimensions (time_dim, number of modes, rest of the dimensions of
’ano’ except ’space_dim’). ’PCs’ contains already the percentage of explained
variance so, to reconstruct the original field it’s only needed to multiply ’EOFs’
by ’PCs’.
var An array of the percentage (%) of variance explained by each mode (number of modes). The
dimensions are (number of modes, rest of the dimensions of ’ano’ except ’time_dim’
and ’space_dim’).
mask An array of the mask with dimensions (space_dim, rest of the dimensions of
’ano’ except ’time_dim’). It is made from ’ano’, 1 for the positions that ’ano’ has
value and NA for the positions that ’ano’ has NA. It is used to replace NAs with
0s for EOF calculation and mask the result with NAs again after the calculation.
wght An array of the area weighting with dimensions ’space_dim’. It is calculated by
cosine of ’lat’ and used to compute the fraction of variance explained by each
EOFs.
tot_var A number or a numeric array of the total variance explained by all the modes.
The dimensions are same as ’ano’ except ’time_dim’ and ’space_dim’.
See Also
ProjectField, NAO, PlotBoxWhisker
Examples
# This example computes the EOFs along forecast horizons and plots the one
# that explains the greatest amount of variability. The example data has low
# resolution so the result may not be explanatory, but it displays how to
# use this function.
ano <- Ano_CrossValid(sampleData$mod, sampleData$obs)
tmp <- MeanDims(ano$exp, c('dataset', 'member'))
ano <- tmp[1, , ,]
names(dim(ano)) <- names(dim(tmp))[-2]
eof <- EOF(ano, sampleData$lat, sampleData$lon)
## Not run:
PlotEquiMap(eof$EOFs[1, , ], sampleData$lon, sampleData$lat)
## End(Not run)
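As a quick orientation on the returned object, reusing ’eof’ from the example above, its components
and the variance explained by the leading mode can be inspected directly:
names(eof)    # list components as documented in the Value section
eof$var[1]    # percentage of variance explained by the first mode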
EuroAtlanticTC Teleconnection indices in European Atlantic Ocean region
Description
Calculate the four main teleconnection indices in European Atlantic Ocean region: North Atlantic
oscillation (NAO), East Atlantic Pattern (EA), East Atlantic/Western Russia (EAWR), and Scandi-
navian pattern (SCA). The function REOF() is used for the calculation, and the first four modes are
returned.
Usage
EuroAtlanticTC(
ano,
lat,
lon,
ntrunc = 30,
time_dim = "sdate",
space_dim = c("lat", "lon"),
corr = FALSE,
ncores = NULL
)
Arguments
ano A numerical array of anomalies with named dimensions to calculate REOF then
the four teleconnections. The dimensions must have at least ’time_dim’ and
’space_dim’, and the data should cover the European Atlantic Ocean area (20N-
80N, 90W-60E).
lat A vector of the latitudes of ’ano’. It should be 20N-80N.
lon A vector of the longitudes of ’ano’. It should be 90W-60E.
ntrunc A positive integer of the modes to be kept. The default value is 30. If time length
or the product of latitude length and longitude length is less than ntrunc, ntrunc
is equal to the minimum of the three values.
time_dim A character string indicating the name of the time dimension of ’ano’. The
default value is ’sdate’.
space_dim A vector of two character strings. The first is the dimension name of latitude of
’ano’ and the second is the dimension name of longitude of ’ano’. The default
value is c(’lat’, ’lon’).
corr A logical value indicating whether to base on a correlation (TRUE) or on a
covariance matrix (FALSE). The default value is FALSE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing:
patterns An array of the first four REOF patterns normalized to 1 (unitless) with di-
mensions (modes = 4, the rest of the dimensions of ’ano’ except ’time_dim’).
The modes represent NAO, EA, EAWR, and SCA, of which the order and sign
changes depending on the dataset and period employed, so manual reordering
may be needed. Multiplying ’patterns’ by ’indices’ gives the original recon-
structed field.
indices An array of the first four principal components with the units of the original
field to the power of 2, with dimensions (time_dim, modes = 4, the rest of the
dimensions of ’ano’ except ’space_dim’).
var An array of the percentage (%) of variance explained by each mode. The dimensions are
(modes = ntrunc, the rest of the dimensions of ’ano’ except ’time_dim’ and
’space_dim’).
wght An array of the area weighting with dimensions ’space_dim’. It is calculated by
the square root of cosine of ’lat’ and used to compute the fraction of variance
explained by each REOFs.
See Also
REOF NAO
Examples
# Use synthetic data
set.seed(1)
dat <- array(rnorm(800), dim = c(dat = 2, sdate = 5, lat = 8, lon = 15))
lat <- seq(10, 90, length.out = 8)
lon <- seq(-100, 70, length.out = 15)
res <- EuroAtlanticTC(dat, lat = lat, lon = lon)
Filter Filter frequency peaks from an array
Description
Filter out the selected frequency from a time series. The filtering is performed by dichotomy,
seeking for a frequency around the parameter ’freq’ and the phase that maximizes the signal to
subtract from the time series. The maximization of the signal to subtract relies on a minimization
of the mean square differences between the time series (’data’) and the cosine of the specified
frequency and phase.
Usage
Filter(data, freq, time_dim = "ftime", ncores = NULL)
Arguments
data A numeric vector or array of the data to be filtered. If it’s a vector, it should be
a time series. If it’s an array, the dimensions must have at least ’time_dim’.
freq A numeric value indicating the frequency to be filtered out.
time_dim A character string indicating the dimension along which to compute the filtering.
The default value is ’ftime’.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numeric vector or array of the filtered data with the dimensions the same as ’data’.
Examples
# Load sample data as in Load() example:
example(Load)
ensmod <- MeanDims(sampleData$mod, 2)
spectrum <- Spectrum(ensmod)
for (jsdate in 1:dim(spectrum)['sdate']) {
for (jlen in 1:dim(spectrum)['ftime']) {
if (spectrum[jlen, 2, 1, jsdate] > spectrum[jlen, 3, 1, jsdate]) {
ensmod[1, jsdate, ] <- Filter(ensmod[1, jsdate, ], spectrum[jlen, 1, 1, jsdate])
}
}
}
PlotAno(InsertDim(ensmod, 2, 1), sdates = startDates)
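A more minimal synthetic check, assuming ’freq’ is expressed in cycles per time step (an assumption;
see the Spectrum()-based example above for the intended workflow):
# 12-step periodic signal plus noise, filtered at frequency 1/12
x <- sin(2 * pi * (1:120) / 12) + rnorm(120, sd = 0.1)
x_filtered <- Filter(x, freq = 1 / 12)
var(x); var(x_filtered)   # if the assumption holds, most of the periodic variance is removed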
GMST Compute the Global Mean Surface Temperature (GMST) anomalies
Description
The Global Mean Surface Temperature (GMST) anomalies are computed as the weighted-averaged
surface air temperature anomalies over land and sea surface temperature anomalies over the ocean.
If different members and/or datasets are provided, the climatology (used to calculate the anomalies)
is computed individually for all of them.
Usage
GMST(
data_tas,
data_tos,
data_lats,
data_lons,
mask_sea_land,
sea_value,
type,
mask = NULL,
lat_dim = "lat",
lon_dim = "lon",
monini = 11,
fmonth_dim = "fmonth",
sdate_dim = "sdate",
indices_for_clim = NULL,
year_dim = "year",
month_dim = "month",
na.rm = TRUE,
ncores = NULL
)
Arguments
data_tas A numerical array with the surface air temperature data to be used for the index
computation with, at least, the dimensions: 1) latitude, longitude, start date and
forecast month (in case of decadal predictions), 2) latitude, longitude, year and
month (in case of historical simulations or observations). This data has to be
provided, at least, over the whole region needed to compute the index. The
dimensions must be identical to those of data_tos.
data_tos A numerical array with the sea surface temperature data to be used for the index
computation with, at least, the dimensions: 1) latitude, longitude, start date and
forecast month (in case of decadal predictions), 2) latitude, longitude, year and
month (in case of historical simulations or observations). This data has to be
provided, at least, over the whole region needed to compute the index. The
dimensions must be identical to those of data_tas.
data_lats A numeric vector indicating the latitudes of the data.
data_lons A numeric vector indicating the longitudes of the data.
mask_sea_land An array with dimensions [lat_dim = data_lats, lon_dim = data_lons] for blend-
ing ’data_tas’ and ’data_tos’.
sea_value A numeric value indicating the sea grid points in ’mask_sea_land’.
type A character string indicating the type of data (’dcpp’ for decadal predictions,
’hist’ for historical simulations, or ’obs’ for observations or reanalyses).
mask An array of a mask (with 0’s in the grid points that have to be masked) or NULL
(i.e., no mask is used). This parameter allows removing the values over land
in case the dataset is a combination of surface air temperature over land and
sea surface temperature over the ocean. Also, it can be used to mask those grid
points that are missing in the observational dataset for a fair comparison between
the forecast system and the reference dataset. The default value is NULL.
lat_dim A character string of the name of the latitude dimension. The default value is
’lat’.
lon_dim A character string of the name of the longitude dimension. The default value is
’lon’.
monini An integer indicating the month in which the forecast system is initialized. Only
used when parameter ’type’ is ’dcpp’. The default value is 11, i.e., initialized in
November.
fmonth_dim A character string indicating the name of the forecast month dimension. Only
used if parameter ’type’ is ’dcpp’. The default value is ’fmonth’.
sdate_dim A character string indicating the name of the start date dimension. Only used if
parameter ’type’ is ’dcpp’. The default value is ’sdate’.
indices_for_clim
A numeric vector of the indices of the years to compute the climatology for cal-
culating the anomalies, or NULL so the climatology is calculated over the whole
period. If the data are already anomalies, set it to FALSE. The default value is
NULL.
If parameter ’type’ is ’dcpp’, ’indices_for_clim’ must be relative to the
first forecast year, and the climatology is automatically computed over the com-
mon calendar period for the different forecast years.
year_dim A character string indicating the name of the year dimension. The default value
is ’year’. Only used if parameter ’type’ is ’hist’ or ’obs’.
month_dim A character string indicating the name of the month dimension. The default
value is ’month’. Only used if parameter ’type’ is ’hist’ or ’obs’.
na.rm A logical value indicating whether to remove NA values. The default value is
TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numerical array with the GMST anomalies with the same dimensions as data_tas except the
lat_dim, lon_dim and fmonth_dim (month_dim) in case of decadal predictions (historical simula-
tions or observations). In case of decadal predictions, a new dimension ’fyear’ is added.
Examples
## Observations or reanalyses
obs_tas <- array(1:100, dim = c(year = 5, lat = 19, lon = 37, month = 12))
obs_tos <- array(2:101, dim = c(year = 5, lat = 19, lon = 37, month = 12))
mask_sea_land <- array(c(1,0,1), dim = c(lat = 19, lon = 37))
sea_value <- 1
lat <- seq(-90, 90, 10)
lon <- seq(0, 360, 10)
index_obs <- GMST(data_tas = obs_tas, data_tos = obs_tos, data_lats = lat,
data_lons = lon, type = 'obs',
mask_sea_land = mask_sea_land, sea_value = sea_value)
## Historical simulations
hist_tas <- array(1:100, dim = c(year = 5, lat = 19, lon = 37, month = 12, member = 5))
hist_tos <- array(2:101, dim = c(year = 5, lat = 19, lon = 37, month = 12, member = 5))
mask_sea_land <- array(c(1,0,1), dim = c(lat = 19, lon = 37))
sea_value <- 1
lat <- seq(-90, 90, 10)
lon <- seq(0, 360, 10)
index_hist <- GMST(data_tas = hist_tas, data_tos = hist_tos, data_lats = lat,
data_lons = lon, type = 'hist', mask_sea_land = mask_sea_land,
sea_value = sea_value)
## Decadal predictions
dcpp_tas <- array(1:100, dim = c(sdate = 5, lat = 19, lon = 37, fmonth = 24, member = 5))
dcpp_tos <- array(2:101, dim = c(sdate = 5, lat = 19, lon = 37, fmonth = 24, member = 5))
mask_sea_land <- array(c(1,0,1), dim = c(lat = 19, lon = 37))
sea_value <- 1
lat <- seq(-90, 90, 10)
lon <- seq(0, 360, 10)
index_dcpp <- GMST(data_tas = dcpp_tas, data_tos = dcpp_tos, data_lats = lat,
data_lons = lon, type = 'dcpp', monini = 1, mask_sea_land = mask_sea_land,
sea_value = sea_value)
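Conceptually, the land-sea blending keeps ’data_tos’ where ’mask_sea_land’ equals ’sea_value’ and
’data_tas’ elsewhere. A sketch of the idea, reusing the objects from the observational example above
(not the exact internal code):
blended_point <- ifelse(mask_sea_land == sea_value,
                        obs_tos[1, , , 1],   # sea points taken from tos
                        obs_tas[1, , , 1])   # land points taken from tas
dim(blended_point)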
GSAT Compute the Global Surface Air Temperature (GSAT) anomalies
Description
The Global Surface Air Temperature (GSAT) anomalies are computed as the weighted-averaged
surface air temperature anomalies over the global region. If different members and/or datasets are
provided, the climatology (used to calculate the anomalies) is computed individually for all of them.
Usage
GSAT(
data,
data_lats,
data_lons,
type,
lat_dim = "lat",
lon_dim = "lon",
mask = NULL,
monini = 11,
fmonth_dim = "fmonth",
sdate_dim = "sdate",
indices_for_clim = NULL,
year_dim = "year",
month_dim = "month",
na.rm = TRUE,
ncores = NULL
)
Arguments
data A numerical array to be used for the index computation with, at least, the dimen-
sions: 1) latitude, longitude, start date and forecast month (in case of decadal
predictions), 2) latitude, longitude, year and month (in case of historical simu-
lations or observations). This data has to be provided, at least, over the whole
region needed to compute the index.
data_lats A numeric vector indicating the latitudes of the data.
data_lons A numeric vector indicating the longitudes of the data.
type A character string indicating the type of data (’dcpp’ for decadal predictions,
’hist’ for historical simulations, or ’obs’ for observations or reanalyses).
lat_dim A character string of the name of the latitude dimension. The default value is
’lat’.
lon_dim A character string of the name of the longitude dimension. The default value is
’lon’.
mask An array of a mask (with 0’s in the grid points that have to be masked) or NULL
(i.e., no mask is used). This parameter allows removing the values over land
in case the dataset is a combination of surface air temperature over land and
sea surface temperature over the ocean. Also, it can be used to mask those grid
points that are missing in the observational dataset for a fair comparison between
the forecast system and the reference dataset. The default value is NULL.
monini An integer indicating the month in which the forecast system is initialized. Only
used when parameter ’type’ is ’dcpp’. The default value is 11, i.e., initialized in
November.
fmonth_dim A character string indicating the name of the forecast month dimension. Only
used if parameter ’type’ is ’dcpp’. The default value is ’fmonth’.
sdate_dim A character string indicating the name of the start date dimension. Only used if
parameter ’type’ is ’dcpp’. The default value is ’sdate’.
indices_for_clim
A numeric vector of the indices of the years to compute the climatology for cal-
culating the anomalies, or NULL so the climatology is calculated over the whole
period. If the data are already anomalies, set it to FALSE. The default value is
NULL.
If parameter ’type’ is ’dcpp’, ’indices_for_clim’ must be relative to the
first forecast year, and the climatology is automatically computed over the com-
mon calendar period for the different forecast years.
year_dim A character string indicating the name of the year dimension. The default value
is ’year’. Only used if parameter ’type’ is ’hist’ or ’obs’.
month_dim A character string indicating the name of the month dimension. The default
value is ’month’. Only used if parameter ’type’ is ’hist’ or ’obs’.
na.rm A logical value indicating whether to remove NA values. The default value is
TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numerical array with the GSAT anomalies with the same dimensions as data except the lat_dim,
lon_dim and fmonth_dim (month_dim) in case of decadal predictions (historical simulations or
observations). In case of decadal predictions, a new dimension ’fyear’ is added.
Examples
## Observations or reanalyses
obs <- array(1:100, dim = c(year = 5, lat = 19, lon = 37, month = 12))
lat <- seq(-90, 90, 10)
lon <- seq(0, 360, 10)
index_obs <- GSAT(data = obs, data_lats = lat, data_lons = lon, type = 'obs')
## Historical simulations
hist <- array(1:100, dim = c(year = 5, lat = 19, lon = 37, month = 12, member = 5))
lat <- seq(-90, 90, 10)
lon <- seq(0, 360, 10)
index_hist <- GSAT(data = hist, data_lats = lat, data_lons = lon, type = 'hist')
## Decadal predictions
dcpp <- array(1:100, dim = c(sdate = 5, lat = 19, lon = 37, fmonth = 24, member = 5))
lat <- seq(-90, 90, 10)
lon <- seq(0, 360, 10)
index_dcpp <- GSAT(data = dcpp, data_lats = lat, data_lons = lon, type = 'dcpp', monini = 1)
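As a sketch of the cosine-latitude area weighting behind the weighted average, reusing ’obs’, ’lat’
and ’lon’ from the observational example above (the function itself works on anomalies and also
handles masks, members and datasets):
field <- obs[1, , , 1]                      # one year, one month: lat x lon
w <- cos(lat * pi / 180)                    # latitude weights
sum(field * w) / (sum(w) * length(lon))     # area-weighted global mean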
Histo2Hindcast Chunk long simulations for comparison with hindcasts
Description
Reorganize a long run (historical typically) with only one start date into chunks corresponding to a
set of start dates. The time frequency of the data should be monthly.
Usage
Histo2Hindcast(
data,
sdatesin,
sdatesout,
nleadtimesout,
sdate_dim = "sdate",
ftime_dim = "ftime",
ncores = NULL
)
Arguments
data A numeric array of model or observational data with dimensions at least sdate_dim
and ftime_dim.
sdatesin A character string of the start date of ’data’. The format should be ’YYYYMMDD’
or ’YYYYMM’.
sdatesout A vector of character string indicating the expected start dates of the output. The
format should be ’YYYYMMDD’ or ’YYYYMM’.
nleadtimesout A positive integer indicating the length of leadtimes of the output.
sdate_dim A character string indicating the name of the start date dimension of ’data’. The
default value is ’sdate’.
ftime_dim A character string indicating the name of the lead time dimension of ’data’. The
default value is ’ftime’.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numeric array with the same dimensions as data, except that the length of sdate_dim equals the
length of ’sdatesout’ and the length of ftime_dim equals ’nleadtimesout’.
Examples
sdates_out <- c('19901101', '19911101', '19921101', '19931101', '19941101')
leadtimes_per_startdate <- 12
exp_data <- Histo2Hindcast(sampleData$mod, startDates,
sdates_out, leadtimes_per_startdate)
obs_data <- Histo2Hindcast(sampleData$obs, startDates,
sdates_out, leadtimes_per_startdate)
## Not run:
exp_data <- Reorder(exp_data, c(3, 4, 1, 2))
obs_data <- Reorder(obs_data, c(3, 4, 1, 2))
PlotAno(exp_data, obs_data, sdates_out,
toptitle = paste('Anomalies reorganized into shorter chunks'),
ytitle = 'K', fileout = NULL)
## End(Not run)
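A minimal synthetic sketch, assuming monthly data: a single 60-month run starting in January 1990
is rechunked into three start dates of 12 lead times each (array names and values are hypothetical):
dat <- array(rnorm(60), dim = c(sdate = 1, ftime = 60))
chunked <- Histo2Hindcast(dat, sdatesin = '19900101',
                          sdatesout = c('19900101', '19910101', '19920101'),
                          nleadtimesout = 12)
dim(chunked)   # expected: sdate = 3, ftime = 12 (see the Value section)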
InsertDim Add a named dimension to an array
Description
Insert an extra dimension into an array at position ’posdim’ with length ’lendim’. The array repeats
along the new dimension.
Usage
InsertDim(data, posdim, lendim, name = NULL, ncores = NULL)
Arguments
data An array to which the additional dimension is to be added.
posdim An integer indicating the position of the new dimension.
lendim An integer indicating the length of the new dimension.
name A character string indicating the name for the new dimension. The default value
is NULL.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL. This parameter is deprecated now.
Value
An array as parameter ’data’ but with the added named dimension.
Examples
a <- array(rnorm(15), dim = c(a = 3, b = 1, c = 5, d = 1))
res <- InsertDim(InsertDim(a, posdim = 2, lendim = 1, name = 'e'), 4, c(f = 2))
dim(res)
LeapYear Checks Whether A Year Is Leap Year
Description
This function tells whether a year is a leap year or not.
Usage
LeapYear(year)
Arguments
year A numeric value indicating the year in the Gregorian calendar.
Value
Boolean telling whether the year is a leap year or not.
Examples
print(LeapYear(1990))
print(LeapYear(1991))
print(LeapYear(1992))
print(LeapYear(1993))
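For reference, the Gregorian rule that the function presumably encodes can be written compactly as:
is_leap <- function(y) (y %% 4 == 0 & y %% 100 != 0) | (y %% 400 == 0)
is_leap(c(1990, 1991, 1992, 2000, 2100))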
Load Loads Experimental And Observational Data
Description
This function loads monthly or daily data from a set of specified experimental datasets together with
data that date-corresponds from a set of specified observational datasets. See parameters ’storefreq’,
’sampleperiod’, ’exp’ and ’obs’.
A set of starting dates is specified through the parameter ’sdates’. Data of each starting date is
loaded for each model. Load() arranges the data in two arrays with a similar format both with the
following dimensions:
1. The number of experimental datasets determined by the user through the argument ’exp’ (for
the experimental data array) or the number of observational datasets available for validation
(for the observational array) determined as well by the user through the argument ’obs’.
2. The greatest number of members across all experiments (in the experimental data array) or
across all observational datasets (in the observational data array).
3. The number of starting dates determined by the user through the ’sdates’ argument.
4. The greatest number of lead-times.
5. The number of latitudes of the selected zone.
6. The number of longitudes of the selected zone.
Dimensions 5 and 6 are optional and their presence depends on the type of the specified variable
(global mean or 2-dimensional) and on the selected output type (area averaged time series, latitude
averaged time series, longitude averaged time series or 2-dimensional time series).
In the case of loading an area average the dimensions of the arrays will be only the first 4.
Only a specified variable is loaded from each experiment at each starting date. See parameter
’var’.
Afterwards, observational data that matches every starting date and lead-time of every experimental
dataset is fetched in the file system (so, if two predictions at two different start dates overlap, some
observational values will be loaded and kept in memory more than once).
If no data is found in the file system for an experimental or observational array point it is filled with
an NA value.
If the specified output is 2-dimensional or latitude- or longitude-averaged time series all the data is
interpolated into a common grid. If the specified output type is area averaged time series the data
is averaged on the individual grid of each dataset but can also be averaged after interpolating into a
common grid. See parameters ’grid’ and ’method’.
Once the two arrays are filled by calling this function, other functions in the s2dv package that
receive as inputs data formatted in this data structure can be executed (e.g: Clim() to compute
climatologies, Ano() to compute anomalies, ...).
Load() has many additional parameters to disable values and trim dimensions of selected variable,
even masks can be applied to 2-dimensional variables. See parameters ’nmember’, ’nmemberobs’,
’nleadtime’, ’leadtimemin’, ’leadtimemax’, ’sampleperiod’, ’lonmin’, ’lonmax’, ’latmin’, ’latmax’,
’maskmod’, ’maskobs’, ’varmin’, ’varmax’.
The parameters ’exp’ and ’obs’ can take various forms. The most direct form is a list of lists,
where each sub-list has the component ’path’ associated to a character string with a pattern of the
path to the files of a dataset to be loaded. These patterns can contain wildcards and tags that will
be replaced automatically by Load() with the specified starting dates, member numbers, variable
name, etc.
See parameter ’exp’ or ’obs’ for details.
Only NetCDF files are supported. OPeNDAP URLs to NetCDF files are also supported.
Load() can load 2-dimensional or global mean variables in any of the following formats:
• experiments:
– file per ensemble per starting date (YYYY, MM and DD somewhere in the path)
– file per member per starting date (YYYY, MM, DD and MemberNumber somewhere in
the path. Ensemble experiments with different numbers of members can be loaded in a
single Load() call.)
(YYYY, MM and DD specify the starting dates of the predictions)
• observations:
– file per ensemble per month (YYYY and MM somewhere in the path)
– file per member per month (YYYY, MM and MemberNumber somewhere in the path,
obs with different numbers of members supported)
– file per dataset (No constraints in the path but the time axes in the file have to be properly
defined)
(YYYY and MM correspond to the actual month data in the file)
In all the formats the data can be stored in a daily or monthly frequency, or a multiple of these (see
parameters ’storefreq’ and ’sampleperiod’).
All the data files must contain the target variable defined over time and potentially over members,
latitude and longitude dimensions in any order, time being the record dimension.
In the case of a two-dimensional variable, the variables longitude and latitude must be defined in-
side the data file too and must have the same names as the dimension for longitudes and latitudes
respectively.
The names of these dimensions (and longitude and latitude variables) and the name for the members
dimension are expected to be ’longitude’, ’latitude’ and ’ensemble’ respectively. However, these
names can be adjusted with the parameter ’dimnames’ or can be configured in the configuration file
(read below in parameters ’exp’ and ’obs’, or see ?ConfigFileOpen for more information).
All the data files are expected to have numeric values representable with 32 bits. Be aware when
choosing the fill values or infinite values in the datasets to load.
The Load() function returns a named list following a structure similar to that used in the package
’downscaleR’.
The components are the following:
• ’mod’ is the array that contains the experimental data. It has the attribute ’dimensions’ asso-
ciated to a vector of strings with the labels of each dimension of the array, in order.
• ’obs’ is the array that contains the observational data. It has the attribute ’dimensions’ associ-
ated to a vector of strings with the labels of each dimension of the array, in order.
• ’lat’ and ’lon’ are the latitudes and longitudes of the grid into which the data is interpolated (0
if the loaded variable is a global mean or the output is an area average).
Both have the attribute ’cdo_grid_des’ associated with a character string with the name of the
common grid of the data, following the CDO naming conventions for grids.
The attribute ’projection’ is kept for compatibility with ’downscaleR’.
• ’Variable’ has the following components:
– ’varName’, with the short name of the loaded variable as specified in the parameter ’var’.
– ’level’, with information on the pressure level of the variable. It is kept as NULL for now.
And the following attributes:
– ’is_standard’, kept for compatibility with ’downscaleR’, tells if a dataset has been ho-
mogenized to standards with ’downscaleR’ catalogs.
– ’units’, a character string with the units of measure of the variable, as found in the source
files.
– ’longname’, a character string with the long name of the variable, as found in the source
files.
– ’daily_agg_cellfun’, ’monthly_agg_cellfun’, ’verification_time’, kept for compatibility
with ’downscaleR’.
• ’Datasets’ has the following components:
– ’exp’, a named list where the names are the identifying character strings of each experi-
ment in ’exp’, each associated to a list with the following components:
* ’members’, a list with the names of the members of the dataset.
* ’source’, a path or URL to the source of the dataset.
– ’obs’, similar to ’exp’ but for observational datasets.
• ’Dates’, with the following components:
– ’start’, an array of dimensions (sdate, time) with the POSIX initial date of each forecast
time of each starting date.
– ’end’, an array of dimensions (sdate, time) with the POSIX final date of each forecast
time of each starting date.
• ’InitializationDates’, a vector of starting dates as specified in ’sdates’, in POSIX format.
• ’when’, a time stamp of the date the Load() call to obtain the data was issued.
• ’source_files’, a vector of character strings with complete paths to all the found files involved
in the Load() call.
• ’not_found_files’, a vector of character strings with complete paths to not found files involved
in the Load() call.
Usage
Load(
var,
exp = NULL,
obs = NULL,
sdates,
nmember = NULL,
nmemberobs = NULL,
nleadtime = NULL,
leadtimemin = 1,
leadtimemax = NULL,
storefreq = "monthly",
sampleperiod = 1,
lonmin = 0,
lonmax = 360,
latmin = -90,
latmax = 90,
output = "areave",
method = "conservative",
grid = NULL,
maskmod = vector("list", 15),
maskobs = vector("list", 15),
configfile = NULL,
varmin = NULL,
varmax = NULL,
silent = FALSE,
nprocs = NULL,
dimnames = NULL,
remapcells = 2,
path_glob_permissive = "partial"
)
Arguments
var Short name of the variable to load. It should coincide with the variable name
inside the data files.
E.g.: var = 'tos', var = 'tas', var = 'prlr'.
In some cases, though, the path to the files contains twice or more times the short
name of the variable but the actual name of the variable inside the data files is
different. In these cases it may be convenient to provide var with the name that
appears in the file paths (see details on parameters exp and obs).
exp Parameter to specify which experimental datasets to load data from.
It can take two formats: a list of lists or a vector of character strings. Each for-
mat will trigger a different mechanism of locating the requested datasets.
The first format is adequate for data you will only load once or occasionally. The second
format is aimed at avoiding repeatedly providing the information on a certain dataset, but it
is more complex to use.
IMPORTANT: Place first the experiment with the largest number of members
and, if possible, with the largest number of leadtimes. If not possible, the argu-
ments ’nmember’ and/or ’nleadtime’ should be filled to not miss any member or
leadtime.
If ’exp’ is not specified or set to NULL, observational data is loaded for each
start-date as far as ’leadtimemax’. If ’leadtimemax’ is not provided, Load()
will retrieve data of a period of time as long as the time period between the first
specified start date and the current date.
List of lists:
A list of lists where each sub-list contains information on the location and for-
mat of the data files of the dataset to load.
Each sub-list can have the following components:
• ’name’: A character string to identify the dataset. Optional.
• ’path’: A character string with the pattern of the path to the files of the
dataset. This pattern can be built up making use of some special tags that
Load() will replace with the appropriate values to find the dataset files.
The allowed tags are $START_DATE$, $YEAR$, $MONTH$, $DAY$,
$MEMBER_NUMBER$, $STORE_FREQ$, $VAR_NAME$, $EXP_NAME$
(only for experimental datasets), $OBS_NAME$ (only for observational
datasets) and $SUFFIX$
Example: /path/to/$EXP_NAME$/postprocessed/$VAR_NAME$/
$VAR_NAME$_$START_DATE$.nc
If ’path’ is not specified and ’name’ is specified, the dataset information will
be fetched with the same mechanism as when using the vector of character
strings (read below).
• ’nc_var_name’: Character string with the actual variable name to look for
inside the dataset files. Optional. Takes, by default, the same value as the
parameter ’var’.
• ’suffix’: Wildcard character string that can be used to build the ’path’ of the
dataset. It can be accessed with the tag $SUFFIX$. Optional. Takes '' (an empty string) by default.
• ’var_min’: Important: to be provided as a character string. Minimum value below which read values will be deactivated to NA. Optional. No deactivation is performed by default.
• ’var_max’: Important: to be provided as a character string. Maximum value above which read values will be deactivated to NA. Optional. No deactivation is performed by default.
The tag $START_DATES$ will be replaced with all the starting dates specified
in ’sdates’. $YEAR$, $MONTH$ and $DAY$ will take a value for each iter-
ation over ’sdates’; these are simply the same as $START_DATE$ but split into parts.
$MEMBER_NUMBER$ will be replaced by a character string with each mem-
ber number, from 1 to the value specified in the parameter ’nmember’ (in exper-
imental datasets) or in ’nmemberobs’ (in observational datasets). It will range
from ’01’ to ’N’ or ’0N’ if N < 10.
$STORE_FREQ$ will take the value specified in the parameter ’storefreq’ (’monthly’
or ’daily’).
$VAR_NAME$ will take the value specified in the parameter ’var’.
$EXP_NAME$ will take the value specified in each component of the parame-
ter ’exp’ in the sub-component ’name’.
$OBS_NAME$ will take the value specified in each component of the parame-
ter ’obs’ in the sub-component ’name’.
$SUFFIX$ will take the value specified in each component of the parameters
’exp’ and ’obs’ in the sub-component ’suffix’.
Example:
list(
list(
name = 'experimentA',
path = file.path('/path/to/$DATASET_NAME$/$STORE_FREQ$',
'$VAR_NAME$$SUFFIX$',
'$VAR_NAME$_$START_DATE$.nc'),
nc_var_name = '$VAR_NAME$',
suffix = '_3hourly',
var_min = '-1e19',
var_max = '1e19'
)
)
This will make Load() look for, for instance, the following paths, if ’sdates’ is
c(’19901101’, ’19951101’, ’20001101’):
/path/to/experimentA/monthly_mean/tas_3hourly/tas_19901101.nc
/path/to/experimentA/monthly_mean/tas_3hourly/tas_19951101.nc
/path/to/experimentA/monthly_mean/tas_3hourly/tas_20001101.nc
Vector of character strings: To avoid specifying constantly the same informa-
tion to load the same datasets, a vector with only the names of the datasets to
load can be specified.
Load() will then look for the information in a configuration file whose path
must be specified in the parameter ’configfile’.
Check ?ConfigFileCreate, ConfigFileOpen, ConfigEditEntry & co. to
learn how to create a new configuration file and how to add the information
there.
Example: c(’experimentA’, ’experimentB’)
obs Argument with the same format as parameter ’exp’. See details on parameter
’exp’.
If ’obs’ is not specified or set to NULL, no observational data is loaded.
sdates Vector of starting dates of the experimental runs to be loaded following the pat-
tern ’YYYYMMDD’.
This argument is mandatory.
E.g. c(’19601101’, ’19651101’, ’19701101’)
nmember Vector with the numbers of members to load from the specified experimental
datasets in ’exp’.
If not specified, the number of members of the first experimental dataset is detected automatically and applied to all the experimental datasets.
If a single value is specified, it is applied to all the experimental datasets.
Data for each member is fetched from the file system. If a member is not found, it is filled with NA values.
An NA value in the ’nmember’ list is interpreted as "fetch as many members of
each experimental dataset as the number of members of the first experimental
dataset".
Note: It is recommended to specify the number of members of the first experi-
mental dataset if it is stored in file per member format because there are known
issues in the automatic detection of members if the path to the dataset in the
configuration file contains Shell Globbing wildcards such as ’*’.
E.g., c(4, 9)
nmemberobs Vector with the numbers of members to load from the specified observational
datasets in ’obs’.
If not specified, the number of members of the first observational dataset is detected automatically and applied to all the observational datasets.
If a single value is specified, it is applied to all the observational datasets.
Data for each member is fetched from the file system. If a member is not found, it is filled with NA values.
An NA value in the ’nmemberobs’ list is interpreted as "fetch as many members
of each observational dataset as the number of members of the first observational
dataset".
Note: It is recommended to specify the number of members of the first observa-
tional dataset if it is stored in file per member format because there are known
issues in the automatic detection of members if the path to the dataset in the
configuration file contains Shell Globbing wildcards such as ’*’.
E.g., c(1, 5)
nleadtime Deprecated. See parameter ’leadtimemax’.
leadtimemin Only lead-times higher or equal to ’leadtimemin’ are loaded. Takes by default
value 1.
leadtimemax Only lead-times lower or equal to ’leadtimemax’ are loaded. Takes by default
the number of lead-times of the first experimental dataset in ’exp’.
If ’exp’ is NULL this argument won’t have any effect (see ?Load description).
storefreq Frequency at which the data to be loaded is stored in the file system. Can take
values ’monthly’ or ’daily’.
By default it takes ’monthly’.
Note: Data stored in other frequencies with a period which is divisible by a
month can be loaded with a proper use of ’storefreq’ and ’sampleperiod’ parame-
ters. It can also be loaded if the period is divisible by a day and the observational
datasets are stored in a file per dataset format or ’obs’ is empty.
sampleperiod To load only a subset between ’leadtimemin’ and ’leadtimemax’ with the period
of subsampling ’sampleperiod’.
Takes by default value 1 (all lead-times are loaded).
See ’storefreq’ for more information.
lonmin If a 2-dimensional variable is loaded, values at longitudes lower than ’lonmin’
aren’t loaded.
Must take a value in the range [-360, 360] (if negative longitudes are found in
the data files these are translated to this range).
It is set to 0 if not specified.
If ’lonmin’ > ’lonmax’, data across Greenwich is loaded.
lonmax If a 2-dimensional variable is loaded, values at longitudes higher than ’lonmax’
aren’t loaded.
Must take a value in the range [-360, 360] (if negative longitudes are found in
the data files these are translated to this range).
It is set to 360 if not specified.
If ’lonmin’ > ’lonmax’, data across Greenwich is loaded.
latmin If a 2-dimensional variable is loaded, values at latitudes lower than ’latmin’
aren’t loaded.
Must take a value in the range [-90, 90].
It is set to -90 if not specified.
latmax If a 2-dimensional variable is loaded, values at latitudes higher than ’latmax’
aren’t loaded.
Must take a value in the range [-90, 90].
It is set to 90 if not specified.
output This parameter determines the format in which the data is arranged in the output
arrays.
Can take values ’areave’, ’lon’, ’lat’, ’lonlat’.
• ’areave’: Time series of area-averaged variables over the specified domain.
• ’lon’: Time series of meridional averages as a function of longitudes.
• ’lat’: Time series of zonal averages as a function of latitudes.
• ’lonlat’: Time series of 2d fields.
Takes by default the value ’areave’. If the variable specified in ’var’ is a global
mean, this parameter is forced to ’areave’.
All the loaded data is interpolated into the grid of the first experimental dataset
except if ’areave’ is selected. In that case the area averages are computed on each dataset’s original grid. A common grid different from the first experiment’s can
be specified through the parameter ’grid’. If ’grid’ is specified when selecting
’areave’ output type, all the loaded data is interpolated into the specified grid
before calculating the area averages.
method This parameter determines the interpolation method to be used when regrid-
ding data (see ’output’). Can take values ’bilinear’, ’bicubic’, ’conservative’,
’distance-weighted’.
See remapcells for advanced adjustments.
Takes by default the value ’conservative’.
grid A common grid can be specified through the parameter ’grid’ when loading 2-
dimensional data. Data is then interpolated onto this grid whichever ’output’
type is specified. If the selected output type is ’areave’ and a ’grid’ is specified,
the area averages are calculated after interpolating to the specified grid.
If not specified and the selected output type is ’lon’, ’lat’ or ’lonlat’, this param-
eter takes as default value the grid of the first experimental dataset, which is read
automatically from the source files.
The grid must be supported by the ’cdo’ tools. Currently only rNXxNY and tRESgrid grids are supported.
Both rNXxNY and tRESgrid yield rectangular regular grids. rNXxNY yields
grids that are evenly spaced in longitudes and latitudes (in degrees). tRESgrid
refers to a grid generated with series of spherical harmonics truncated at the
RESth harmonic. However, these spectral grids are usually associated with a Gaussian grid, the latitudes of which are spaced with a Gaussian quadrature (not evenly spaced in degrees). The pattern tRESgrid will yield a Gaussian grid.
E.g., ’r96x72’. Advanced: If the output type is ’lon’, ’lat’ or ’lonlat’ and no com-
mon grid is specified, the grid of the first experimental or observational dataset
is detected and all data is then interpolated onto this grid. If the first experi-
mental or observational dataset’s data is found shifted along the longitudes (i.e.,
there’s no value at the longitude 0 but at a longitude close to it), the data is re-
interpolated to suppress the shift. This has to be done in order to make sure all
the data from all the datasets is properly aligned along longitudes, as there’s no
option so far in Load to specify grids starting at longitudes other than 0. This
issue does not arise when loading in ’areave’ mode without a common grid, since the data is not re-interpolated in that case.
maskmod List of masks to be applied to the data of each experimental dataset respectively,
if a 2-dimensional variable is specified in ’var’.
Each mask can be defined in 2 formats:
a) a matrix with dimensions c(longitudes, latitudes).
b) a list with the components ’path’ and, optionally, ’nc_var_name’.
In the format a), the matrix must have the same size as the common grid or with
the same size as the grid of the corresponding experimental dataset if ’areave’
output type is specified and no common ’grid’ is specified.
In the format b), the component ’path’ must be a character string with the path to
a NetCDF mask file, also in the common grid or in the grid of the corresponding
dataset if ’areave’ output type is specified and no common ’grid’ is specified.
If the mask file contains only a single variable, there’s no need to specify the
component ’nc_var_name’. Otherwise it must be a character string with the
name of the variable inside the mask file that contains the mask values. This
variable must be defined only over 2 dimensions with length greater or equal to
1.
Whichever the mask format, a value of 1 at a point of the mask keeps the original
value at that point whereas a value of 0 disables it (replaces by a NA value).
By default all values are kept (all ones).
The longitudes and latitudes in the matrix must be in the same order as in the
common grid or as in the original grid of the corresponding dataset when loading
in ’areave’ mode. You can find out the order of the longitudes and latitudes of a
file with ’cdo griddes’.
Note that in a common CDO grid defined with the patterns ’t<RES>grid’ or
’r<NX>x<NY>’ the latitudes and longitudes are ordered, by definition, from -90
to 90 and from 0 to 360, respectively.
If you are loading maps (’lonlat’, ’lon’ or ’lat’ output types) all the data will
be interpolated onto the common ’grid’. If you want to specify a mask, you
will have to provide it already interpolated onto the common grid (you may
use ’cdo’ libraries for this purpose). It is not usual to apply different masks on
experimental datasets on the same grid, so all the experiment masks are expected
to be the same.
Warning: When loading maps, any masks defined for the observational data
will be ignored to make sure the same mask is applied to the experimental and
observational data.
Warning: list() compulsory even if loading 1 experimental dataset only!
E.g., list(array(1, dim = c(num_lons, num_lats)))
maskobs See help on parameter ’maskmod’.
configfile Path to the s2dv configuration file from which to retrieve information on loca-
tion in file system (and other) of datasets.
If not specified, the configuration file used at BSC-ES will be used (it is included
in the package).
Check the BSC’s configuration file or a template of configuration file in the
folder ’inst/config’ in the package.
Check further information on the configuration file mechanism in ConfigFileOpen().
varmin Loaded experimental and observational data values smaller than ’varmin’ will
be disabled (replaced by NA values).
By default no deactivation is performed.
varmax Loaded experimental and observational data values greater than ’varmax’ will
be disabled (replaced by NA values).
By default no deactivation is performed.
silent Parameter to show (FALSE) or hide (TRUE) information messages.
Warnings will be displayed even if ’silent’ is set to TRUE.
Takes by default the value ’FALSE’.
nprocs Number of parallel processes created to perform the fetch and computation of
data.
These processes will use shared memory in the processor in which Load() is
launched.
By default the number of logical cores in the machine will be detected and as
many processes as logical cores there are will be created.
A value of 1 won’t create parallel processes.
When running in multiple processes, if an error occurs in any of the processes,
a crash message appears in the R session of the original process but no detail is
given about the error. A value of 1 will display all error messages in the original
and only R session.
Note: the parallel processes create other blocking processes each time they need
to compute an interpolation via ’cdo’.
dimnames Named list where the name of each element is a generic name of the expected
dimensions inside the NetCDF files. These generic names are ’lon’, ’lat’ and
’member’. ’time’ is not needed because it is detected automatically.
The value associated to each name is the actual dimension name in the NetCDF
file.
The variables in the file that contain the longitudes and latitudes of the data (if
the data is a 2-dimensional variable) must have the same name as the longitude
and latitude dimensions.
By default, these names are ’longitude’, ’latitude’ and ’ensemble’. If any of
those is defined in the ’dimnames’ parameter, it takes priority and overwrites
the default value. E.g., list(lon = ’x’, lat = ’y’) In that example, the dimension
’member’ will take the default value ’ensemble’.
remapcells When loading a 2-dimensional variable, spatial subsets can be requested via
lonmin, lonmax, latmin and latmax. When Load() obtains the subset it is
then interpolated if needed with the method specified in method.
The result of this interpolation can vary if the values surrounding the spatial
subset are not present. To better control this process, the width in number of
grid cells of the surrounding area to be taken into account can be specified with
remapcells. A value of 0 will take into account no additional cells but will
generate less traffic between the storage and the R processes that load data.
A value beyond the limits in the data files will be automatically truncated to the
actual limit.
The default value is 2.
path_glob_permissive
In some cases, when specifying a path pattern (either in the parameters ’exp’/’obs’
or in a configuration file) one can specify path patterns that contain shell glob-
bing expressions. Too much freedom in putting globbing expressions in the
path patterns can be dangerous and make Load() find a file in the file system
for a start date for a dataset that really does not belong to that dataset. For ex-
ample, if the file system contains two directories for two different experiments
that share a part of their path and the path pattern contains globbing expressions:
/experiments/model1/expA/monthly_mean/tos/tos_19901101.nc
/experiments/model2/expA/monthly_mean/tos/tos_19951101.nc
and the path pattern is used as in the example right below to load data of only the experiment 'expA' of the model 'model1' for the starting dates '19901101' and '19951101', then Load() will undesirably yield data for both starting dates, even if in fact there is data only for the first one:
expA <- list(path = file.path('/experiments/*/expA/monthly_mean/$VAR_NAME$',
                              '$VAR_NAME$_$START_DATE$.nc'))
data <- Load('tos', list(expA), NULL, c('19901101', '19951101'))
To avoid these situations, the parameter path_glob_permissive
is set by default to 'partial', which forces Load() to replace all the globbing
expressions of a path pattern of a data set by fixed values taken from the path of
the first found file for each data set, up to the folder right before the final files
(globbing expressions in the file name will not be replaced, only those in the
path to the file). Replacement of globbing expressions in the file name can also
be triggered by setting path_glob_permissive to FALSE or 'no'. If needed to
keep all globbing expressions, path_glob_permissive can be set to TRUE or
'yes'.
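As a minimal, hedged sketch (not run; it reuses the illustrative 'expA' path pattern defined just above), globbing expressions in the file name itself can also be replaced by disabling the parameter:
# Not run: as above, but with file-name globbing expressions also fixed.
data <- Load('tos', list(expA), NULL, c('19901101', '19951101'),
             path_glob_permissive = FALSE)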
Details
The two output matrices have between 2 and 6 dimensions:
1. Number of experimental/observational datasets.
2. Number of members.
3. Number of startdates.
4. Number of leadtimes.
5. Number of latitudes (optional).
6. Number of longitudes (optional).
but the two matrices have the same number of dimensions and only the first two dimensions can
have different lengths depending on the input arguments. For a detailed explanation of the process,
read the documentation attached to the package or check the comments in the code.
Value
Load() returns a named list following a structure similar to that used in the package ’downscaleR’.
The components are the following:
• ’mod’ is the array that contains the experimental data. It has the attribute ’dimensions’ asso-
ciated to a vector of strings with the labels of each dimension of the array, in order. The order
of the latitudes is always forced to be from 90 to -90 whereas the order of the longitudes is
kept as in the original files (if possible). Longitude values provided in lon that are lower than 0 have 360 added to them (but are still kept in the original order). In some cases, however, if multiple data
sets are loaded in longitude-latitude mode, the longitudes (and also the data arrays in mod and
obs) are re-ordered afterwards by Load() to range from 0 to 360; a warning is given in such
cases. The longitude and latitude of the center of the grid cell that corresponds to the value
[j, i] in ’mod’ (along the dimensions latitude and longitude, respectively) can be found in the
outputs lon[i] and lat[j].
• ’obs’ is the array that contains the observational data. The same documentation of parameter
’mod’ applies to this parameter.
• ’lat’ and ’lon’ are the latitudes and longitudes of the centers of the cells of the grid the data is
interpolated into (0 if the loaded variable is a global mean or the output is an area average).
Both have the attribute ’cdo_grid_des’ associated with a character string with the name of the
common grid of the data, following the CDO naming conventions for grids.
’lon’ has the attributes ’first_lon’ and ’last_lon’, with the first and last longitude values found
in the region defined by ’lonmin’ and ’lonmax’. ’lat’ has also the equivalent attributes ’first_lat’
and ’last_lat’.
’lon’ has also the attribute ’data_across_gw’ which tells whether the requested region via ’lon-
min’, ’lonmax’, ’latmin’, ’latmax’ goes across the Greenwich meridian. As explained in the
documentation of the parameter ’mod’, the loaded data array is kept in the same order as in the
original files when possible: this means that, in some cases, even if the data goes across the
Greenwich, the data array may not go across the Greenwich. The attribute ’array_across_gw’
tells whether the array actually goes across the Greenwich. E.g: The longitudes in the data
files are defined to be from 0 to 360. The requested longitudes are from -80 to 40. The original
order is kept, hence the longitudes in the array will be ordered as follows: 0, ..., 40, 280, ...,
360. In that case, ’data_across_gw’ will be TRUE and ’array_across_gw’ will be FALSE.
The attribute ’projection’ is kept for compatibility with ’downscaleR’.
• ’Variable’ has the following components:
– ’varName’, with the short name of the loaded variable as specified in the parameter ’var’.
– ’level’, with information on the pressure level of the variable. Is kept to NULL by now.
And the following attributes:
– ’is_standard’, kept for compatibility with ’downscaleR’, tells if a dataset has been ho-
mogenized to standards with ’downscaleR’ catalogs.
– ’units’, a character string with the units of measure of the variable, as found in the source
files.
– ’longname’, a character string with the long name of the variable, as found in the source
files.
– ’daily_agg_cellfun’, ’monthly_agg_cellfun’, ’verification_time’, kept for compatibility
with ’downscaleR’.
• ’Datasets’ has the following components:
– ’exp’, a named list where the names are the identifying character strings of each experi-
ment in ’exp’, each associated to a list with the following components:
* ’members’, a list with the names of the members of the dataset.
* ’source’, a path or URL to the source of the dataset.
– ’obs’, similar to ’exp’ but for observational datasets.
• ’Dates’, with the following components:
– ’start’, an array of dimensions (sdate, time) with the POSIX initial date of each forecast
time of each starting date.
– ’end’, an array of dimensions (sdate, time) with the POSIX final date of each forecast
time of each starting date.
• ’InitializationDates’, a vector of starting dates as specified in ’sdates’, in POSIX format.
• ’when’, a time stamp of the date the Load() call to obtain the data was issued.
• ’source_files’, a vector of character strings with complete paths to all the found files involved
in the Load() call.
• ’not_found_files’, a vector of character strings with complete paths to not found files involved
in the Load() call.
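As a minimal, hedged sketch (not run; it assumes 'sampleData' holds the result of one of the Load() calls shown in the Examples below), the returned components can be inspected like this:
dim(sampleData$mod)                 # experimental data array
attr(sampleData$mod, 'dimensions')  # labels of its dimensions, in order
sampleData$lon                      # longitudes of the grid cell centers
sampleData$Dates$start              # POSIX initial dates (sdate x time)
sampleData$Datasets$exp             # metadata of the experimental datasets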
Examples
# Let's assume we want to perform verification with data of a variable
# called 'tos' from a model called 'model' and observed data coming from
# an observational dataset called 'observation'.
#
# The model was run in the context of an experiment named 'experiment'.
# It simulated from 1st November in 1985, 1990, 1995, 2000 and 2005 for a
# period of 5 years time from each starting date. 5 different sets of
# initial conditions were used so an ensemble of 5 members was generated
# for each starting date.
# The model generated values for the variables 'tos' and 'tas' in a
# 3-hourly frequency but, after some initial post-processing, it was
# averaged over every month.
# The resulting monthly average series were stored in a file for each
# starting date for each variable with the data of the 5 ensemble members.
# The resulting directory tree was the following:
# model
# |--> experiment
# |--> monthly_mean
# |--> tos_3hourly
# | |--> tos_19851101.nc
# | |--> tos_19901101.nc
# | .
# | .
# | |--> tos_20051101.nc
# |--> tas_3hourly
# |--> tas_19851101.nc
# |--> tas_19901101.nc
# .
# .
# |--> tas_20051101.nc
#
# The observation recorded values of 'tos' and 'tas' at each day of the
# month over that period but was also averaged over months and stored in
# a file per month. The directory tree was the following:
# observation
# |--> monthly_mean
# |--> tos
# | |--> tos_198511.nc
# | |--> tos_198512.nc
# | |--> tos_198601.nc
# | .
# | .
# | |--> tos_201010.nc
# |--> tas
# |--> tas_198511.nc
# |--> tas_198512.nc
# |--> tas_198601.nc
# .
# .
# |--> tas_201010.nc
#
# The model data is stored in a file-per-startdate fashion and the
# observational data is stored in a file-per-month, and both are stored in
# a monthly frequency. The file format is NetCDF.
# Hence all the data is supported by Load() (see details and other supported
# conventions in ?Load) but first we need to configure it properly.
#
# These data files are included in the package (in the 'sample_data' folder),
# only for the variable 'tos'. They have been interpolated to a very low
# resolution grid so as to keep the package size acceptable for CRAN.
# The original grid names (following CDO conventions) for experimental and
# observational data were 't106grid' and 'r180x89' respectively. The final
# resolutions are 'r20x10' and 'r16x8' respectively.
# The experimental data comes from the decadal climate prediction experiment
# run at IC3 in the context of the CMIP5 project. Its name within IC3 local
# database is 'i00k'.
# The observational dataset used for verification is the 'ERSST'
# observational dataset.
#
# The next two examples are equivalent and show how to load the variable
# 'tos' from these sample datasets, the first providing lists of lists to
# the parameters 'exp' and 'obs' (see documentation on these parameters) and
# the second providing vectors of character strings, hence using a
# configuration file.
#
# The code is not run because it dispatches system calls to 'cdo' which is
# not allowed in the examples as per CRAN policies. You can run it on your
# system though.
# Instead, the code in 'dontshow' is run, which loads the equivalent
# already processed data in R.
#
# Example 1: Providing lists of lists to 'exp' and 'obs':
#
## Not run:
data_path <- system.file('sample_data', package = 's2dv')
exp <- list(
name = 'experiment',
path = file.path(data_path, 'model/$EXP_NAME$/monthly_mean',
'$VAR_NAME$_3hourly/$VAR_NAME$_$START_DATES$.nc')
)
obs <- list(
name = 'observation',
path = file.path(data_path, 'observation/$OBS_NAME$/monthly_mean',
'$VAR_NAME$/$VAR_NAME$_$YEAR$$MONTH$.nc')
)
# Now we are ready to use Load().
startDates <- c('19851101', '19901101', '19951101', '20001101', '20051101')
sampleData <- Load('tos', list(exp), list(obs), startDates,
output = 'areave', latmin = 27, latmax = 48,
lonmin = -12, lonmax = 40)
## End(Not run)
#
# Example 2: Providing lists of lists with explicit path patterns (using tags
# such as $EXP_NAME$ and $STORE_FREQ$) to 'exp' and 'obs'.
#
#
## Not run:
data_path <- system.file('sample_data', package = 's2dv')
expA <- list(name = 'experiment', path = file.path(data_path,
'model/$EXP_NAME$/$STORE_FREQ$_mean/$VAR_NAME$_3hourly',
'$VAR_NAME$_$START_DATE$.nc'))
obsX <- list(name = 'observation', path = file.path(data_path,
'$OBS_NAME$/$STORE_FREQ$_mean/$VAR_NAME$',
'$VAR_NAME$_$YEAR$$MONTH$.nc'))
# Now we are ready to use Load().
startDates <- c('19851101', '19901101', '19951101', '20001101', '20051101')
sampleData <- Load('tos', list(expA), list(obsX), startDates,
output = 'areave', latmin = 27, latmax = 48,
lonmin = -12, lonmax = 40)
#
# Example 3: providing character strings in 'exp' and 'obs', and providing
# a configuration file.
# The configuration file 'sample.conf' that we will create in the example
# has the proper entries to load these (see ?LoadConfigFile for details on
# writing a configuration file).
#
configfile <- paste0(tempdir(), '/sample.conf')
ConfigFileCreate(configfile, confirm = FALSE)
c <- ConfigFileOpen(configfile)
c <- ConfigEditDefinition(c, 'DEFAULT_VAR_MIN', '-1e19', confirm = FALSE)
c <- ConfigEditDefinition(c, 'DEFAULT_VAR_MAX', '1e19', confirm = FALSE)
data_path <- system.file('sample_data', package = 's2dv')
exp_data_path <- paste0(data_path, '/model/$EXP_NAME$/')
obs_data_path <- paste0(data_path, '/$OBS_NAME$/')
c <- ConfigAddEntry(c, 'experiments', dataset_name = 'experiment',
var_name = 'tos', main_path = exp_data_path,
file_path = '$STORE_FREQ$_mean/$VAR_NAME$_3hourly/$VAR_NAME$_$START_DATE$.nc')
c <- ConfigAddEntry(c, 'observations', dataset_name = 'observation',
var_name = 'tos', main_path = obs_data_path,
file_path = '$STORE_FREQ$_mean/$VAR_NAME$/$VAR_NAME$_$YEAR$$MONTH$.nc')
ConfigFileSave(c, configfile, confirm = FALSE)
# Now we are ready to use Load().
startDates <- c('19851101', '19901101', '19951101', '20001101', '20051101')
sampleData <- Load('tos', c('experiment'), c('observation'), startDates,
output = 'areave', latmin = 27, latmax = 48,
lonmin = -12, lonmax = 40, configfile = configfile)
## End(Not run)
MeanDims Average an array along multiple dimensions
Description
This function returns the mean of an array along a set of dimensions and preserves the dimension
names if it has.
Usage
MeanDims(data, dims, na.rm = FALSE, drop = TRUE)
Arguments
data An array to be averaged.
dims A vector of numerics or character strings, indicating along which dimensions to
average.
na.rm A logical value indicating whether to ignore NA values (TRUE) or not (FALSE).
drop A logical value indicating whether to keep the averaged dimension (FALSE) or
drop it (TRUE). The default value is TRUE.
Value
A numeric or an array with the same dimension as parameter ’data’ except the ’dims’ dimensions.
If ’drop’ is TRUE, ’dims’ will be removed; if ’drop’ is FALSE, ’dims’ will be preserved and the
length will be 1. If all the dimensions are averaged out, a numeric is returned.
Examples
a <- array(rnorm(24), dim = c(dat = 2, member = 3, time = 4))
ens_mean <- MeanDims(a, 'member')
dim(ens_mean)
ens_time_mean <- MeanDims(a, c(2, 3), drop = FALSE)
dim(ens_time_mean)
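A further minimal sketch, continuing the example above: averaging over every dimension collapses the result to a single numeric, as described in the Value section.
total_mean <- MeanDims(a, c('dat', 'member', 'time'))
total_mean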
NAO Compute the North Atlantic Oscillation (NAO) Index
Description
Compute the North Atlantic Oscillation (NAO) index based on the leading EOF of the sea level pres-
sure (SLP) anomalies over the north Atlantic region (20N-80N, 80W-40E). The PCs are obtained
by projecting the forecast and observed anomalies onto the observed EOF pattern or the forecast
anomalies onto the EOF pattern of the other years of the forecast. By default (ftime_avg = 2:4),
NAO() computes the NAO index for 1-month lead seasonal forecasts that can be plotted with PlotBoxWhisker(). It returns cross-validated PCs of the NAO index for forecast (exp) and observations
(obs) based on the leading EOF pattern.
Usage
NAO(
exp = NULL,
obs = NULL,
lat,
lon,
time_dim = "sdate",
memb_dim = "member",
space_dim = c("lat", "lon"),
ftime_dim = "ftime",
ftime_avg = 2:4,
obsproj = TRUE,
ncores = NULL
)
Arguments
exp A named numeric array of North Atlantic SLP (20N-80N, 80W-40E) forecast
anomalies from Ano() or Ano_CrossValid() with dimensions ’time_dim’, ’memb_dim’,
’ftime_dim’, and ’space_dim’ at least. If only NAO of observational data needs
to be computed, this parameter can be left to NULL. The default value is NULL.
obs A named numeric array of North Atlantic SLP (20N-80N, 80W-40E) observed
anomalies from Ano() or Ano_CrossValid() with dimensions ’time_dim’, ’ftime_dim’,
and ’space_dim’ at least. If only NAO of experimental data needs to be com-
puted, this parameter can be left to NULL. The default value is NULL.
lat A vector of the latitudes of ’exp’ and ’obs’.
lon A vector of the longitudes of ’exp’ and ’obs’.
time_dim A character string indicating the name of the time dimension of ’exp’ and ’obs’.
The default value is ’sdate’.
memb_dim A character string indicating the name of the member dimension of ’exp’ (and
’obs’, optional). If ’obs’ has memb_dim, the length must be 1. The default value
is ’member’.
space_dim A vector of two character strings. The first is the dimension name of latitude of
’ano’ and the second is the dimension name of longitude of ’ano’. The default
value is c(’lat’, ’lon’).
ftime_dim A character string indicating the name of the forecast time dimension of ’exp’
and ’obs’. The default value is ’ftime’.
ftime_avg A numeric vector of the forecast time steps to average across the target period.
If average is not needed, set NULL. The default value is 2:4, i.e., from 2nd to
4th forecast time steps.
obsproj A logical value indicating whether to compute the NAO index by projecting the
forecast anomalies onto the leading EOF of observational reference (TRUE) or
compute the NAO by first computing the leading EOF of the forecast anomalies
(in cross-validation mode, i.e. leaving the year you are evaluating out), and
then projecting forecast anomalies onto this EOF (FALSE). The default value is
TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list which contains:
exp A numeric array of forecast NAO index in verification format with the same
dimensions as ’exp’ except space_dim and ftime_dim. If ftime_avg is NULL,
ftime_dim remains.
obs A numeric array of observed NAO index in verification format with the same
dimensions as ’obs’ except space_dim and ftime_dim. If ftime_avg is NULL,
ftime_dim remains.
References
<NAME>., <NAME>. and <NAME>. (2003). The skill of multi-model seasonal forecasts of the wintertime North Atlantic Oscillation. Climate Dynamics, 21, 501-514. DOI: 10.1007/s00382-003-0350-4.
Examples
# Make up synthetic data
set.seed(1)
exp <- array(rnorm(1620), dim = c(member = 2, sdate = 3, ftime = 5, lat = 6, lon = 9))
set.seed(2)
obs <- array(rnorm(1620), dim = c(member = 1, sdate = 3, ftime = 5, lat = 6, lon = 9))
lat <- seq(20, 80, length.out = 6)
lon <- seq(-80, 40, length.out = 9)
nao <- NAO(exp = exp, obs = obs, lat = lat, lon = lon)
# plot the NAO index
## Not run:
nao$exp <- Reorder(nao$exp, c(2, 1))
nao$obs <- Reorder(nao$obs, c(2, 1))
PlotBoxWhisker(nao$exp, nao$obs, "NAO index, DJF", "NAO index (PC1) TOS",
monini = 12, yearini = 1985, freq = 1, "Exp. A", "Obs. X")
## End(Not run)
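A further hedged sketch (continuing the synthetic-data example above, not run here): the index can also be computed from the cross-validated EOFs of the forecast anomalies, instead of projecting onto the observed EOF pattern, by switching off 'obsproj'.
## Not run:
nao_expproj <- NAO(exp = exp, obs = obs, lat = lat, lon = lon, obsproj = FALSE)
str(nao_expproj)
## End(Not run)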
Persistence Compute persistence
Description
Compute a persistence forecast based on a lagged autoregression of observational data along the
time dimension, with a measure of forecast uncertainty (prediction interval) based on Coelho et al.,
2004.
Usage
Persistence(
data,
dates,
time_dim = "time",
start,
end,
ft_start,
ft_end = ft_start,
max_ft = 10,
nmemb = 1,
na.action = 10,
ncores = NULL
)
Arguments
data A numeric array corresponding to the observational data including the time di-
mension along which the autoregression is computed. The data should start at
least 40 time steps (years or days) before ’start’.
dates A sequence of 4-digit integers (YYYY) or string (YYYY-MM-DD) in class
’Date’ indicating the dates available in the observations.
time_dim A character string indicating the dimension along which to compute the autore-
gression. The default value is ’time’.
start A 4-digit integer (YYYY) or a string (YYYY-MM-DD) in class ’Date’ indicat-
ing the first start date of the persistence forecast. It must be between 1850 and
2020.
end A 4-digit integer (YYYY) or a string (YYYY-MM-DD) in class ’Date’ indicat-
ing the last start date of the persistence forecast. It must be between 1850 and
2020.
ft_start An integer indicating the forecast time for which the persistence forecast should
be calculated, or the first forecast time of the average forecast times for which
persistence should be calculated.
ft_end An (optional) integer indicating the last forecast time of the average forecast
times for which persistence should be calculated in the case of a multi-timestep
average persistence. The default value is ’ft_start’.
max_ft An integer indicating the maximum forecast time possible for ’data’. For exam-
ple, for decadal prediction ’max_ft’ would correspond to 10 (years). The default
value is 10.
nmemb An integer indicating the number of ensemble members to generate for the per-
sistence forecast. The default value is 1.
na.action A function or an integer. A function (e.g., na.omit, na.exclude, na.fail, na.pass)
indicates what should happen when the data contain NAs. A numeric indicates
the maximum number of NA positions allowed to compute the regression. The de-
fault value is 10.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing:
$persistence A numeric array with dimensions ’memb’, time (start dates), latitudes and lon-
gitudes containing the persistence forecast.
$persistence.mean
A numeric array with same dimensions as ’persistence’, except the ’memb’ di-
mension which is of length 1, containing the ensemble mean persistence fore-
cast.
$persistence.predint
A numeric array with same dimensions as ’persistence’, except the ’memb’ di-
mension which is of length 1, containing the prediction interval of the persis-
tence forecast.
$AR.slope A numeric array with same dimensions as ’persistence’, except the ’memb’ di-
mension which is of length 1, containing the slope coefficient of the autoregres-
sion.
$AR.intercept A numeric array with same dimensions as ’persistence’, except the ’memb’ di-
mension which is of length 1, containing the intercept coefficient of the autore-
gression.
$AR.lowCI A numeric array with same dimensions as ’persistence’, except the ’memb’ di-
mension which is of length 1, containing the lower value of the confidence in-
terval of the autoregression.
$AR.highCI A numeric array with same dimensions as ’persistence’, except the ’memb’ di-
mension which is of length 1, containing the upper value of the confidence in-
terval of the autoregression.
Examples
# Case 1: year
# Building an example dataset with yearly start dates from 1920 to 2009
set.seed(1)
obs1 <- rnorm(1 * 70 * 2 * 2)
dim(obs1) <- c(member = 1, time = 70, lat = 2, lon = 2)
dates <- seq(1920, 1989, 1)
res <- Persistence(obs1, dates = dates, start = 1961, end = 1980, ft_start = 1,
nmemb = 2)
# Case 2: day
dates <- seq(as.Date(ISOdate(1990, 1, 1)), as.Date(ISOdate(1990, 4, 1)) ,1)
start <- as.Date(ISOdate(1990, 2, 15))
end <- as.Date(ISOdate(1990, 4, 1))
set.seed(1)
data <- rnorm(1 * length(dates))
dim(data) <- c(member = 1, time = length(dates))
res <- Persistence(data, dates = dates, start = start, end = end, ft_start = 1)
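A further minimal sketch, continuing the examples above: the components of the returned list (documented in the Value section) can be inspected directly.
names(res)                  # persistence, persistence.mean, persistence.predint, AR.slope, ...
dim(res$persistence.mean)   # as 'persistence', but with a 'memb' dimension of length 1
dim(res$AR.slope)           # slope coefficient of the fitted autoregression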
Plot2VarsVsLTime Plot two scores with confidence intervals in a common plot
Description
Plot two input variables that have the same dimensions in a common plot. One plot for all experi-
ments. The input variables should have dimensions (nexp/nmod, nltime).
Usage
Plot2VarsVsLTime(
var1,
var2,
toptitle = "",
ytitle = "",
monini = 1,
freq = 12,
nticks = NULL,
limits = NULL,
listexp = c("exp1", "exp2", "exp3"),
listvars = c("var1", "var2"),
biglab = FALSE,
hlines = NULL,
leg = TRUE,
siglev = FALSE,
sizetit = 1,
show_conf = TRUE,
fileout = NULL,
width = 8,
height = 5,
size_units = "in",
res = 100,
...
)
Arguments
var1 Matrix of dimensions (nexp/nmod, nltime).
var2 Matrix of dimensions (nexp/nmod, nltime).
toptitle Main title, optional.
ytitle Title of Y-axis, optional.
monini Starting month between 1 and 12. Default = 1.
freq 1 = yearly, 12 = monthly, 4 = seasonal, ... Default = 12.
nticks Number of ticks and labels on the x-axis, optional.
limits c(lower limit, upper limit): limits of the Y-axis, optional.
listexp List of experiment names, up to three, optional.
listvars List of names of input variables, optional.
biglab TRUE/FALSE for presentation/paper plot. Default = FALSE.
hlines c(a, b, ...) Add horizontal black lines at Y-positions a, b, ... The default value is
NULL.
leg TRUE/FALSE if legend should be added or not to the plot. Default = TRUE.
siglev TRUE/FALSE if significance level should replace confidence interval.
Default = FALSE.
sizetit Multiplicative factor to change title size, optional.
show_conf TRUE/FALSE to show/not confidence intervals for input variables.
fileout Name of output file. Extensions allowed: eps/ps, jpeg, png, pdf, bmp and tiff.
The default value is NULL.
width File width, in the units specified in the parameter size_units (inches by default).
Takes 8 by default.
height File height, in the units specified in the parameter size_units (inches by default).
Takes 5 by default.
size_units Units of the size of the device (file or window) to plot in. Inches (’in’) by default.
See ?Devices and the creator function of the corresponding device.
res Resolution of the device (file or window) to plot in. See ?Devices and the creator
function of the corresponding device.
... Arguments to be passed to the method. Only accepts the following graphical
parameters:
adj ann ask bg bty cex.sub cin col.axis col.lab col.main col.sub cra crt csi cxy err
family fg fig font font.axis font.lab font.main font.sub lend lheight ljoin lmitre
mar mex mfcol mfrow mfg mkh oma omd omi page pch plt smo srt tck tcl usr
xaxp xaxs xaxt xlog xpd yaxp yaxs yaxt ylbias ylog
For more information about the parameters see ‘par‘.
Details
Examples of input:
——————
RMSE error for a number of experiments and along lead-time: (nexp, nltime)
Examples
# Load sample data as in Load() example:
example(Load)
clim <- Clim(sampleData$mod, sampleData$obs)
ano_exp <- Ano(sampleData$mod, clim$clim_exp)
ano_obs <- Ano(sampleData$obs, clim$clim_obs)
runmean_months <- 12
smooth_ano_exp <- Smoothing(data = ano_exp, runmeanlen = runmean_months)
smooth_ano_obs <- Smoothing(data = ano_obs, runmeanlen = runmean_months)
dim_to_mean <- 'member' # mean along members
required_complete_row <- 'ftime' # discard startdates for which there are NA leadtimes
leadtimes_per_startdate <- 60
rms <- RMS(MeanDims(smooth_ano_exp, dim_to_mean),
MeanDims(smooth_ano_obs, dim_to_mean),
comp_dim = required_complete_row,
limits = c(ceiling((runmean_months + 1) / 2),
leadtimes_per_startdate - floor(runmean_months / 2)))
smooth_ano_exp_m_sub <- smooth_ano_exp - InsertDim(MeanDims(smooth_ano_exp, 'member',
na.rm = TRUE),
lendim = dim(smooth_ano_exp)['member'],
name = 'member')
spread <- Spread(smooth_ano_exp_m_sub, compute_dim = c('member', 'sdate'))
#Combine rms outputs into one array
rms_combine <- abind::abind(rms$conf.lower, rms$rms, rms$conf.upper, along = 0)
rms_combine <- Reorder(rms_combine, c(2, 3, 1, 4))
Plot2VarsVsLTime(InsertDim(rms_combine[, , , ], 1, 1), Reorder(spread$sd, c(1, 3, 2)),
toptitle = 'RMSE and spread', monini = 11, freq = 12,
listexp = c('CMIP5 IC3'), listvars = c('RMSE', 'spread'))
PlotACC Plot Plumes/Timeseries Of Anomaly Correlation Coefficients
Description
Plots plumes/timeseries of ACC from an array with dimensions (output from ACC()):
c(nexp, nobs, nsdates, nltime, 4)
where the fourth dimension is of length 4 and contains the lower limit of the 95% confidence
interval, the ACC, the upper limit of the 95% confidence interval and the 95% significance level
given by a one-sided T-test.
Usage
PlotACC(
ACC,
sdates,
toptitle = "",
sizetit = 1,
ytitle = "",
limits = NULL,
legends = NULL,
freq = 12,
biglab = FALSE,
fill = FALSE,
linezero = FALSE,
points = TRUE,
vlines = NULL,
fileout = NULL,
width = 8,
height = 5,
size_units = "in",
res = 100,
...
)
Arguments
ACC An ACC array with dimensions:
c(nexp, nobs, nsdates, nltime, 4)
with the fourth dimension of length 4 containing the lower limit of the 95%
confidence interval, the ACC, the upper limit of the 95% confidence interval
and the 95% significance level.
sdates A character vector of startdates: c(’YYYYMMDD’,’YYYYMMDD’).
toptitle A character string of the main title, optional.
sizetit A multiplicative factor to scale title size, optional.
ytitle A character string of the title of Y-axis for each experiment: c('', ''), optional.
limits A numeric vector c(lower limit, upper limit): limits of the Y-axis, optional.
legends A character vector of flags to be written in the legend, optional.
freq An integer: 1 = yearly, 12 = monthly, 4 = seasonal, ... Default: 12.
biglab A logical value for presentation/paper plot, Default = FALSE.
fill A logical value if filled confidence interval. Default = FALSE.
linezero A logical value if a line at y=0 should be added. Default = FALSE.
points A logical value if points instead of lines. Default = TRUE.
Must be TRUE if only 1 leadtime.
vlines A vector of x location where to add vertical black lines, optional.
fileout A character string of the output file name. Extensions allowed: eps/ps, jpeg,
png, pdf, bmp and tiff. Default is NULL.
width A numeric of the file width, in the units specified in the parameter size_units
(inches by default). Takes 8 by default.
height A numeric of the file height, in the units specified in the parameter size_units
(inches by default). Takes 5 by default.
size_units A character string of the units of the size of the device (file or window) to plot
in. Inches (’in’) by default. See ?Devices and the creator function of the corre-
sponding device.
res Resolution of the device (file or window) to plot in. See ?Devices and the creator
function of the corresponding device.
... Arguments to be passed to the method. Only accepts the following graphical
parameters:
adj ann ask bg bty cex.sub cin col.axis col.lab col.main col.sub cra crt csi cxy
err family fg fig fin font font.axis font.lab font.main font.sub lend lheight ljoin
lmitre mar mex mfcol mfrow mfg mkh oma omd omi page plt smo srt tck tcl usr
xaxp xaxs xaxt xlog xpd yaxp yaxs yaxt ylbias ylog
For more information about the parameters see ‘par‘.
Examples
sampleData$mod <- Season(sampleData$mod, monini = 11, moninf = 12, monsup = 2)
sampleData$obs <- Season(sampleData$obs, monini = 11, moninf = 12, monsup = 2)
clim <- Clim(sampleData$mod, sampleData$obs)
ano_exp <- Ano(sampleData$mod, clim$clim_exp)
ano_obs <- Ano(sampleData$obs, clim$clim_obs)
acc <- ACC(ano_exp, ano_obs, lat = sampleData$lat)
acc_bootstrap <- ACC(ano_exp, ano_obs, lat = sampleData$lat, conftype = 'bootstrap')
# Combine acc results for PlotACC
res <- array(c(acc$conf.lower, acc$acc, acc$conf.upper, acc$p.val),
dim = c(dim(acc$acc), 4))
res_bootstrap <- array(c(acc$acc_conf.lower, acc$acc, acc$acc_conf.upper, acc$p.val),
dim = c(dim(acc$acc), 4))
PlotACC(res, startDates)
PlotACC(res_bootstrap, startDates)
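A further hedged sketch (not run; the output file name is purely illustrative): the same plot can be written to a file instead of a pop-up device through the 'fileout', 'width', 'height' and 'size_units' parameters described above.
## Not run:
PlotACC(res, startDates, toptitle = 'ACC of tos anomalies',
        fileout = 'acc_tos.png', width = 8, height = 5, size_units = 'in')
## End(Not run)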
PlotAno Plot Anomaly time series
Description
Plots time series of raw or smoothed anomalies of any variable output from Load(), Ano(), Ano_CrossValid() or Smoothing().
Usage
PlotAno(
exp_ano,
obs_ano = NULL,
sdates,
toptitle = rep("", 15),
ytitle = rep("", 15),
limits = NULL,
legends = NULL,
freq = 12,
biglab = FALSE,
fill = TRUE,
memb = TRUE,
ensmean = TRUE,
linezero = FALSE,
points = FALSE,
vlines = NULL,
sizetit = 1,
fileout = NULL,
width = 8,
height = 5,
size_units = "in",
res = 100,
...
)
Arguments
exp_ano A numerical array containing the experimental data:
c(nmod/nexp, nmemb/nparam, nsdates, nltime).
obs_ano A numerical array containing the observational data:
c(nobs, nmemb, nsdates, nltime)
sdates A character vector of start dates in the format of c(’YYYYMMDD’,’YYYYMMDD’).
toptitle Main title for each experiment: c('', ''), optional.
ytitle Title of Y-axis for each experiment: c('', ''), optional.
limits c(lower limit, upper limit): limits of the Y-axis, optional.
legends List of observational dataset names, optional.
freq 1 = yearly, 12 = monthly, 4 = seasonal, ... Default: 12.
biglab TRUE/FALSE for presentation/paper plot. Default = FALSE.
fill TRUE/FALSE if the spread between members should be filled. Default = TRUE.
memb TRUE/FALSE if all members/only the ensemble-mean should be plotted.
Default = TRUE.
ensmean TRUE/FALSE if the ensemble-mean should be plotted. Default = TRUE.
linezero TRUE/FALSE if a line at y=0 should be added. Default = FALSE.
points TRUE/FALSE if points instead of lines should be shown. Default = FALSE.
vlines List of x location where to add vertical black lines, optional.
sizetit Multiplicative factor to scale title size, optional.
fileout Name of the output file for each experiment: c('', ''). Extensions allowed: eps/ps, jpeg, png, pdf, bmp and tiff. If filenames with different extensions are passed, only the first one is considered and its extension is applied to the rest. The default value is NULL, in which case the figure is shown in a pop-up window.
width File width, in the units specified in the parameter size_units (inches by default).
Takes 8 by default.
height File height, in the units specified in the parameter size_units (inches by default).
Takes 5 by default.
size_units Units of the size of the device (file or window) to plot in. Inches (’in’) by default.
See ?Devices and the creator function of the corresponding device.
res Resolution of the device (file or window) to plot in. See ?Devices and the creator
function of the corresponding device.
... Arguments to be passed to the method. Only accepts the following graphical
parameters:
adj ann ask bg bty cex.sub cin col.axis col.lab col.main col.sub cra crt csi cxy err
family fg fig font font.axis font.lab font.main font.sub lend lheight ljoin lmitre
mar mex mfcol mfrow mfg mkh oma omd omi page plt smo srt tck tcl usr xaxp
xaxs xaxt xlog xpd yaxp yaxs yaxt ylbias ylog
For more information about the parameters see ‘par‘.
Examples
# Load sample data as in Load() example:
example(Load)
clim <- Clim(sampleData$mod, sampleData$obs)
ano_exp <- Ano(sampleData$mod, clim$clim_exp)
ano_obs <- Ano(sampleData$obs, clim$clim_obs)
smooth_ano_exp <- Smoothing(ano_exp, time_dim = 'ftime', runmeanlen = 12)
smooth_ano_obs <- Smoothing(ano_obs, time_dim = 'ftime', runmeanlen = 12)
smooth_ano_exp <- Reorder(smooth_ano_exp, c(2, 3, 4, 1))
smooth_ano_obs <- Reorder(smooth_ano_obs, c(2, 3, 4, 1))
PlotAno(smooth_ano_exp, smooth_ano_obs, startDates,
toptitle = paste('smoothed anomalies'), ytitle = c('K', 'K', 'K'),
legends = 'ERSST', biglab = FALSE)
PlotBoxWhisker Box-And-Whisker Plot of Time Series with Ensemble Distribution
Description
Produce time series of box-and-whisker plot showing the distribution of the members of a forecast
vs. the observed evolution. The correlation between forecast and observational data is calculated
and displayed. Only works for n-monthly to n-yearly time series.
Usage
PlotBoxWhisker(
exp,
obs,
toptitle = "",
ytitle = "",
monini = 1,
yearini = 0,
freq = 1,
expname = "exp 1",
obsname = "obs 1",
drawleg = TRUE,
fileout = NULL,
width = 8,
height = 5,
size_units = "in",
res = 100,
...
)
Arguments
exp Forecast array of multi-member time series, e.g., the NAO index of one exper-
iment. The expected dimensions are c(members, start dates/forecast horizons).
A vector with only the time dimension can also be provided. Only monthly or
lower frequency time series are supported. See parameter freq.
obs Observational vector or array of time series, e.g., the NAO index of the ob-
servations that correspond the forecast data in exp. The expected dimensions
are c(start dates/forecast horizons) or c(1, start dates/forecast horizons). Only
monthly or lower frequency time series are supported. See parameter freq.
toptitle Character string to be drawn as figure title.
ytitle Character string to be drawn as y-axis title.
monini Number of the month of the first time step, from 1 to 12.
yearini Year of the first time step.
freq Frequency of the provided time series: 1 = yearly, 12 = monthly.
expname Experimental dataset name.
obsname Name of the observational reference dataset.
drawleg TRUE/FALSE: whether to draw the legend or not.
fileout Name of output file. Extensions allowed: eps/ps, jpeg, png, pdf, bmp and tiff. The default value is NULL.
width File width, in the units specified in the parameter size_units (inches by default).
Takes 8 by default.
height File height, in the units specified in the parameter size_units (inches by default).
Takes 5 by default.
size_units Units of the size of the device (file or window) to plot in. Inches (’in’) by default.
See ?Devices and the creator function of the corresponding device.
res Resolution of the device (file or window) to plot in. See ?Devices and the creator
function of the corresponding device.
... Arguments to be passed to the method. Only accepts the following graphical
parameters:
ann ask bg cex.lab cex.sub cin col.axis col.lab col.main col.sub cra crt csi cxy err
family fg fig font font.axis font.lab font.main font.sub lend lheight ljoin lmitre
mex mfcol mfrow mfg mkh oma omd omi page pin plt pty smo srt tck tcl usr
xaxp xaxs xaxt xlog xpd yaxp yaxs yaxt ylbias ylog
For more information about the parameters see ‘par‘.
Value
Generates a file at the path specified via fileout.
Author(s)
History:
0.1 - 2013-09 (<NAME>, <<EMAIL>>) - Original code
0.2 - 2015-03 (<NAME>, <<EMAIL>>) - Removed all normalization for sake of clarity
1.0 - 2016-03 (<NAME>, <<EMAIL>>) - Formatting to R CRAN
See Also
EOF, ProjectField, NAO
Examples
# See examples on Load() to understand the first lines in this example
## Not run:
data_path <- system.file('sample_data', package = 's2dverification')
expA <- list(name = 'experiment', path = file.path(data_path,
'model/$EXP_NAME$/$STORE_FREQ$_mean/$VAR_NAME$_3hourly',
'$VAR_NAME$_$START_DATE$.nc'))
obsX <- list(name = 'observation', path = file.path(data_path,
'$OBS_NAME$/$STORE_FREQ$_mean/$VAR_NAME$',
'$VAR_NAME$_$YEAR$$MONTH$.nc'))
# Now we are ready to use Load().
startDates <- c('19851101', '19901101', '19951101', '20001101', '20051101')
sampleData <- Load('tos', list(expA), list(obsX), startDates,
leadtimemin = 1, leadtimemax = 4, output = 'lonlat',
latmin = 27, latmax = 48, lonmin = -12, lonmax = 40)
## End(Not run)
# Now ready to compute the EOFs and project on, for example, the first
# variability mode.
ano <- Ano_CrossValid(sampleData$mod, sampleData$obs)
ano_exp <- array(ano$exp, dim = dim(ano$exp)[-2])
ano_obs <- array(ano$obs, dim = dim(ano$obs)[-2])
nao <- NAO(ano_exp, ano_obs, sampleData$lat, sampleData$lon)
# Finally plot the nao index
## Not run:
nao$exp <- Reorder(nao$exp, c(2, 1))
nao$obs <- Reorder(nao$obs, c(2, 1))
PlotBoxWhisker(nao$exp, nao$obs, "NAO index, DJF", "NAO index (PC1) TOS",
monini = 12, yearini = 1985, freq = 1, "Exp. A", "Obs. X")
## End(Not run)
PlotClim Plots Climatologies
Description
Plots climatologies as a function of the forecast time for any index output from Clim() and orga-
nized in matrices with dimensions:
c(nmod/nexp, nmemb/nparam, nltime) or c(nmod/nexp, nltime) for the experiment data
c(nobs, nmemb, nltime) or c(nobs, nltime) for the observational data
Usage
PlotClim(
exp_clim,
obs_clim = NULL,
toptitle = "",
ytitle = "",
monini = 1,
freq = 12,
limits = NULL,
listexp = c("exp1", "exp2", "exp3"),
listobs = c("obs1", "obs2", "obs3"),
biglab = FALSE,
leg = TRUE,
sizetit = 1,
fileout = NULL,
width = 8,
height = 5,
size_units = "in",
res = 100,
...
)
Arguments
exp_clim Matrix containing the experimental data with dimensions:
c(nmod/nexp, nmemb/nparam, nltime) or c(nmod/nexp, nltime)
obs_clim Matrix containing the observational data (optional) with dimensions:
c(nobs, nmemb, nltime) or c(nobs, nltime)
toptitle Main title, optional.
ytitle Title of Y-axis, optional.
monini Starting month between 1 and 12. Default = 1.
freq 1 = yearly, 12 = monthly, 4 = seasonal, ... Default = 12.
limits c(lower limit, upper limit): limits of the Y-axis, optional.
listexp List of experiment names, optional.
listobs List of observational dataset names, optional.
biglab TRUE/FALSE for presentation/paper plot. Default = FALSE.
leg TRUE/FALSE to plot the legend or not.
sizetit Multiplicative factor to scale title size, optional.
fileout Name of output file. Extensions allowed: eps/ps, jpeg, png, pdf, bmp and tiff.
The default value is NULL, in which case the figure is shown in a pop-up window.
width File width, in the units specified in the parameter size_units (inches by default).
Takes 8 by default.
height File height, in the units specified in the parameter size_units (inches by default).
Takes 5 by default.
size_units Units of the size of the device (file or window) to plot in. Inches (’in’) by default.
See ?Devices and the creator function of the corresponding device.
res Resolution of the device (file or window) to plot in. See ?Devices and the creator
function of the corresponding device.
... Arguments to be passed to the method. Only accepts the following graphical
parameters:
adj ann ask bg bty cex.sub cin col.axis col.lab col.main col.sub cra crt csi cxy err
family fg fig font font.axis font.lab font.main font.sub lend lheight ljoin lmitre
mar mex mfcol mfrow mfg mkh oma omd omi page pch plt smo srt tck usr xaxp
xaxs xaxt xlog xpd yaxp yaxs yaxt ylbias ylog
For more information about the parameters see ‘par‘.
Examples
# Load sample data as in Load() example:
example(Load)
clim <- Clim(sampleData$mod, sampleData$obs)
PlotClim(clim$clim_exp, clim$clim_obs, toptitle = paste('climatologies'),
ytitle = 'K', monini = 11, listexp = c('CMIP5 IC3'),
listobs = c('ERSST'), biglab = FALSE, fileout = NULL)
PlotEquiMap Maps A Two-Dimensional Variable On A Cylindrical Equidistant Pro-
jection
Description
Map longitude-latitude array (on a regular rectangular or gaussian grid) on a cylindrical equidistant
latitude and longitude projection with coloured grid cells. Only the region for which data has been
provided is displayed. A colour bar (legend) can be plotted and adjusted. It is possible to draw
superimposed arrows, dots, symbols, contour lines and boxes. A number of options is provided to
adjust the position, size and colour of the components. Some parameters are provided to add and
adjust the masks that include continents, oceans, and lakes. This plot function is compatible with
figure layouts if colour bar is disabled.
Usage
PlotEquiMap(
var,
lon,
lat,
varu = NULL,
varv = NULL,
toptitle = NULL,
sizetit = NULL,
units = NULL,
brks = NULL,
cols = NULL,
bar_limits = NULL,
triangle_ends = NULL,
col_inf = NULL,
col_sup = NULL,
colNA = NULL,
color_fun = clim.palette(),
square = TRUE,
filled.continents = NULL,
filled.oceans = FALSE,
country.borders = FALSE,
coast_color = NULL,
coast_width = 1,
lake_color = NULL,
shapefile = NULL,
shapefile_color = NULL,
shapefile_lwd = 1,
contours = NULL,
brks2 = NULL,
contour_lwd = 0.5,
contour_color = "black",
contour_lty = 1,
contour_draw_label = TRUE,
contour_label_scale = 1,
dots = NULL,
dot_symbol = 4,
dot_size = 1,
arr_subsamp = floor(length(lon)/30),
arr_scale = 1,
arr_ref_len = 15,
arr_units = "m/s",
arr_scale_shaft = 1,
arr_scale_shaft_angle = 1,
axelab = TRUE,
labW = FALSE,
lab_dist_x = NULL,
lab_dist_y = NULL,
degree_sym = FALSE,
intylat = 20,
intxlon = 20,
xlonshft = 0,
ylatshft = 0,
xlabels = NULL,
ylabels = NULL,
axes_tick_scale = 1,
axes_label_scale = 1,
drawleg = TRUE,
subsampleg = NULL,
bar_extra_labels = NULL,
draw_bar_ticks = TRUE,
draw_separators = FALSE,
triangle_ends_scale = 1,
bar_label_digits = 4,
bar_label_scale = 1,
units_scale = 1,
bar_tick_scale = 1,
bar_extra_margin = rep(0, 4),
boxlim = NULL,
boxcol = "purple2",
boxlwd = 5,
margin_scale = rep(1, 4),
title_scale = 1,
numbfig = NULL,
fileout = NULL,
width = 8,
height = 5,
size_units = "in",
res = 100,
...
)
Arguments
var Array with the values at each cell of a grid on a regular rectangular or gaus-
sian grid. The array is expected to have two dimensions: c(latitude, longitude).
Longitudes can be in ascending or descending order and latitudes in any or-
der. It can contain NA values (coloured with ’colNA’). Arrays with dimensions
c(longitude, latitude) will also be accepted but ’lon’ and ’lat’ will be used to dis-
ambiguate so this alternative is not appropriate for square arrays. It is allowed
that the positions of the longitudinal and latitudinal coordinate dimensions are
interchanged.
lon Numeric vector of longitude locations of the cell centers of the grid of ’var’, in
ascending or descending order (same as ’var’). Expected to be regularly spaced,
within either of the ranges [-180, 180] or [0, 360]. Data for two adjacent re-
gions split by the limits of the longitude range can also be provided, e.g. lon =
c(0:50, 300:360) (’var’ must be provided consistently).
lat Numeric vector of latitude locations of the cell centers of the grid of ’var’, in
any order (same as ’var’). Expected to be from a regular rectangular or gaussian
grid, within the range [-90, 90].
varu Array of the zonal component of wind/current/other field with the same dimen-
sions as ’var’. It is allowed that the positions of the longitudinal and latitudinal
coordinate dimensions are interchanged.
varv Array of the meridional component of wind/current/other field with the same
dimensions as ’var’. It is allowed that the positions of the longitudinal and lati-
tudinal coordinate dimensions are interchanged.
toptitle Top title of the figure, scalable with parameter ’title_scale’.
sizetit Scale factor for the figure top title provided in parameter ’toptitle’. Deprecated.
Use ’title_scale’ instead.
units Title at the top of the colour bar, most commonly the units of the variable pro-
vided in parameter ’var’.
brks, cols, bar_limits, triangle_ends
Usually only providing ’brks’ is enough to generate the desired colour bar.
These parameters allow defining n breaks that delimit n - 1 intervals to classify
each of the values in ’var’. The corresponding grid cell of a given value in
’var’ will be coloured according to the interval it belongs to. These parameters
are sent to ColorBar() to generate the breaks and colours. Additional colours
for values beyond the limits of the colour bar are also generated and applied to
the plot if ’bar_limits’ or ’brks’ and ’triangle_ends’ are properly provided to do
so. See ?ColorBar for a full explanation.
col_inf, col_sup, colNA
Colour identifiers to colour the values in ’var’ that go beyond the extremes of
the colour bar and to colour NA values, respectively. ’colNA’ takes attr(cols,
’na_color’) if available by default, where cols is the parameter ’cols’ if provided
or the vector of colors returned by ’color_fun’. If not available, it takes ’pink’ by
default. ’col_inf’ and ’col_sup’ will take the value of ’colNA’ if not specified.
See ?ColorBar for a full explanation on ’col_inf’ and ’col_sup’.
color_fun, subsampleg, bar_extra_labels, draw_bar_ticks
Set of parameters to control the visual aspect of the drawn colour bar (1/3). See
?ColorBar for a full explanation.
square Logical value to choose either to draw a coloured square for each grid cell in
’var’ (TRUE; default) or to draw contour lines and fill the spaces in between
with colours (FALSE). In the latter case, ’filled.continents’ will take the value
FALSE if not specified.
filled.continents
Colour to fill in drawn projected continents. Takes the value gray(0.5) by default
or, if ’square = FALSE’, takes the value FALSE. If set to FALSE, continents are
not filled in.
filled.oceans A logical value or the color name to fill in drawn projected oceans. The default
value is FALSE. If it is TRUE, the default colour is "light blue".
country.borders
A logical value indicating if the country borders should be plotted (TRUE) or not
(FALSE). It only works when ’filled.continents’ is FALSE. The default value is
FALSE.
coast_color Colour of the coast line of the drawn projected continents. Takes the value
gray(0.5) by default.
coast_width Line width of the coast line of the drawn projected continents. Takes the value
1 by default.
lake_color Colour of the lake or other water body inside continents. The default value is
NULL.
shapefile A character string of the path to a .rds file or a list object containing shape file
data. If it is a .rds file, it should contain a list. The list should contain ’x’ and
’y’ at least, which indicate the location of the shape. The default value is NULL.
shapefile_color
Line color of the shapefile.
shapefile_lwd Line width of the shapefile. The default value is 1.
contours Array of same dimensions as ’var’ to be added to the plot and displayed with
contours. Parameter ’brks2’ is required to define the magnitude breaks for each
contour curve. Disregarded if ’square = FALSE’. It is allowed that the positions
of the longitudinal and latitudinal coordinate dimensions are interchanged.
brks2 Vector of magnitude breaks where to draw contour curves for the array provided
in ’contours’ or if ’square = FALSE’.
contour_lwd Line width of the contour curves provided via ’contours’ and ’brks2’, or if
’square = FALSE’.
contour_color Line color of the contour curves provided via ’contours’ and ’brks2’, or if
’square = FALSE’.
contour_lty Line type of the contour curves. Takes 1 (solid) by default. See help on ’lty’ in
par() for other accepted values.
contour_draw_label
A logical value indicating whether to draw the contour labels or not. The default
value is TRUE.
contour_label_scale
Scale factor for the superimposed labels when drawing contour levels.
dots Array of same dimensions as ’var’ or with dimensions c(n, dim(var)), where n
is the number of dot/symbol layers to add to the plot. A value of TRUE at a
grid cell will draw a dot/symbol on the corresponding square of the plot. By
default all layers provided in ’dots’ are plotted with dots, but a symbol can be
specified for each of the layers via the parameter ’dot_symbol’. It is allowed
that the positions of the longitudinal and latitudinal coordinate dimensions are
interchanged.
dot_symbol Single character/number or vector of characters/numbers that correspond to each
of the symbol layers specified in parameter ’dots’. If a single value is specified,
it will be applied to all the layers in ’dots’. Takes 15 (centered square) by default.
See ’pch’ in par() for additional accepted options.
dot_size Scale factor for the dots/symbols to be plotted, specified in ’dots’. If a single
value is specified, it will be applied to all layers in ’dots’. Takes 1 by default.
arr_subsamp Subsampling factor to select a subset of arrows in ’varu’ and ’varv’ to be drawn.
Only one out of arr_subsamp arrows will be drawn. Takes 1 by default.
arr_scale Scale factor for drawn arrows from ’varu’ and ’varv’. Takes 1 by default.
arr_ref_len Length of the reference arrow to be drawn as legend at the bottom of the figure (in
same units as ’varu’ and ’varv’, only affects the legend for the wind or variable
in these arrays). Defaults to 15.
arr_units Units of ’varu’ and ’varv’, to be drawn in the legend. Takes ’m/s’ by default.
arr_scale_shaft
Parameter for the scale of the shaft of the arrows (which also depend on the
number of figures and the arr_scale parameter). Defaults to 1.
arr_scale_shaft_angle
Parameter for the scale of the angle of the shaft of the arrows (which also depend
on the number of figure and the arr_scale parameter). Defaults to 1.
axelab Whether to draw longitude and latitude axes or not. TRUE by default.
labW Whether to label the longitude axis with a ’W’ instead of minus for negative
values. Defaults to FALSE.
lab_dist_x A numeric of the distance of the longitude labels to the box borders. The default
value is NULL and is automatically adjusted by the function.
lab_dist_y A numeric of the distance of the latitude labels to the box borders. The default
value is NULL and is automatically adjusted by the function.
degree_sym A logical indicating whether to include degree symbol (30° N) or not (30N;
default).
intylat Interval between latitude ticks on y-axis, in degrees. Defaults to 20.
intxlon Interval between longitude ticks on x-axis, in degrees. Defaults to 20.
xlonshft A numeric of the degrees to shift the longitude ticks. The default value is 0.
ylatshft A numeric of the degrees to shift the latitude ticks. The default value is 0.
xlabels A vector of character strings of the customized x-axis labels. The values should
correspond to each tick, which is decided by the longitude and parameter ’in-
txlon’. The default value is NULL and the labels will be automatically gener-
ated.
ylabels A vector of character strings of the customized y-axis labels. The values should
correspond to each tick, which is decided by the latitude and parameter ’intylat’.
The default value is NULL and the labels will be automatically generated.
axes_tick_scale
Scale factor for the tick lines along the longitude and latitude axes.
axes_label_scale
Scale factor for the labels along the longitude and latitude axes.
drawleg Whether to plot a color bar (legend, key) or not. Defaults to TRUE. It is not
possible to plot the colour bar if ’add = TRUE’. Use ColorBar() and the return
values of PlotEquiMap() instead.
draw_separators, triangle_ends_scale, bar_label_digits
Set of parameters to control the visual aspect of the drawn colour bar (2/3). See
?ColorBar for a full explanation.
bar_label_scale, units_scale, bar_tick_scale, bar_extra_margin
Set of parameters to control the visual aspect of the drawn colour bar (3/3). See
?ColorBar for a full explanation.
boxlim Limits of a box to be added to the plot, in degrees: c(x1, y1, x2, y2). A list with
multiple box specifications can also be provided.
boxcol Colour of the box lines. A vector with a colour for each of the boxes is also
accepted. Defaults to ’purple2’.
boxlwd Line width of the box lines. A vector with a line width for each of the boxes is
also accepted. Defaults to 5.
margin_scale Scale factor for the margins around the map plot, with the format c(y1, x1,
y2, x2). Defaults to rep(1, 4). If drawleg = TRUE, then 1 unit is subtracted
from margin_scale[1].
title_scale Scale factor for the figure top title. Defaults to 1.
numbfig Number of figures in the layout the plot will be put into. A higher numbfig will
result in narrower margins and smaller labels, axis labels, ticks, thinner lines, ...
Defaults to 1.
fileout File where to save the plot. If not specified (default) a graphics device will pop
up. Extensions allowed: eps/ps, jpeg, png, pdf, bmp and tiff.
width File width, in the units specified in the parameter size_units (inches by default).
Takes 8 by default.
height File height, in the units specified in the parameter size_units (inches by default).
Takes 5 by default.
size_units Units of the size of the device (file or window) to plot in. Inches (’in’) by default.
See ?Devices and the creator function of the corresponding device.
res Resolution of the device (file or window) to plot in. See ?Devices and the creator
function of the corresponding device.
... Arguments to be passed to the method. Only accepts the following graphical
parameters:
adj ann ask bg bty cex.sub cin col.axis col.lab col.main col.sub cra crt csi cxy
err family fg font font.axis font.lab font.main font.sub lend lheight ljoin lmitre
mex mfcol mfrow mfg mkh omd omi page pch pin plt pty smo srt tcl usr xaxp
xaxs xaxt xlog xpd yaxp yaxs yaxt ylbias ylog
For more information about the parameters see ‘par‘.
Value
brks Breaks used for colouring the map (and legend if drawleg = TRUE).
cols Colours used for colouring the map (and legend if drawleg = TRUE). Always of
length length(brks) - 1.
col_inf Colour used to draw the lower triangle end in the colour bar (NULL if not drawn
at all).
col_sup Colour used to draw the upper triangle end in the colour bar (NULL if not drawn
at all).
Examples
# See examples on Load() to understand the first lines in this example
## Not run:
data_path <- system.file('sample_data', package = 's2dv')
expA <- list(name = 'experiment', path = file.path(data_path,
'model/$EXP_NAME$/$STORE_FREQ$_mean/$VAR_NAME$_3hourly',
'$VAR_NAME$_$START_DATE$.nc'))
obsX <- list(name = 'observation', path = file.path(data_path,
'$OBS_NAME$/$STORE_FREQ$_mean/$VAR_NAME$',
'$VAR_NAME$_$YEAR$$MONTH$.nc'))
# Now we are ready to use Load().
startDates <- c('19851101', '19901101', '19951101', '20001101', '20051101')
sampleData <- Load('tos', list(expA), list(obsX), startDates,
leadtimemin = 1, leadtimemax = 4, output = 'lonlat',
latmin = 27, latmax = 48, lonmin = -12, lonmax = 40)
## End(Not run)
PlotEquiMap(sampleData$mod[1, 1, 1, 1, , ], sampleData$lon, sampleData$lat,
toptitle = 'Predicted sea surface temperature for Nov 1960 from 1st Nov',
sizetit = 0.5)
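As a minimal sketch (not a shipped example) of the colour-bar and overlay arguments described above, the same field can be drawn with an explicit set of breaks in 'brks' and a dot overlay in 'dots'. The 285 K threshold and the number of breaks are arbitrary choices for illustration only:
# Same map with a user-defined colour bar (11 breaks -> 10 colours) and a
# dot overlay marking cells warmer than an arbitrary 285 K threshold
field <- sampleData$mod[1, 1, 1, 1, , ]
PlotEquiMap(field, sampleData$lon, sampleData$lat,
brks = seq(min(field, na.rm = TRUE), max(field, na.rm = TRUE),
length.out = 11),
dots = !is.na(field) & field > 285,
toptitle = 'Tos with dot overlay', units = 'K')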
PlotLayout Arrange and Fill Multi-Panel Layouts With Optional Colour Bar
Description
This function takes an array or list of arrays and loops over each of them to plot all the sub-arrays
they contain on an automatically generated multi-panel layout. A different plot function (not
necessarily from s2dv) can be applied over each of the provided arrays. The input dimensions of
each of the functions have to be specified, either with the names or the indices of the corresponding
input dimensions. It is possible to draw a common colour bar at any of the sides of the multi-
panel for all the s2dv plots that use a colour bar. Common plotting arguments for all the arrays
in ’var’ can be specified via the ’...’ parameter, and specific plotting arguments for each array can
be fully adjusted via ’special_args’. It is possible to draw titles for each of the figures, layout rows,
layout columns and for the whole figure. A number of parameters are provided in order to adjust the
position, size and colour of the components. Blank cells can be forced to appear and later be filled
in manually with customized plots.
This function pops up a blank new device and fills it in, so it cannot be nested in complex layouts.
Usage
PlotLayout(
fun,
plot_dims,
var,
...,
special_args = NULL,
nrow = NULL,
ncol = NULL,
toptitle = NULL,
row_titles = NULL,
col_titles = NULL,
bar_scale = 1,
title_scale = 1,
title_margin_scale = 1,
title_left_shift_scale = 1,
subtitle_scale = 1,
subtitle_margin_scale = 1,
subplot_titles_scale = 1,
brks = NULL,
cols = NULL,
drawleg = "S",
titles = NULL,
subsampleg = NULL,
bar_limits = NULL,
triangle_ends = NULL,
col_inf = NULL,
col_sup = NULL,
color_fun = clim.colors,
draw_bar_ticks = TRUE,
draw_separators = FALSE,
triangle_ends_scale = 1,
bar_extra_labels = NULL,
units = NULL,
units_scale = 1,
bar_label_scale = 1,
bar_tick_scale = 1,
bar_extra_margin = rep(0, 4),
bar_left_shift_scale = 1,
bar_label_digits = 4,
extra_margin = rep(0, 4),
layout_by_rows = TRUE,
fileout = NULL,
width = NULL,
height = NULL,
size_units = "in",
res = 100,
close_device = TRUE
)
Arguments
fun Plot function (or name of the function) to be called on the arrays provided in
’var’. If multiple arrays are provided in ’var’, a vector of as many function
names (character strings!) can be provided in ’fun’, one for each array in ’var’.
plot_dims Numeric or character string vector with identifiers of the input plot dimen-
sions of the plot function specified in ’fun’. If character labels are provided,
names(dim(var)) or attr(’dimensions’, var) will be checked to locate the dimen-
sions. As many plots as prod(dim(var)[-plot_dims]) will be generated. If mul-
tiple arrays are provided in ’var’, ’plot_dims’ can be sent a list with a vector of
plot dimensions for each. If a single vector is provided, it will be used for all the
arrays in ’var’.
var Multi-dimensional array with at least the dimensions expected by the specified
plot function in ’fun’. The dimensions required by the function must be spec-
ified in ’plot_dims’. The dimensions can be disordered and will be reordered
automatically. Dimensions can optionally be labelled in order to refer to them
with names in ’plot_dims’. All the available plottable sub-arrays will be auto-
matically plotted and arranged in consecutive cells of an automatically arranged
layout. A list of multiple (super-)arrays can be specified. The process will be
repeated for each of them, by default applying the same plot function to all of
them or, if properly specified in ’fun’, a different plot function will be applied to
each of them. NAs can be passed to the list: a NA will yield a blank cell in the
layout, which can be populated after (see .SwitchToFigure).
... Parameters to be sent to the plotting function ’fun’. If multiple arrays are pro-
vided in ’var’ and multiple functions are provided in ’fun’, the parameters pro-
vided through . . . will be sent to all the plot functions, as common parameters.
To specify concrete arguments for each of the plot functions see parameter ’spe-
cial_args’.
special_args List of sub-lists, each sub-list having specific extra arguments for each of the
plot functions provided in ’fun’. If you want to fix a different value for each plot
in the layout you can do so by a) splitting your array into a list of sub-arrays
(each with the data for one plot) and providing it as parameter ’var’, b) provid-
ing a list of named sub-lists in ’special_args’, where the names of each sub-list
match the names of the parameters to be adjusted, and each value in a sub-list
contains the value of the corresponding parameter. For example, if the plots are
two maps with different arguments, the structure would be like:
var:
List of 2
$ : num [1:360, 1:181] 1 3.82 5.02 6.63 8.72 ...
$ : num [1:360, 1:181] 2.27 2.82 4.82 7.7 10.32 ...
special_args:
List of 2
$ :List of 2
..$ arg1: ...
..$ arg2: ...
$ :List of 1
..$ arg1: ...
nrow Numeric value to force the number of rows in the automatically generated lay-
out. If higher than the required, this will yield blank cells in the layout (which
can then be populated). If lower than the required the function will stop. By de-
fault it is configured to arrange the layout in a shape as square as possible. Blank
cells can be manually populated after with customized plots (see
.SwitchToFigure).
ncol Numeric value to force the number of columns in the automatically generated
layout. If higher than the required, this will yield blank cells in the layout (which
can then be populated). If lower than the required the function will stop. By de-
fault it is configured to arrange the layout in a shape as square as possible. Blank
cells can be manually populated after with customized plots (see
.SwitchToFigure).
toptitle Top title for the multi-panel. Blank by default.
row_titles Character string vector with titles for each of the rows in the layout. Blank by
default.
col_titles Character string vector with titles for each of the columns in the layout. Blank
by default.
bar_scale Scale factor for the common colour bar. Takes 1 by default.
title_scale Scale factor for the multi-panel title. Takes 1 by default.
title_margin_scale
Scale factor for the margins surrounding the top title. Takes 1 by default.
title_left_shift_scale
When plotting row titles, a shift is added to the horizontal positioning of the top
title in order to center it to the region of the figures (without taking row titles
into account). This shift can be reduced. A value of 0 will remove the shift
completely, centering the title to the total width of the device. This parameter
will be disregarded if no ’row_titles’ are provided.
subtitle_scale Scale factor for the row titles and column titles (specified in ’row_titles’ and
’col_titles’). Takes 1 by default.
subtitle_margin_scale
Scale factor for the margins surrounding the subtitles. Takes 1 by default.
subplot_titles_scale
Scale factor for the subplots top titles. Takes 1 by default.
brks, cols, bar_limits, triangle_ends
Usually only providing ’brks’ is enough to generate the desired colour bar.
These parameters allow defining n breaks that delimit n - 1 intervals to classify
each of the values in ’var’. The corresponding grid cell of a given value in
’var’ will be coloured according to the interval it belongs to. These parameters
are sent to ColorBar() to generate the breaks and colours. Additional colours
for values beyond the limits of the colour bar are also generated and applied to
the plot if ’bar_limits’ or ’brks’ and ’triangle_ends’ are properly provided to do
so. See ?ColorBar for a full explanation.
drawleg Where to draw the common colour bar. Can take values TRUE, FALSE or:
’up’, ’u’, ’U’, ’top’, ’t’, ’T’, ’north’, ’n’, ’N’
’down’, ’d’, ’D’, ’bottom’, ’b’, ’B’, ’south’, ’s’, ’S’ (default)
’right’, ’r’, ’R’, ’east’, ’e’, ’E’
’left’, ’l’, ’L’, ’west’, ’w’, ’W’
titles Character string vector with titles for each of the figures in the multi-pannel,
from top-left to bottom-right. Blank by default.
col_inf, col_sup
Colour identifiers to colour the values in ’var’ that go beyond the extremes of
the colour bar and to colour NA values, respectively. ’colNA’ takes ’white’ by
default. ’col_inf’ and ’col_sup’ will take the value of ’colNA’ if not specified.
See ?ColorBar for a full explanation on ’col_inf’ and ’col_sup’.
color_fun, subsampleg, bar_extra_labels, draw_bar_ticks, draw_separators, triangle_ends_scale, bar_lab
Set of parameters to control the visual aspect of the drawn colour bar. See
?ColorBar for a full explanation.
units Title at the top of the colour bar, most commonly the units of the variable pro-
vided in parameter ’var’.
bar_left_shift_scale
When plotting row titles, a shift is added to the horizontal positioning of the
colour bar in order to center it to the region of the figures (without taking row
titles into account). This shift can be reduced. A value of 0 will remove the
shift completely, centering the colour bar to the total width of the device. This
parameter will be disregarded if no ’row_titles’ are provided.
extra_margin Extra margins to be added around the layout, in the format c(y1, x1, y2, x2).
The units are margin lines. Takes rep(0, 4) by default.
layout_by_rows Logical indicating whether the panels should be filled by columns (FALSE) or by
rows (TRUE, default).
fileout File where to save the plot. If not specified (default) a graphics device will pop
up. Extensions allowed: eps/ps, jpeg, png, pdf, bmp and tiff.
width Width in inches of the multi-panel. 7 by default, or 11 if ’fileout’ has been
specified.
height Height in inches of the multi-panel. 7 by default, or 11 if ’fileout’ has been
specified.
size_units Units of the size of the device (file or window) to plot in. Inches (’in’) by default.
See ?Devices and the creator function of the corresponding device.
res Resolution of the device (file or window) to plot in. See ?Devices and the creator
function of the corresponding device.
close_device Whether to close the graphics device after plotting the layout and a ’fileout’ has
been specified. This is useful to avoid closing the device when saving the layout
into a file and willing to add extra elements or figures. Takes TRUE by default.
Disregarded if no ’fileout’ has been specified.
Value
brks Breaks used for colouring the map (and legend if drawleg = TRUE).
cols Colours used for colouring the map (and legend if drawleg = TRUE). Always of
length length(brks) - 1.
col_inf Colour used to draw the lower triangle end in the colour bar (NULL if not drawn
at all).
col_sup Colour used to draw the upper triangle end in the colour bar (NULL if not drawn
at all).
layout_matrix Underlying matrix of the layout. Useful to later set any of the layout cells as
current figure to add plot elements. See .SwitchToFigure.
Examples
# See examples on Load() to understand the first lines in this example
## Not run:
data_path <- system.file('sample_data', package = 's2dv')
expA <- list(name = 'experiment', path = file.path(data_path,
'model/$EXP_NAME$/$STORE_FREQ$_mean/$VAR_NAME$_3hourly',
'$VAR_NAME$_$START_DATE$.nc'))
obsX <- list(name = 'observation', path = file.path(data_path,
'$OBS_NAME$/$STORE_FREQ$_mean/$VAR_NAME$',
'$VAR_NAME$_$YEAR$$MONTH$.nc'))
# Now we are ready to use Load().
startDates <- c('19851101', '19901101', '19951101', '20001101', '20051101')
sampleData <- Load('tos', list(expA), list(obsX), startDates,
leadtimemin = 1, leadtimemax = 4, output = 'lonlat',
latmin = 27, latmax = 48, lonmin = -12, lonmax = 40)
## End(Not run)
PlotLayout(PlotEquiMap, c('lat', 'lon'), sampleData$mod[1, , 1, 1, , ],
sampleData$lon, sampleData$lat,
toptitle = 'Predicted tos for Nov 1960 from 1st Nov',
titles = paste('Member', 1:15))
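As a minimal sketch (not a shipped example) of the 'special_args' structure described in the Arguments section, two member fields are plotted with PlotEquiMap, each receiving its own 'colNA' colour. The choice of members and colours is arbitrary, and the dimension names are set explicitly so that 'plot_dims' can locate them:
# Two sub-arrays as a list in 'var', each with per-plot arguments
map1 <- sampleData$mod[1, 1, 1, 1, , ]
map2 <- sampleData$mod[1, 2, 1, 1, , ]
names(dim(map1)) <- names(dim(map2)) <- c('lat', 'lon')
PlotLayout(PlotEquiMap, c('lat', 'lon'), list(map1, map2),
sampleData$lon, sampleData$lat,
special_args = list(list(colNA = 'grey'), list(colNA = 'white')),
titles = c('Member 1', 'Member 2'),
toptitle = 'Two members with per-plot arguments')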
PlotMatrix Function to convert any numerical table to a grid of coloured squares.
Description
This function converts a numerical data matrix into a coloured grid. It is useful for a slide or article
to present tabular results as colors instead of numbers.
Usage
PlotMatrix(
var,
brks = NULL,
cols = NULL,
toptitle = NULL,
title.color = "royalblue4",
xtitle = NULL,
ytitle = NULL,
xlabels = NULL,
xvert = FALSE,
ylabels = NULL,
line = 3,
figure.width = 1,
legend = TRUE,
legend.width = 0.15,
xlab_dist = NULL,
ylab_dist = NULL,
fileout = NULL,
size_units = "px",
res = 100,
...
)
Arguments
var A numerical matrix containing the values to be displayed in a colored image.
brks A vector of the color bar intervals. The length must be one more than the pa-
rameter ’cols’. Use ColorBar() to generate default values.
cols A vector of valid color identifiers for color bar. The length must be one less than
the parameter ’brks’. Use ColorBar() to generate default values.
toptitle A string of the title of the grid. Set NULL as default.
title.color A string of valid color identifier to decide the title color. Set "royalblue4" as
default.
xtitle A string of title of the x-axis. Set NULL as default.
ytitle A string of title of the y-axis. Set NULL as default.
xlabels A vector of labels of the x-axis. The length must equal the number of columns of
parameter ’var’. Set the sequence from 1 to the number of columns of parameter
’var’ as default.
xvert A logical value to decide whether to place x-axis labels vertically. Set FALSE
as default, which keeps the labels horizontally.
ylabels A vector of labels of the y-axis. The length must equal the number of rows of
parameter ’var’. Set the sequence from 1 to the number of rows of parameter
’var’ as default.
line An integer specifying the distance between the title of the x-axis and the x-axis.
Set 3 as default. Adjust if the x-axis labels are long.
figure.width A positive number as a ratio adjusting the width of the grids. Set 1 as default.
legend A logical value to decide to draw the grid color legend or not. Set TRUE as
default.
legend.width A number between 0 and 0.5 to adjust the legend width. Set 0.15 as default.
xlab_dist A number specifying the distance between the x labels and the x axis. If not
specified, it equals to -1 - (nrow(var) / 10 - 1).
ylab_dist A number specifying the distance between the y labels and the y axis. If not
specified, it equals to 0.5 - ncol(var) / 10.
fileout A string of full directory path and file name indicating where to save the plot. If
not specified (default), a graphics device will pop up.
size_units A string indicating the units of the size of the device (file or window) to plot in.
Set ’px’ as default. See ?Devices and the creator function of the corresponding
device.
res A positive number indicating resolution of the device (file or window) to plot in.
See ?Devices and the creator function of the corresponding device.
... The additional parameters to be passed to function ColorBar() in s2dv for color
legend creation.
Value
A figure in a pop-up window by default, or saved to the specified path.
Examples
#Example with random data
PlotMatrix(var = matrix(rnorm(n = 120, mean = 0.3), 10, 12),
cols = c('white','#fef0d9','#fdd49e','#fdbb84','#fc8d59',
'#e34a33','#b30000', '#7f0000'),
brks = c(-1, 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 1),
toptitle = "Mean Absolute Error",
xtitle = "Forecast time (month)", ytitle = "Start date",
xlabels = c("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul",
"Aug", "Sep", "Oct", "Nov", "Dec"))
PlotSection Plots A Vertical Section
Description
Plot a (longitude,depth) or (latitude,depth) section.
Usage
PlotSection(
var,
horiz,
depth,
toptitle = "",
sizetit = 1,
units = "",
brks = NULL,
cols = NULL,
axelab = TRUE,
intydep = 200,
intxhoriz = 20,
drawleg = TRUE,
fileout = NULL,
width = 8,
height = 5,
size_units = "in",
res = 100,
...
)
Arguments
var Matrix to plot with (longitude/latitude, depth) dimensions.
horiz Array of longitudes or latitudes.
depth Array of depths.
toptitle Title, optional.
sizetit Multiplicative factor to increase title size, optional.
units Units, optional.
brks Colour levels, optional.
cols List of colours, optional.
axelab TRUE/FALSE, label the axis. Default = TRUE.
intydep Interval between depth ticks on y-axis. Default: 200m.
intxhoriz Interval between longitude/latitude ticks on x-axis.
Default: 20deg.
drawleg Draw colorbar. Default: TRUE.
fileout Name of output file. Extensions allowed: eps/ps, jpeg, png, pdf, bmp and tiff.
Default = NULL
width File width, in the units specified in the parameter size_units (inches by default).
Takes 8 by default.
height File height, in the units specified in the parameter size_units (inches by default).
Takes 5 by default.
size_units Units of the size of the device (file or window) to plot in. Inches (’in’) by default.
See ?Devices and the creator function of the corresponding device.
res Resolution of the device (file or window) to plot in. See ?Devices and the creator
function of the corresponding device.
... Arguments to be passed to the method. Only accepts the following graphical
parameters:
adj ann ask bg bty cex.lab cex.sub cin col.axis col.lab col.main col.sub cra crt
csi cxy err family fg fig fin font font.axis font.lab font.main font.sub lend lheight
ljoin lmitre lty lwd mex mfcol mfrow mfg mkh oma omd omi page pch pin plt
pty smo srt tcl usr xaxp xaxs xaxt xlog xpd yaxp yaxs yaxt ylbias ylog
For more information about the parameters see ‘par‘.
Examples
sampleData <- s2dv::sampleDepthData
PlotSection(sampleData$mod[1, 1, 1, 1, , ], sampleData$lat, sampleData$depth,
toptitle = 'temperature 1995-11 member 0')
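As a minimal sketch (not a shipped example) with purely synthetic values rather than a physical field, illustrating the (horizontal coordinate, depth) input shape and the tick-interval arguments:
# A synthetic (latitude, depth) section with custom tick intervals
lat <- seq(-90, 90, length.out = 50)
depth <- seq(0, 2000, length.out = 30)
PlotSection(outer(sin(lat * pi / 180), exp(-depth / 1000)), lat, depth,
toptitle = 'Synthetic section', units = 'arbitrary',
intydep = 500, intxhoriz = 30)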
PlotStereoMap Maps A Two-Dimensional Variable On A Polar Stereographic Projec-
tion
Description
Map longitude-latitude array (on a regular rectangular or gaussian grid) on a polar stereographic
world projection with coloured grid cells. Only the region within a specified latitude interval is
displayed. A colour bar (legend) can be plotted and adjusted. It is possible to draw superimposed
dots, symbols, boxes, contours, and arrows. A number of options are provided to adjust the position,
size and colour of the components. This plot function is compatible with figure layouts if colour
bar is disabled.
Usage
PlotStereoMap(
var,
lon,
lat,
varu = NULL,
varv = NULL,
latlims = c(60, 90),
toptitle = NULL,
sizetit = NULL,
units = NULL,
brks = NULL,
cols = NULL,
bar_limits = NULL,
triangle_ends = NULL,
col_inf = NULL,
col_sup = NULL,
colNA = NULL,
color_fun = clim.palette(),
filled.continents = FALSE,
coast_color = NULL,
coast_width = 1,
contours = NULL,
brks2 = NULL,
contour_lwd = 0.5,
contour_color = "black",
contour_lty = 1,
contour_label_draw = TRUE,
contour_label_scale = 0.6,
dots = NULL,
dot_symbol = 4,
dot_size = 0.8,
intlat = 10,
arr_subsamp = floor(length(lon)/30),
arr_scale = 1,
arr_ref_len = 15,
arr_units = "m/s",
arr_scale_shaft = 1,
arr_scale_shaft_angle = 1,
drawleg = TRUE,
subsampleg = NULL,
bar_extra_labels = NULL,
draw_bar_ticks = TRUE,
draw_separators = FALSE,
triangle_ends_scale = 1,
bar_label_digits = 4,
bar_label_scale = 1,
units_scale = 1,
bar_tick_scale = 1,
bar_extra_margin = rep(0, 4),
boxlim = NULL,
boxcol = "purple2",
boxlwd = 5,
margin_scale = rep(1, 4),
title_scale = 1,
numbfig = NULL,
fileout = NULL,
width = 6,
height = 5,
size_units = "in",
res = 100,
...
)
Arguments
var Array with the values at each cell of a grid on a regular rectangular or gaus-
sian grid. The array is expected to have two dimensions: c(latitude, longitude).
Longitudes can be in ascending or descending order and latitudes in any or-
der. It can contain NA values (coloured with ’colNA’). Arrays with dimensions
c(longitude, latitude) will also be accepted but ’lon’ and ’lat’ will be used to
disambiguate so this alternative is not appropriate for square arrays.
lon Numeric vector of longitude locations of the cell centers of the grid of ’var’, in
ascending or descending order (same as ’var’). Expected to be regularly spaced,
within either of the ranges [-180, 180] or [0, 360]. Data for two adjacent re-
gions split by the limits of the longitude range can also be provided, e.g. lon =
c(0:50, 300:360) (’var’ must be provided consistently).
lat Numeric vector of latitude locations of the cell centers of the grid of ’var’, in
any order (same as ’var’). Expected to be from a regular rectangular or gaussian
grid, within the range [-90, 90].
varu Array of the zonal component of wind/current/other field with the same dimen-
sions as ’var’.
varv Array of the meridional component of wind/current/other field with the same
dimensions as ’var’.
latlims Latitudinal limits of the figure.
Example : c(60, 90) for the North Pole
c(-90,-60) for the South Pole
toptitle Top title of the figure, scalable with parameter ’title_scale’.
sizetit Scale factor for the figure top title provided in parameter ’toptitle’. Deprecated.
Use ’title_scale’ instead.
units Title at the top of the colour bar, most commonly the units of the variable pro-
vided in parameter ’var’.
brks, cols, bar_limits, triangle_ends
Usually only providing ’brks’ is enough to generate the desired colour bar.
These parameters allow defining n breaks that delimit n - 1 intervals to classify
each of the values in ’var’. The corresponding grid cell of a given value in
’var’ will be coloured according to the interval it belongs to. These parameters
are sent to ColorBar() to generate the breaks and colours. Additional colours
for values beyond the limits of the colour bar are also generated and applied to
the plot if ’bar_limits’ or ’brks’ and ’triangle_ends’ are properly provided to do
so. See ?ColorBar for a full explanation.
col_inf, col_sup, colNA
Colour identifiers to colour the values in ’var’ that go beyond the extremes of
the colour bar and to colour NA values, respectively. ’colNA’ takes attr(cols,
’na_color’) if available by default, where cols is the parameter ’cols’ if provided
or the vector of colors returned by ’color_fun’. If not available, it takes ’pink’ by
default. ’col_inf’ and ’col_sup’ will take the value of ’colNA’ if not specified.
See ?ColorBar for a full explanation on ’col_inf’ and ’col_sup’.
color_fun, subsampleg, bar_extra_labels, draw_bar_ticks, draw_separators, triangle_ends_scale, bar_lab
Set of parameters to control the visual aspect of the drawn colour bar. See
?ColorBar for a full explanation.
filled.continents
Colour to fill in drawn projected continents. Takes the value gray(0.5) by default.
If set to FALSE, continents are not filled in.
coast_color Colour of the coast line of the drawn projected continents. Takes the value
gray(0.5) by default.
coast_width Line width of the coast line of the drawn projected continents. Takes the value
1 by default.
contours Array of same dimensions as ’var’ to be added to the plot and displayed with
contours. Parameter ’brks2’ is required to define the magnitude breaks for each
contour curve.
brks2 A numeric value or vector of magnitude breaks where to draw contour curves
for the array provided in ’contours’. If it is a number, it represents the number
of breaks (n) that defines (n - 1) intervals to classify ’contours’.
contour_lwd Line width of the contour curves provided via ’contours’ and ’brks2’. The de-
fault value is 0.5.
contour_color Line color of the contour curves provided via ’contours’ and ’brks2’.
contour_lty Line type of the contour curves. Takes 1 (solid) by default. See help on ’lty’ in
par() for other accepted values.
contour_label_draw
A logical value indicating whether to draw the contour labels (TRUE) or not
(FALSE) when ’contours’ is used. The default value is TRUE.
contour_label_scale
Scale factor for the superimposed labels when drawing contour levels. The de-
fault value is 0.6.
dots Array of same dimensions as ’var’ or with dimensions c(n, dim(var)), where n
is the number of dot/symbol layers to add to the plot. A value of TRUE at a grid
cell will draw a dot/symbol on the corresponding square of the plot. By default
all layers provided in ’dots’ are plotted with dots, but a symbol can be specified
for each of the layers via the parameter ’dot_symbol’.
dot_symbol Single character/number or vector of characters/numbers that correspond to each
of the symbol layers specified in parameter ’dots’. If a single value is specified,
it will be applied to all the layers in ’dots’. Takes 15 (centered square) by default.
See ’pch’ in par() for additional accepted options.
dot_size Scale factor for the dots/symbols to be plotted, specified in ’dots’. If a single
value is specified, it will be applied to all layers in ’dots’. Takes 1 by default.
intlat Interval between latitude lines (circles), in degrees. Defaults to 10.
arr_subsamp A number as subsampling factor to select a subset of arrows in ’varu’ and ’varv’
to be drawn. Only one out of arr_subsamp arrows will be drawn. The default
value is 1.
arr_scale A number as scale factor for drawn arrows from ’varu’ and ’varv’. The default
value is 1.
arr_ref_len A number of the length of the reference arrow to be drawn as legend at the bottom
of the figure (in same units as ’varu’ and ’varv’, only affects the legend for the
wind or variable in these arrays). The default value is 15.
arr_units Units of ’varu’ and ’varv’, to be drawn in the legend. Takes ’m/s’ by default.
arr_scale_shaft
A number for the scale of the shaft of the arrows (which also depend on the
number of figures and the arr_scale parameter). The default value is 1.
arr_scale_shaft_angle
A number for the scale of the angle of the shaft of the arrows (which also depend
on the number of figure and the arr_scale parameter). The default value is 1.
drawleg Whether to plot a color bar (legend, key) or not. Defaults to TRUE.
boxlim Limits of a box to be added to the plot, in degrees: c(x1, y1, x2, y2). A list with
multiple box specifications can also be provided.
boxcol Colour of the box lines. A vector with a colour for each of the boxes is also
accepted. Defaults to ’purple2’.
boxlwd Line width of the box lines. A vector with a line width for each of the boxes is
also accepted. Defaults to 5.
margin_scale Scale factor for the margins to be added to the plot, with the format c(y1, x1, y2,
x2). Defaults to rep(1, 4). If drawleg = TRUE, 1 unit is subtracted from
margin_scale[1].
title_scale Scale factor for the figure top title. Defaults to 1.
numbfig Number of figures in the layout the plot will be put into. A higher numbfig will
result in narrower margins and smaller labels, axis labels, ticks, thinner lines, ...
Defaults to 1.
fileout File where to save the plot. If not specified (default) a graphics device will pop
up. Extensions allowed: eps/ps, jpeg, png, pdf, bmp and tiff.
width File width, in the units specified in the parameter size_units (inches by default).
Takes 6 by default.
height File height, in the units specified in the parameter size_units (inches by default).
Takes 5 by default.
size_units Units of the size of the device (file or window) to plot in. Inches (’in’) by default.
See ?Devices and the creator function of the corresponding device.
res Resolution of the device (file or window) to plot in. See ?Devices and the creator
function of the corresponding device.
... Arguments to be passed to the method. Only accepts the following graphical
parameters:
adj ann ask bg bty cex.sub cin col.axis col.lab col.main col.sub cra crt csi cxy
err family fg font font.axis font.lab font.main font.sub lend lheight ljoin lmitre
mex mfcol mfrow mfg mkh omd omi page pch pin plt pty smo srt tcl usr xaxp
xaxs xaxt xlog xpd yaxp yaxs yaxt ylbias ylog
For more information about the parameters see ‘par‘.
Value
brks Breaks used for colouring the map (and legend if drawleg = TRUE).
cols Colours used for colouring the map (and legend if drawleg = TRUE). Always of
length length(brks) - 1.
col_inf Colour used to draw the lower triangle end in the colour bar (NULL if not drawn
at all).
col_sup Colour used to draw the upper triangle end in the colour bar (NULL if not drawn
at all).
Examples
data <- matrix(rnorm(100 * 50), 100, 50)
x <- seq(from = 0, to = 360, length.out = 100)
y <- seq(from = -90, to = 90, length.out = 50)
PlotStereoMap(data, x, y, latlims = c(60, 90), brks = 50,
toptitle = "This is the title")
PlotVsLTime Plot a score along the forecast time with its confidence interval
Description
Plot the correlation (Corr()), the root mean square error (RMS()) between the forecast values
and their observational counterpart, the slope of their trend (Trend()), the interquartile range,
maximum-minimum, standard deviation or median absolute deviation of the ensemble members
(Spread()), or the ratio between the ensemble spread and the RMSE of the ensemble mean (RatioSDRMS())
along the forecast time for all the input experiments on the same figure with their confidence inter-
vals.
Usage
PlotVsLTime(
var,
toptitle = "",
ytitle = "",
monini = 1,
freq = 12,
nticks = NULL,
limits = NULL,
listexp = c("exp1", "exp2", "exp3"),
listobs = c("obs1", "obs2", "obs3"),
biglab = FALSE,
hlines = NULL,
leg = TRUE,
siglev = FALSE,
sizetit = 1,
show_conf = TRUE,
fileout = NULL,
width = 8,
height = 5,
size_units = "in",
res = 100,
...
)
Arguments
var Matrix containing any Prediction Score with dimensions:
(nexp/nmod, 3/4, nltime)
or (nexp/nmod, nobs, 3/4, nltime).
toptitle Main title, optional.
ytitle Title of Y-axis, optional.
monini Starting month between 1 and 12. Default = 1.
freq 1 = yearly, 12 = monthly, 4 = seasonal, ... Default = 12.
nticks Number of ticks and labels on the x-axis, optional.
limits c(lower limit, upper limit): limits of the Y-axis, optional.
listexp List of experiment names, optional.
listobs List of observation names, optional.
biglab TRUE/FALSE for presentation/paper plot. Default = FALSE.
hlines c(a,b, ..) Add horizontal black lines at Y-positions a,b, ...
Default = NULL.
leg TRUE/FALSE if legend should be added or not to the plot. Default = TRUE.
siglev TRUE/FALSE if significance level should replace confidence interval.
Default = FALSE.
sizetit Multiplicative factor to change title size, optional.
show_conf TRUE/FALSE to show or hide the confidence intervals for input variables.
fileout Name of output file. Extensions allowed: eps/ps, jpeg, png, pdf, bmp and tiff.
The default value is NULL.
width File width, in the units specified in the parameter size_units (inches by default).
Takes 8 by default.
height File height, in the units specified in the parameter size_units (inches by default).
Takes 5 by default.
size_units Units of the size of the device (file or window) to plot in. Inches (’in’) by default.
See ?Devices and the creator function of the corresponding device.
res Resolution of the device (file or window) to plot in. See ?Devices and the creator
function of the corresponding device.
... Arguments to be passed to the method. Only accepts the following graphical
parameters:
adj ann ask bg bty cex.sub cin col.axis col.lab col.main col.sub cra crt csi cxy err
family fg fig font font.axis font.lab font.main font.sub lheight ljoin lmitre mar
mex mfcol mfrow mfg mkh oma omd omi page pch plt smo srt tck tcl usr xaxp
xaxs xaxt xlog xpd yaxp yaxs yaxt ylbias ylog
For more information about the parameters see ‘par‘.
Details
Examples of input:
Model and observed output from Load() then Clim() then Ano() then Smoothing():
(nmod, nmemb, nsdate, nltime) and (nobs, nmemb, nsdate, nltime)
then averaged over the members
Mean1Dim(var_exp/var_obs, posdim = 2):
(nmod, nsdate, nltime) and (nobs, nsdate, nltime)
then passed through
Corr(exp, obs, posloop = 1, poscor = 2) or
RMS(exp, obs, posloop = 1, posRMS = 2):
(nmod, nobs, 3, nltime)
would plot the correlations or RMS between each exp & each obs as a function of the forecast time.
Examples
# Load sample data as in Load() example:
example(Load)
clim <- Clim(sampleData$mod, sampleData$obs)
ano_exp <- Ano(sampleData$mod, clim$clim_exp)
ano_obs <- Ano(sampleData$obs, clim$clim_obs)
runmean_months <- 12
smooth_ano_exp <- Smoothing(data = ano_exp, runmeanlen = runmean_months)
smooth_ano_obs <- Smoothing(data = ano_obs, runmeanlen = runmean_months)
dim_to_mean <- 'member' # mean along members
required_complete_row <- 'ftime' # discard startdates for which there are NA leadtimes
leadtimes_per_startdate <- 60
corr <- Corr(MeanDims(smooth_ano_exp, dim_to_mean),
MeanDims(smooth_ano_obs, dim_to_mean),
comp_dim = required_complete_row,
limits = c(ceiling((runmean_months + 1) / 2),
leadtimes_per_startdate - floor(runmean_months / 2)))
# Combine corr results for plotting
corr_combine <- abind::abind(corr$conf.lower, corr$corr, corr$conf.upper, corr$p.val, along = 0)
corr_combine <- Reorder(corr_combine, c(2, 3, 1, 4))
PlotVsLTime(corr_combine, toptitle = "correlations", ytitle = "correlation",
monini = 11, limits = c(-1, 2), listexp = c('CMIP5 IC3'),
listobs = c('ERSST'), biglab = FALSE, hlines = c(-1, 0, 1))
ProbBins Compute probabilistic information of a forecast relative to a threshold
or a quantile
Description
Compute probabilistic bins of a set of forecast years (’fcyr’) relative to the forecast climatology
over the whole period of anomalies, optionally excluding the selected forecast years (’fcyr’) or the
forecast year for which the probabilistic bins are being computed (see ’compPeriod’).
Usage
ProbBins(
data,
thr,
fcyr = "all",
time_dim = "sdate",
memb_dim = "member",
quantile = TRUE,
compPeriod = "Full period",
ncores = NULL
)
Arguments
data A numeric array of anomalies with the dimensions ’time_dim’ and ’memb_dim’
at least. It can be generated by Ano().
thr A numeric vector used as the quantiles (if ’quantile’ is TRUE) or thresholds (if
’quantile’ is FALSE) to bin the anomalies. If ’quantile’ is TRUE, the values must
be within [0, 1].
fcyr A numeric vector of the indices of the forecast years (i.e., time_dim) to compute
the probabilistic bins for, or ’all’ to compute the bins for all the years. E.g.,
c(1:5), c(1, 4), 4, or ’all’. The default value is ’all’.
time_dim A character string indicating the dimension along which to compute the proba-
bilistic bins. The default value is ’sdate’.
memb_dim A character string indicating the name of the member dimension or the dimen-
sion to be merged with ’time_dim’ for probabilistic calculation. The default
value is ’member’.
quantile A logical value indicating if the thresholds (’thr’) are quantiles (TRUE) or the
absolute thresholds of the bins (FALSE). The default value is TRUE.
compPeriod A character string referring to three computation options:
"Full period": The probabilities are computed based on ’data’;
"Without fcyr": The probabilities are computed based on ’data’ with all ’fcyr’
removed;
"Cross-validation": The probabilities are computed based on leave-one-out cross-
validation.
The default value is "Full period".
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numeric array of probabilistic information with dimensions:
c(bin = length of ’thr’ + 1, time_dim = length of ’fcyr’, memb_dim, the rest of dimensions of ’data’)
The values along the ’bin’ dimension take values 0 or 1 depending on which of the ’thr’ + 1 cate-
gories the forecast or observation at the corresponding grid point, time step, member and start date
belongs to.
Examples
clim <- Clim(sampleMap$mod, sampleMap$obs)
ano_exp <- Ano(sampleMap$mod, clim$clim_exp)
PB <- ProbBins(ano_exp, fcyr = 3, thr = c(1/3, 2/3), quantile = TRUE)
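As a minimal sketch (not a shipped example) of the other options described above, the same anomalies can be binned with absolute thresholds and with the cross-validation option; the threshold values -0.5 and 0.5 are arbitrary:
# Absolute thresholds instead of quantiles, computed in cross-validation mode
PB_abs <- ProbBins(ano_exp, thr = c(-0.5, 0.5), quantile = FALSE,
compPeriod = "Cross-validation")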
ProjectField Project anomalies onto modes of variability
Description
Project anomalies onto modes of variability to get the temporal evolution of the EOF mode selected.
It returns principal components (PCs) by area-weighted projection onto EOF pattern (from EOF())
or REOF pattern (from REOF() or EuroAtlanticTC()). The calculation removes NA and returns
NA if the whole spatial pattern is NA.
Usage
ProjectField(
ano,
eof,
time_dim = "sdate",
space_dim = c("lat", "lon"),
mode = NULL,
ncores = NULL
)
Arguments
ano A numerical array of anomalies with named dimensions. The dimensions must
have at least ’time_dim’ and ’space_dim’. It can be generated by Ano().
eof A list that contains at least ’EOFs’ or ’REOFs’ and ’wght’, which are both ar-
rays. ’EOFs’ or ’REOFs’ must have dimensions ’mode’ and ’space_dim’ at
least. ’wght’ has dimensions space_dim. It can be generated by EOF() or
REOF().
time_dim A character string indicating the name of the time dimension of ’ano’. The
default value is ’sdate’.
space_dim A vector of two character strings. The first is the dimension name of latitude of
’ano’ and the second is the dimension name of longitude of ’ano’. The default
value is c(’lat’, ’lon’).
mode An integer of the variability mode number in the EOF to be projected onto. The
default value is NULL, which means all the modes of ’eof’ are calculated.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numerical array of the principal components in the verification format. The dimensions are the
same as ’ano’ except ’space_dim’.
See Also
EOF, NAO, PlotBoxWhisker
Examples
ano <- Ano_CrossValid(sampleData$mod, sampleData$obs)
eof_exp <- EOF(ano$exp, sampleData$lat, sampleData$lon)
eof_obs <- EOF(ano$obs, sampleData$lat, sampleData$lon)
mode1_exp <- ProjectField(ano$exp, eof_exp, mode = 1)
mode1_obs <- ProjectField(ano$obs, eof_obs, mode = 1)
## Not run:
# Plot the forecast and the observation of the first mode for the last year
# of forecast
sdate_dim_length <- dim(mode1_obs)['sdate']
plot(mode1_obs[sdate_dim_length, 1, 1, ], type = "l", ylim = c(-1, 1),
lwd = 2)
for (i in 1:dim(mode1_exp)['member']) {
par(new = TRUE)
plot(mode1_exp[sdate_dim_length, 1, i, ], type = "l", col = rainbow(10)[i],
ylim = c(-15000, 15000))
}
## End(Not run)
RandomWalkTest Random Walk test for skill differences
Description
Forecast comparison of the skill obtained with 2 forecasts (with respect to a common observational
reference) based on Random Walks (DelSole and Tippett, 2016).
Usage
RandomWalkTest(
skill_A,
skill_B,
time_dim = "sdate",
test.type = "two.sided.approx",
alpha = 0.05,
pval = TRUE,
sign = FALSE,
ncores = NULL
)
Arguments
skill_A A numerical array of the time series of the scores obtained with the forecaster
A.
skill_B A numerical array of the time series of the scores obtained with the forecaster
B. The dimensions should be identical to those of parameter ’skill_A’.
time_dim A character string indicating the name of the dimension along which the tests
are computed. The default value is ’sdate’.
test.type A character string indicating the type of significance test. It can be "two.sided.approx"
(to assess whether forecaster A and forecaster B are significantly different in
terms of skill with a two-sided test using the approximation of DelSole and Tip-
pett, 2016), "two.sided" (to assess whether forecaster A and forecaster B are
significantly different in terms of skill with an exact two-sided test), "greater"
(to assess whether forecaster A shows significantly better skill than forecaster B
with a one-sided test for negatively oriented scores), or "less" (to assess whether
forecaster A shows significantly better skill than forecaster B with a one-sided
test for positively oriented scores). The default value is "two.sided.approx".
alpha A numeric of the significance level to be used in the statistical significance test
(output "sign"). The default value is 0.05.
pval A logical value indicating whether to return the p-value of the significance test.
The default value is TRUE.
sign A logical value indicating whether to return the statistical significance of the test
based on ’alpha’. The default value is FALSE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Details
Null and alternative hypothesis for "two-sided" test (regardless of the orientation of the scores):
H0: forecaster A and forecaster B are not different in terms of skill
H1: forecaster A and forecaster B are different in terms of skill
Null and alternative hypothesis for one-sided "greater" (for negatively oriented scores, i.e., the lower
the better) and "less" (for positively oriented scores, i.e., the higher the better) tests:
H0: forecaster A is not better than forecaster B
H1: forecaster A is better than forecaster B
Examples of negatively oriented scores are the RPS, RMSE and the Error, while the ROC score is a
positively oriented score.
DelSole and Tippett (2016) approximation for the two-sided test at the 95% level: the result is
significant if the difference between the number of times that forecaster A has been better than
forecaster B and the number of times that forecaster B has been better than forecaster A is above
2*sqrt(N) or below -2*sqrt(N).
Value
A list with:
$score A numerical array with the same dimensions as the input arrays except ’time_dim’.
The number of times that forecaster A has been better than forecaster B minus
the number of times that forecaster B has been better than forecaster A (for skill
negatively oriented, i.e., the lower the better). If $score is positive, forecaster A
has been better more times than forecaster B. If $score is negative, forecaster B
has been better more times than forecaster A.
$sign A logical array of the statistical significance with the same dimensions as the
input arrays except "time_dim". Returned only if "sign" is TRUE.
$p.val A numeric array of the p-values with the same dimensions as the input arrays
except "time_dim". Returned only if "pval" is TRUE.
References
DelSole and Tippett (2016): https://doi.org/10.1175/MWR-D-15-0218.1
Examples
fcst_A <- array(data = 11:50, dim = c(sdate = 10, lat = 2, lon = 2))
fcst_B <- array(data = 21:60, dim = c(sdate = 10, lat = 2, lon = 2))
reference <- array(data = 1:40, dim = c(sdate = 10, lat = 2, lon = 2))
scores_A <- abs(fcst_A - reference)
scores_B <- abs(fcst_B - reference)
res1 <- RandomWalkTest(skill_A = scores_A, skill_B = scores_B, pval = FALSE, sign = TRUE)
res2 <- RandomWalkTest(skill_A = scores_A, skill_B = scores_B, test.type = 'greater')
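The following minimal sketch (not the package's internal code) recomputes the random-walk statistic by hand from the same scores, to illustrate the "two.sided.approx" rule stated in Details: count the cases where A beats B minus the cases where B beats A along 'sdate', then compare with +/- 2*sqrt(N):
# Manual illustration of the DelSole and Tippett (2016) approximation
walk <- apply(scores_B - scores_A, c(2, 3), function(d) sum(sign(d)))
signif_approx <- abs(walk) > 2 * sqrt(dim(scores_A)['sdate'])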
RatioPredictableComponents
Calculate ratio of predictable components (RPC)
Description
This function computes the ratio of predictable components (RPC; Eade et al., 2014).
Usage
RatioPredictableComponents(
exp,
obs,
time_dim = "year",
member_dim = "member",
na.rm = FALSE,
ncores = NULL
)
Arguments
exp A numerical array with, at least, ’time_dim’ and ’member_dim’ dimensions.
obs A numerical array with the same dimensions as ’exp’ except the ’member_dim’
dimension.
time_dim A character string indicating the name of the time dimension. The default value
is ’year’.
member_dim A character string indicating the name of the member dimension. The default
value is ’member’.
na.rm A logical value indicating whether to remove NA values during the computation.
The default value is FALSE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
An array of the ratio of the predictable components. It has the same dimensions as ’exp’ except the
’time_dim’ and ’member_dim’ dimensions.
Examples
exp <- array(data = runif(600), dim = c(year = 15, member = 10, lat = 2, lon = 2))
obs <- array(data = runif(60), dim = c(year = 15, lat = 2, lon = 2))
RatioPredictableComponents(exp, obs)
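As a hedged sketch, assuming the usual Eade et al. (2014) definition (the correlation between the ensemble
mean and the observations divided by the square root of the ratio of ensemble-mean variance to total ensemble
variance), the RPC for a single grid point could be computed by hand as follows; variable names are illustrative:
set.seed(1)
fcst <- matrix(rnorm(15 * 10), nrow = 15, ncol = 10)  # year x member
obs_ts <- rnorm(15)
ens_mean <- rowMeans(fcst)
rpc <- cor(obs_ts, ens_mean) / sqrt(var(ens_mean) / var(as.vector(fcst)))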
RatioRMS Compute the ratio between the RMSE of two experiments
Description
Calculate the ratio of the RMSE for two forecasts with the same observation, that is, RMSE(ens,
obs) / RMSE(ens.ref, obs). The p-value is provided by a two-sided Fisher test.
Usage
RatioRMS(exp1, exp2, obs, time_dim = "sdate", pval = TRUE, ncores = NULL)
Arguments
exp1 A numeric array with named dimensions of the first experimental data. It must
have at least ’time_dim’ and have the same dimensions as ’exp2’ and ’obs’.
exp2 A numeric array with named dimensions of the second experimental data. It
must have at least ’time_dim’ and have the same dimensions as ’exp1’ and ’obs’.
obs A numeric array with named dimensions of the observational data. It must have
at least ’time_dim’ and have the same dimensions as ’exp1’ and ’exp2’.
time_dim A character string of the dimension name along which RMS is computed. The
default value is ’sdate’.
pval A logical value indicating whether to compute the p-value of Ho: RMSE1/RMSE2
= 1 or not. The default value is TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing the numeric arrays with dimensions identical with ’exp1’, ’exp2’, and ’obs’, except
’time_dim’:
$ratiorms The ratio between the RMSE (i.e., RMSE1/RMSE2).
$p.val The p-value of the two-sided Fisher test with Ho: RMSE1/RMSE2 = 1. Only
exists if ’pval’ is TRUE.
Examples
# Compute DJF seasonal means and anomalies.
initial_month <- 11
mean_start_month <- 12
mean_stop_month <- 2
sampleData$mod <- Season(sampleData$mod, monini = initial_month,
moninf = mean_start_month, monsup = mean_stop_month)
sampleData$obs <- Season(sampleData$obs, monini = initial_month,
moninf = mean_start_month, monsup = mean_stop_month)
clim <- Clim(sampleData$mod, sampleData$obs)
ano_exp <- Ano(sampleData$mod, clim$clim_exp)
ano_obs <- Ano(sampleData$obs, clim$clim_obs)
# Generate two experiments with 2 and 1 members from the only experiment
# available in the sample data. Take only data values for a single forecast
# time step.
ano_exp_1 <- ClimProjDiags::Subset(ano_exp, 'member', c(1, 2))
ano_exp_2 <- ClimProjDiags::Subset(ano_exp, 'member', c(3))
ano_exp_1 <- ClimProjDiags::Subset(ano_exp_1, c('dataset', 'ftime'), list(1, 1), drop = 'selected')
ano_exp_2 <- ClimProjDiags::Subset(ano_exp_2, c('dataset', 'ftime'), list(1, 1), drop = 'selected')
ano_obs <- ClimProjDiags::Subset(ano_obs, c('dataset', 'ftime'), list(1, 1), drop = 'selected')
# Compute ensemble mean and provide as inputs to RatioRMS.
rrms <- RatioRMS(MeanDims(ano_exp_1, 'member'),
MeanDims(ano_exp_2, 'member'),
MeanDims(ano_obs, 'member'))
# Plot the RatioRMS for the first forecast time step.
PlotEquiMap(rrms$ratiorms, sampleData$lon, sampleData$lat,
toptitle = 'Ratio RMSE')
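For a single series, the quantity returned in $ratiorms is simply the ratio of the two RMSEs; a minimal
hand-computed sketch (illustrative variable names, no significance test):
set.seed(1)
obs_ts  <- rnorm(20)
exp1_ts <- obs_ts + rnorm(20, sd = 0.5)
exp2_ts <- obs_ts + rnorm(20, sd = 0.8)
sqrt(mean((exp1_ts - obs_ts)^2)) / sqrt(mean((exp2_ts - obs_ts)^2))  # RMSE1 / RMSE2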
RatioSDRMS Compute the ratio between the ensemble spread and RMSE
Description
Compute the ratio between the standard deviation of the members around the ensemble mean in
experimental data and the RMSE between the ensemble mean of experimental and observational
data. The p-value is provided by a one-sided Fisher test.
Usage
RatioSDRMS(
exp,
obs,
dat_dim = "dataset",
memb_dim = "member",
time_dim = "sdate",
pval = TRUE,
ncores = NULL
)
Arguments
exp A named numeric array of experimental data with at least two dimensions ’memb_dim’
and ’time_dim’.
obs A named numeric array of observational data with at least two dimensions ’memb_dim’
and ’time_dim’. It should have the same dimensions as parameter ’exp’ except
along ’dat_dim’ and ’memb_dim’.
dat_dim A character string indicating the name of dataset (nobs/nexp) dimension. If there
is no dataset dimension, set as NULL. The default value is ’dataset’.
memb_dim A character string indicating the name of the member dimension. It must be one
dimension in ’exp’ and ’obs’. The default value is ’member’.
time_dim A character string indicating the name of dimension along which the ratio is
computed. The default value is ’sdate’.
pval A logical value indicating whether to compute the p-value of the test Ho:
SD/RMSE = 1 or not. The default value is TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list of two arrays with dimensions c(nexp, nobs, the rest of the dimensions of ’exp’ and ’obs’ except
memb_dim and time_dim), where nexp is the length of dat_dim of ’exp’ and nobs is the length of
dat_dim of ’obs’.
$ratio The ratio of the ensemble spread and RMSE.
$p_val The p-value of the one-sided Fisher test with Ho: SD/RMSE = 1. Only present
if pval = TRUE.
Examples
# Load sample data as in Load() example:
example(Load)
rsdrms <- RatioSDRMS(sampleData$mod, sampleData$obs)
# Reorder the data in order to plot it with PlotVsLTime
rsdrms_plot <- array(dim = c(dim(rsdrms$ratio)[1:2], 4, dim(rsdrms$ratio)[3]))
rsdrms_plot[, , 2, ] <- rsdrms$ratio
rsdrms_plot[, , 4, ] <- rsdrms$p.val
## Not run:
PlotVsLTime(rsdrms_plot, toptitle = "Ratio ensemble spread / RMSE", ytitle = "",
monini = 11, limits = c(-1, 1.3), listexp = c('CMIP5 IC3'),
listobs = c('ERSST'), biglab = FALSE, siglev = TRUE)
## End(Not run)
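A rough single-point sketch of the ratio described above (ensemble spread over the RMSE of the ensemble
mean); this only approximates the internal computation and the variable names are illustrative:
set.seed(1)
fcst   <- matrix(rnorm(20 * 8), nrow = 20, ncol = 8)  # sdate x member
obs_ts <- rnorm(20)
spread <- sqrt(mean(apply(fcst, 1, var)))             # spread of members around the ensemble mean
rmse   <- sqrt(mean((rowMeans(fcst) - obs_ts)^2))     # RMSE of the ensemble mean
spread / rmse                                         # comparable to $ratio for this single series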
Regression Compute the regression of an array on another along one dimension.
Description
Compute the regression of the array ’datay’ on the array ’datax’ along the ’reg_dim’ dimension by
least square fitting (default) or self-defined model. The function provides the slope of the regres-
sion, the intercept, and the associated p-value and confidence interval. The filtered datay from the
regression onto datax is also provided.
The p-value relies on the F distribution, and the confidence interval relies on the student-T distribu-
tion.
Usage
Regression(
datay,
datax,
reg_dim = "sdate",
formula = y ~ x,
pval = TRUE,
conf = TRUE,
conf.lev = 0.95,
na.action = na.omit,
ncores = NULL
)
Arguments
datay A numeric array as predictand, including the dimension along which the regres-
sion is computed.
datax A numeric array as predictor. The dimensions should be identical to those of
parameter ’datay’.
reg_dim A character string indicating the dimension along which to compute the regres-
sion. The default value is ’sdate’.
formula An object of class "formula" (see the function lm in the stats package).
pval A logical value indicating whether to retrieve the p-value or not. The default
value is TRUE.
conf A logical value indicating whether to retrieve the confidence intervals or not.
The default value is TRUE.
conf.lev A numeric indicating the confidence level for the regression computation. The
default value is 0.95.
na.action A function or an integer. A function (e.g., na.omit, na.exclude, na.fail, na.pass)
indicates what should happen when the data contain NAs. An integer indicates
the maximum number of NA positions (a position counts as long as either datay or
datax is NA there) allowed for computing the regression. The default value is na.omit.
ncores An integer indicating the number of cores to use for parallel computation. De-
fault value is NULL.
Value
A list containing:
$regression A numeric array with same dimensions as parameter ’datay’ and ’datax’ except
the ’reg_dim’ dimension, which is replaced by a ’stats’ dimension containing
the regression coefficients from the lowest order (i.e., intercept) to the highest
degree. The length of the ’stats’ dimension should be polydeg + 1.
$conf.lower A numeric array with the same dimensions as parameters ’datay’ and ’datax’ except
the ’reg_dim’ dimension, which is replaced by a ’stats’ dimension containing
the lower limit of the conf.lev confidence interval for all the regression co-
efficients with the same order as $regression. The length of the ’stats’ dimension
should be polydeg + 1. Only present if conf = TRUE.
$conf.upper A numeric array with the same dimensions as parameters ’datay’ and ’datax’ except
the ’reg_dim’ dimension, which is replaced by a ’stats’ dimension containing
the upper limit of the conf.lev confidence interval for all the regression co-
efficients with the same order as $regression. The length of the ’stats’ dimension
should be polydeg + 1. Only present if conf = TRUE.
$p.val A numeric array with the same dimensions as parameters ’datay’ and ’datax’ except
the ’reg_dim’ dimension. The array contains the p-value.
$filtered A numeric array with the same dimensions as parameters ’datay’ and ’datax’: the
filtered datay from the regression onto datax along the ’reg_dim’ dimension.
Examples
# Load sample data as in Load() example:
example(Load)
datay <- sampleData$mod[, 1, , ]
names(dim(datay)) <- c('sdate', 'ftime')
datax <- sampleData$obs[, 1, , ]
names(dim(datax)) <- c('sdate', 'ftime')
res1 <- Regression(datay, datax, formula = y~poly(x, 2, raw = TRUE))
res2 <- Regression(datay, datax, conf.lev = 0.9)
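For a single pair of series, the coefficients stored along the ’stats’ dimension correspond, from the intercept
upwards, to those of a plain lm() fit; a quick hand check with illustrative data:
set.seed(1)
x <- rnorm(40)
y <- 1 + 2 * x + rnorm(40, sd = 0.3)
coef(lm(y ~ x))   # c(intercept, slope): lowest to highest order, as in $regression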
REOF Area-weighted empirical orthogonal function analysis with varimax
rotation using SVD
Description
Perform an area-weighted EOF analysis with varimax rotation using singular value decomposition
(SVD) based on a covariance matrix, or on a correlation matrix if parameter ’corr’ is set to TRUE. The
internal s2dv function .EOF() is used for the computation.
Usage
REOF(
ano,
lat,
lon,
ntrunc = 15,
time_dim = "sdate",
space_dim = c("lat", "lon"),
corr = FALSE,
ncores = NULL
)
Arguments
ano A numerical array of anomalies with named dimensions to calculate REOF. The
dimensions must have at least ’time_dim’ and ’space_dim’.
lat A vector of the latitudes of ’ano’.
lon A vector of the longitudes of ’ano’.
ntrunc A positive integer of the number of eofs to be kept for varimax rotation. This
function uses this value as ’neof’ too, which is the number of eofs to return by
.EOF(). The default value is 15. If time length or the product of latitude length
and longitude length is less than ’ntrunc’, ’ntrunc’ is equal to the minimum of
the three values.
time_dim A character string indicating the name of the time dimension of ’ano’. The
default value is ’sdate’.
space_dim A vector of two character strings. The first is the dimension name of latitude of
’ano’ and the second is the dimension name of longitude of ’ano’. The default
value is c(’lat’, ’lon’).
corr A logical value indicating whether to base on a correlation (TRUE) or on a
covariance matrix (FALSE). The default value is FALSE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing:
REOFs An array of REOF patterns normalized to 1 (unitless) with dimensions (number
of modes, the rest of the dimensions of ’ano’ except ’time_dim’). Multiplying
’REOFs’ by ’RPCs’ gives the original reconstructed field.
RPCs An array of principal components with the units of the original field to the power
of 2, with dimensions (time_dim, number of modes, the rest of the dimensions
of ’ano’ except ’space_dim’).
var An array of the percentage (%) of variance explained by each mode. The dimensions are (num-
ber of modes, the rest of the dimensions except ’time_dim’ and ’space_dim’).
wght An array of the area weighting with dimensions ’space_dim’. It is calculated by
the square root of cosine of ’lat’ and used to compute the fraction of variance
explained by each REOF.
See Also
EOF
Examples
# This example computes the REOFs along forecast horizons and plots the one
# that explains the greatest amount of variability. The example data has low
# resolution so the result may not be explanatory, but it displays how to
# use this function.
ano <- Ano_CrossValid(sampleData$mod, sampleData$obs)
ano <- MeanDims(ano$exp, c('dataset', 'member'))
res <- REOF(ano, lat = sampleData$lat, lon = sampleData$lon, ntrunc = 5)
## Not run:
PlotEquiMap(res$REOFs[1, , , 1], sampleData$lat, sampleData$lon)
## End(Not run)
Reorder Reorder the dimension of an array
Description
Reorder the dimensions of a multi-dimensional array. The order can be provided either as indices
or as dimension names. If the order is given as dimension names, the function looks for names(dim(x));
if these do not exist, it checks whether the attribute "dimensions" exists, which is the attribute found in the
objects generated by Load().
Usage
Reorder(data, order)
Arguments
data An array of which the dimensions to be reordered.
order A vector of indices or character strings indicating the new order of the dimen-
sions.
Value
An array which has the same values as parameter ’data’ but with different dimension order.
Examples
dat1 <- array(c(1:30), dim = c(dat = 1, sdate = 3, ftime = 2, lon = 5))
print(dim(Reorder(dat1, c(2, 1, 4, 3))))
print(dim(Reorder(dat1, c('sdate', 'dat', 'lon', 'ftime'))))
dat2 <- array(c(1:10), dim = c(2, 1, 5))
print(dim(Reorder(dat2, c(2, 1, 3))))
attr(dat2, 'dimensions') <- c('sdate', 'time', 'region')
dat2_reorder <- Reorder(dat2, c('time', 'sdate', 'region'))
# A character array
dat3 <- array(paste0('a', 1:24), dim = c(b = 2, c = 3, d = 4))
dat3_reorder <- Reorder(dat3, c('d', 'c', 'b'))
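For purely positional permutations, the result has the same shape as base R’s aperm(); Reorder() additionally
accepts dimension names and falls back to the ’dimensions’ attribute of Load() outputs. A quick comparison:
dat <- array(1:24, dim = c(dat = 1, sdate = 3, ftime = 2, lon = 4))
a <- Reorder(dat, c(2, 1, 4, 3))
b <- aperm(dat, c(2, 1, 4, 3))
all(dim(a) == dim(b))  # TRUE: both are 3 x 1 x 4 x 2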
ResidualCorr Compute the residual correlation and its significance
Description
The residual correlation assesses whether a forecast captures any of the observed variability that
is not already captured by a reference forecast (Smith et al., 2019; https://doi.org/10.1038/s41612-
019-0071-y.). The procedure is as follows: the residuals of the forecasts and observations are
computed by linearly regressing out the reference forecast’s ensemble mean from the forecasts’
ensemble mean and observations, respectively. Then, the residual correlation is computed as the
correlation between both residuals. Positive values of the residual correlation indicate that the
forecast captures more observed variability than the reference forecast, while negative values mean
that the reference forecast captures more. The significance of the residual correlation is computed
with a two-sided t-test (Wilks, 2011; https://doi.org/10.1016/B978-0-12-385022-5.00008-7) using
an effective degrees of freedom to account for the time series’ autocorrelation (von Storch and
Zwiers, 1999; https://doi.org/10.1017/CBO9780511612336).
Usage
ResidualCorr(
exp,
obs,
ref,
N.eff = NA,
time_dim = "sdate",
memb_dim = NULL,
method = "pearson",
alpha = NULL,
handle.na = "return.na",
ncores = NULL
)
Arguments
exp A named numerical array of the forecast with at least time dimension.
obs A named numerical array of the observations with at least time dimension. The
dimensions must be the same as "exp" except ’memb_dim’.
ref A named numerical array of the reference forecast data with at least time dimen-
sion. The dimensions must be the same as "exp" except ’memb_dim’.
N.eff Effective sample size to be used in the statistical significance test. It can be NA
(in which case it is computed with s2dv:::.Eno), a numeric (which is used for all
cases), or an array with the same dimensions as "obs" except "time_dim" (for a
particular N.eff to be used for each case). The default value is NA.
time_dim A character string indicating the name of the time dimension. The default value
is ’sdate’.
memb_dim A character string indicating the name of the member dimension to compute
the ensemble mean of the forecast and reference forecast. If it is NULL, the
ensemble mean should be provided directly to the function. The default value is
NULL.
method A character string indicating the correlation coefficient to be computed ("pear-
son", "kendall", or "spearman"). The default value is "pearson".
alpha A numeric of the significance level to be used in the statistical significance test.
If it is a numeric, "sign" will be returned. If NULL, the p-value will be returned
instead. The default value is NULL.
handle.na A character string indicating how to handle missing values. If "return.na", NAs
will be returned for the cases that contain at least one NA in "exp", "ref", or
"obs". If "only.complete.triplets", only the time steps with no missing values in
all "exp", "ref", and "obs" will be used. If "na.fail", an error will arise if any of
"exp", "ref", or "obs" contains any NA. The default value is "return.na".
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list with:
$res.corr A numerical array of the residual correlation with the same dimensions as the
input arrays except "time_dim" (and "memb_dim" if provided).
$sign A logical array indicating whether the residual correlation is statistically signifi-
cant or not with the same dimensions as the input arrays except "time_dim" (and
"memb_dim" if provided). Returned only if "alpha" is a numeric.
$p.val A numeric array of the p-values with the same dimensions as the input arrays
except "time_dim" (and "memb_dim" if provided). Returned only if "alpha" is
NULL.
Examples
exp <- array(rnorm(1000), dim = c(lat = 3, lon = 2, member = 10, sdate = 50))
obs <- array(rnorm(1000), dim = c(lat = 3, lon = 2, sdate = 50))
ref <- array(rnorm(1000), dim = c(lat = 3, lon = 2, member = 5, sdate = 50))
res <- ResidualCorr(exp = exp, obs = obs, ref = ref, memb_dim = 'member')
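A hand-rolled sketch of the procedure described above for a single grid point (illustrative names; the function
itself also handles the ensemble means, the effective sample size and the significance test): regress the reference
ensemble mean out of both the forecast ensemble mean and the observations, then correlate the residuals.
set.seed(1)
ref_mean <- rnorm(50)
exp_mean <- 0.6 * ref_mean + rnorm(50, sd = 0.5)
obs_ts   <- 0.5 * ref_mean + rnorm(50, sd = 0.5)
res_exp <- residuals(lm(exp_mean ~ ref_mean))
res_obs <- residuals(lm(obs_ts ~ ref_mean))
cor(res_exp, res_obs)   # comparable to $res.corr for this single series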
RMS Compute root mean square error
Description
Compute the root mean square error for an array of forecasts and an array of observations. The
RMSEs are computed along time_dim, the dimension which corresponds to the startdate dimen-
sion. If comp_dim is given, the RMSEs are computed only if obs along the comp_dim dimension
are complete between limits[1] and limits[2], i.e. there are no NAs between limits[1] and limits[2].
This option can be activated if the user wishes to account only for the forecasts for which the corre-
sponding observations are available at all leadtimes.
The confidence interval is computed using the chi-squared distribution.
Usage
RMS(
exp,
obs,
time_dim = "sdate",
dat_dim = "dataset",
comp_dim = NULL,
limits = NULL,
conf = TRUE,
conf.lev = 0.95,
ncores = NULL
)
Arguments
exp A named numeric array of experimental data, with at least two dimensions
’time_dim’ and ’dat_dim’. It can also be a vector with the same length as ’obs’,
then the vector will automatically be ’time_dim’ and ’dat_dim’ will be 1.
obs A named numeric array of observational data, same dimensions as parameter
’exp’ except along dat_dim. It can also be a vector with the same length as
’exp’, then the vector will automatically be ’time_dim’ and ’dat_dim’ will be 1.
time_dim A character string indicating the name of the dimension along which the RMSE
is computed. The default value is ’sdate’.
dat_dim A character string indicating the name of the dataset (nobs/nexp) dimension. The
default value is ’dataset’.
comp_dim A character string indicating the name of dimension along which obs is taken
into account only if it is complete. The default value is NULL.
limits A vector of two integers indicating the range along comp_dim to be completed.
The default value is c(1, length(comp_dim dimension)).
conf A logical value indicating whether to retrieve the confidence intervals or not.
The default value is TRUE.
conf.lev A numeric indicating the confidence level for the confidence interval computation. The
default value is 0.95.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing the numeric arrays with dimension:
c(nexp, nobs, all other dimensions of exp except time_dim).
nexp is the number of experiments (i.e., dat_dim in exp), and nobs is the number of observations (i.e.,
dat_dim in obs).
$rms The root mean square error.
$conf.lower The lower confidence interval. Only present if conf = TRUE.
$conf.upper The upper confidence interval. Only present if conf = TRUE.
Examples
# Load sample data as in Load() example:
set.seed(1)
exp1 <- array(rnorm(120), dim = c(dataset = 3, sdate = 5, ftime = 2, lon = 1, lat = 4))
set.seed(2)
obs1 <- array(rnorm(80), dim = c(dataset = 2, sdate = 5, ftime = 2, lon = 1, lat = 4))
set.seed(2)
na <- floor(runif(10, min = 1, max = 80))
obs1[na] <- NA
res <- RMS(exp1, obs1, comp_dim = 'ftime')
# Renew example when Ano and Smoothing are ready
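For a single series, the quantity returned in $rms is the plain root mean square difference along ’time_dim’;
a minimal hand check with illustrative data:
set.seed(3)
exp_ts <- rnorm(10)
obs_ts <- rnorm(10)
sqrt(mean((exp_ts - obs_ts)^2))   # comparable to $rms for this single series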
RMSSS Compute root mean square error skill score
Description
Compute the root mean square error skill score (RMSSS) between an array of forecast ’exp’ and an
array of observation ’obs’. The two arrays should have the same dimensions except along dat_dim,
where the length can be different, with the number of experiments/models (nexp) and the number
of observational datasets (nobs).
RMSSS computes the root mean square error skill score of each jexp in 1:nexp against each job in
1:nobs which gives nexp * nobs RMSSS for each grid point of the array.
The RMSSS are computed along the time_dim dimension which should correspond to the start date
dimension.
The p-value and significance test are optionally provided by an one-sided Fisher test or Random
Walk test.
Usage
RMSSS(
exp,
obs,
ref = NULL,
time_dim = "sdate",
dat_dim = "dataset",
memb_dim = NULL,
pval = TRUE,
sign = FALSE,
alpha = 0.05,
sig_method = "one-sided Fisher",
ncores = NULL
)
Arguments
exp A named numeric array of experimental data which contains at least two dimen-
sions for dat_dim and time_dim. It can also be a vector with the same length as
’obs’, then the vector will automatically be ’time_dim’ and ’dat_dim’ will be 1.
obs A named numeric array of observational data which contains at least two di-
mensions for dat_dim and time_dim. The dimensions should be the same as
parameter ’exp’ except the length of the ’dat_dim’ dimension. The order of dimen-
sion can be different. It can also be a vector with the same length as ’exp’, then
the vector will automatically be ’time_dim’ and ’dat_dim’ will be 1.
ref A named numerical array of the reference forecast data with at least time di-
mension, or 0 (typical climatological forecast) or 1 (normalized climatological
forecast). If it is an array, the dimensions must be the same as ’exp’ except
’memb_dim’ and ’dat_dim’. If there is only one reference dataset, it should not
have dataset dimension. If there is corresponding reference for each experiment,
the dataset dimension must have the same length as in ’exp’. If ’ref’ is NULL,
the typical climatological forecast is used as the reference forecast (equivalent to 0).
The default value is NULL.
time_dim A character string indicating the name of dimension along which the RMSSS
are computed. The default value is ’sdate’.
dat_dim A character string indicating the name of dataset (nobs/nexp) dimension. The
default value is ’dataset’.
memb_dim A character string indicating the name of the member dimension to compute the
ensemble mean; it should be set to NULL if the parameter ’exp’ and ’ref’ are
already the ensemble mean. The default value is NULL.
pval A logical value indicating whether to compute or not the p-value of the test Ho:
RMSSS = 0. The default value is TRUE.
sign A logical value indicating whether to compute or not the statistical significance
of the test Ho: RMSSS = 0. The default value is FALSE.
alpha A numeric of the significance level to be used in the statistical significance test.
The default value is 0.05.
sig_method A character string indicating the significance method. The options are "one-
sided Fisher" (default) and "Random Walk".
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing the numeric arrays with dimension:
c(nexp, nobs, all other dimensions of exp except time_dim).
nexp is the number of experiments (i.e., dat_dim in exp), and nobs is the number of observations (i.e.,
dat_dim in obs). If dat_dim is NULL, nexp and nobs are omitted.
$rmsss A numerical array of the root mean square error skill score.
$p.val A numerical array of the p-value with the same dimensions as $rmsss. Only
present if pval = TRUE.
$sign A logical array of the statistical significance of the RMSSS with the same di-
mensions as $rmsss. Only present if sign = TRUE.
Examples
set.seed(1)
exp <- array(rnorm(30), dim = c(dataset = 2, time = 3, memb = 5))
set.seed(2)
obs <- array(rnorm(15), dim = c(time = 3, memb = 5, dataset = 1))
res <- RMSSS(exp, obs, time_dim = 'time', dat_dim = 'dataset')
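A hedged sketch of the skill score itself for a single series, assuming the usual definition
RMSSS = 1 - RMSE(exp, obs) / RMSE(ref, obs) with the climatological (zero-anomaly) forecast as reference;
the variable names are illustrative:
set.seed(4)
obs_ts <- rnorm(30)
exp_ts <- obs_ts + rnorm(30, sd = 0.7)
rmse_exp <- sqrt(mean((exp_ts - obs_ts)^2))
rmse_ref <- sqrt(mean((0 - obs_ts)^2))    # reference forecast: climatology (0 anomalies)
1 - rmse_exp / rmse_ref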
ROCSS Compute the Relative Operating Characteristic Skill Score
Description
The Relative Operating Characteristic Skill Score (ROCSS; Kharin and Zwiers, 2003) is based
on the ROC curve, which gives information about the hit rates against the false-alarm rates for a
particular category or event. The ROC curve can be summarized with the area under the ROC curve,
known as the ROC score, to provide a skill value for each category. The ROCSS ranges between
minus infinity and 1. A positive ROCSS value indicates that the forecast has higher skill than the
reference forecast, and the opposite holds for negative values.
Usage
ROCSS(
exp,
obs,
ref = NULL,
time_dim = "sdate",
memb_dim = "member",
dat_dim = NULL,
prob_thresholds = c(1/3, 2/3),
indices_for_clim = NULL,
cross.val = FALSE,
ncores = NULL
)
Arguments
exp A named numerical array of the forecast with at least time and member dimen-
sion.
obs A named numerical array of the observation with at least time dimension. The
dimensions must be the same as ’exp’ except ’memb_dim’ and ’dat_dim’.
ref A named numerical array of the reference forecast data with at least time and
member dimension. The dimensions must be the same as ’exp’ except ’memb_dim’
and ’dat_dim’. If there is only one reference dataset, it should not have dataset
dimension. If there is a corresponding reference for each experiment, the dataset
dimension must have the same length as in ’exp’. If ’ref’ is NULL, the random
forecast is used as reference forecast. The default value is NULL.
time_dim A character string indicating the name of the time dimension. The default value
is ’sdate’.
memb_dim A character string indicating the name of the member dimension to compute
the probabilities of the forecast and the reference forecast. The default value is
’member’.
dat_dim A character string indicating the name of dataset dimension. The length of this
dimension can be different between ’exp’ and ’obs’. The default value is NULL.
prob_thresholds
A numeric vector of the relative thresholds (from 0 to 1) between the categories.
The default value is c(1/3, 2/3), which corresponds to tercile equiprobable cate-
gories.
indices_for_clim
A vector of the indices to be taken along ’time_dim’ for computing the thresh-
olds between the probabilistic categories. If NULL, the whole period is used.
The default value is NULL.
cross.val A logical indicating whether to compute the thresholds between probabilistic
categories in cross-validation. The default value is FALSE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numerical array of ROCSS with the same dimensions as ’exp’ excluding ’time_dim’ and ’memb_dim’
dimensions and including the ’cat’ dimension, which represents each category. The length of the ’cat’ dimension
corresponds to the number of probabilistic categories, i.e., 1 + length(prob_thresholds). If there are
multiple datasets, two additional dimensions ’nexp’ and ’nobs’ are added.
References
Kharin, V. V. and Zwiers, F. W. (2003): https://doi.org/10.1175/1520-0442(2003)016
Examples
exp <- array(rnorm(1000), dim = c(lon = 3, lat = 2, sdate = 60, member = 10))
ref <- array(rnorm(1000), dim = c(lon = 3, lat = 2, sdate = 60, member = 10))
obs <- array(rnorm(1000), dim = c(lon = 3, lat = 2, sdate = 60))
ROCSS(exp = exp, obs = obs) ## random forecast as reference forecast
ROCSS(exp = exp, obs = obs, ref = ref) ## ref as reference forecast
RPS Compute the Ranked Probability Score
Description
The Ranked Probability Score (RPS; Wilks, 2011) is defined as the sum of the squared differences
between the cumulative forecast probabilities (computed from the ensemble members) and the ob-
servations (defined as 0% if the category did not happen and 100% if it happened) of multi-categorical probabilistic forecasts. The
RPS ranges between 0 (perfect forecast) and n-1 (worst possible forecast), where n is the number
of categories. In the case of a forecast divided into two categories (the lowest number of categories
that a probabilistic forecast can have), the RPS corresponds to the Brier Score (BS; Wilks, 2011),
therefore, ranges between 0 and 1. If there is more than one dataset, RPS will be computed for each
pair of exp and obs data.
Usage
RPS(
exp,
obs,
time_dim = "sdate",
memb_dim = "member",
dat_dim = NULL,
prob_thresholds = c(1/3, 2/3),
indices_for_clim = NULL,
Fair = FALSE,
weights = NULL,
cross.val = FALSE,
ncores = NULL
)
Arguments
exp A named numerical array of the forecast with at least time and member dimen-
sion.
obs A named numerical array of the observation with at least time dimension. The
dimensions must be the same as ’exp’ except ’memb_dim’ and ’dat_dim’.
time_dim A character string indicating the name of the time dimension. The default value
is ’sdate’.
memb_dim A character string indicating the name of the member dimension to compute the
probabilities of the forecast. The default value is ’member’.
dat_dim A character string indicating the name of dataset dimension. The length of this
dimension can be different between ’exp’ and ’obs’. The default value is NULL.
prob_thresholds
A numeric vector of the relative thresholds (from 0 to 1) between the categories.
The default value is c(1/3, 2/3), which corresponds to tercile equiprobable cate-
gories.
indices_for_clim
A vector of the indices to be taken along ’time_dim’ for computing the thresh-
olds between the probabilistic categories. If NULL, the whole period is used.
The default value is NULL.
Fair A logical indicating whether to compute the FairRPS (the potential RPS that
the forecast would have with an infinite ensemble size). The default value is
FALSE.
weights A named numerical array of the weights for ’exp’. If ’dat_dim’ is NULL, the
dimension should include ’memb_dim’ and ’time_dim’. Else, the dimension
should also include ’dat_dim’. The default value is NULL. The ensemble should
have at least 70 members or span at least 10 time steps and have more than 45
members if consistency between the weighted and unweighted methodologies is
desired.
cross.val A logical indicating whether to compute the thresholds between probabilistic
categories in cross-validation. The default value is FALSE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numerical array of RPS with dimensions c(nexp, nobs, the rest dimensions of ’exp’ except
’time_dim’ and ’memb_dim’ dimensions). nexp is the number of experiments (i.e., dat_dim in exp),
and nobs is the number of observations (i.e., dat_dim in obs). If dat_dim is NULL, nexp and nobs
are omitted.
References
Wilks, 2011; https://doi.org/10.1016/B978-0-12-385022-5.00008-7
Examples
exp <- array(rnorm(1000), dim = c(lat = 3, lon = 2, member = 10, sdate = 50))
obs <- array(rnorm(1000), dim = c(lat = 3, lon = 2, sdate = 50))
res <- RPS(exp = exp, obs = obs)
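A worked sketch of the definition above for a single time step and three equiprobable categories (the function
itself derives the probabilities from the ensemble members and averages over ’time_dim’); all values here are
illustrative:
fcst_probs <- c(0.2, 0.5, 0.3)   # forecast probability of each category
obs_cat    <- c(0, 1, 0)         # observed category as a 0/1 indicator
sum((cumsum(fcst_probs) - cumsum(obs_cat))^2)   # RPS for this single time step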
RPSS Compute the Ranked Probability Skill Score
Description
The Ranked Probability Skill Score (RPSS; Wilks, 2011) is the skill score based on the Ranked
Probability Score (RPS; Wilks, 2011). It can be used to assess whether a forecast presents an
improvement or worsening with respect to a reference forecast. The RPSS ranges between minus
infinite and 1. If the RPSS is positive, it indicates that the forecast has higher skill than the reference
forecast, while a negative value means that it has a lower skill. Examples of reference forecasts are
the climatological forecast (same probabilities for all categories for all time steps), persistence, a
previous model version, and another model. It is computed as RPSS = 1 - RPS_exp / RPS_ref. The
statistical significance is obtained based on a Random Walk test at the 95% confidence level (DelSole
and Tippett, 2016). If there is more than one dataset, RPSS will be computed for each pair of exp and obs data.
Usage
RPSS(
exp,
obs,
ref = NULL,
time_dim = "sdate",
memb_dim = "member",
dat_dim = NULL,
prob_thresholds = c(1/3, 2/3),
indices_for_clim = NULL,
Fair = FALSE,
weights = NULL,
weights_exp = NULL,
weights_ref = NULL,
cross.val = FALSE,
ncores = NULL
)
Arguments
exp A named numerical array of the forecast with at least time and member dimen-
sion.
obs A named numerical array of the observation with at least time dimension. The
dimensions must be the same as ’exp’ except ’memb_dim’ and ’dat_dim’.
ref A named numerical array of the reference forecast data with at least time and
member dimension. The dimensions must be the same as ’exp’ except ’memb_dim’
and ’dat_dim’. If there is only one reference dataset, it should not have dataset
dimension. If there is a corresponding reference for each experiment, the dataset
dimension must have the same length as in ’exp’. If ’ref’ is NULL, the climato-
logical forecast is used as reference forecast. The default value is NULL.
time_dim A character string indicating the name of the time dimension. The default value
is ’sdate’.
memb_dim A character string indicating the name of the member dimension to compute
the probabilities of the forecast and the reference forecast. The default value is
’member’.
dat_dim A character string indicating the name of dataset dimension. The length of this
dimension can be different between ’exp’ and ’obs’. The default value is NULL.
prob_thresholds
A numeric vector of the relative thresholds (from 0 to 1) between the categories.
The default value is c(1/3, 2/3), which corresponds to tercile equiprobable cate-
gories.
indices_for_clim
A vector of the indices to be taken along ’time_dim’ for computing the thresh-
olds between the probabilistic categories. If NULL, the whole period is used.
The default value is NULL.
Fair A logical indicating whether to compute the FairRPSS (the potential RPSS that
the forecast would have with an infinite ensemble size). The default value is
FALSE.
weights Deprecated and will be removed in the next release. Please use ’weights_exp’
and ’weights_ref’ instead.
weights_exp A named numerical array of the forecast ensemble weights. The dimension
should include ’memb_dim’, ’time_dim’ and ’dat_dim’ if there are multiple
datasets. All dimension lengths must be equal to ’exp’ dimension lengths. The
default value is NULL, which means no weighting is applied. The ensemble
should have at least 70 members or span at least 10 time steps and have more
than 45 members if consistency between the weighted and unweighted method-
ologies is desired.
weights_ref Same as ’weights_exp’ but for the reference forecast.
cross.val A logical indicating whether to compute the thresholds between probabilistic
categories in cross-validation. The default value is FALSE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
$rpss A numerical array of RPSS with dimensions c(nexp, nobs, the rest dimensions
of ’exp’ except ’time_dim’ and ’memb_dim’ dimensions). nexp is the number
of experiments (i.e., dat_dim in exp), and nobs is the number of observations (i.e.,
dat_dim in obs). If dat_dim is NULL, nexp and nobs are omitted.
$sign A logical array of the statistical significance of the RPSS with the same dimen-
sions as $rpss.
References
Wilks, 2011; https://doi.org/10.1016/B978-0-12-385022-5.00008-7
DelSole and Tippett, 2016; https://doi.org/10.1175/MWR-D-15-0218.1
Examples
set.seed(1)
exp <- array(rnorm(3000), dim = c(lat = 3, lon = 2, member = 10, sdate = 50))
set.seed(2)
obs <- array(rnorm(300), dim = c(lat = 3, lon = 2, sdate = 50))
set.seed(3)
ref <- array(rnorm(3000), dim = c(lat = 3, lon = 2, member = 10, sdate = 50))
weights <- sapply(1:dim(exp)['sdate'], function(i) {
n <- abs(rnorm(10))
n/sum(n)
})
dim(weights) <- c(member = 10, sdate = 50)
res <- RPSS(exp = exp, obs = obs) ## climatology as reference forecast
res <- RPSS(exp = exp, obs = obs, ref = ref) ## ref as reference forecast
res <- RPSS(exp = exp, obs = obs, ref = ref, weights_exp = weights, weights_ref = weights)
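Since the description states RPSS = 1 - RPS_exp / RPS_ref, the skill score can be sketched directly from two
RPS() calls on the arrays defined above (a rough illustration; RPSS() also handles the climatological reference
and the Random Walk significance test internally):
rps_exp <- RPS(exp = exp, obs = obs)
rps_ref <- RPS(exp = ref, obs = obs)
rpss_manual <- 1 - rps_exp / rps_ref   # element-wise, one value per grid point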
sampleDepthData Sample of Experimental Data for Forecast Verification In Function Of
Latitudes And Depths
Description
This data set provides data as a function of latitudes and depths for the variable ’tos’, i.e. sea surface
temperature, from the decadal climate prediction experiment run at IC3 in the context of the CMIP5
project.
Its name within IC3 local database is ’i00k’.
Usage
data(sampleDepthData)
Format
The data set provides a variable named ’sampleDepthData’.
sampleDepthData$exp is an array that contains the experimental data and the dimension meanings
and values are:
c(# of experimental datasets, # of members, # of starting dates, # of lead-times, # of depths, # of
latitudes)
c(1, 5, 3, 60, 7, 21)
sampleDepthData$obs should be an array containing the observational data, but in this sample it is
not defined (NULL).
sampleDepthData$depths is an array with the 7 depths covered by the data.
sampleDepthData$lat is an array with the 21 latitudes covered by the data.
sampleMap Sample Of Observational And Experimental Data For Forecast Verifi-
cation In Function Of Longitudes And Latitudes
Description
This data set provides data as a function of longitudes and latitudes for the variable ’tos’, i.e. sea
surface temperature, over the mediterranean zone from the sample experimental and observational
datasets attached to the package. See examples on how to use Load() for details.
The data is provided through a variable named ’sampleMap’ and is structured as expected from
the ’Load()’ function in the ’s2dv’ package if it was called as follows:
data_path <- system.file('sample_data', package = 's2dv')
exp <- list(
name = 'experiment',
path = file.path(data_path, 'model/$EXP_NAME$/monthly_mean',
'$VAR_NAME$_3hourly/$VAR_NAME$_$START_DATES$.nc')
)
obs <- list(
name = 'observation',
path = file.path(data_path, 'observation/$OBS_NAME$/monthly_mean',
'$VAR_NAME$/$VAR_NAME$_$YEAR$$MONTH$.nc')
)
# Now we are ready to use Load().
startDates <- c('19851101', '19901101', '19951101', '20001101', '20051101')
sampleData <- Load('tos', list(exp), list(obs), startDates,
leadtimemin = 1, leadtimemax = 4, output = 'lonlat',
latmin = 27, latmax = 48, lonmin = -12, lonmax = 40)
Check the documentation on ’Load()’ in the package ’s2dv’ for more information.
Usage
data(sampleMap)
Format
The data set provides a variable named ’sampleMap’.
sampleMap$mod is an array that contains the experimental data and the dimension meanings and
values are:
c(# of experimental datasets, # of members, # of starting dates, # of lead-times, # of latitudes, # of
longitudes)
c(1, 3, 5, 60, 2, 3)
sampleMap$obs is an array that contains the observational data and the dimension meanings and
values are:
c(# of observational datasets, # of members, # of starting dates, # of lead-times, # of latitudes, # of
longitudes)
c(1, 1, 5, 60, 2, 3)
sampleMap$lat is an array with the 2 latitudes covered by the data (see examples on Load() for
details on why such low resolution).
sampleMap$lon is an array with the 3 longitudes covered by the data (see examples on Load() for
details on why such low resolution).
sampleTimeSeries Sample Of Observational And Experimental Data For Forecast Verifi-
cation As Area Averages
Description
This data set provides area averaged data for the variable ’tos’, i.e. sea surface temperature, over
the mediterranean zone from the example datasets attached to the package. See examples on Load()
for more details.
The data is provided through a variable named ’sampleTimeSeries’ and is structured as expected
from the ’Load()’ function in the ’s2dv’ package if it was called as follows:
data_path <- system.file('sample_data', package = 's2dv')
exp <- list(
name = 'experiment',
path = file.path(data_path, 'model/$EXP_NAME$/monthly_mean',
'$VAR_NAME$_3hourly/$VAR_NAME$_$START_DATES$.nc')
)
obs <- list(
name = 'observation',
path = file.path(data_path, 'observation/$OBS_NAME$/monthly_mean',
'$VAR_NAME$/$VAR_NAME$_$YEAR$$MONTH$.nc')
)
# Now we are ready to use Load().
startDates <- c('19851101', '19901101', '19951101', '20001101', '20051101')
sampleData <- Load('tos', list(exp), list(obs), startDates,
output = 'areave', latmin = 27, latmax = 48, lonmin = -12,
lonmax = 40)
Check the documentation on ’Load()’ in the package ’s2dv’ for more information.
Usage
data(sampleTimeSeries)
Format
The data set provides a variable named ’sampleTimeSeries’.
sampleTimeSeries$mod is an array that contains the experimental data and the dimension meanings
and values are:
c(# of experimental datasets, # of members, # of starting dates, # of lead-times)
c(1, 3, 5, 60)
sampleTimeSeries$obs is an array that contains the observational data and the dimension meanings
and values are:
c(# of observational datasets, # of members, # of starting dates, # of lead-times)
c(1, 1, 5, 60)
sampleTimeSeries$lat is an array with the 2 latitudes covered by the data that was area averaged to
calculate the time series (see examples on Load() for details on why such low resolution).
sampleTimeSeries$lon is an array with the 3 longitudes covered by the data that was area averaged
to calculate the time series (see examples on Load() for details on why such low resolution).
Season Compute seasonal mean or other calculations
Description
Compute the seasonal mean (or other calculations) on monthly time series along one dimension of a
named multi-dimensional array. Partial seasons are not accounted for.
Usage
Season(
data,
time_dim = "ftime",
monini,
moninf,
monsup,
method = mean,
na.rm = TRUE,
ncores = NULL
)
Arguments
data A named numeric array with at least one dimension ’time_dim’.
time_dim A character string indicating the name of dimension along which the seasonal
mean or other calculations are computed. The default value is ’ftime’.
monini An integer indicating what the first month of the time series is. It can be from 1
to 12.
moninf An integer indicating the starting month of the seasonal calculation. It can be
from 1 to 12.
monsup An integer indicating the end month of the seasonal calculation. It can be from
1 to 12.
method An R function to be applied for seasonal calculation. For example, ’sum’ can be
used for total precipitation. The default value is mean.
na.rm A logical value indicating whether to remove NA values along ’time_dim’ when
calculating climatology (TRUE) or return NA if there is NA along ’time_dim’
(FALSE). The default value is TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
An array with the same dimensions as data except along the ’time_dim’ dimension, of which the
length changes to the number of seasons.
Examples
set.seed(1)
dat1 <- array(rnorm(144 * 3), dim = c(member = 2, sdate = 2, ftime = 12*3, lon = 3))
res <- Season(data = dat1, monini = 1, moninf = 1, monsup = 2)
res <- Season(data = dat1, monini = 10, moninf = 12, monsup = 2)
dat2 <- dat1
set.seed(2)
na <- floor(runif(30, min = 1, max = 144 * 3))
dat2[na] <- NA
res <- Season(data = dat2, monini = 3, moninf = 1, monsup = 2)
res <- Season(data = dat2, monini = 3, moninf = 1, monsup = 2, na.rm = FALSE)
SignalNoiseRatio Calculate Signal-to-noise ratio
Description
This function computes the signal-to-noise ratio, where the signal is the ensemble mean variance
and the noise is the variance of the ensemble members about the ensemble mean (Eade et al., 2014;
Scaife and Smith, 2018).
Usage
SignalNoiseRatio(
data,
time_dim = "year",
member_dim = "member",
na.rm = FALSE,
ncores = NULL
)
Arguments
data A numerical array with, at least, ’time_dim’ and ’member_dim’ dimensions.
time_dim A character string indicating the name of the time dimension in ’data’. The
default value is ’year’.
member_dim A character string indicating the name of the member dimension in ’data’. The
default value is ’member’.
na.rm A logical value indicating whether to remove NA values during the computation.
The default value is FALSE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
An array of the signal-to-noise ratio. It has the same dimensions as ’data’ except the ’time_dim’
and ’member_dim’ dimensions.
Examples
exp <- array(data = runif(600), dim = c(year = 15, member = 10, lat = 2, lon = 2))
SignalNoiseRatio(exp)
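A minimal single-point sketch of the ratio defined above (variance of the ensemble mean over the mean variance
of the members around the ensemble mean); this is an illustration only and may differ in detail from the internal
computation, and the variable names are illustrative:
set.seed(1)
dat <- matrix(rnorm(15 * 10), nrow = 15, ncol = 10)   # year x member
signal <- var(rowMeans(dat))                          # variance of the ensemble mean
noise  <- mean(apply(dat - rowMeans(dat), 1, var))    # member variance about the ensemble mean
signal / noise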
Smoothing Smooth an array along one dimension
Description
Smooth an array of any number of dimensions along one dimension.
Usage
Smoothing(data, time_dim = "ftime", runmeanlen = 12, ncores = NULL)
Arguments
data A numerical array to be smoothed along one of its dimension (typically the
forecast time dimension).
time_dim A character string indicating the name of the dimension to be smoothed along.
The default value is ’ftime’.
runmeanlen An integer indicating the running mean length of sampling units (typically months).
The default value is 12.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numerical array with the same dimensions as parameter ’data’ but the ’time_dim’ dimension is
moved to the first position. The head and tail parts, which do not have enough neighboring data for smoothing,
are assigned NA.
Examples
# Load sample data as in Load() example:
example(Load)
clim <- Clim(sampleData$mod, sampleData$obs)
ano_exp <- Ano(sampleData$mod, clim$clim_exp)
ano_obs <- Ano(sampleData$obs, clim$clim_obs)
smooth_ano_exp <- Smoothing(ano_exp, time_dim = 'ftime', runmeanlen = 12)
smooth_ano_obs <- Smoothing(ano_obs, time_dim = 'ftime', runmeanlen = 12)
smooth_ano_exp <- Reorder(smooth_ano_exp, c(2, 3, 4, 1))
smooth_ano_obs <- Reorder(smooth_ano_obs, c(2, 3, 4, 1))
## Not run:
PlotAno(smooth_ano_exp, smooth_ano_obs, startDates,
toptitle = "Smoothed Mediterranean mean SST", ytitle = "K")
## End(Not run)
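For a plain vector, the running mean applied by Smoothing() can be roughly approximated with stats::filter();
the ends without enough neighbouring values come out as NA, as described above. This is only an illustration:
Smoothing() works on whole named arrays along ’time_dim’ and may weight the window differently.
x <- as.numeric(1:24) + rnorm(24)
run12 <- stats::filter(x, rep(1/12, 12), sides = 2)   # 12-point running mean, NA at the edges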
Spectrum Estimate frequency spectrum
Description
Estimate the frequency spectrum of the data array together with a user-specified confidence level.
The output is provided as an array with dimensions c(number of frequencies, stats = 3, other margin
dimensions of data). The ’stats’ dimension contains the frequencies at which the spectral density is
estimated, the estimates of the spectral density, and the significance level.
The spectrum estimation relies on an R built-in function spectrum() and the confidence interval is
estimated by the Monte-Carlo method.
Usage
Spectrum(data, time_dim = "ftime", conf.lev = 0.95, ncores = NULL)
Arguments
data A vector or numeric array of which the frequency spectrum is required. If it’s
a vector, it should be a time series. If it’s an array, the dimensions must have at
least ’time_dim’. The data is assumed to be evenly spaced in time.
time_dim A character string indicating the dimension along which to compute the fre-
quency spectrum. The default value is ’ftime’.
conf.lev A numeric indicating the confidence level for the Monte-Carlo significance test.
The default value is 0.95.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numeric array of the frequency spectrum with dimensions c(<time_dim> = number of frequen-
cies, stats = 3, the rest of the dimensions of ’data’). The ’stats’ dimension contains the frequency
values, the spectral density, and the confidence interval.
Examples
# Load sample data as in Load() example:
example(Load)
ensmod <- MeanDims(sampleData$mod, 2)
spectrum <- Spectrum(ensmod)
for (jsdate in 1:dim(spectrum)['sdate']) {
for (jlen in 1:dim(spectrum)['ftime']) {
if (spectrum[jlen, 2, 1, jsdate] > spectrum[jlen, 3, 1, jsdate]) {
ensmod[1, jsdate, ] <- Filter(ensmod[1, jsdate, ], spectrum[jlen, 1, 1, jsdate])
}
}
}
PlotAno(InsertDim(ensmod, 2, 1), sdates = startDates)
SPOD Compute the South Pacific Ocean Dipole (SPOD) index
Description
The South Pacific Ocean Dipole (SPOD) index is related to the El Nino-Southern Oscillation
(ENSO) and the Interdecadal Pacific Oscillation (IPO). The SPOD index is computed as the dif-
ference of weighted-averaged SST anomalies over 20ºS-48ºS, 165ºE-190ºE (NW pole) and the
weighted-averaged SST anomalies over 44ºS-65ºS, 220ºE-260ºE (SE pole) (Saurral et al., 2020).
If different members and/or datasets are provided, the climatology (used to calculate the anomalies)
is computed individually for all of them.
Usage
SPOD(
data,
data_lats,
data_lons,
type,
lat_dim = "lat",
lon_dim = "lon",
mask = NULL,
monini = 11,
fmonth_dim = "fmonth",
sdate_dim = "sdate",
indices_for_clim = NULL,
year_dim = "year",
month_dim = "month",
na.rm = TRUE,
ncores = NULL
)
Arguments
data A numerical array to be used for the index computation with, at least, the dimen-
sions: 1) latitude, longitude, start date and forecast month (in case of decadal
predictions), 2) latitude, longitude, year and month (in case of historical simu-
lations or observations). This data has to be provided, at least, over the whole
region needed to compute the index.
data_lats A numeric vector indicating the latitudes of the data.
data_lons A numeric vector indicating the longitudes of the data.
type A character string indicating the type of data (’dcpp’ for decadal predictions,
’hist’ for historical simulations, or ’obs’ for observations or reanalyses).
lat_dim A character string of the name of the latitude dimension. The default value is
’lat’.
lon_dim A character string of the name of the longitude dimension. The default value is
’lon’.
mask An array of a mask (with 0’s in the grid points that have to be masked) or NULL
(i.e., no mask is used). This parameter allows removing the values over land
in case the dataset is a combination of surface air temperature over land and
sea surface temperature over the ocean. Also, it can be used to mask those grid
points that are missing in the observational dataset for a fair comparison between
the forecast system and the reference dataset. The default value is NULL.
monini An integer indicating the month in which the forecast system is initialized. Only
used when parameter ’type’ is ’dcpp’. The default value is 11, i.e., initialized in
November.
fmonth_dim A character string indicating the name of the forecast month dimension. Only
used if parameter ’type’ is ’dcpp’. The default value is ’fmonth’.
sdate_dim A character string indicating the name of the start date dimension. Only used if
parameter ’type’ is ’dcpp’. The default value is ’sdate’.
indices_for_clim
A numeric vector of the indices of the years to compute the climatology for cal-
culating the anomalies, or NULL so the climatology is calculated over the whole
period. If the data are already anomalies, set it to FALSE. The default value is
NULL.
In case of parameter ’type’ is ’dcpp’, ’indices_for_clim’ must be relative to the
first forecast year, and the climatology is automatically computed over the com-
mon calendar period for the different forecast years.
year_dim A character string indicating the name of the year dimension. The default value
is ’year’. Only used if parameter ’type’ is ’hist’ or ’obs’.
month_dim A character string indicating the name of the month dimension. The default
value is ’month’. Only used if parameter ’type’ is ’hist’ or ’obs’.
na.rm A logical value indicating whether to remove NA values. The default value is
TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numerical array with the SPOD index with the same dimensions as data except the lat_dim,
lon_dim and fmonth_dim (month_dim) in case of decadal predictions (historical simulations or
observations). In case of decadal predictions, a new dimension ’fyear’ is added.
Examples
## Observations or reanalyses
obs <- array(1:100, dim = c(year = 5, lat = 19, lon = 37, month = 12))
lat <- seq(-90, 90, 10)
lon <- seq(0, 360, 10)
index_obs <- SPOD(data = obs, data_lats = lat, data_lons = lon, type = 'obs')
## Historical simulations
hist <- array(1:100, dim = c(year = 5, lat = 19, lon = 37, month = 12, member = 5))
lat <- seq(-90, 90, 10)
lon <- seq(0, 360, 10)
index_hist <- SPOD(data = hist, data_lats = lat, data_lons = lon, type = 'hist')
## Decadal predictions
dcpp <- array(1:100, dim = c(sdate = 5, lat = 19, lon = 37, fmonth = 24, member = 5))
lat <- seq(-90, 90, 10)
lon <- seq(0, 360, 10)
index_dcpp <- SPOD(data = dcpp, data_lats = lat, data_lons = lon, type = 'dcpp', monini = 1)
Spread Compute interquartile range, maximum-minimum, standard deviation
and median absolute deviation
Description
Compute interquartile range, maximum-minimum, standard deviation and median absolute devia-
tion along the list of dimensions provided by the compute_dim argument (typically along the en-
semble member and start date dimensions). The confidence interval is computed by bootstrapping
100 times. The input data can be the output of Load(), Ano(), or Ano_CrossValid(), for example.
Usage
Spread(
data,
compute_dim = "member",
na.rm = TRUE,
conf = TRUE,
conf.lev = 0.95,
ncores = NULL
)
Arguments
data A numeric vector or array with named dimensions to compute the statistics. The
dimensions should at least include ’compute_dim’.
compute_dim A vector of character strings of the dimension names along which to compute
the statistics. The default value is ’member’.
na.rm A logical value indicating if NAs should be removed (TRUE) or kept (FALSE)
for computation. The default value is TRUE.
conf A logical value indicating whether to compute the confidence intervals or not.
The default value is TRUE.
conf.lev A numeric value of the confidence level for the computation. The default value
is 0.95.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list of numeric arrays with the same dimensions as ’data’ but without ’compute_dim’ and with
the first dimension ’stats’. If ’conf’ is TRUE, the length of ’stats’ is 3 corresponding to the lower
limit of the confidence interval, the spread, and the upper limit of the confidence interval. If ’conf’
is FALSE, the length of ’stats’ is 1 corresponding to the spread.
$iqr InterQuartile Range.
$maxmin Maximum - Minimum.
$sd Standard Deviation.
$mad Median Absolute Deviation.
Examples
# Load sample data as in Load() example:
example(Load)
clim <- Clim(sampleData$mod, sampleData$obs)
ano_exp <- Ano(sampleData$mod, clim$clim_exp)
runmean_months <- 12
smooth_ano_exp <- Smoothing(ano_exp, runmeanlen = runmean_months)
smooth_ano_exp_m_sub <- smooth_ano_exp - InsertDim(MeanDims(smooth_ano_exp, 'member',
na.rm = TRUE),
lendim = dim(smooth_ano_exp)['member'],
name = 'member')
spread <- Spread(smooth_ano_exp_m_sub, compute_dim = c('member', 'sdate'))
## Not run:
PlotVsLTime(Reorder(spread$iqr, c('dataset', 'stats', 'ftime')),
toptitle = "Inter-Quartile Range between ensemble members",
ytitle = "K", monini = 11, limits = NULL,
listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE,
hlines = c(0))
PlotVsLTime(Reorder(spread$maxmin, c('dataset', 'stats', 'ftime')),
toptitle = "Maximum minus minimum of the members",
ytitle = "K", monini = 11, limits = NULL,
listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE,
hlines = c(0))
PlotVsLTime(Reorder(spread$sd, c('dataset', 'stats', 'ftime')),
toptitle = "Standard deviation of the members",
ytitle = "K", monini = 11, limits = NULL,
listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE,
hlines = c(0))
PlotVsLTime(Reorder(spread$mad, c('dataset', 'stats', 'ftime')),
toptitle = "Median Absolute Deviation of the members",
ytitle = "K", monini = 11, limits = NULL,
listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE,
hlines = c(0))
## End(Not run)
StatSeasAtlHurr Compute estimate of seasonal mean of Atlantic hurricane activity
Description
Compute one of G. Villarini’s statistically downscaled measures of mean Atlantic hurricane activity
and its variance. The hurricane activity is estimated using seasonal averages of sea surface temper-
ature anomalies over the tropical Atlantic (bounded by 10N-25N and 80W-20W) and the tropics at
large (bounded by 30N-30S). The anomalies are for the JJASON season.
The estimated seasonal average is either 1) number of hurricanes, 2) number of tropical cyclones
with lifetime >=48h or 3) power dissipation index (PDI; in 10^11 m^3 s^-2).
The statistical models used in this function are described in references.
Usage
StatSeasAtlHurr(atlano, tropano, hrvar = "HR", ncores = NULL)
Arguments
atlano A numeric array with named dimensions of Atlantic sea surface temperature
anomalies. It must have the same dimensions as ’tropano’.
tropano A numeric array with named dimensions of tropical sea surface temperature
anomalies. It must have the same dimensions as ’atlano’.
hrvar A character string of the seasonal average to be estimated. The options are
either "HR" (hurricanes), "TC" (tropical cyclones with lifetime >=48h), or "PDI"
(power dissipation index). The default value is ’HR’.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list composed of two arrays with the same dimensions as ’atlano’ and ’tropano’.
$mean The mean of the desired quantity.
$var The variance of that quantity.
References
Villarini et al. (2010) Mon Wea Rev, 138, 2681-2705.
Villarini et al. (2012) Mon Wea Rev, 140, 44-65.
Villarini et al. (2012) J Clim, 25, 625-637.
An example of how the function can be used in hurricane forecast studies is given in
<NAME> et al. (2014) Multi-year prediction skill of Atlantic hurricane activity in CMIP5 decadal
hindcasts. Climate Dynamics, 42, 2675-2690. doi:10.1007/s00382-013-1773-1.
Examples
# Let AtlAno represent 5 different 5-year forecasts of seasonally averaged
# Atlantic sea surface temperature anomalies.
AtlAno <- array(runif(25, -1, 1), dim = c(sdate = 5, ftime = 5))
# Let TropAno represent 5 corresponding 5-year forecasts of seasonally
# averaged tropical sea surface temperature anomalies.
TropAno <- array(runif(25, -1, 1), dim = c(sdate = 5, ftime = 5))
# The seasonal average of hurricanes for each of the five forecasted years,
# for each forecast, would then be given by:
hr_count <- StatSeasAtlHurr(atlano = AtlAno, tropano = TropAno, hrvar = 'HR')
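# The other downscaled measures are obtained by switching 'hrvar' (a brief sketch):
tc_count <- StatSeasAtlHurr(atlano = AtlAno, tropano = TropAno, hrvar = 'TC')
pdi <- StatSeasAtlHurr(atlano = AtlAno, tropano = TropAno, hrvar = 'PDI')
# Point estimates and variances are returned in the $mean and $var components.
str(hr_count$mean)
str(hr_count$var)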
ToyModel Synthetic forecast generator imitating seasonal to decadal forecasts.
The components of a forecast: (1) predictability (2) forecast error (3)
non-stationarity and (4) ensemble generation. The forecast can be
computed for real observations or observations generated artificially.
Description
The toymodel is based on the model presented in Weigel et al. (2008) QJRS with an extension to
consider non-stationary distributions prescribing a linear trend. The toymodel allows generating
an artificial forecast based on observations provided by the input (from Load) or artificially
generated observations based on the input parameters (sig, trend). The forecast can be specified
for any number of start-dates, lead-times and ensemble members. It imitates components of a
forecast: (1) predictability (2) forecast error (3) non-stationarity and (4) ensemble generation.
The forecast can be computed for real observations or observations generated artificially.
Usage
ToyModel(
alpha = 0.1,
beta = 0.4,
gamma = 1,
sig = 1,
trend = 0,
nstartd = 30,
nleadt = 4,
nmemb = 10,
obsini = NULL,
fxerr = NULL
)
Arguments
alpha Predictability of the forecast on the observed residuals. Must be a scalar 0 < alpha
< 1.
beta Standard deviation of forecast error. Must be a scalar 0 < beta < 1.
gamma Factor on the linear trend to sample model uncertainty. Can be a scalar or a
vector of scalars -inf < gamma < inf. Defining a vector results in multiple
forecasts, corresponding to different models with different trends.
sig Standard deviation of the residual variability of the forecast. If observations are
provided, ’sig’ is computed from the observations.
trend Linear trend of the forecast. The same trend is used for each lead-time. If
observations are provided, the ’trend’ is computed from the observations, with
potentially different trends for each lead-time. The trend has no unit and needs
to be defined according to the time vector [1,2,3,... nstartd].
nstartd Number of start-dates of the forecast. If observations are provided, ’nstartd’
is computed from the observations.
nleadt Number of lead-times of the forecasts. If observations are provided, ’nleadt’
is computed from the observations.
nmemb Number of members of the forecasts.
obsini Observations that can be used in the synthetic forecast coming from Load
(anomalies are expected). If no observations are provided, artificial observations are
generated based on Gaussian variability with standard deviation from ’sig’ and
linear trend from ’trend’.
fxerr Provides a fixed error of the forecast instead of generating one from the level of
beta. This allows performing pairs of forecasts with the same conditional error, as
required for instance in an attribution context.
Value
List of forecasts with $mod including the forecast and $obs the observations. The dimensions
correspond to c(length(gamma), nmemb, nstartd, nleadt).
Examples
# Example 1: Generate forecast with artificial observations
# Seasonal prediction example
a <- 0.1
b <- 0.3
g <- 1
sig <- 1
t <- 0.02
ntd <- 30
nlt <- 4
nm <- 10
toyforecast <- ToyModel(alpha = a, beta = b, gamma = g, sig = sig, trend = t,
nstartd = ntd, nleadt = nlt, nmemb = nm)
# Example 2: Generate forecast from loaded observations
# Decadal prediction example
## Not run:
data_path <- system.file('sample_data', package = 's2dv')
expA <- list(name = 'experiment', path = file.path(data_path,
'model/$EXP_NAME$/$STORE_FREQ$_mean/$VAR_NAME$_3hourly',
'$VAR_NAME$_$START_DATE$.nc'))
obsX <- list(name = 'observation', path = file.path(data_path,
'$OBS_NAME$/$STORE_FREQ$_mean/$VAR_NAME$',
'$VAR_NAME$_$YEAR$$MONTH$.nc'))
# Now we are ready to use Load().
startDates <- c('19851101', '19901101', '19951101', '20001101', '20051101')
sampleData <- Load('tos', list(expA), list(obsX), startDates,
output = 'areave', latmin = 27, latmax = 48,
lonmin = -12, lonmax = 40)
## End(Not run)
a <- 0.1
b <- 0.3
g <- 1
nm <- 10
toyforecast <- ToyModel(alpha = a, beta = b, gamma = g, nmemb = nm,
obsini = sampleData$obs, nstartd = 5, nleadt = 60)
## Add PlotAno() back when this function is included!!
# \donttest{
#PlotAno(toyforecast$mod, toyforecast$obs, startDates,
# toptitle = c("Synthetic decadal temperature prediction"),
# fileout = "ex_toymodel.eps")
# }
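# A further sketch (hypothetical values): the 'fxerr' argument fixes the
# conditional forecast error so that a pair of forecasts shares the same error,
# e.g., for attribution experiments.
err <- 0.1
toy_pair1 <- ToyModel(alpha = a, beta = b, gamma = g, sig = sig, trend = t,
                      nstartd = ntd, nleadt = nlt, nmemb = nm, fxerr = err)
toy_pair2 <- ToyModel(alpha = a, beta = b, gamma = g, sig = sig, trend = 0,
                      nstartd = ntd, nleadt = nlt, nmemb = nm, fxerr = err)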
TPI Compute the Tripole Index (TPI) for the Interdecadal Pacific Oscillation (IPO)
Description
The Tripole Index (TPI) for the Interdecadal Pacific Oscillation (IPO) is computed as the
difference of weighted-averaged SST anomalies over 10ºS-10ºN, 170ºE-270ºE minus the mean of
the weighted-averaged SST anomalies over 25ºN-45ºN, 140ºE-215ºE and 50ºS-15ºS, 150ºE-200ºE
(Henley et al., 2015). If different members and/or datasets are provided, the climatology (used to
calculate the anomalies) is computed individually for all of them.
Usage
TPI(
data,
data_lats,
data_lons,
type,
lat_dim = "lat",
lon_dim = "lon",
mask = NULL,
monini = 11,
fmonth_dim = "fmonth",
sdate_dim = "sdate",
indices_for_clim = NULL,
year_dim = "year",
month_dim = "month",
na.rm = TRUE,
ncores = NULL
)
Arguments
data A numerical array to be used for the index computation with, at least, the
dimensions: 1) latitude, longitude, start date and forecast month (in case of decadal
predictions), 2) latitude, longitude, year and month (in case of historical simulations
or observations). This data has to be provided, at least, over the whole
region needed to compute the index.
data_lats A numeric vector indicating the latitudes of the data.
data_lons A numeric vector indicating the longitudes of the data.
type A character string indicating the type of data (’dcpp’ for decadal predictions,
’hist’ for historical simulations, or ’obs’ for observations or reanalyses).
lat_dim A character string of the name of the latitude dimension. The default value is
’lat’.
lon_dim A character string of the name of the longitude dimension. The default value is
’lon’.
mask An array of a mask (with 0’s in the grid points that have to be masked) or NULL
(i.e., no mask is used). This parameter allows removing the values over land
in case the dataset is a combination of surface air temperature over land and
sea surface temperature over the ocean. Also, it can be used to mask those grid
points that are missing in the observational dataset for a fair comparison between
the forecast system and the reference dataset. The default value is NULL.
monini An integer indicating the month in which the forecast system is initialized. Only
used when parameter ’type’ is ’dcpp’. The default value is 11, i.e., initialized in
November.
fmonth_dim A character string indicating the name of the forecast month dimension. Only
used if parameter ’type’ is ’dcpp’. The default value is ’fmonth’.
sdate_dim A character string indicating the name of the start date dimension. Only used if
parameter ’type’ is ’dcpp’. The default value is ’sdate’.
indices_for_clim
A numeric vector of the indices of the years to compute the climatology for
calculating the anomalies, or NULL so the climatology is calculated over the whole
period. If the data are already anomalies, set it to FALSE. The default value is
NULL.
In case parameter ’type’ is ’dcpp’, ’indices_for_clim’ must be relative to the
first forecast year, and the climatology is automatically computed over the
common calendar period for the different forecast years.
year_dim A character string indicating the name of the year dimension. The default value
is ’year’. Only used if parameter ’type’ is ’hist’ or ’obs’.
month_dim A character string indicating the name of the month dimension. The default
value is ’month’. Only used if parameter ’type’ is ’hist’ or ’obs’.
na.rm A logical value indicating whether to remove NA values. The default value is
TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A numerical array with the TPI index with the same dimensions as data except the lat_dim, lon_dim
and fmonth_dim (month_dim) in case of decadal predictions (historical simulations or
observations). In case of decadal predictions, a new dimension ’fyear’ is added.
Examples
## Observations or reanalyses
obs = array(1:100, dim = c(year = 5, lat = 19, lon = 37, month = 12))
lat = seq(-90, 90, 10)
lon = seq(0, 360, 10)
index_obs = TPI(data = obs, data_lats = lat, data_lons = lon, type = 'obs')
## Historical simulations
hist = array(1:100, dim = c(year = 5, lat = 19, lon = 37, month = 12, member = 5))
lat = seq(-90, 90, 10)
lon = seq(0, 360, 10)
index_hist = TPI(data = hist, data_lats = lat, data_lons = lon, type = 'hist')
## Decadal predictions
dcpp = array(1:100, dim = c(sdate = 5, lat = 19, lon = 37, fmonth = 24, member = 5))
lat = seq(-90, 90, 10)
lon = seq(0, 360, 10)
index_dcpp = TPI(data = dcpp, data_lats = lat, data_lons = lon, type = 'dcpp', monini = 1)
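## Masking (sketch): 'mask' is assumed here to be a lat x lon array matching the
## data grid, with 0's at the grid points to be excluded (e.g., land points).
mask <- array(1, dim = c(lat = 19, lon = 37))
mask[1, 1] <- 0
index_obs_masked <- TPI(data = obs, data_lats = lat, data_lons = lon,
                        type = 'obs', mask = mask)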
Trend Compute the trend
Description
Compute the linear trend or any degree of polynomial regression along the forecast time. It returns
the regression coefficients (including the intercept) and the detrended array. The confidence
intervals and p-value are also provided if needed.
The confidence interval relies on the Student’s t-distribution, and the p-value is calculated by ANOVA.
Usage
Trend(
data,
time_dim = "ftime",
interval = 1,
polydeg = 1,
conf = TRUE,
conf.lev = 0.95,
pval = TRUE,
ncores = NULL
)
Arguments
data A numeric array including the dimension along which the trend is computed.
time_dim A character string indicating the dimension along which to compute the trend.
The default value is ’ftime’.
interval A positive numeric indicating the unit length between two points along ’time_dim’
dimension. The default value is 1.
polydeg A positive integer indicating the degree of polynomial regression. The default
value is 1.
conf A logical value indicating whether to retrieve the confidence intervals or not.
The default value is TRUE.
conf.lev A numeric indicating the confidence level for the regression computation. The
default value is 0.95.
pval A logical value indicating whether to compute the p-value or not. The default
value is TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
A list containing:
$trend A numeric array with the first dimension ’stats’, followed by the same dimensions
as parameter ’data’ except the ’time_dim’ dimension. The length of the
’stats’ dimension should be polydeg + 1, containing the regression coefficients
from the lowest order (i.e., intercept) to the highest degree.
$conf.lower A numeric array with the first dimension ’stats’, followed by the same dimensions
as parameter ’data’ except the ’time_dim’ dimension. The length of the
’stats’ dimension should be polydeg + 1, containing the lower limit of the conf.lev%
confidence interval for all the regression coefficients with the same order as
$trend. Only present if conf = TRUE.
$conf.upper A numeric array with the first dimension ’stats’, followed by the same dimensions
as parameter ’data’ except the ’time_dim’ dimension. The length of the
’stats’ dimension should be polydeg + 1, containing the upper limit of the conf.lev%
confidence interval for all the regression coefficients with the same order as
$trend. Only present if conf = TRUE.
$p.val A numeric array of p-value calculated by anova(). The first dimension ’stats’ is
1, followed by the same dimensions as parameter ’data’ except the ’time_dim’
dimension. Only present if pval = TRUE.
$detrended A numeric array with the same dimensions as parameter ’data’, containing the
detrended values along the ’time_dim’ dimension.
Examples
# Load sample data as in Load() example:
example(Load)
months_between_startdates <- 60
trend <- Trend(sampleData$obs, polydeg = 2, interval = months_between_startdates)
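# Accessing the returned components (following the call above):
coefs <- trend$trend              # regression coefficients, intercept first ('stats' dimension)
detrended_obs <- trend$detrended  # detrended values along 'time_dim'
pvals <- trend$p.val              # p-values from anova(), present since pval = TRUE by default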
UltimateBrier Compute Brier scores
Description
Interface to compute probabilistic scores (Brier Score, Brier Skill Score) from the forecast and
observational data anomalies. It provides six types to choose from.
Usage
UltimateBrier(
exp,
obs,
dat_dim = "dataset",
memb_dim = "member",
time_dim = "sdate",
quantile = TRUE,
thr = c(5/100, 95/100),
type = "BS",
decomposition = TRUE,
ncores = NULL
)
Arguments
exp A numeric array of forecast anomalies with named dimensions that at least include
’memb_dim’ and ’time_dim’. It can be provided by Ano().
obs A numeric array of observational reference anomalies with named dimensions
that at least include ’time_dim’. If it has ’memb_dim’, the length must be 1. The
dimensions should be consistent with ’exp’ except ’dat_dim’ and ’memb_dim’.
It can be provided by Ano().
dat_dim A character string indicating the name of the dataset dimension in ’exp’ and
’obs’. The default value is ’dataset’. If there is no dataset dimension, set NULL.
memb_dim A character string indicating the name of the member dimension in ’exp’ (and
’obs’) for ensemble mean calculation. The default value is ’member’.
time_dim A character string indicating the dimension along which to compute the proba-
bilistic scores. The default value is ’sdate’.
quantile A logical value to decide whether a quantile (TRUE) or a threshold (FALSE) is
used to estimate the forecast and observed probabilities. If ’type’ is ’FairEnsembleBS’
or ’FairEnsembleBSS’, it must be TRUE. The default value is TRUE.
thr A numeric vector to be used in probability calculation (for ’BS’, ’FairStartDatesBS’,
’BSS’, and ’FairStartDatesBSS’) and binary event judgement (for ’FairEnsembleBS’
and ’FairEnsembleBSS’). It is used as quantiles if ’quantile’ is TRUE or as thresholds
if ’quantile’ is FALSE. The default value is c(0.05, 0.95) for ’quantile = TRUE’.
type A character string of the desired score type. It can be the following values:
• ’BS’: Simple Brier Score. Use SpecsVerification::BrierDecomp inside.
• ’FairEnsembleBS’: Corrected Brier Score computed across ensemble members.
Use SpecsVerification::FairBrier inside.
• ’FairStartDatesBS’: Corrected Brier Score computed across starting dates.
Use s2dv:::.BrierScore inside.
• ’BSS’: Simple Brier Skill Score. Use s2dv:::.BrierScore inside.
• ’FairEnsembleBSS’: Corrected Brier Skill Score computed across ensemble members.
Use SpecsVerification::FairBrierSs inside.
• ’FairStartDatesBSS’: Corrected Brier Skill Score computed across starting
dates. Use s2dv:::.BrierScore inside.
The default value is ’BS’.
decomposition A logical value to determine whether the decomposition of the Brier Score
should be provided (TRUE) or not (FALSE). It is only used when ’type’ is ’BS’
or ’FairStartDatesBS’. The default value is TRUE.
ncores An integer indicating the number of cores to use for parallel computation. The
default value is NULL.
Value
If ’type’ is ’BS’ or ’FairStartDatesBS’ and ’decomposition’ is TRUE, the output is a list of 4 arrays
(see details below). In other cases, the output is an array of Brier scores or Brier skill scores. All
the arrays have the same dimensions: c(nexp, nobs, no. of bins, the rest dimensions of ’exp’ except
’time_dim’ and ’memb_dim’). ’nexp’ and ’nobs’ are the lengths of the dataset dimension in ’exp’
and ’obs’ respectively. If dat_dim is NULL, nexp and nobs are omitted.
The list of 4 includes:
• $bs: Brier Score
• $rel: Reliability component
• $res: Resolution component
• $unc: Uncertainty component
Examples
sampleData$mod <- Season(sampleData$mod, monini = 11, moninf = 12, monsup = 2)
sampleData$obs <- Season(sampleData$obs, monini = 11, moninf = 12, monsup = 2)
clim <- Clim(sampleData$mod, sampleData$obs)
exp <- Ano(sampleData$mod, clim$clim_exp)
obs <- Ano(sampleData$obs, clim$clim_obs)
bs <- UltimateBrier(exp, obs)
bss <- UltimateBrier(exp, obs, type = 'BSS')
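# A sketch of the 'quantile'/'thr' interplay (hypothetical threshold values):
# terciles used as quantiles (the default interpretation of 'thr')
bs_terciles <- UltimateBrier(exp, obs, thr = c(1/3, 2/3))
# fixed anomaly thresholds instead of quantiles
bs_threshold <- UltimateBrier(exp, obs, quantile = FALSE, thr = c(-0.5, 0.5))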
coat | cran | R | Package ‘coat’
July 11, 2023
Title Conditional Method Agreement Trees (COAT)
Version 0.2.0
Date 2023-07-06
Description Agreement of continuously scaled measurements made by two techniques, devices
or methods is usually evaluated by the well-established Bland-Altman analysis or plot.
Conditional method agreement trees (COAT), proposed by Karapetyan, Zeileis, Henriksen, and
Hapfelmeier (2023) <doi:10.48550/arXiv.2306.04456>, embed the Bland-Altman analysis in the
framework of recursive partitioning to explore heterogeneous method agreement in dependence
of covariates. COAT can also be used to perform a Bland-Altman test for differences in method
agreement.
License GPL-2 | GPL-3
Depends R (>= 3.5.0)
Imports partykit, ggplot2, grid, ggparty, gridExtra, ggtext
Suggests disttree, MethComp
Additional_repositories https://R-Forge.R-project.org
Encoding UTF-8
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre]
(<https://orcid.org/0000-0001-6765-6352>),
<NAME> [aut] (<https://orcid.org/0000-0003-1831-9741>),
<NAME> [aut] (<https://orcid.org/0000-0003-0918-3766>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-07-11 15:30:09 UTC
R topics documented:
bates... 2
coa... 3
diff... 6
print.coa... 6
batest Bland-Altman Test of Method Agreement
Description
Function to perform a Bland-Altman test of differences in method agreement. Additional functions
are given for printing and plotting.
Usage
batest(formula, data, subset, na.action, weights, ...)
## S3 method for class 'batest'
print(x, digits = 2, type = c("test", "model", "both"), ...)
## S3 method for class 'batest'
plot(x, ...)
Arguments
formula symbolic description of the model used to perform the Bland-Altman test of type
y1 + y2 ~ x. The left-hand side should specify a pair of measurements (y1 and
y2) to assess the agreement. The right-hand side should specify a factor with
two levels indicating two independent groups or samples to be compared. Al-
ternatively, multilevel factors or continuously scaled variables can be specified
to perform a Bland-Altman test of association, followed by binary splitting into
two subgroups.
data, subset, na.action
arguments controlling the formula processing via model.frame.
weights optional numeric vector of weights (case/frequency weights, by default).
... further control arguments, passed to ctree_control
x an object as returned by batest.
digits a numeric specifying the number of digits to display.
type character string specifying whether "test" statistics (default), the "model" or
"both" should be printed.
Value
Object of class batest with elements
test result of the Bland-Altman test.
model tree model used to perform the Bland-Altman test.
Methods (by generic)
• print(batest): function to print the result of the Bland-Altman test.
• plot(batest): function to plot the result of the Bland-Altman test.
Examples
## package and data (reshaped to wide format)
library("coat")
data("VitCap", package = "MethComp")
VitCap_wide <- reshape(VitCap, v.names = "y", timevar = "instrument",
idvar = c("item", "user"), drop = "meth", direction = "wide")
## two-sample BA-test
testresult <- batest(y.St + y.Exp ~ user, data = VitCap_wide)
## display
testresult
print(testresult, digits = 1, type = "both")
plot(testresult)
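## the returned object bundles the two components listed under Value
testresult$test    # Bland-Altman test result
testresult$model   # underlying tree model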
coat Conditional Method Agreement Trees (COAT)
Description
Tree models capturing the dependence of method agreement on covariates. The classic
Bland-Altman analysis is used for modeling method agreement while the covariate dependency can be
learned either nonparametrically via conditional inference trees (CTree) or using model-based
recursive partitioning (MOB).
Usage
coat(
formula,
data,
subset,
na.action,
weights,
means = FALSE,
type = c("ctree", "mob"),
minsize = 10L,
minbucket = minsize,
minsplit = NULL,
...
)
Arguments
formula symbolic description of the model of type y1 + y2 ~ x1 + ... + xk. The
left-hand side should specify a pair of measurements (y1 and y2) for the
Bland-Altman analysis. The right-hand side can specify any number of potential split
variables for the tree.
data, subset, na.action
arguments controlling the formula processing via model.frame.
weights optional numeric vector of weights (case/frequency weights, by default).
means logical. Should the intra-individual mean values of measurements be included
as potential split variable?
type character string specifying the type of tree to be fit. Either "ctree" (default) or
"mob".
minsize, minbucket
integer. The minimum number of observations in a subgroup. Only one of the
two arguments should be used (see also below).
minsplit integer. The minimum number of observations to consider splitting. Must be at
least twice the minimal subgroup size (minsplit or minbucket). If set to NULL
(the default) it is set to be at least 2.5 times the minimal subgroup size.
... further control arguments, either passed to ctree_control or mob_control,
respectively.
Details
Conditional method agreement trees (COAT) employ unbiased recursive partitioning in order to
detect and model dependency on covariates in the classic Bland-Altman analysis. One of two
recursive partitioning techniques can be used to find subgroups defined by splits in covariates to
a pair of measurements, either nonparametric conditional inference trees (CTree) or parametric
model-based trees (MOB). In both cases, each subgroup is associated with two parameter estimates:
the mean of the measurement difference (“Bias”) and the corresponding sample standard deviation
(“SD”) which can be used to construct the limits of agreement (i.e., the corresponding confidence
intervals).
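As a sketch of how these node-wise estimates translate into limits of agreement (assuming a fitted
tree such as tr1 from the Examples below, and assuming that coef() returns the bias in the first
and the SD in the second column; see the methods documented in print.coat):
cf <- coef(tr1)                          # one row per terminal node
loa_lower <- cf[, 1] - 1.96 * cf[, 2]    # lower 95% limit of agreement per node
loa_upper <- cf[, 1] + 1.96 * cf[, 2]    # upper 95% limit of agreement per node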
The minimum number of observations in a subgroup defaults to 10, so that the mean and variance
of the measurement differences can be estimated reasonably for the Bland-Altman analysis. The
default can be changed with the argument minsize or, equivalently, minbucket. (The different
names stem from slightly different conventions in the underlying tree functions.) Consequently, the
minimum number of observations to consider splitting (minsplit) must be, at the very least, twice
the minimum number of observations per subgroup (which would allow only one possible split,
though). By default, minsplit is 2.5 times minsize. Users are encouraged to consider whether
for their application it is sensible to increase or decrease these defaults. Finally, further control
parameters can be specified through the ... argument, see ctree_control and mob_control,
respectively, for details.
In addition to the standard specification of the two response measurements in the formula via y1
+ y2 ~ ..., it is also possible to use y1 - y2 ~ .... The latter may be more intuitive for users that
think of it as a model for the difference of two measurements. Finally cbind(y1, y2) ~ ... also
works. Internally, all of these are processed in the same way, namely as a bivariate dependent
variable that can then be modeled and plotted appropriately.
To add the means of the measurement pair as a potential splitting variable, there are also different
equivalent strategies. The standard specification would be via the means argument: y1 + y2 ~ x1 +
..., means = TRUE. Alternatively, the user can also extend the formula argument via y1 + y2 ~ x1
+ ... + means(y1, y2).
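For illustration, all of the following calls specify the same COAT model for the scint_wide data
prepared in the Examples below (the last two additionally include the intra-individual means as a
potential split variable):
tr_a <- coat(y.DTPA + y.DMSA ~ age + sex, data = scint_wide)
tr_b <- coat(y.DTPA - y.DMSA ~ age + sex, data = scint_wide)
tr_c <- coat(cbind(y.DTPA, y.DMSA) ~ age + sex, data = scint_wide)
tr_d <- coat(y.DTPA + y.DMSA ~ age + sex, data = scint_wide, means = TRUE)
tr_e <- coat(y.DTPA + y.DMSA ~ age + sex + means(y.DTPA, y.DMSA), data = scint_wide)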
The SD is estimated by the usual sample standard deviation in each subgroup, i.e., with divisor
n − 1. Note that the inference in the MOB algorithm internally uses the maximum
likelihood estimate (divided by n) instead so that the fluctuation tests for parameter instability can
be applied.
Value
Object of class coat, inheriting either from constparty (if ctree is used) or modelparty (if mob
is used).
References
<NAME>, <NAME>, <NAME>, <NAME> (2023). “Tree Models for Assessing Covariate-Dependent
Method Agreement.” arXiv 2306.04456, arXiv.org E-Print Archive. doi:10.48550/arXiv.2306.04456
Examples
## package and data (reshaped to wide format)
library("coat")
data("scint", package = "MethComp")
scint_wide <- reshape(scint, v.names = "y", timevar = "meth", idvar = "item", direction = "wide")
## coat based on ctree() without and with mean values of paired measurements as predictor
tr1 <- coat(y.DTPA + y.DMSA ~ age + sex, data = scint_wide)
tr2 <- coat(y.DTPA + y.DMSA ~ age + sex, data = scint_wide, means = TRUE)
## display
print(tr1)
plot(tr1)
print(tr2)
plot(tr2)
## tweak various graphical arguments of the panel function (just for illustration):
## different colors, nonparametric bootstrap percentile confidence intervals, ...
plot(tr1, tp_args = list(
xscale = c(0, 150), linecol = "deeppink",
confint = TRUE, B = 250, cilevel = 0.5, cicol = "gold"
))
diffs Convenience Functions for Bland-Altman Analysis
Description
Auxiliary functions for obtaining the differences and means of a measurement pair, as used in the
classic Bland-Altman analysis.
Usage
diffs(y1, y2)
means(y1, y2)
Arguments
y1, y2 numeric. Vectors of numeric measurements of the same length.
Value
Numeric vector with the differences or means of y1 and y2, respectively.
Examples
## pair of measurements
y1 <- 1:4
y2 <- c(2, 2, 1, 3)
## differences and means
diffs(y1, y2)
means(y1, y2)
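## classic Bland-Altman summary statistics from these helpers (a simple sketch,
## not a coat function): bias and 95% limits of agreement
d <- diffs(y1, y2)
bias <- mean(d)
loa <- bias + c(-1, 1) * 1.96 * sd(d)
bias
loa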
print.coat Methods for Conditional Method Agreement Trees (COAT)
Description
Extracting information from or visualization of conditional method agreement trees. Visualizations
use trees with Bland-Altman plots in terminal nodes, drawn either via grid graphics directly or via
ggplot2.
Usage
## S3 method for class 'coat'
print(
x,
digits = 2L,
header = TRUE,
footer = TRUE,
title = "Conditional method agreement tree (COAT)",
...
)
## S3 method for class 'coat'
coef(object, node = NULL, drop = TRUE, ...)
## S3 method for class 'coat'
plot(x, terminal_panel = node_baplot, tnex = 2, drop_terminal = TRUE, ...)
node_baplot(
obj,
level = 0.95,
digits = 2,
pch = 1,
cex = 0.5,
col = 1,
linecol = 4,
lty = c(1, 2),
bg = "white",
confint = FALSE,
B = 500,
cilevel = 0.95,
cicol = "lightgray",
xscale = NULL,
yscale = NULL,
ylines = 3,
id = TRUE,
mainlab = NULL,
gp = gpar()
)
## S3 method for class 'coat'
autoplot(
object,
digits = 2,
xlim.max = NULL,
level = 0.95,
label.align = 0.95,
...
)
Arguments
x, object, obj a coat object as returned by coat.
digits numeric. Number of digits used for rounding the displayed coefficients or limits
of agreement.
header, footer logical. Should a header/footer be printed for the tree?
title character with the title for the tree.
... further arguments passed to methods.
node integer. ID of the node for which the Bland-Altman parameters (coefficients)
should be extracted.
drop logical. Should the matrix attribute be dropped if the parameters from only a
single node are extracted?
terminal_panel a panel function or panel-generating function passed to plot.party. By default,
node_baplot is used to generate a suitable panel function for drawing Bland-
Altman plots based on the provided coat object. It can be customized using
the tp_args argument (passed through ...).
tnex numeric specification of the terminal node extension relative to the inner nodes
(default is twice the size).
drop_terminal logical. Should all terminal nodes be "dropped" to the bottom row?
level numeric level for the limits of agreement.
pch, cex, col, linecol, lty, bg
graphical parameters for the scatter plot and limits of agreement in the Bland-
Altman plot (scatter plot character, character extension, plot color, line color,
line types, and background color).
confint logical. Should nonparametric bootstrap percentile confidence intervals be plotted?
B numeric. Number of bootstrap samples to be used if confint = TRUE.
cilevel numeric. Level of the confidence intervals if confint = TRUE.
cicol color specification for the confidence intervals if confint = TRUE.
xscale, yscale numeric specification of scale of x-axis and y-axis, respectively. By default the
range of all scatter plots and limits of agreement across all nodes are used.
ylines numeric. Number of lines for spaces in y-direction.
id logical. Should node IDs be plotted?
mainlab character or function. An optional title for the plots. Either a character or a
function(id, nobs).
gp grid graphical parameters.
xlim.max numeric. Optional value to define the upper limit of the x-axis.
label.align numeric. Specification between 0 and 1 for the alignment of labels relative to
the plot width or xlim.max.
Details
Various methods are provided for trees fitted by coat, in particular print, plot (via grid/partykit),
autoplot (via ggplot2/ggparty), coef. The plot method draws Bland-Altman plots in the terminal
panels by default, using the function node_baplot. The autoplot draws very similar plots by
customizing the geom_node_plot "from scratch".
In addition to these dedicated coat methods, further methods are inherited from ctree or mob,
respectively, depending on which type of coat was fitted.
Value
The print() method returns the printed object invisibly. The coef() method returns the vector (for
a single node) or matrix (for multiple nodes) of estimated parameters (bias and standard deviation).
The plot() method returns NULL. The node_baplot() panel-generating function returns a function
that can be plugged into the plot() method. The autoplot() method returns a ggplot object.
Examples
## package and data (reshaped to wide format)
library("coat")
data("scint", package = "MethComp")
scint_wide <- reshape(scint, v.names = "y", timevar = "meth", idvar = "item", direction = "wide")
## conditional method agreement tree
tr <- coat(y.DTPA + y.DMSA ~ age + sex, data = scint_wide)
## illustration of methods (including some customization)
## printing
print(tr)
print(tr, header = FALSE, footer = FALSE)
## extracting Bland-Altman parameters
coef(tr)
coef(tr, node = 1)
## visualization (via grid with node_baplot)
plot(tr)
plot(tr, ip_args = list(id = FALSE),
tp_args = list(col = "slategray", id = FALSE, digits = 3, pch = 19))
## visualization (via ggplot2 with ggparty)
library("ggplot2")
autoplot(tr)
autoplot(tr, digits = 3) + ggtitle("Conditional method agreement tree") +
theme(plot.title = element_text(hjust = 0.5))
playdate-graphics | rust | Rust | Crate playdate_graphics
Crate playdate_graphics
===
Playdate graphics API
Re-exports
---
* `pub extern crate color;`
* `pub use bitmap::debug_bitmap;`
* `pub use bitmap::display_buffer_bitmap;`
* `pub use bitmap::copy_frame_buffer_bitmap;`
* `pub use bitmap::set_stencil;`
* `pub use bitmap::set_stencil_tiled;`
* `pub use bitmap::set_draw_mode;`
* `pub use bitmap::push_context;`
* `pub use bitmap::pop_context;`
Modules
---
* api: Global Playdate graphics API.
* bitmap
* error
* text: Playdate text API
* video: Playdate video API
Structs
---
* Graphics
Enums
---
* BitmapDrawMode
* BitmapFlip
* LineCapStyle
Traits
---
* BitmapDrawModeExt
* BitmapFlipExt
* LineCapStyleExt
Functions
---
* clear: Clears the entire display, filling it with `color`.
* clear_clip_rect: Clears the current clip rect.
* clear_raw: Clears the entire display, filling it with `color`.
* display: Manually flushes the current frame buffer out to the display.
This function is automatically called after each pass through the run loop,
so there shouldn’t be any need to call it yourself.
* draw_ellipse: Draw an ellipse stroked inside the rect.
* draw_line: Draws a line from `x1, y1` to `x2, y2` with a stroke width of `width`.
* draw_rect: Draws a `width` by `height` rect at `x, y`.
* fill_ellipse: Fills an ellipse inside the rectangle `x, y, width, height`.
* fill_polygon: Fills the polygon with vertices at the given coordinates
(an array of `2 * num_points` ints containing alternating x and y values)
using the given `color` and fill, or winding, `rule`.
* fill_rect: Draws a filled `width` by `height` rect at `x, y`.
* fill_triangle: Draws a filled triangle with points at `x1, y1`, `x2, y2`, and `x3, y3`.
* get_display_frame: Returns the raw bits in the display buffer, **the last completed frame**.
* get_frame: Returns the current display frame buffer.
Rows are 32-bit aligned, so the row stride is 52 bytes, with the extra 2 bytes per row ignored.
Bytes are MSB-ordered; i.e., the pixel in column 0 is the 0x80 bit of the first byte of the row.
* mark_updated_rows: After updating pixels in the buffer returned by `get_frame`,
you must tell the graphics system which rows were updated.
* set_background_color: Sets the background color shown when the display is offset or for clearing dirty areas in the sprite system.
* set_clip_rect: Sets the current clip rect, using **world** coordinates, that is,
the given rectangle will be translated by the current drawing offset.
* set_draw_offset: Offsets the origin point for all drawing calls to `x, y` (can be negative).
* set_line_cap_style: Sets the end cap style used in the line drawing functions.
* set_screen_clip_rect: Sets the current clip rect in **screen** coordinates.
Function playdate_graphics::bitmap::debug_bitmap
===
```
pub fn debug_bitmap() -> Result<Bitmap<Default, false>, ApiError>
```
Only valid in the Simulator,
returns the debug framebuffer as a bitmap.
Returns error on device.
This function is shorthand for `Graphics::debug_bitmap`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::getDebugBitmap`.
Function playdate_graphics::bitmap::display_buffer_bitmap
===
```
pub fn display_buffer_bitmap() -> Result<Bitmap<Default, false>, Error>
```
Returns a bitmap containing the contents of the display buffer.
**The system owns this bitmap—do not free it.**
This function is shorthand for `Graphics::display_buffer_bitmap`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::getDisplayBufferBitmap`.
Function playdate_graphics::bitmap::copy_frame_buffer_bitmap
===
```
pub fn copy_frame_buffer_bitmap() -> Result<Bitmap<Default, true>, Error>
```
Returns a copy the contents of the working frame buffer as a bitmap.
The caller is responsible for freeing the returned bitmap; it is freed automatically on drop.
This function is shorthand for `Graphics::frame_buffer_bitmap`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::copyFrameBufferBitmap`.
Function playdate_graphics::bitmap::set_stencil
===
```
pub fn set_stencil(image: &impl AnyBitmap)
```
Sets the stencil used for drawing.
For a tiled stencil, use `set_stencil_tiled` instead.
NOTE: Officially deprecated in favor of `set_stencil_tiled`, which adds a `tile` flag
This function is shorthand for `Graphics::set_stencil`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::setStencil`.
Function playdate_graphics::bitmap::set_stencil_tiled
===
```
pub fn set_stencil_tiled(image: &impl AnyBitmap, tile: bool)
```
Sets the stencil used for drawing.
If the `tile` is `true` the stencil image will be tiled.
Tiled stencils must have width equal to a multiple of 32 pixels.
This function is shorthand for `Graphics::set_stencil_tiled`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::setStencilImage`.
Function playdate_graphics::bitmap::set_draw_mode
===
```
pub fn set_draw_mode(mode: BitmapDrawMode)
```
Sets the mode used for drawing bitmaps.
Note that text drawing uses bitmaps, so this affects how fonts are displayed as well.
This function is shorthand for `Graphics::set_draw_mode`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::setDrawMode`.
Function playdate_graphics::bitmap::push_context
===
```
pub fn push_context(target: &impl AnyBitmap)
```
Push a new drawing context for drawing into the given bitmap.
If underlying ptr in the `target` is `null`, the drawing functions will use the display framebuffer.
This mostly should not happen; it is noted here only for completeness.
To clear entire context use `clear_context`.
This function is shorthand for `Graphics::push_context`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::pushContext`.
Function playdate_graphics::bitmap::pop_context
===
```
pub fn pop_context()
```
Pops a context off the stack (if any are left),
restoring the drawing settings from before the context was pushed.
This function is shorthand for `Graphics::pop_context`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::popContext`.
Module playdate_graphics::api
===
Global Playdate graphics API.
Structs
---
* Cache: Cached graphics api end-point.
* Default: Default graphics api end-point, ZST.
Traits
---
* Api
Module playdate_graphics::text
===
Playdate text API
Modules
---
* api
Structs
---
* Font: Playdate Font representation.
* FontPage: Playdate FontPage representation.
* Glyph: Playdate Glyph representation.
Enums
---
* StringEncoding
Traits
---
* StringEncodingExt
Functions
---
* draw_text: Draws the given `text` using the provided coords `x`, `y`.
* draw_text_cstr: Draws the given `text` using the provided options.
* get_font_height: Returns the height of the given `font`.
* get_font_page: Returns an `FontPage` object for the given character code `c`.
* get_glyph_kerning: Returns the kerning adjustment between characters `glyph_code` and `next_code` as specified by the font
* get_page_glyph: Returns an `Glyph` object for character `c` in `FontPage` page,
* get_page_glyph_with_bitmap: Returns an `Glyph` object for character `c` in `FontPage` page,
and optionally returns the glyph’s bitmap and `advance` value.
* get_text_width: Returns the width of the given `text` in the given `font`.
* get_text_width_cstr: Returns the width of the given `text` in the given `font`.
* load_font: Returns the `Font` object for the font file at `path`.
* make_font_from_bytes: ⚠️ Caution: This function is not tested.
* set_font: Sets the `font` to use in subsequent `draw_text` calls.
* set_text_leading: Sets the leading adjustment (added to the leading specified in the font) to use when drawing text.
* set_text_tracking: Sets the tracking to use when drawing text.
Module playdate_graphics::video
===
Playdate video API
Modules
---
* api
Structs
---
* Video
* VideoPlayer
* VideoPlayerOutInfo
Struct playdate_graphics::Graphics
===
```
pub struct Graphics<Api = Default>(/* private fields */);
```
Implementations
---
### impl<Api: Api> Graphics<Api#### pub fn draw_text<S: AsRef<str>>(
&self,
text: S,
x: c_int,
y: c_int
) -> Result<c_int, NulErrorDraws the given `text` using the provided coords `x`, `y`.
Encoding is always `StringEncoding::UTF8`.
If another encoding is desired, use `draw_text_cstr` instead.
If no `font` has been set with `set_font`,
the default system font `Asheville Sans 14 Light` is used.
Equivalent to `sys::ffi::playdate_graphics::drawText`.
#### pub fn draw_text_cstr(
&self,
text: &CStr,
encoding: StringEncoding,
x: c_int,
y: c_int
) -> c_int
Draws the given `text` using the provided options.
If no `font` has been set with `set_font`,
the default system font `Asheville Sans 14 Light` is used.
Same as `draw_text` but takes a [`sys::ffi::CStr`],
and is a little bit more efficient.
Equivalent to `sys::ffi::playdate_graphics::drawText`.
#### pub fn get_text_width<S: AsRef<str>>(
&self,
text: S,
font: Option<&Font>,
tracking: c_int
) -> Result<c_int, NulErrorReturns the width of the given `text` in the given `font`.
Equivalent to `sys::ffi::playdate_graphics::getTextWidth`.
#### pub fn get_text_width_cstr(
&self,
text: &CStr,
encoding: StringEncoding,
font: Option<&Font>,
tracking: c_int
) -> c_int
Returns the width of the given `text` in the given `font`.
Same as `get_text_width` but takes a [`sys::ffi::CStr`],
and is a little bit more efficient.
Equivalent to `sys::ffi::playdate_graphics::getTextWidth`.
#### pub fn get_font_height(&self, font: &Font) -> u8
Returns the height of the given `font`.
Equivalent to `sys::ffi::playdate_graphics::getFontHeight`.
#### pub fn set_font(&self, font: &Font)
Sets the `font` to use in subsequent `draw_text` calls.
Equivalent to `sys::ffi::playdate_graphics::setFont`.
#### pub fn get_glyph_kerning(
&self,
glyph: &Glyph,
glyph_code: u32,
next_code: u32
) -> c_int
Returns the kerning adjustment between characters `glyph_code` and `next_code` as specified by the font
Equivalent to `sys::ffi::playdate_graphics::getGlyphKerning`.
#### pub fn get_page_glyph(&self, page: &FontPage, c: u32) -> Result<Glyph, ErrorReturns an `Glyph` object for character `c` in `FontPage` page,
To also get the glyph’s bitmap and `advance` value use `get_page_glyph_with_bitmap` instead.
Equivalent to `sys::ffi::playdate_graphics::getPageGlyph`.
#### pub fn get_page_glyph_with_bitmap<'p>(
&self,
page: &'p FontPage,
c: u32,
advance: &mut c_int
) -> Result<(Glyph, BitmapRef<'p>), ErrorReturns an `Glyph` object for character `c` in `FontPage` page,
and optionally returns the glyph’s bitmap and `advance` value.
If bitmap is not needed, use `get_page_glyph` instead.
Equivalent to `sys::ffi::playdate_graphics::getPageGlyph`.
#### pub fn get_font_page(&self, font: &Font, c: u32) -> Result<FontPage, ErrorReturns an `FontPage` object for the given character code `c`.
Each `FontPage` contains information for 256 characters;
specifically, if `(c1 & ~0xff) == (c2 & ~0xff)`,
then `c1` and `c2` belong to the same page and the same `FontPage`
can be used to fetch the character data for both instead of searching for the page twice.
Equivalent to `sys::ffi::playdate_graphics::getFontPage`.
#### pub fn load_font<P: AsRef<Path>>(&self, path: P) -> Result<Font, ApiErrorReturns the `Font` object for the font file at `path`.
Equivalent to `sys::ffi::playdate_graphics::loadFont`.
#### pub fn make_font_from_bytes(
&self,
data: &[u8],
wide: c_int
) -> Result<Font, Error⚠️ Caution: This function is not tested.
Returns an `Font` object wrapping the LCDFontData data comprising the contents (minus 16-byte header) of an uncompressed pft file.
The `wide` corresponds to the flag in the header indicating whether the font contains glyphs at codepoints above `U+1FFFF`.
Equivalent to `sys::ffi::playdate_graphics::makeFontFromData`.
#### pub fn set_text_leading(&self, line_height_adjustment: c_int)
Sets the leading adjustment (added to the leading specified in the font) to use when drawing text.
Equivalent to `sys::ffi::playdate_graphics::setTextLeading`.
#### pub fn set_text_tracking(&self, tracking: c_int)
Sets the tracking to use when drawing text.
Equivalent to `sys::ffi::playdate_graphics::setTextTracking`.
### impl<Api: Api> Graphics<Api#### pub fn debug_bitmap(&self) -> Result<Bitmap<Default, false>, ApiErrorOnly valid in the Simulator,
returns the debug framebuffer as a bitmap.
Returns error on device.
Equivalent to `sys::ffi::playdate_graphics::getDebugBitmap`.
#### pub fn display_buffer_bitmap(&self) -> Result<Bitmap<Default, false>, ErrorReturns a bitmap containing the contents of the display buffer.
**The system owns this bitmap—do not free it.**
Equivalent to `sys::ffi::playdate_graphics::getDisplayBufferBitmap`.
#### pub fn frame_buffer_bitmap(&self) -> Result<Bitmap<Default, true>, ErrorReturns a **copy** the contents of the working frame buffer as a bitmap.
The caller is responsible for freeing the returned bitmap; it is freed automatically on drop.
Equivalent to `sys::ffi::playdate_graphics::copyFrameBufferBitmap`.
#### pub fn set_stencil_tiled(&self, image: &impl AnyBitmap, tile: bool)
Sets the stencil used for drawing.
If the `tile` is `true` the stencil image will be tiled.
Tiled stencils must have width equal to a multiple of 32 pixels.
Equivalent to `sys::ffi::playdate_graphics::setStencilImage`.
#### pub fn set_stencil(&self, image: &impl AnyBitmap)
Sets the stencil used for drawing.
For a tiled stencil, use `set_stencil_tiled` instead.
NOTE: Officially deprecated in favor of `set_stencil_tiled`, which adds a `tile` flag
Equivalent to `sys::ffi::playdate_graphics::setStencil`.
#### pub fn set_draw_mode(&self, mode: BitmapDrawMode)
Sets the mode used for drawing bitmaps.
Note that text drawing uses bitmaps, so this affects how fonts are displayed as well.
Equivalent to `sys::ffi::playdate_graphics::setDrawMode`.
#### pub fn push_context(&self, target: &impl AnyBitmap)
Push a new drawing context for drawing into the given bitmap.
If underlying ptr in the `target` is `null`, the drawing functions will use the display framebuffer.
This mostly should not happen; it is noted here only for completeness.
To clear entire context use `clear_context`.
Equivalent to `sys::ffi::playdate_graphics::pushContext`.
#### pub fn clear_context(&self)
Resets drawing context for drawing into the system display framebuffer.
So drawing functions will use the display framebuffer.
Equivalent to `sys::ffi::playdate_graphics::pushContext`.
#### pub fn pop_context(&self)
Pops a context off the stack (if any are left),
restoring the drawing settings from before the context was pushed.
Equivalent to `sys::ffi::playdate_graphics::popContext`.
### impl<Api: Api> Graphics<Api#### pub fn draw(
&self,
bitmap: &impl AnyBitmap,
x: c_int,
y: c_int,
flip: BitmapFlip
)
Draws `self` with its upper-left corner at location `x`, `y`,
using the given `flip` orientation.
Equivalent to `sys::ffi::playdate_graphics::drawBitmap`.
#### pub fn draw_tiled(
&self,
bitmap: &impl AnyBitmap,
x: c_int,
y: c_int,
width: c_int,
height: c_int,
flip: BitmapFlip
)
Draws `self` with its upper-left corner at location `x`, `y`
**tiled inside a `width` by `height` rectangle**.
Equivalent to `sys::ffi::playdate_graphics::tileBitmap`.
#### pub fn draw_rotated(
&self,
bitmap: &impl AnyBitmap,
x: c_int,
y: c_int,
degrees: c_float,
center_x: c_float,
center_y: c_float,
x_scale: c_float,
y_scale: c_float
)
Draws the *bitmap* scaled to `x_scale` and `y_scale`
then rotated by `degrees` with its center as given by proportions `center_x` and `center_y` at `x`, `y`;
that is:
* if `center_x` and `center_y` are both 0.5 the center of the image is at (`x`,`y`),
* if `center_x` and `center_y` are both 0 the top left corner of the image (before rotation) is at (`x`,`y`), etc.
Equivalent to `sys::ffi::playdate_graphics::drawRotatedBitmap`.
#### pub fn draw_scaled(
&self,
bitmap: &impl AnyBitmap,
x: c_int,
y: c_int,
x_scale: c_float,
y_scale: c_float
)
Draws this bitmap scaled to `x_scale` and `y_scale` with its upper-left corner at location `x`, `y`.
Note that flip is not available when drawing scaled bitmaps but negative scale values will achieve the same effect.
Equivalent to `sys::ffi::playdate_graphics::drawScaledBitmap`.
### impl<Api: Api> Graphics<Api#### pub fn video(&self) -> Video<CacheCreates a new `Video` instance with cached api.
Equivalent to `sys::ffi::playdate_graphics::video`
#### pub fn video_with<VideoApi: Api>(&self, api: VideoApi) -> Video<VideoApiCreates a new `Video` instance using given `api`.
Equivalent to `sys::ffi::playdate_graphics::video`
### impl Graphics<Default#### pub fn Default() -> Self
Creates default `Graphics` without type parameter requirement.
Uses ZST `api::Default`.
### impl Graphics<Cache#### pub fn Cached() -> Self
Creates `Graphics` without type parameter requirement.
Uses `api::Cache`.
### impl<Api: Default + Api> Graphics<Api#### pub fn new() -> Self
### impl<Api: Api> Graphics<Api#### pub fn new_with(api: Api) -> Self
### impl<Api: Api> Graphics<Api#### pub fn get_frame(&self) -> Result<&'static mut [u8], ApiErrorReturns the current display frame buffer.
Rows are 32-bit aligned, so the row stride is 52 bytes, with the extra 2 bytes per row ignored.
Bytes are MSB-ordered; i.e., the pixel in column 0 is the 0x80 bit of the first byte of the row.
Equivalent to `sys::ffi::playdate_graphics::getFrame`.
#### pub fn get_display_frame(&self) -> Result<&'static mut [u8], ApiErrorReturns the raw bits in the display buffer,
**the last completed frame**.
Equivalent to `sys::ffi::playdate_graphics::getDisplayFrame`.
#### pub fn mark_updated_rows(&self, start: c_int, end: c_int)
After updating pixels in the buffer returned by `get_frame`,
you must tell the graphics system which rows were updated.
This function marks a contiguous range of rows as updated
(e.g., `markUpdatedRows(0, LCD_ROWS-1)` tells the system to update the entire display).
Both `start` and `end` are **included** in the range.
Equivalent to `sys::ffi::playdate_graphics::markUpdatedRows`.
#### pub fn display(&self)
Manually flushes the current frame buffer out to the display.
This function is automatically called after each pass through the run loop,
so there shouldn’t be any need to call it yourself.
Equivalent to `sys::ffi::playdate_graphics::display`.
#### pub fn clear(&self, color: Color<'_>)
Clears the entire display, filling it with `color`.
Equivalent to `sys::ffi::playdate_graphics::clear`.
#### pub fn clear_raw(&self, color: LCDColor)
Clears the entire display, filling it with `color`.
Same as `clear`, but without conversion `Color` -> `LCDColor`.
That conversion is really cheap,
so this function is useful if you’re working with `LCDColor` directly.
Equivalent to `sys::ffi::playdate_graphics::clear`.
#### pub fn set_screen_clip_rect(
&self,
x: c_int,
y: c_int,
width: c_int,
height: c_int
)
Sets the current clip rect in **screen** coordinates.
Equivalent to `sys::ffi::playdate_graphics::setScreenClipRect`.
#### pub fn set_draw_offset(&self, dx: c_int, dy: c_int)
Offsets the origin point for all drawing calls to `x, y` (can be negative).
This is useful, for example, for centering a “camera”
on a sprite that is moving around a world larger than the screen.
Equivalent to `sys::ffi::playdate_graphics::setDrawOffset`.
#### pub fn set_clip_rect(&self, x: c_int, y: c_int, width: c_int, height: c_int)
Sets the current clip rect, using **world** coordinates that is,
the given rectangle will be translated by the current drawing offset.
The clip rect is cleared at the beginning of each update.
Equivalent to `sys::ffi::playdate_graphics::setClipRect`.
#### pub fn clear_clip_rect(&self)
Clears the current clip rect.
Equivalent to `sys::ffi::playdate_graphics::clearClipRect`.
#### pub fn set_background_color(&self, color: LCDSolidColor)
Sets the background color shown when the display is offset or for clearing dirty areas in the sprite system.
Equivalent to `sys::ffi::playdate_graphics::setBackgroundColor`.
#### pub fn fill_polygon(
&self,
num_points: c_int,
coords: &mut [c_int],
color: LCDColor,
rule: LCDPolygonFillRule
)
Fills the polygon with vertices at the given coordinates
(an array of `2 * num_points` ints containing alternating x and y values)
using the given `color` and fill, or winding, `rule`.
See wikipedia for an explanation of the winding rule.
Equivalent to `sys::ffi::playdate_graphics::fillPolygon`.
#### pub fn draw_line(
&self,
x1: c_int,
y1: c_int,
x2: c_int,
y2: c_int,
width: c_int,
color: LCDColor
)
Draws a line from `x1, y1` to `x2, y2` with a stroke width of `width`.
Equivalent to `sys::ffi::playdate_graphics::drawLine`.
#### pub fn fill_triangle(
&self,
x1: c_int,
y1: c_int,
x2: c_int,
y2: c_int,
x3: c_int,
y3: c_int,
color: LCDColor
)
Draws a filled triangle with points at `x1, y1`, `x2, y2`, and `x3, y3`.
Equivalent to `sys::ffi::playdate_graphics::fillTriangle`.
#### pub fn draw_rect(
&self,
x: c_int,
y: c_int,
width: c_int,
height: c_int,
color: LCDColor
)
Draws a `width` by `height` rect at `x, y`.
Equivalent to `sys::ffi::playdate_graphics::drawRect`.
#### pub fn fill_rect(
&self,
x: c_int,
y: c_int,
width: c_int,
height: c_int,
color: LCDColor
)
Draws a filled `width` by `height` rect at `x, y`.
Equivalent to `sys::ffi::playdate_graphics::fillRect`.
#### pub fn draw_ellipse(
&self,
x: c_int,
y: c_int,
width: c_int,
height: c_int,
line_width: c_int,
start_angle: c_float,
end_angle: c_float,
color: LCDColor
)
Draw an ellipse stroked inside the rect.
Draws an ellipse inside the rectangle `x, y, width, height` of width `line_width`
(inset from the rectangle bounds).
If `start_angle != end_angle`, this draws an arc between the given angles.
Angles are given in degrees, clockwise from due north.
Equivalent to `sys::ffi::playdate_graphics::drawEllipse`.
#### pub fn fill_ellipse(
&self,
x: c_int,
y: c_int,
width: c_int,
height: c_int,
start_angle: c_float,
end_angle: c_float,
color: LCDColor
)
Fills an ellipse inside the rectangle `x, y, width, height`.
If `start_angle != end_angle`, this draws a wedge/Pacman between the given angles.
Angles are given in degrees, clockwise from due north.
Equivalent to `sys::ffi::playdate_graphics::fillEllipse`.
#### pub fn set_line_cap_style(&self, end_cap_style: LineCapStyle)
Sets the end cap style used in the line drawing functions.
Equivalent to `sys::ffi::playdate_graphics::setLineCapStyle`.
Trait Implementations
---
### impl<Api: Clone> Clone for Graphics<Api#### fn clone(&self) -> Graphics<ApiReturns a copy of the value. Read more1.0.0#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Formats the value using the given formatter.
Returns the “default value” for a type.
---
### impl<Api> RefUnwindSafe for Graphics<Api>where
Api: RefUnwindSafe,
### impl<Api> Send for Graphics<Api>where
Api: Send,
### impl<Api> Sync for Graphics<Api>where
Api: Sync,
### impl<Api> Unpin for Graphics<Api>where
Api: Unpin,
### impl<Api> UnwindSafe for Graphics<Api>where
Api: UnwindSafe,
Enum playdate_graphics::BitmapDrawMode
===
```
#[repr(u8)]
pub enum BitmapDrawMode {
kDrawModeCopy,
kDrawModeWhiteTransparent,
kDrawModeBlackTransparent,
kDrawModeFillWhite,
kDrawModeFillBlack,
kDrawModeXOR,
kDrawModeNXOR,
kDrawModeInverted,
}
```
Variants
---
### kDrawModeCopy
### kDrawModeWhiteTransparent
### kDrawModeBlackTransparent
### kDrawModeFillWhite
### kDrawModeFillBlack
### kDrawModeXOR
### kDrawModeNXOR
### kDrawModeInverted
Trait Implementations
---
### impl BitmapDrawModeExt for BitmapDrawMode
#### const Copy: BitmapDrawMode = BitmapDrawMode::kDrawModeCopy
#### const WhiteTransparent: BitmapDrawMode = BitmapDrawMode::kDrawModeWhiteTransparent
#### const BlackTransparent: BitmapDrawMode = BitmapDrawMode::kDrawModeBlackTransparent
#### const FillWhite: BitmapDrawMode = BitmapDrawMode::kDrawModeFillWhite
#### const FillBlack: BitmapDrawMode = BitmapDrawMode::kDrawModeFillBlack
#### const XOR: BitmapDrawMode = BitmapDrawMode::kDrawModeXOR
#### const NXOR: BitmapDrawMode = BitmapDrawMode::kDrawModeNXOR
#### const Inverted: BitmapDrawMode = BitmapDrawMode::kDrawModeInverted
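These extension constants are simply shorter aliases for the `kDrawMode*` variants. A minimal sketch, assuming `BitmapDrawMode` and the `BitmapDrawModeExt` trait can be imported from the crate root (the exact paths are an assumption):
```
use playdate_graphics::{BitmapDrawMode, BitmapDrawModeExt};

// The extension constant and the raw variant name the same value,
// so either spelling can be passed wherever a draw mode is expected.
fn pick_mode() -> BitmapDrawMode {
    debug_assert_eq!(BitmapDrawMode::XOR, BitmapDrawMode::kDrawModeXOR);
    BitmapDrawMode::XOR
}
```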
### impl Clone for LCDBitmapDrawMode
#### fn clone(&self) -> LCDBitmapDrawMode
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for LCDBitmapDrawMode
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Hash for LCDBitmapDrawMode
#### fn hash<__H>(&self, state: &mut __H) where __H: Hasher
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized
Feeds a slice of this type into the given `Hasher`.
### impl Ord for LCDBitmapDrawMode
#### fn cmp(&self, other: &LCDBitmapDrawMode) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Self where Self: Sized
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Self where Self: Sized
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>
Restrict a value to a certain interval.
### impl PartialEq<LCDBitmapDrawMode> for LCDBitmapDrawMode
#### fn eq(&self, other: &LCDBitmapDrawMode) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<LCDBitmapDrawMode> for LCDBitmapDrawMode
#### fn partial_cmp(&self, other: &LCDBitmapDrawMode) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl Eq for LCDBitmapDrawMode
### impl StructuralEq for LCDBitmapDrawMode
### impl StructuralPartialEq for LCDBitmapDrawMode
Auto Trait Implementations
---
### impl RefUnwindSafe for LCDBitmapDrawMode
### impl Send for LCDBitmapDrawMode
### impl Sync for LCDBitmapDrawMode
### impl Unpin for LCDBitmapDrawMode
### impl UnwindSafe for LCDBitmapDrawMode
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut Self::Owned)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Layout
---
**Note:** Most layout information is **completely unstable** and may even differ between compilations. The only exception is types with certain `repr(...)` attributes. Please see the Rust Reference’s “Type Layout” chapter for details on type layout guarantees.
**Size:**1 byte
**Size for each variant:**
* `kDrawModeCopy`: 0 bytes
* `kDrawModeWhiteTransparent`: 0 bytes
* `kDrawModeBlackTransparent`: 0 bytes
* `kDrawModeFillWhite`: 0 bytes
* `kDrawModeFillBlack`: 0 bytes
* `kDrawModeXOR`: 0 bytes
* `kDrawModeNXOR`: 0 bytes
* `kDrawModeInverted`: 0 bytes
Enum playdate_graphics::BitmapFlip
===
```
#[repr(u8)]
pub enum BitmapFlip {
kBitmapUnflipped,
kBitmapFlippedX,
kBitmapFlippedY,
kBitmapFlippedXY,
}
```
Variants
---
### kBitmapUnflipped
### kBitmapFlippedX
### kBitmapFlippedY
### kBitmapFlippedXY
Trait Implementations
---
### impl BitmapFlipExt for BitmapFlip
#### const Unflipped: BitmapFlip = BitmapFlip::kBitmapUnflipped
#### const FlippedX: BitmapFlip = BitmapFlip::kBitmapFlippedX
#### const FlippedY: BitmapFlip = BitmapFlip::kBitmapFlippedY
#### const FlippedXY: BitmapFlip = BitmapFlip::kBitmapFlippedXY
### impl Clone for LCDBitmapFlip
#### fn clone(&self) -> LCDBitmapFlip
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for LCDBitmapFlip
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Hash for LCDBitmapFlip
#### fn hash<__H>(&self, state: &mut __H) where __H: Hasher
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized
Feeds a slice of this type into the given `Hasher`.
### impl Ord for LCDBitmapFlip
#### fn cmp(&self, other: &LCDBitmapFlip) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Self where Self: Sized
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Self where Self: Sized
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>
Restrict a value to a certain interval.
### impl PartialEq<LCDBitmapFlip> for LCDBitmapFlip
#### fn eq(&self, other: &LCDBitmapFlip) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<LCDBitmapFlip> for LCDBitmapFlip
#### fn partial_cmp(&self, other: &LCDBitmapFlip) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl Eq for LCDBitmapFlip
### impl StructuralEq for LCDBitmapFlip
### impl StructuralPartialEq for LCDBitmapFlip
Auto Trait Implementations
---
### impl RefUnwindSafe for LCDBitmapFlip
### impl Send for LCDBitmapFlip
### impl Sync for LCDBitmapFlip
### impl Unpin for LCDBitmapFlip
### impl UnwindSafe for LCDBitmapFlip
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut Self::Owned)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Layout
---
**Note:** Most layout information is **completely unstable** and may even differ between compilations. The only exception is types with certain `repr(...)` attributes. Please see the Rust Reference’s “Type Layout” chapter for details on type layout guarantees.
**Size:**1 byte
**Size for each variant:**
* `kBitmapUnflipped`: 0 bytes
* `kBitmapFlippedX`: 0 bytes
* `kBitmapFlippedY`: 0 bytes
* `kBitmapFlippedXY`: 0 bytes
Enum playdate_graphics::LineCapStyle
===
```
#[repr(u8)]
pub enum LineCapStyle {
kLineCapStyleButt,
kLineCapStyleSquare,
kLineCapStyleRound,
}
```
Variants
---
### kLineCapStyleButt
### kLineCapStyleSquare
### kLineCapStyleRound
Trait Implementations
---
### impl Clone for LCDLineCapStyle
#### fn clone(&self) -> LCDLineCapStyle
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for LCDLineCapStyle
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Hash for LCDLineCapStyle
#### fn hash<__H>(&self, state: &mut __H) where __H: Hasher
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized
Feeds a slice of this type into the given `Hasher`.
### impl LineCapStyleExt for LineCapStyle
#### const Butt: LineCapStyle = LineCapStyle::kLineCapStyleButt
#### const Square: LineCapStyle = LineCapStyle::kLineCapStyleSquare
#### const Round: LineCapStyle = LineCapStyle::kLineCapStyleRound
### impl Ord for LCDLineCapStyle
#### fn cmp(&self, other: &LCDLineCapStyle) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Self where Self: Sized
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Self where Self: Sized
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>
Restrict a value to a certain interval.
### impl PartialEq<LCDLineCapStyle> for LCDLineCapStyle
#### fn eq(&self, other: &LCDLineCapStyle) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<LCDLineCapStyle> for LCDLineCapStyle
#### fn partial_cmp(&self, other: &LCDLineCapStyle) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl Eq for LCDLineCapStyle
### impl StructuralEq for LCDLineCapStyle
### impl StructuralPartialEq for LCDLineCapStyle
Auto Trait Implementations
---
### impl RefUnwindSafe for LCDLineCapStyle
### impl Send for LCDLineCapStyle
### impl Sync for LCDLineCapStyle
### impl Unpin for LCDLineCapStyle
### impl UnwindSafe for LCDLineCapStyle
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut Self::Owned)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Layout
---
**Note:** Most layout information is **completely unstable** and may even differ between compilations. The only exception is types with certain `repr(...)` attributes. Please see the Rust Reference’s “Type Layout” chapter for details on type layout guarantees.
**Size:**1 byte
**Size for each variant:**
* `kLineCapStyleButt`: 0 bytes
* `kLineCapStyleSquare`: 0 bytes
* `kLineCapStyleRound`: 0 bytes
Function playdate_graphics::clear
===
```
pub fn clear(color: Color<'_>)
```
Clears the entire display, filling it with `color`.
This function is shorthand for `Graphics::clear`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::clear`.
Function playdate_graphics::clear_clip_rect
===
```
pub fn clear_clip_rect()
```
Clears the current clip rect.
This function is shorthand for `Graphics::clear_clip_rect`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::clearClipRect`.
Function playdate_graphics::clear_raw
===
```
pub fn clear_raw(color: LCDColor)
```
Clears the entire display, filling it with `color`.
Same as `clear`, but without conversion `Color` -> `LCDColor`.
That conversion is really cheap,
so this function is useful if you’re working with `LCDColor` directly.
This function is shorthand for `Graphics::clear_raw`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::clear`.
Function playdate_graphics::display
===
```
pub fn display()
```
Manually flushes the current frame buffer out to the display.
This function is automatically called after each pass through the run loop,
so there shouldn’t be any need to call it yourself.
This function is shorthand for `Graphics::display`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::display`.
Function playdate_graphics::draw_ellipse
===
```
pub fn draw_ellipse(
x: c_int,
y: c_int,
width: c_int,
height: c_int,
line_width: c_int,
start_angle: c_float,
end_angle: c_float,
color: LCDColor
)
```
Draw an ellipse stroked inside the rect.
Draws an ellipse inside the rectangle `x, y, width, height` of width `line_width`
(inset from the rectangle bounds).
If `start_angle != end_angle`, this draws an arc between the given angles.
Angles are given in degrees, clockwise from due north.
This function is shorthand for `Graphics::draw_ellipse`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::drawEllipse`.
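For instance, a small sketch of drawing a quarter arc with this function; the color is supplied by the caller and the import path for `LCDColor` is an assumption:
```
use playdate_graphics::{draw_ellipse, LCDColor};

// Draw the north-to-east quarter of a circle as a 2-pixel-wide arc.
// Angles are in degrees, clockwise from due north, so 0.0..90.0 covers
// the top-right quadrant of the 100x100 bounding rect at (50, 50).
fn draw_quarter_arc(color: LCDColor) {
    draw_ellipse(50, 50, 100, 100, 2, 0.0, 90.0, color);
}
```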
Function playdate_graphics::draw_line
===
```
pub fn draw_line(
x1: c_int,
y1: c_int,
x2: c_int,
y2: c_int,
width: c_int,
color: LCDColor
)
```
Draws a line from `x1, y1` to `x2, y2` with a stroke width of `width`.
This function is shorthand for `Graphics::draw_line`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::drawLine`.
Function playdate_graphics::draw_rect
===
```
pub fn draw_rect(
x: c_int,
y: c_int,
width: c_int,
height: c_int,
color: LCDColor
)
```
Draws a `width` by `height` rect at `x, y`.
This function is shorthand for `Graphics::draw_rect`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::drawRect`.
Function playdate_graphics::fill_ellipse
===
```
pub fn fill_ellipse(
x: c_int,
y: c_int,
width: c_int,
height: c_int,
start_angle: c_float,
end_angle: c_float,
color: LCDColor
)
```
Fills an ellipse inside the rectangle `x, y, width, height`.
If `start_angle != end_angle`, this draws a wedge/Pacman between the given angles.
Angles are given in degrees, clockwise from due north.
This function is shorthand for `Graphics::fill_ellipse`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::fillEllipse`.
Function playdate_graphics::fill_polygon
===
```
pub fn fill_polygon(
num_points: c_int,
coords: &mut [c_int],
color: LCDColor,
rule: LCDPolygonFillRule
)
```
Fills the polygon with vertices at the given coordinates
(an array of `2 * num_points` ints containing alternating x and y values)
using the given `color` and fill, or winding, `rule`.
See Wikipedia for an explanation of the winding rule.
This function is shorthand for `Graphics::fill_polygon`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::fillPolygon`.
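A minimal sketch of how the `coords` slice is laid out, with the color and fill rule supplied by the caller; the import paths for `LCDColor` and `LCDPolygonFillRule` are assumptions:
```
use core::ffi::c_int;
use playdate_graphics::{fill_polygon, LCDColor, LCDPolygonFillRule};

// Fill a quadrilateral: `coords` holds alternating x and y values,
// so 4 points take 2 * 4 = 8 ints.
fn fill_quad(color: LCDColor, rule: LCDPolygonFillRule) {
    let mut coords: [c_int; 8] = [
        10, 10,  // (x0, y0)
        100, 20, // (x1, y1)
        90, 80,  // (x2, y2)
        20, 70,  // (x3, y3)
    ];
    let num_points = (coords.len() / 2) as c_int;
    fill_polygon(num_points, &mut coords, color, rule);
}
```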
Function playdate_graphics::fill_rect
===
```
pub fn fill_rect(
x: c_int,
y: c_int,
width: c_int,
height: c_int,
color: LCDColor
)
```
Draws a filled `width` by `height` rect at `x, y`.
This function is shorthand for `Graphics::fill_rect`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::fillRect`.
Function playdate_graphics::fill_triangle
===
```
pub fn fill_triangle(
x1: c_int,
y1: c_int,
x2: c_int,
y2: c_int,
x3: c_int,
y3: c_int,
color: LCDColor
)
```
Draws a filled triangle with points at `x1, y1`, `x2, y2`, and `x3, y3`.
This function is shorthand for `Graphics::fill_triangle`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::fillTriangle`.
Function playdate_graphics::get_display_frame
===
```
pub fn get_display_frame() -> Result<&'static mut [u8], ApiError>
```
Returns the raw bits in the display buffer,
**the last completed frame**.
This function is shorthand for `Graphics::get_display_frame`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::getDisplayFrame`.
Function playdate_graphics::get_frame
===
```
pub fn get_frame() -> Result<&'static mut [u8], ApiError>
```
Returns the current display frame buffer.
Rows are 32-bit aligned, so the row stride is 52 bytes, with the extra 2 bytes per row ignored.
Bytes are MSB-ordered; i.e., the pixel in column 0 is the 0x80 bit of the first byte of the row.
This function is shorthand for `Graphics::get_frame`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::getFrame`.
Function playdate_graphics::mark_updated_rows
===
```
pub fn mark_updated_rows(start: c_int, end: c_int)
```
After updating pixels in the buffer returned by `get_frame`,
you must tell the graphics system which rows were updated.
This function marks a contiguous range of rows as updated
(e.g., `markUpdatedRows(0, LCD_ROWS-1)` tells the system to update the entire display).
Both `start` and `end` are **included** in the range.
This function is shorthand for `Graphics::mark_updated_rows`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::markUpdatedRows`.
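To make the stride and bit ordering concrete, here is a sketch that sets a single pixel directly in the frame buffer and marks its row as updated. The 52-byte stride and MSB-first bit order come from the `get_frame` description above; the import paths, and the assumption that a cleared bit renders as black on the 1-bit display, are not confirmed by this page:
```
use core::ffi::c_int;
use playdate_graphics::{get_frame, mark_updated_rows};

// Set the pixel at (x, y) to black by clearing its bit, then mark its row.
// Each row is 52 bytes (400 px = 50 bytes, padded to a 32-bit boundary),
// and the 0x80 bit of a byte is the leftmost of its 8 pixels.
fn set_pixel_black(x: usize, y: usize) {
    if let Ok(frame) = get_frame() {
        let byte_index = y * 52 + x / 8;
        let mask = 0x80u8 >> (x % 8);
        frame[byte_index] &= !mask; // assumed: cleared bit = black pixel
        mark_updated_rows(y as c_int, y as c_int); // both ends are inclusive
    }
}
```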
Function playdate_graphics::set_background_color
===
```
pub fn set_background_color(color: LCDSolidColor)
```
Sets the background color shown when the display is offset or for clearing dirty areas in the sprite system.
This function is shorthand for `Graphics::set_background_color`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::setBackgroundColor`.
Function playdate_graphics::set_clip_rect
===
```
pub fn set_clip_rect(x: c_int, y: c_int, width: c_int, height: c_int)
```
Sets the current clip rect, using **world** coordinates; that is,
the given rectangle will be translated by the current drawing offset.
The clip rect is cleared at the beginning of each update.
This function is shorthand for `Graphics::set_clip_rect`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::setClipRect`.
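A short sketch of the usual pattern, clip, draw, then clear the clip so later drawing is unaffected; the color is supplied by the caller and the import paths are assumptions:
```
use playdate_graphics::{clear_clip_rect, fill_rect, set_clip_rect, LCDColor};

// Only the part of the big rect that overlaps the 100x60 clip window
// at (20, 20) is actually drawn.
fn draw_clipped(color: LCDColor) {
    set_clip_rect(20, 20, 100, 60);
    fill_rect(0, 0, 400, 240, color);
    clear_clip_rect();
}
```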
Function playdate_graphics::set_draw_offset
===
```
pub fn set_draw_offset(dx: c_int, dy: c_int)
```
Offsets the origin point for all drawing calls to `x, y` (can be negative).
This is useful, for example, for centering a “camera”
on a sprite that is moving around a world larger than the screen.
This function is shorthand for `Graphics::set_draw_offset`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::setDrawOffset`.
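As a concrete example of the camera idea, a sketch that keeps a sprite's world position centered on the 400x240 display:
```
use core::ffi::c_int;
use playdate_graphics::set_draw_offset;

// Offset all subsequent drawing so that the sprite's world coordinates
// land in the middle of the 400x240 screen.
fn center_camera_on(sprite_x: c_int, sprite_y: c_int) {
    set_draw_offset(400 / 2 - sprite_x, 240 / 2 - sprite_y);
}
```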
Function playdate_graphics::set_line_cap_style
===
```
pub fn set_line_cap_style(end_cap_style: LineCapStyle)
```
Sets the end cap style used in the line drawing functions.
This function is shorthand for `Graphics::set_line_cap_style`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::setLineCapStyle`.
Function playdate_graphics::set_screen_clip_rect
===
```
pub fn set_screen_clip_rect(x: c_int, y: c_int, width: c_int, height: c_int)
```
Sets the current clip rect in **screen** coordinates.
This function is shorthand for `Graphics::set_screen_clip_rect`,
using default ZST end-point.
Equivalent to `sys::ffi::playdate_graphics::setScreenClipRect`.
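The difference from `set_clip_rect` is only the coordinate space, as this sketch tries to illustrate (import paths are assumptions):
```
use playdate_graphics::{set_clip_rect, set_draw_offset, set_screen_clip_rect};

fn world_clip() {
    set_draw_offset(-100, 0);
    // World coordinates: the rect is translated by the offset,
    // so on the physical screen it clips starting at x = 20.
    set_clip_rect(120, 20, 80, 80);
}

fn screen_clip() {
    set_draw_offset(-100, 0);
    // Screen coordinates: the offset is ignored, the clip stays at x = 120.
    set_screen_clip_rect(120, 20, 80, 80);
}
```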
greenteapress_com_wp_think-complexity-2e | free_programming_book | Unknown | Date: 2018-09-01
by <NAME>
Download the second edition in PDF.
Read the second edition online.
All code from the book is in Jupyter notebooks you can run on Colab. This page has links to the notebooks.
The text and supporting code for this book is in this GitHub repository.
The first edition is still available here.
### Description
Complexity Science is an interdisciplinary field, at the intersection of mathematics, computer science, and natural science, that focuses on discrete models of physical and social systems. In particular, it focuses on complex systems, which are systems with many interacting components.
Complex systems include networks and graphs, cellular automatons, agent-based models and swarms, fractals and self-organizing systems, chaotic systems and cybernetic systems.
This book is primarily about complexity science, but studying complexity science gives you a chance to explore topics and ideas you might not encounter otherwise, practice programming in Python, and learn about data structures and algorithms.
This book picks up where Think Python leaves off. I assume that you have read that book or have equivalent knowledge of Python. As always, I try to emphasize fundamental ideas that apply to programming in many languages, but along the way you will learn useful features that are specific to Python.
The models and results in this book raise a number of questions relevant to the philosophy of science, including the nature of scientific laws, theory choice, realism and instrumentalism, holism and reductionism, and Bayesian epistemology.
Think Complexity is a free book, available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)
Rlabkey | cran | R | Package ‘Rlabkey’
August 18, 2023
Version 2.12.0
Date 2023-08-18
Title Data Exchange Between R and 'LabKey' Server
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description The 'LabKey' client library for R makes it easy for R users to
load live data from a 'LabKey' Server, <https://www.labkey.com/>,
into the R environment for analysis, provided users have permissions
to read the data. It also enables R users to insert, update, and
delete records stored on a 'LabKey' Server, provided they have appropriate
permissions to do so.
License Apache License 2.0
Copyright Copyright (c) 2010-2018 LabKey Corporation
LazyLoad true
Depends httr, jsonlite
LinkingTo Rcpp
Imports Rcpp (>= 0.11.0)
NeedsCompilation yes
Repository CRAN
Date/Publication 2023-08-18 21:22:32 UTC
R topics documented:
Rlabkey-packag... 3
getFolderPat... 6
getLookup... 7
getRow... 8
getSchem... 10
getSessio... 11
labkey.acceptSelfSignedCert... 13
labkey.curlOption... 13
labkey.deleteRow... 13
labkey.domain.creat... 16
labkey.domain.createAndLoad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
labkey.domain.createConditionalFormat . . . . . . . . . . . . . . . . . . . . . . . . . . 21
labkey.domain.createConditionalFormatQueryFilter . . . . . . . . . . . . . . . . . . . . 22
labkey.domain.createDesign . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
labkey.domain.createIndices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
labkey.domain.dro... 26
labkey.domain.FILTER_TYPES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
labkey.domain.ge... 28
labkey.domain.inferField... 29
labkey.domain.sav... 30
labkey.executeSq... 31
labkey.experiment.createData . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
labkey.experiment.createMaterial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
labkey.experiment.createRun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
labkey.experiment.SAMPLE_DERIVATION_PROTOCOL . . . . . . . . . . . . . . . . 37
labkey.experiment.saveBatch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
labkey.experiment.saveRuns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
labkey.getBaseUr... 41
labkey.getDefaultViewDetails . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
labkey.getFolder... 43
labkey.getLookupDetail... 45
labkey.getModulePropert... 46
labkey.getQuerie... 47
labkey.getQueryDetail... 48
labkey.getQueryView... 51
labkey.getRequestOption... 52
labkey.getSchema... 53
labkey.importRow... 54
labkey.insertRow... 55
labkey.makeRemotePat... 58
labkey.pipeline.getFileStatus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
labkey.pipeline.getPipelineContainer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
labkey.pipeline.getProtocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
labkey.pipeline.startAnalysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
labkey.provenance.addRecordingStep . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
labkey.provenance.createProvenanceParams . . . . . . . . . . . . . . . . . . . . . . . . 66
labkey.provenance.startRecording . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
labkey.provenance.stopRecording . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
labkey.query.impor... 71
labkey.rstudio.initRepor... 73
labkey.rstudio.initRStudi... 74
labkey.rstudio.initSessio... 74
labkey.rstudio.isInitialize... 75
labkey.rstudio.saveRepor... 76
labkey.saveBatc... 76
labkey.security.createContainer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
labkey.security.deleteContaine... 79
labkey.security.getContainer... 80
labkey.security.impersonateUse... 82
labkey.security.moveContaine... 83
labkey.security.renameContaine... 84
labkey.security.stopImpersonatin... 86
labkey.selectRow... 87
labkey.setCurlOption... 90
labkey.setDebugMod... 91
labkey.setDefault... 92
labkey.setModulePropert... 94
labkey.storage.creat... 94
labkey.storage.delet... 96
labkey.storage.updat... 98
labkey.transform.getRunPropertyValu... 99
labkey.transform.readRunPropertiesFil... 100
labkey.truncateTabl... 100
labkey.updateRow... 101
labkey.webdav.delet... 104
labkey.webdav.downloadFolde... 105
labkey.webdav.ge... 106
labkey.webdav.listDi... 108
labkey.webdav.mkDi... 109
labkey.webdav.mkDir... 111
labkey.webdav.pathExist... 112
labkey.webdav.pu... 113
labkey.whoAm... 115
lsFolder... 116
lsProject... 117
lsSchema... 118
makeFilte... 119
saveResult... 121
Rlabkey-package Exchange data between LabKey Server and R
Description
This package allows the transfer of data between a LabKey Server and an R session. Data can
be retrieved from LabKey into a data frame in R by specifying the query schema information
(labkey.selectRows and getRows) or by using sql commands (labkey.executeSql). From an R
session, existing data can be updated (labkey.updateRows), new data can be inserted (labkey.insertRows
and labkey.importRows) or data can be deleted from the LabKey database (labkey.deleteRows).
Interactive R users can discover available data via schema objects (labkey.getSchema).
4 Rlabkey-package
Details
Package: Rlabkey
Type: Package
Version: 2.12.0
Date: 2023-08-18
License: Apache License 2.0
LazyLoad: yes
The user must have appropriate authorization on the LabKey Server in order to access or modify
data using these functions. All access to secure content on LabKey Server requires authentication
via an api key (see labkey.setDefaults for more details) or a properly configured netrc file that
includes the user’s login information.
The netrc file is a standard mechanism for conveying configuration and autologin information to the
File Transfer Protocol client (ftp) and other programs such as CURL. On a Linux or Mac system
this file should be named .netrc (dot netrc) and on Windows it should be named _netrc (underscore
netrc). The file should be located in the user’s home directory, and its permissions should be set so that it is unreadable by everybody except the owner.
To create the _netrc on a Windows machine, first create an environment variable called ’HOME’
set to your home directory (e.g., c:/Users/<User-Name> on recent versions of Windows) or any
directory you want to use. In that directory, create a text file named _netrc (note that it’s underscore
netrc, not dot netrc like it is on Linux/Mac).
The following three lines must be included in the .netrc or _netrc file either separated by white space
(spaces, tabs, or newlines) or commas.
machine <remote-machine-name>
login <user-email>
password <user-password>
One example would be:
machine localhost
login <EMAIL>
password mypassword
Another example would be:
machine atlas.scharp.org login <EMAIL> password mypassword
Multiple such blocks can exist in one file.
Author(s)
<NAME>
References
http://www.omegahat.net/RCurl/,
https://www.labkey.org/project/home/begin.view
See Also
labkey.selectRows, labkey.executeSql, makeFilter, labkey.insertRows, labkey.importRows,
labkey.updateRows, labkey.deleteRows
getFolderPath Returns the folder path associated with a session
Description
Returns the current folder path for a LabKey session
Usage
getFolderPath(session)
Arguments
session the session key returned from getSession
Details
Returns a string containing the current folder path for the passed in LabKey session
Value
A character array containing the folder path, relative to the root.
Author(s)
<NAME>
References
https://www.labkey.org/Documentation/wiki-page.view?name=projects
See Also
getSession lsFolders
Examples
## Not run:
# library(Rlabkey)
lks<- getSession("https://www.labkey.org", "/home")
getFolderPath(lks) #returns "/home"
## End(Not run)
getLookups Get related data fields that are available to include in a query on a
given query object
Description
Retrieve a related query object referenced by a lookup column in the current query
Usage
getLookups(session, lookupField)
Arguments
session the session key returned from getSession
lookupField an object representing a lookup field on LabKey Server, a named member of a
query object.
Details
Lookup fields in LabKey Server are the equivalent of declared foreign keys
Value
A query object representing the related data set. The fields of a lookup query object are usually
added to the colSelect parameter in getRows. If a lookup query object is used as the query parameter
in getRows, the call will return all of the base query columns and all of the lookup query columns.
A lookup query object is very similar to the base query objects that are named elements of a schema
object. A lookup query object, however, does not have a parent schema object; it is only returned
by getLookups. Also, the field names in a lookup query object are compound names relative to the
base query object used in getLookups.
Author(s)
<NAME>
References
https://www.labkey.org/Documentation/wiki-page.view?name=propertyFields
See Also
getSession, getRows getSchema
Examples
## Not run:
## get fields from lookup tables and add to query
# library(Rlabkey)
s<- getSession(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples")
scobj <- getSchema(s, "lists")
# can add fields from related queries
lucols <- getLookups(s, scobj$AllTypes$Category)
# keep going to other tables
lucols2 <- getLookups(s, lucols[["Category/Group"]])
cols <- c(names(scobj$AllTypes)[2:6], names(lucols)[2:4])
getRows(s, scobj$AllTypes, colSelect=paste(cols, sep=","))
## End(Not run)
getRows Retrieve data from LabKey Server
Description
Retrive rows from a LabKey Server given a session and query object
Usage
getRows(session, query, maxRows=NULL, colNameOpt='fieldname', ...)
Arguments
session the session key returned from getSession
query an object representing a query on LabKey Server, a child object of the object
returned by getSchema()
maxRows (optional) an integer specifying how many rows of data to return. If no value is
specified, all rows are returned.
colNameOpt (optional) controls the name source for the columns of the output dataframe,
with valid values of ’caption’, ’fieldname’, and ’rname’
... Any of the remaining options to labkey.selectRows
Details
This function works as a shortcut wrapper to labkey.selectRows. All of the arguments are the
same as documented in labkey.selectRows.
See labkey.selectRows for a discussion of the valid options and defaults for colNameOpt. Note
in particular that with getRows the default is ’fieldname’ instead of ’caption’.
Value
A data frame containing the query results corresponding to the default view of the specified query.
Author(s)
<NAME>
See Also
getSession, getSchema, getLookups, saveResults labkey.selectRows
Examples
## Not run:
## simple example of getting data using schema objects
# library(Rlabkey)
s<-getSession(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples")
s # shows schemas
scobj <- getSchema(s, "lists")
scobj # shows available queries
scobj$AllTypes ## this is the query object
getRows(s, scobj$AllTypes)
## End(Not run)
getSchema Returns an object representing a LabKey schema
Description
Creates and returns an object representing a LabKey schema, containing child objects representing
LabKey queries
Usage
getSchema(session, schemaIndex)
Arguments
session the session key returned from getSession
schemaIndex the name of the schema that contains the table on which you want to base a
query, or the number of that schema as displayed by print(session)
Details
Creates and returns an object representing a LabKey schema, containing child objects representing
LabKey queries. This compound object is created by calling labkey.getQueries on the requested
schema and labkey.getQueryDetails on each returned query. The information returned in the schema
objects is essentially the same as the schema and query objects shown in the Schema Browser on
LabKey Server.
Value
an object representing the schema. The named elements of a schema are the queries within that
schema.
Author(s)
<NAME>
References
https://www.labkey.org/Documentation/wiki-page.view?name=querySchemaBrowser
See Also
getSession
Examples
## Not run:
## the basics of using session, schema, and query objects
# library(Rlabkey)
s<- getSession(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples")
sch<- getSchema(s, "lists")
# can walk down the populated schema tree from schema node or query node
sch$AllTypes$Category
sch$AllTypes$Category$caption
sch$AllTypes$Category$type
# can add fields from related queries
lucols <- getLookups(s, sch$AllTypes$Category)
cols <- c(names(sch$AllTypes[2:6]), names(lucols)[2:4])
getRows(s, sch$AllTypes, colSelect=cols)
## End(Not run)
getSession Creates and returns a LabKey Server session
Description
The session object holds server address and folder context for a user working with LabKey Server.
The session-based model supports more efficient interactive use of LabKey Server from R.
Usage
getSession(baseUrl, folderPath="/home",
curlOptions=NULL, lkOptions=NULL)
Arguments
baseUrl a string specifying the address of the LabKey Server, including the context root
folderPath a string specifying the hierarchy of folders to the current folder (container) for
the operation, starting with the project folder
curlOptions (optional) a list of curlOptions to be set on connections to the LabKey Server,
see details
lkOptions (optional) a list of settings for default behavior on naming of objects, see details
Details
Creates a session key that is passed to all the other session-based functions. Associated with the
key are a baseUrl and a folderPath which determine the security context.
curlOptions
The curlOptions parameter gives a mechanism to pass control options down to the RCurl library
used by Rlabkey. This can be very useful for debugging problems or setting proxy server properties.
See example for debugging.
lkOptions
The lkOptions parameter gives a mechanism to change default behavior in the objects returned by
Rlabkey. Currently the only available options are colNameOpt, which affects the names of columns
in the data frames returned by getRows(), and maxRows, which sets a default value for this parameter
when calling getRows()
Value
getSession returns a session object that represents a specific user within a specific project folder
within the LabKey Server identified by the baseUrl. The combination of user, server and project/folder
determines the security context of the client program.
Author(s)
<NAME>
See Also
getRows, getSchema, getLookups saveResults
Examples
## Not run:
# library(Rlabkey)
s <- getSession("https://www.labkey.org", "/home")
s #shows schemas
## using the curlOptions for generating debug traces of network traffic
d<- debugGatherer()
copt <- curlOptions(debugfunction=d$update, verbose=TRUE,
cookiefile='/cooks.txt')
sdbg<- getSession(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples", curlOptions=copt)
getRows(sdbg, scobj$AllTypes)
strwrap(d$value(), 100)
## End(Not run)
labkey.acceptSelfSignedCerts
Convenience method to configure Rlabkey connections to accept self-
signed certificates
Description
Rlabkey uses the package RCurl to connect to the LabKey Server. This is equivalent to executing
the function: labkey.setCurlOptions(ssl_verifyhost=0, ssl_verifypeer=FALSE)
Usage
labkey.acceptSelfSignedCerts()
labkey.curlOptions Returns the current set of Curl options that are being used in the ex-
isting session
Description
Rlabkey uses the package RCurl to connect to the LabKey Server.
Usage
labkey.curlOptions()
labkey.deleteRows Delete rows of data from a LabKey database
Description
Specify rows of data to be deleted from the LabKey Server
Usage
labkey.deleteRows(baseUrl, folderPath,
schemaName, queryName, toDelete, provenanceParams=NULL)
Arguments
baseUrl a string specifying the baseUrl for the LabKey server
folderPath a string specifying the folderPath
schemaName a string specifying the schemaName for the query
queryName a string specifying the queryName
toDelete a data frame containing a single column of data containing the data identifiers
of the rows to be deleted
provenanceParams
the provenance parameter object which contains the options to include as part of
a provenance recording. This is a premium feature and requires the Provenance
LabKey module to function correctly; if it is not present, this parameter will be
ignored.
Details
A single row or multiple rows of data can be deleted. For the toDelete data frame, version 0.0.5
or later accepts either a single column of data containing the data identifiers (e.g., key or lsid) or
the entire row of data to be deleted. The names of the data in the data frame must be the column
names from the LabKey Server. The data frame must be created with the stringsAsFactors set to
FALSE.
NOTE: Each variable in a dataset has both a column label and a column name. The column label is
visible at the top of each column on the web page and is longer and more descriptive. The column
name is shorter and is used “behind the scenes” for database manipulation. It is the column name
that must be used in the Rlabkey functions when a column name is expected. To identify a particular
column name in a dataset on a web site, use the “export to R script” option available as a drop down
option under the “views” tab for each dataset.
Value
A list is returned with named categories of command, rowsAffected, rows, queryName, containerPath
and schemaName. The schemaName, queryName and containerPath properties contain
the same schema, query and folder path used in the request. The rowsAffected property indicates
the number of rows affected by the API action. This will typically be the same number as passed in
the request. The rows property contains a list of rows corresponding to the rows deleted.
Author(s)
<NAME>
See Also
labkey.selectRows, labkey.executeSql, makeFilter, labkey.insertRows, labkey.importRows,
labkey.updateRows, labkey.provenance.createProvenanceParams, labkey.provenance.startRecording,
labkey.provenance.addRecordingStep, labkey.provenance.stopRecording
Examples
## Not run:
## Insert, update and delete
## Note that users must have the necessary permissions in the LabKey Server
## to be able to modify data through the use of these functions
# library(Rlabkey)
newrow <- data.frame(
DisplayFld="Inserted from R"
, TextFld="how its done"
, IntFld= 98
, DoubleFld = 12.345
, DateTimeFld = "03/01/2010"
, BooleanFld= FALSE
, LongTextFld = "Four score and seven years ago"
# , AttachmentFld = NA #attachment fields not supported
, RequiredText = "Veni, vidi, vici"
, RequiredInt = 0
, Category = "LOOKUP2"
, stringsAsFactors=FALSE)
insertedRow <- labkey.insertRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists",
queryName="AllTypes", toInsert=newrow)
newRowId <- insertedRow$rows[[1]]$RowId
selectedRow<-labkey.selectRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
colFilter=makeFilter(c("RowId", "EQUALS", newRowId)))
selectedRow
updaterow=data.frame(
RowId=newRowId
, DisplayFld="Updated from R"
, TextFld="how to update"
, IntFld= 777
, stringsAsFactors=FALSE)
updatedRow <- labkey.updateRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists",
queryName="AllTypes", toUpdate=updaterow)
selectedRow<-labkey.selectRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
colFilter=makeFilter(c("RowId", "EQUALS", newRowId)))
selectedRow
deleterow <- data.frame(RowId=newRowId, stringsAsFactors=FALSE)
result <- labkey.deleteRows(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists",
queryName="AllTypes", toDelete=deleterow)
result
## End(Not run)
labkey.domain.create Create a new LabKey domain
Description
Create a domain of the type specified by the domainKind and the domainDesign. A LabKey domain
represents a table in a specific schema.
Usage
labkey.domain.create(baseUrl=NULL, folderPath,
domainKind=NULL, domainDesign=NULL, options=NULL,
module=NULL, domainGroup=NULL, domainTemplate=NULL,
createDomain=TRUE, importData=TRUE)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
domainKind (optional) a string specifying the type of domain to create
domainDesign (optional) a list containing the domain design to create
options (optional) a list containing options specific to the domain kind
module (optional) the name of the module that contains the domain template group
domainGroup (optional) the name of a domain template group
domainTemplate (optional) the name of a domain template within the domain group
createDomain (optional) when using a domain template, create the domain. Defaults to TRUE
importData (optional) when using a domain template, import initial data associated with the
template. Defaults to TRUE
Details
When creating a domain using a domainKind parameter, the domainDesign parameter will be
required. If a domain template is being used, then module, domainGroup, and domainTemplate are
required.
Will create a domain of the specified domain type, valid types are
• "IntList": A list with an integer key field
• "VarList": A list with a string key field
• "StudyDatasetVisit": A dataset in a visit based study
• "StudyDatasetDate": A dataset in a date based study
• "IssueDefinition": An issue list domain
• "SampleSet": Sample set
• "DataClass": Data class
The domain design parameter describes the set of fields in the domain; see labkey.domain.createDesign
for the helper function that can be used to construct this data structure. The options parameter should
contain a list of attributes that are specific to the domain kind specified. The list of valid options for
each domain kind is:
• IntList and VarList
– keyName (required) : The name of the field in the domain design which identifies the
key field
• StudyDatasetVisit and StudyDatasetDate
– datasetId : Specifies a dataset ID to use, the default is to auto generate an ID
– categoryId : Specifies an existing category ID
– categoryName : Specifies an existing category name
– demographics : (TRUE | FALSE) Determines whether the dataset is created as demographic
– keyPropertyName : The name of an additional key field to be used in conjunction with
participantId and (visitId or date) to create unique records
– useTimeKeyField : (TRUE | FALSE) Specifies to use the time portion of the date field
as an additional key
– isManagedField : (TRUE | FALSE) Specifies whether the field from keyPropertyName
should be managed by LabKey.
• IssueDefinition
– providerName : The type of issue list to create (IssueDefinition (default) or AssayRequestDefinition)
– singularNoun : The singular name to use for items in the issue definition (defaults to
issue)
– pluralNoun : The plural name (defaults to issues)
• SampleSet
– idCols : The columns to use when constructing the concatenated unique ID. Can be up
to 3 numeric IDs which represent the zero-based position of the fields in the domain.
– parentCol : The column to represent the parent identifier in the sample set. This is a
numeric value representing the zero-based position of the field in the domain.
– nameExpression : The name expression to use for creating unique IDs
• DataClass
– sampleSet : The ID of the sample set if this data class is associated with a sample set.
– nameExpression : The name expression to use for creating unique IDs
Value
A list containing elements describing the newly created domain.
Author(s)
<NAME>
See Also
labkey.domain.get, labkey.domain.inferFields, labkey.domain.createDesign, labkey.domain.createIndices,
labkey.domain.save, labkey.domain.drop, labkey.domain.createConditionalFormat,
labkey.domain.createConditionalFormatQueryFilter, labkey.domain.FILTER_TYPES
Examples
## Not run:
## create a data frame and infer its fields, then create a domain design from it
library(Rlabkey)
df <- data.frame(ptid=c(1:3), age = c(10,20,30), sex = c("f", "m", "f"))
fields <- labkey.domain.inferFields(baseUrl="http://labkey/", folderPath="home", df=df)
dd <- labkey.domain.createDesign(name="test list", fields=fields)
## create a new list with an integer key field
labkey.domain.create(baseUrl="http://labkey/", folderPath="home",
domainKind="IntList", domainDesign=dd, options=list(keyName = "ptid"))
## create a domain using a domain template
labkey.domain.create(baseUrl="http://labkey/", folderPath="home",
domainTemplate="Priority", module="simpletest", domainGroup="todolist")
## End(Not run)
labkey.domain.createAndLoad
Create a new LabKey domain and load data
Description
Create a domain of the type specified by the domainKind. A LabKey domain represents a table in a
specific schema. Once the domain is created the data from the data frame will be imported.
Usage
labkey.domain.createAndLoad(baseUrl=NULL, folderPath,
name, description="", df, domainKind, options=NULL, schemaName=NULL)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
name a string specifying the name of the domain to create
description (optional) a string specifying the domain description
df a data frame specifying fields to infer. The data frame must have column names
as well as row data to infer the type of the field from.
domainKind a string specifying the type of domain to create
options (optional) a list containing options specific to the domain kind
schemaName (optional) a string specifying the schema name to import the data into
Details
Will create a domain of the specified domain type, valid types are
• "IntList": A list with an integer key field
• "VarList": A list with a string key field
• "StudyDatasetVisit": A dataset in a visit based study
• "StudyDatasetDate": A dataset in a date based study
• "IssueDefinition": An issue list domain
• "SampleSet": Sample set
• "DataClass": Data class
The options parameter should contain a list of attributes that are specific to the domain kind specified.
The list of valid options for each domain kind is:
• IntList and VarList
– keyName (required) : The name of the field in the domain design which identifies the
key field
• StudyDatasetVisit and StudyDatasetDate
– datasetId : Specifies a dataset ID to use, the default is to auto generate an ID
– categoryId : Specifies an existing category ID
– categoryName : Specifies an existing category name
– demographics : (TRUE | FALSE) Determines whether the dataset is created as demographic
– keyPropertyName : The name of an additional key field to be used in conjunction with
participantId and (visitId or date) to create unique records
– useTimeKeyField : (TRUE | FALSE) Specifies to use the time portion of the date field
as an additional key
• IssueDefinition
– providerName : The type of issue list to create (IssueDefinition (default) or AssayRequestDefinition)
– singularNoun : The singular name to use for items in the issue definition (defaults to
issue)
– pluralNoun : The plural name (defaults to issues)
• SampleSet
– idCols : The columns to use when constructing the concatenated unique ID. Can be up
to 3 numeric IDs which represent the zero-based position of the fields in the domain.
– parentCol : The column to represent the parent identifier in the sample set. This is a
numeric value representing the zero-based position of the field in the domain.
– nameExpression : The name expression to use for creating unique IDs
• DataClass
– sampleSet : The ID of the sample set if this data class is associated with a sample set.
– nameExpression : The name expression to use for creating unique IDs
Value
A list containing the newly uploaded data frame rows.
Author(s)
<NAME>
See Also
labkey.domain.get, labkey.domain.inferFields, labkey.domain.createDesign, labkey.domain.createIndices,
labkey.domain.save, labkey.domain.drop
Examples
## Not run:
library(Rlabkey)
## Prepare a data.frame
participants = c("0001","0001","0002","0002","0007","0008")
Visit = c("V1", "V2", "V2", "V1", "V2", "V1")
IntValue = c(256:261)
dataset = data.frame("ParticipantID" = participants, Visit,
"IntegerValue" = IntValue, check.names = FALSE)
## Create the dataset and import
labkey.domain.createAndLoad(baseUrl="http://labkey", folderPath="home",
name="demo dataset", df=dataset, domainKind="StudyDatasetVisit")
## End(Not run)
labkey.domain.createConditionalFormat
Create a conditional format data frame
Description
Create a conditional format data frame.
Usage
labkey.domain.createConditionalFormat(queryFilter, bold=FALSE, italic=FALSE,
strikeThrough=FALSE, textColor="", backgroundColor="")
Arguments
queryFilter a string specifying what logical filter should be applied
bold a boolean specifying whether the text display should be formatted in bold
italic a boolean specifying whether the text display should be formatted in italic
strikeThrough a boolean specifying whether the text display should be formatted with a strikethrough
textColor a string specifying the hex code of the text color for display
backgroundColor
a string specifying the hex code of the text background color for display
Details
This function can be used to construct a conditional format data frame intended for use within a
domain design’s conditionalFormats component while creating or updating a domain. The queryFilter
parameter can be used in conjunction with labkey.domain.createConditionalFormatQueryFilter
for convenient construction of a query filter string. Multiple conditional formats can be applied to
one field, where each format specified constitutes a new row of the field’s conditionalFormats data
frame. If text formatting options are not specified, the default is to display the value as black text
on a white background.
Value
The data frame containing values describing a conditional format.
Author(s)
<NAME>
See Also
labkey.domain.get, labkey.domain.create, labkey.domain.createDesign, labkey.domain.inferFields,
labkey.domain.save, labkey.domain.drop, labkey.domain.createConditionalFormatQueryFilter,
labkey.domain.FILTER_TYPES
Examples
## Not run:
library(Rlabkey)
domain <- labkey.domain.get(baseUrl="http://labkey/", folderPath="home",
schemaName="lists", queryName="test list")
## update the third field to use two conditional formats
qf <- labkey.domain.FILTER_TYPES
cf1 = labkey.domain.createConditionalFormat(labkey.domain.createConditionalFormatQueryFilter(qf$GT,
100), bold=TRUE, textColor="D33115", backgroundColor="333333")
cf2 = labkey.domain.createConditionalFormat(labkey.domain.createConditionalFormatQueryFilter(
qf$LESS_THAN, 400), italic=TRUE, textColor="68BC00")
domain$fields$conditionalFormats[[3]] = rbind(cf1,cf2)
labkey.domain.save(baseUrl="http://labkey/", folderPath="home",
schemaName="lists", queryName="test list", domainDesign=domain)
## End(Not run)
labkey.domain.createConditionalFormatQueryFilter
Create a conditional format query filter
Description
Create a conditional format query filter string.
Usage
labkey.domain.createConditionalFormatQueryFilter(filterType, value,
additionalFilter=NULL, additionalValue=NULL)
Arguments
filterType a string specifying a permitted relational operator
value a string specifying a comparand
additionalFilter
a string specifying a second relational operator
additionalValue
a string specifying a second comparand
Details
This function can be used as a convenience wrapper to construct a query filter string for conditional
formats. Two relational expressions may be formed, one from the first two parameters (for instance,
the values 'eq' and '50' for filterType and value respectively would create a condition of 'equals 50')
and a second from the remaining two optional parameters. If both conditions are created, they are
conjoined with a logical AND, and a value must pass both conditions to clear the filter. This function
can be used in conjunction with labkey.domain.FILTER_TYPES for easy access to the set of
permitted relational operators.
Value
The string specifying a query filter in LabKey filter URL format.
Author(s)
<NAME>
See Also
labkey.domain.get, labkey.domain.create, labkey.domain.createDesign, labkey.domain.inferFields,
labkey.domain.save, labkey.domain.drop, labkey.domain.createConditionalFormat, labkey.domain.FILTER_TYPES
Examples
## Not run:
library(Rlabkey)
qf <- labkey.domain.FILTER_TYPES
# Filters for values equal to 750
qf1 <- labkey.domain.createConditionalFormatQueryFilter(qf$EQUAL, 750)
# Filters for values greater than 500, but less than 1000
qf2 <- labkey.domain.createConditionalFormatQueryFilter(qf$GREATER_THAN, 500, qf$LESS_THAN, 1000)
## End(Not run)
labkey.domain.createDesign
Helper function to create a domain design data structure
Description
Create a domain design data structure which can then be used by labkey.domain.create or
labkey.domain.save
Usage
labkey.domain.createDesign(name, description = NULL, fields, indices = NULL)
Arguments
name a string specifying the name of the domain
description (optional) a string specifying domain description
fields a list containing the fields of the domain design; this should be in the same
format as returned by labkey.domain.inferFields.
indices (optional) a list of indices definitions to be used for this domain design on creation
Details
This is a function which can be used to create a domain design data structure. Domain designs are
used both when creating a new domain or updating an existing domain.
Value
A list containing elements describing the domain design. Any of the APIs which take a domain
design parameter can accept this data structure.
Author(s)
<NAME>
See Also
labkey.domain.get, labkey.domain.inferFields, labkey.domain.createIndices, labkey.domain.create,
labkey.domain.save, labkey.domain.drop, labkey.domain.createConditionalFormat, labkey.domain.createConditionalFormatQueryFilter,
labkey.domain.FILTER_TYPES
Examples
## Not run:
## create a data frame and infer its fields, then create a domain design from it
library(Rlabkey)
df <- data.frame(ptid=c(1:3), age = c(10,20,30), sex = c("f", "m", "f"))
fields <- labkey.domain.inferFields(baseUrl="http://labkey/", folderPath="home", df=df)
indices = labkey.domain.createIndices(list("ptid", "age"), TRUE)
indices = labkey.domain.createIndices(list("age"), FALSE, indices)
dd <- labkey.domain.createDesign(name="test list", fields=fields, indices=indices)
## End(Not run)
labkey.domain.createIndices
Helper function to create a domain design indices list
Description
Create a list of indices definitions which can then be used by labkey.domain.createDesign
Usage
labkey.domain.createIndices(colNames, asUnique, existingIndices = NULL)
Arguments
colNames a list of string column names for the index
asUnique a logical TRUE or FALSE value indicating whether a UNIQUE index should be used
existingIndices
a list of previously created indices definitions to append to
Details
This helper function can be used to construct the list of indices definitions for a domain design
structure. Each call to this function takes in the column names from the domain to use in the index
and a parameter indicating if this should be a UNIQUE index. A third parameter can be used to
build up a list of multiple indices definitions.
Value
The data frame containing the list of indices definitions, concatenated with the existingIndices ob-
ject if provided.
Author(s)
<NAME>
See Also
labkey.domain.get, labkey.domain.create, labkey.domain.createDesign, labkey.domain.inferFields,
labkey.domain.save, labkey.domain.drop
Examples
## Not run:
## create a list of indices definitions to use for a domain design
library(Rlabkey)
indices = labkey.domain.createIndices(list("intKey", "customInt"), TRUE)
indices = labkey.domain.createIndices(list("customInt"), FALSE, indices)
## End(Not run)
labkey.domain.drop Delete a LabKey domain
Description
Delete an existing domain.
Usage
labkey.domain.drop(baseUrl=NULL, folderPath, schemaName, queryName)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
schemaName a string specifying the name of the schema of the domain
queryName a string specifying the query name
Details
This function will delete an existing domain along with any data that may have been uploaded to it.
Author(s)
<NAME>
See Also
labkey.domain.get, labkey.domain.inferFields, labkey.domain.createDesign, labkey.domain.createIndices,
labkey.domain.save, labkey.domain.create
Examples
## Not run:
## delete an existing domain
library(Rlabkey)
labkey.domain.drop(baseUrl="http://labkey/", folderPath="home",
schemaName="lists", queryName="test list")
## End(Not run)
labkey.domain.FILTER_TYPES
Provide comparator access
Description
A list specifying permitted validator comparators.
Usage
labkey.domain.FILTER_TYPES
Details
This constant contains a list specifying the set of permitted validator operators, using names to map
conventional terms to the expressions used by LabKey filter URL formats. The values are intended
to be used in conjunction with conditional formats or property validators.
Value
A named list of strings.
Author(s)
<NAME>
See Also
labkey.domain.get, labkey.domain.create, labkey.domain.createDesign, labkey.domain.inferFields,
labkey.domain.save, labkey.domain.drop, labkey.domain.createConditionalFormat, labkey.domain.createConditionalFormatQueryFilter
Examples
## Not run:
library(Rlabkey)
qf <- labkey.domain.FILTER_TYPES
# Example of available comparators
comparator1 <- qf$EQUAL
comparator2 <- qf$GREATER_THAN
comparator3 <- qf$DATE_LESS_THAN_OR_EQUAL
comparator4 <- qf$STARTS_WITH
comparator5 <- qf$CONTAINS_ONE_OF
## End(Not run)
labkey.domain.get Returns the metadata for an existing LabKey domain
Description
Get the data structure for a domain.
Usage
labkey.domain.get(baseUrl=NULL, folderPath, schemaName, queryName)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
schemaName a string specifying the name of the schema of the domain
queryName a string specifying the query name
Details
Returns the domain design of an existing domain. The returned domain design can be used for
reporting purposes or it can be modified and used to create a new domain or update the domain
source.
Value
A list containing elements describing the domain. The structure is the same as a domain design
created by labkey.createDomainDesign
Author(s)
<NAME>
See Also
labkey.domain.create, labkey.domain.inferFields, labkey.domain.createDesign, labkey.domain.createIndices,
labkey.domain.save, labkey.domain.drop
Examples
## Not run:
## retrieve an existing domain
library(Rlabkey)
labkey.domain.get(baseUrl="http://labkey/", folderPath="home",
schemaName="lists", queryName="test list")
## End(Not run)
labkey.domain.inferFields
Infer field metadata from a data frame
Description
Generate field information from the specified data frame. The resulting list can be used to create or
edit a domain using the labkey.domain.create or labkey.domain.save APIs.
Usage
labkey.domain.inferFields(baseUrl = NULL, folderPath, df)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
df a data frame specifying fields to infer. The data frame must have column names
as well as row data to infer the type of the field from.
Details
Field information can be generated from a data frame by introspecting the data associated with it
along with other properties about that column. The data frame is posted to the server endpoint
where the data is analyzed and returned as a list of fields, each with its associated list of properties
and values. This list can be edited and/or used to create a domain on the server.
Value
The inferred metadata will be returned as a list with an element called "fields", which contains the
list of fields inferred from the data frame. Each field will contain the list of attributes and values for
that field definition.
Author(s)
<NAME>
See Also
labkey.domain.get, labkey.domain.create, labkey.domain.createDesign, labkey.domain.createIndices,
labkey.domain.save, labkey.domain.drop
Examples
## Not run:
## create a data frame and infer its fields
library(Rlabkey)
df <- data.frame(ptid=c(1:3), age = c(10,20,30), sex = c("f", "m", "f"))
fields <- labkey.domain.inferFields(baseUrl="http://labkey/", folderPath="home", df=df)
## End(Not run)
labkey.domain.save Updates an existing LabKey domain
Description
Modify an existing domain with the specified domain design.
Usage
labkey.domain.save(baseUrl=NULL, folderPath, schemaName, queryName, domainDesign)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
schemaName a string specifying the name of the schema of the domain
queryName a string specifying the query name
domainDesign a list data structure with the domain design to update to
Value
A list containing elements describing the domain after the update. The structure is the same as a
domain design created by labkey.createDomainDesign
Author(s)
<NAME>
See Also
labkey.domain.get, labkey.domain.inferFields, labkey.domain.createDesign, labkey.domain.createIndices,
labkey.domain.create, labkey.domain.drop, labkey.domain.createConditionalFormat, labkey.domain.createConditionalFormatQueryFilter,
labkey.domain.FILTER_TYPES
Examples
## Not run:
library(Rlabkey)
## change the type of one of the columns
domain <- labkey.domain.get(baseUrl="http://labkey/", folderPath="home",
schemaName="lists", queryName="test list")
domain$fields[3,]$rangeURI = "xsd:string"
domain$fields[3,]$name = "changed to string"
labkey.domain.save(baseUrl="http://labkey/", folderPath="home",
schemaName="lists", queryName="test list", domainDesign=domain)
## End(Not run)
labkey.executeSql Retrieve data from a LabKey Server using SQL commands
Description
Use Sql commands to specify data to be imported into R. Prior to import, data can be manipulated
through standard SQL commands supported in LabKey SQL.
Usage
labkey.executeSql(baseUrl, folderPath, schemaName, sql,
maxRows = NULL, rowOffset = NULL, colSort=NULL,
showHidden = FALSE, colNameOpt='caption',
containerFilter=NULL, parameters=NULL)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
schemaName a string specifying the schemaName for the query
sql a string containing the sql commands to be executed
maxRows (optional) an integer specifying the maximum number of rows to return. If no
value is specified, all rows are returned.
rowOffset (optional) an integer specifying which row of data should be the first row in the
retrieval. If no value is specified, rows will begin at the start of the result set.
colSort (optional) a string including the name of the column to sort, preceded by a “+”
or “-” to indicate sort direction
showHidden (optional) a logical value indicating whether or not to return data columns that
would normally be hidden from user view. Defaults to FALSE if no value pro-
vided.
colNameOpt (optional) controls the name source for the columns of the output dataframe,
with valid values of ’caption’, ’fieldname’, and ’rname’ See labkey.selectRows
for more details.
containerFilter
(optional) Specifies the containers to include in the scope of selectRows request.
A value of NULL is equivalent to "Current". Valid values are
• "Current": Include the current folder only
• "CurrentAndSubfolders": Include the current folder and all subfolders
• "CurrentPlusProject": Include the current folder and the project that con-
tains it
• "CurrentAndParents": Include the current folder and its parent folders
• "CurrentPlusProjectAndShared": Include the current folder plus its project
plus any shared folders
• "AllFolders": Include all folders for which the user has read permission
parameters (optional) List of name/value pairs for the parameters if the SQL references
underlying queries that are parameterized. For example, parameters=c("X=1",
"Y=2").
Details
A full dataset or any portion of a dataset can be imported into an R data frame using the labkey.executeSql
function. Function arguments are components of the url that identify the location of the data and
the SQL actions that should be taken on the data prior to import.
See labkey.selectRows for a discussion of the valid options and defaults for colNameOpt.
Value
The requested data are returned in a data frame with stringsAsFactors set to FALSE. Column names
are set as determined by the colNameOpt parameter.
Author(s)
<NAME>
See Also
labkey.selectRows, makeFilter, labkey.insertRows, labkey.importRows, labkey.updateRows,
labkey.deleteRows, getRows
Examples
## Not run:
## Example of an explicit join and use of group by and aggregates
# library(Rlabkey)
sql<- "SELECT AllTypesCategories.Category AS Category,
SUM(AllTypes.IntFld) AS SumOfIntFld,
AVG(AllTypes.DoubleFld) AS AvgOfDoubleFld
FROM AllTypes LEFT JOIN AllTypesCategories
ON (AllTypes.Category = AllTypesCategories.TextKey)
WHERE AllTypes.Category IS NOT NULL
GROUP BY AllTypesCategories.Category"
sqlResults <- labkey.executeSql(
baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples",
schemaName="lists",
sql = sql)
sqlResults
## End(Not run)
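The example above does not exercise the parameters or colSort arguments. The sketch below is not
part of the package examples; the query name "MyParamQuery", the parameter name "MinValue",
and the sorted column "IntFld" are illustrative assumptions showing how a parameterized query and
a sort might be requested.
## Not run:
# Hypothetical parameterized query; object names are placeholders
paramResults <- labkey.executeSql(
    baseUrl="http://localhost:8080/labkey",
    folderPath="/apisamples",
    schemaName="lists",
    sql = "SELECT * FROM MyParamQuery",
    parameters = c("MinValue=100"),  # name/value pairs consumed by the parameterized query
    colSort = "-IntFld",             # sort descending by the IntFld column
    maxRows = 50)                    # cap the number of returned rows
paramResults
## End(Not run)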
labkey.experiment.createData
Create an experiment data object
Description
Create an experiment data object.
Usage
labkey.experiment.createData(config,
dataClassId = NULL, dataClassName = NULL, dataFileUrl = NULL)
Arguments
config a list of base experiment object properties
dataClassId (optional) an integer specifying the data class row ID
dataClassName (optional) a string specifying the name of the data class
dataFileUrl (optional) a string specifying the local file url of the uploaded file
Details
Create an experiment data object which can be used as either an input or an output data object for
an experiment run.
Value
Returns the object representation of the experiment data object.
Author(s)
<NAME>
See Also
labkey.experiment.saveBatch, labkey.experiment.createMaterial, labkey.experiment.createRun
Examples
## Not run:
library(Rlabkey)
## create a non-assay backed run with data classes as data inputs and outputs
d1 <- labkey.experiment.createData(
list(name = "dc-01"), dataClassId = 400)
d2 <- labkey.experiment.createData(
list(name = "dc-02"), dataClassId = 402)
run <- labkey.experiment.createRun(
list(name="new run"), dataInputs = d1, dataOutputs = d2)
labkey.experiment.saveBatch(baseUrl="http://labkey/", folderPath="home",
protocolName=labkey.experiment.SAMPLE_DERIVATION_PROTOCOL, runList=run)
## End(Not run)
labkey.experiment.createMaterial
Create an experiment material object
Description
Create an experiment material object.
Usage
labkey.experiment.createMaterial(config, sampleSetId = NULL, sampleSetName = NULL)
Arguments
config a list of base experiment object properties
sampleSetId (optional) an integer specifying the sample set row ID
sampleSetName (optional) a string specifying the name of the sample set
Details
Create an experiment material object which can be used as either an input or an output material for
an experiment run.
Value
Returns the object representation of the experiment material object.
Author(s)
<NAME>
See Also
labkey.experiment.saveBatch, labkey.experiment.createData, labkey.experiment.createRun
Examples
## Not run:
library(Rlabkey)
## create a non-assay backed run with samples as material inputs and outputs
m1 <- labkey.experiment.createMaterial(
list(name = "87444063.2604.626"), sampleSetName = "Study Specimens")
m2 <- labkey.experiment.createMaterial(
list(name = "87444063.2604.625"), sampleSetName = "Study Specimens")
run <- labkey.experiment.createRun(
list(name="new run"), materialInputs = m1, materialOutputs = m2)
labkey.experiment.saveBatch(baseUrl="http://labkey/", folderPath="home",
protocolName=labkey.experiment.SAMPLE_DERIVATION_PROTOCOL, runList=run)
## End(Not run)
labkey.experiment.createRun
Create an experiment run object
Description
Create an experiment run object.
Usage
labkey.experiment.createRun(config,
dataInputs = NULL, dataOutputs = NULL, dataRows = NULL,
materialInputs = NULL, materialOutputs = NULL, plateMetadata = NULL)
Arguments
config a list of base experiment object properties
dataInputs (optional) a list of experiment data objects to be used as data inputs to the run
dataOutputs (optional) a list of experiment data objects to be used as data outputs to the run
dataRows (optional) a data frame containing data rows to be uploaded to the assay backed
run
materialInputs (optional) a list of experiment material objects to be used as material inputs to
the run
materialOutputs
(optional) a list of experiment material objects to be used as material outputs to
the run
plateMetadata (optional) if the assay supports plate templates, the plate metadata object to
upload
Details
Create an experiment run object which can be used in the saveBatch function.
Value
Returns the object representation of the experiment run object.
Author(s)
<NAME>
See Also
labkey.experiment.saveBatch, labkey.experiment.createData, labkey.experiment.createMaterial
Examples
## Not run:
library(Rlabkey)
## create a non-assay backed run with samples as material inputs and outputs
m1 <- labkey.experiment.createMaterial(
list(name = "87444063.2604.626"), sampleSetName = "Study Specimens")
m2 <- labkey.experiment.createMaterial(
list(name = "87444063.2604.625"), sampleSetName = "Study Specimens")
run <- labkey.experiment.createRun(
list(name="new run"), materialInputs = m1, materialOutputs = m2)
labkey.experiment.saveBatch(baseUrl="http://labkey/", folderPath="home",
protocolName=labkey.experiment.SAMPLE_DERIVATION_PROTOCOL, runList=run)
## import an assay run which includes plate metadata
df <- data.frame(participantId=c(1:3), visitId = c(10,20,30), welllocation = c("A1", "D11", "F12"))
runConfig <- fromJSON(txt='{"assayId": 310,
"name" : "api imported run with plate metadata",
"properties" : {
"PlateTemplate" : "urn:lsid:labkey.com:PlateTemplate.Folder-6:d8bbec7d-34cd-1038-bd67-b3bd"
}
}')
plateMetadata <- fromJSON(txt='{
"control" : {
"positive" : {"dilution": 0.005},
"negative" : {"dilution": 1.0}
},
"sample" : {
"SA01" : {"dilution": 1.0, "ID" : 111, "Barcode" : "BC_111", "Concentration" : 0.0125},
"SA02" : {"dilution": 2.0, "ID" : 222, "Barcode" : "BC_222"},
"SA03" : {"dilution": 3.0, "ID" : 333, "Barcode" : "BC_333"},
"SA04" : {"dilution": 4.0, "ID" : 444, "Barcode" : "BC_444"}
}
}')
run <- labkey.experiment.createRun(runConfig, dataRows = df, plateMetadata = plateMetadata)
labkey.experiment.saveBatch(
baseUrl="http://labkey/", folderPath="home",
assayConfig=list(assayId = 310), runList=run
)
## End(Not run)
labkey.experiment.SAMPLE_DERIVATION_PROTOCOL
Constant for the Simple Derivation Protocol
Description
Simple Derivation Protocol constant.
Details
This value can be used in the labkey.experiment.saveBatch function when creating runs that aren’t
backed by an assay protocol.
Author(s)
<NAME>
See Also
labkey.experiment.saveBatch
labkey.experiment.saveBatch
Saves a modified experiment batch
Description
Saves a modified experiment batch.
Usage
labkey.experiment.saveBatch(baseUrl=NULL, folderPath,
assayConfig = NULL, protocolName = NULL,
batchPropertyList = NULL, runList)
Arguments
baseUrl (optional) a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
assayConfig (optional) a list specifying assay configuration information
protocolName (optional) a string specifying the protocol name of the protocol to use
batchPropertyList
(optional) a list of batch properties
runList a list of experiment run objects
Details
Saves a modified batch. Runs within the batch may refer to existing data and material objects,
either inputs or outputs, by ID or LSID. Runs may also define new data and materials objects by not
specifying an ID or LSID in their properties.
Runs can be created for either assay or non-assay backed protocols. For an assay backed protocol,
either the assayId, or the assayName and providerName, must be specified in the assayConfig
parameter. If a non-assay backed protocol is to be used, specify the protocolName string value;
note that currently only the simple labkey.experiment.SAMPLE_DERIVATION_PROTOCOL is
supported.
Refer to the labkey.experiment.createData, labkey.experiment.createMaterial, and labkey.experiment.createRun
helper functions to assemble the data structure that saveBatch expects.
Value
Returns the object representation of the experiment batch.
Author(s)
<NAME>
See Also
labkey.experiment.createData, labkey.experiment.createMaterial, labkey.experiment.createRun
Examples
## Not run:
library(Rlabkey)
## uploads data to an existing assay
df <- data.frame(participantId=c(1:3), visitId = c(10,20,30), sex = c("f", "m", "f"))
bprops <- list(LabNotes="this is a simple demo")
bpl <- list(name=paste("Batch ", as.character(date())),properties=bprops)
run <- labkey.experiment.createRun(list(name="new assay run"), dataRows = df)
labkey.experiment.saveBatch(baseUrl="http://labkey/", folderPath="home",
assayConfig=list(assayName="GPAT", providerName="General"),
batchPropertyList=bpl, runList=run)
## create a non-assay backed run with samples as material inputs and outputs
m1 <- labkey.experiment.createMaterial(
list(name = "87444063.2604.626"), sampleSetName = "Study Specimens")
m2 <- labkey.experiment.createMaterial(
list(name = "87444063.2604.625"), sampleSetName = "Study Specimens")
run <- labkey.experiment.createRun(
list(name="new run"), materialInputs = m1, materialOutputs = m2)
labkey.experiment.saveBatch(baseUrl="http://labkey/", folderPath="home",
protocolName=labkey.experiment.SAMPLE_DERIVATION_PROTOCOL, runList=run)
## import an assay run which includes plate metadata
df <- data.frame(participantId=c(1:3), visitId = c(10,20,30), welllocation = c("A1", "D11", "F12"))
runConfig <- fromJSON(txt='{"assayId": 310,
"name" : "api imported run with plate metadata",
"properties" : {
"PlateTemplate" : "urn:lsid:labkey.com:PlateTemplate.Folder-6:d8bbec7d-34cd-1038-bd67-b3bd"
}
}')
plateMetadata <- fromJSON(txt='{
"control" : {
"positive" : {"dilution": 0.005},
"negative" : {"dilution": 1.0}
},
"sample" : {
"SA01" : {"dilution": 1.0, "ID" : 111, "Barcode" : "BC_111", "Concentration" : 0.0125},
"SA02" : {"dilution": 2.0, "ID" : 222, "Barcode" : "BC_222"},
"SA03" : {"dilution": 3.0, "ID" : 333, "Barcode" : "BC_333"},
"SA04" : {"dilution": 4.0, "ID" : 444, "Barcode" : "BC_444"}
}
}')
run <- labkey.experiment.createRun(runConfig, dataRows = df, plateMetadata = plateMetadata)
labkey.experiment.saveBatch(
baseUrl="http://labkey/", folderPath="home",
assayConfig=list(assayId = 310), runList=run
)
## End(Not run)
labkey.experiment.saveRuns
Saves Runs.
Description
Saves experiment runs.
Usage
labkey.experiment.saveRuns(baseUrl=NULL, folderPath,
protocolName, runList)
Arguments
baseUrl (optional) a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
protocolName a string specifying the protocol name of the protocol to use
runList a list of experiment run objects
Details
Saves experiment runs. Runs may refer to existing data and material objects, either inputs or out-
puts, by ID or LSID. Runs may also define new data and materials objects by not specifying an ID
or LSID in their properties.
Refer to the labkey.experiment.createData, labkey.experiment.createMaterial, and labkey.experiment.createRun
helper functions to assemble the data structure that saveRuns expects.
Value
Returns the object representation of the experiment run.
Author(s)
<NAME>
See Also
labkey.experiment.createData, labkey.experiment.createMaterial, labkey.experiment.createRun
Examples
## Not run:
library(Rlabkey)
## example with materialInputs and materialOutputs
m1 <- labkey.experiment.createMaterial(
list(name = "87444063.2604.626"), sampleSetName = "Study Specimens")
m2 <- labkey.experiment.createMaterial(
list(name = "87444063.2604.625"), sampleSetName = "Study Specimens")
run <- labkey.experiment.createRun(
list(name="new run"), materialInputs = m1, materialOutputs = m2)
labkey.experiment.saveRuns(baseUrl="http://labkey/", folderPath="home",
protocolName=labkey.experiment.SAMPLE_DERIVATION_PROTOCOL, runList=run)
## End(Not run)
labkey.getBaseUrl Get the default baseUrl parameter used for all http or https requests
Description
Use this function to get "baseUrl" package environment variables to be used for all http or https
requests.
Usage
labkey.getBaseUrl(baseUrl=NULL)
Arguments
baseUrl server location including context path, if any. e.g. https://www.labkey.org/
Details
The function takes an optional baseUrl parameter. When a non-empty parameter is passed in and
baseUrl has not been previously set, the function will remember the baseUrl value in the package
environment variables and return the formatted baseUrl. Skip the baseUrl parameter to get the
previously set baseUrl.
Examples
## Not run:
## Example of getting previously set baseUrl
library(Rlabkey)
labkey.setDefaults(apiKey="<KEY>",
baseUrl="http://labkey/")
labkey.getBaseUrl()
## End(Not run)
labkey.getDefaultViewDetails
Retrieve the fields of a LabKey query view
Description
Fetch a list of output fields and their attributes that are available from the default view of a given
query
Usage
labkey.getDefaultViewDetails(baseUrl, folderPath,
schemaName, queryName)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
schemaName a string specifying the schemaName for the query
queryName a string specifying the queryName
Details
Queries have a default “view” associated with them. A query view can describe a subset or
superset of the fields defined by the query. A query view is defined by using the “Customize View”
button option on a LabKey data grid page. getDefaultViewDetails has the same arguments and
returns the same shape of result data frame as getQueryDetails. The default view is what you
will get back on calling labkey.selectRows or getRows.
Value
The output field attributes of the default view are returned as a data frame. See labkey.getQueryDetails
for a description.
Author(s)
<NAME>, <EMAIL>
See Also
Retrieve data: labkey.selectRows, makeFilter, labkey.executeSql
Modify data: labkey.updateRows, labkey.insertRows, labkey.importRows, labkey.deleteRows
List available data: labkey.getSchemas, labkey.getQueries, labkey.getQueryViews, labkey.getQueryDetails,
labkey.getLookupDetails
Examples
## Not run:
## Details of fields of a default query view
# library(Rlabkey)
queryDF <- labkey.getDefaultViewDetails(
baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples",
schemaName="lists",
queryName="AllTypes")
queryDF
## End(Not run)
labkey.getFolders Retrieve a list of folders accessible to the current user
Description
Fetch a list of all folders accessible to the current user, starting from a given folder.
Usage
labkey.getFolders(baseUrl, folderPath,
includeEffectivePermissions=TRUE,
includeSubfolders=FALSE, depth=50,
includeChildWorkbooks=TRUE,
includeStandardProperties=TRUE)
Arguments
baseUrl a string specifying the address of the LabKey Server, including the context root
folderPath the starting point for the search.
includeEffectivePermissions
If set to false, the effective permissions for this container resource will not be
included. (defaults to TRUE).
includeSubfolders
whether the search for subfolders should recurse down the folder hierarchy
depth maximum number of subfolder levels to show if includeSubfolders=TRUE
includeChildWorkbooks
If true, include child containers of type workbook in the response (defaults to
TRUE).
includeStandardProperties
If true, include the standard container properties like title, formats, etc. in the
response (defaults to TRUE).
Details
Folders are a hierarchy of containers for data and files. They are the place where permissions are
set in LabKey Server. The top level in a folder hierarchy is the project. Below the project is an
arbitrary hierarchy of folders that can be used to partition data for reasons of security, visibility, and
organization.
Folders cut across schemas. Some schemas, like the lists schema, are not visible in a folder that has
no list objects defined in it. Other schemas are visible in all folders.
Value
The available folders are returned as a three-column data frame containing
name the name of the folder
folderPath the full path of the folder from the project root
effectivePermissions
the current user’s effective permissions for the given folder
Author(s)
<NAME>, <EMAIL>
See Also
labkey.getQueries, labkey.getQueryViews, labkey.getQueryDetails, labkey.getDefaultViewDetails,
labkey.getLookupDetails, labkey.security.getContainers, labkey.security.createContainer,
labkey.security.deleteContainer, labkey.security.moveContainer, labkey.security.renameContainer
Examples
## Not run:
## List of folders
# library(Rlabkey)
folders <- labkey.getFolders("https://www.labkey.org", "/home")
folders
## End(Not run)
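A variant sketch of the same call, this time recursing into subfolders; the depth value shown is an
arbitrary illustration, and both arguments are documented above.
## Not run:
## List folders, including subfolders up to two levels below /home
# library(Rlabkey)
subfolders <- labkey.getFolders("https://www.labkey.org", "/home",
    includeSubfolders=TRUE, depth=2)
subfolders
## End(Not run)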
labkey.getLookupDetails
Retrieve detailed information on a LabKey query
Description
Fetch a list of output columns and their attributes from the query referenced by a lookup field
Usage
labkey.getLookupDetails(baseUrl, folderPath,
schemaName, queryName, lookupKey)
Arguments
baseUrl a string specifying the address of the LabKey Server, including the context root
folderPath a string specifying the hierarchy of folders to the current folder (container) for
the operation, starting with the project folder
schemaName a string specifying the schema name in which the query object is defined
queryName a string specifying the name of the query
lookupKey a string specifying the qualified name of a lookup field (foreign key) relative to
the query specified by queryName
Details
When getQueryDetails returns non-NA values for the lookupQueryName, the getLookupDetails
function can be called to enumerate the fields from the query referenced by the lookup. These
lookup fields can be added to the colSelect list of selectRows.
Value
The available lookup fields are returned as a data frame, with the same columns as detailed in labkey.getQueryDetails
Author(s)
<NAME>, <EMAIL>
See Also
Retrieve data: labkey.selectRows,makeFilter, labkey.executeSql
Modify data: labkey.updateRows, labkey.insertRows, labkey.importRows, labkey.deleteRows
List available data: labkey.getSchemas, labkey.getQueries, labkey.getQueryViews, labkey.getQueryDetails,
labkey.getDefaultViewDetails
Examples
## Not run:
## Details of fields of a query referenced by a lookup field
# library(Rlabkey)
lu1 <- labkey.getLookupDetails(
baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples",
schemaName="lists",
queryName="AllTypes",
lookupKey="Category"
)
lu1
## When a lookup field points to a query object that itself has a lookup
## field, use a compound fieldkey consisting of the lookup fields from
## the base query object to the target lookupDetails, separated by
## forward slashes
lu2<- labkey.getLookupDetails(
baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples",
schemaName="lists",
queryName="AllTypes",
lookupKey="Category/Group"
)
lu2
## Now select a result set containing a field from the base query, a
## field from the 1st level of lookup, and one from the 2nd
rows<- labkey.selectRows(
baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples",
schemaName="lists",
queryName="AllTypes",
colSelect=c("DisplayFld","Category/Category","Category/Group/GroupName"),
colFilter = makeFilter(c("Category/Group/GroupName",
"NOT_EQUALS","TypeRange")), maxRows=20
)
rows
## End(Not run)
labkey.getModuleProperty
Get effective module property value
Description
Get a specific effective module property value for folder
Usage
labkey.getModuleProperty(baseUrl=NULL, folderPath, moduleName, propName)
Arguments
baseUrl server location including context path, if any. e.g. https://www.labkey.org/
folderPath a string specifying the folderPath
moduleName name of the module
propName The module property name
Examples
## Not run:
library(Rlabkey)
labkey.getModuleProperty(baseUrl="http://labkey/", folderPath="flowProject",
moduleName="flow", propName="ExportToScriptFormat")
## End(Not run)
labkey.getQueries Retrieve a list of available queries for a specified LabKey schema
Description
Fetch a list of queries available to the current user within a specified folder context and specified
schema
Usage
labkey.getQueries(baseUrl, folderPath, schemaName)
Arguments
baseUrl a string specifying the address of the LabKey Server, including the context root
folderPath a string specifying the hierarchy of folders to the current folder (container) for
the operation, starting with the project folder
schemaName a string specifying the schema name in which the query object is defined
Details
“Query” is the LabKey term for a data container that acts like a relational table within LabKey
Server. Queries include lists, assay data results, user-defined queries, built-in SQL tables in
individual modules, and tables or table-like objects in external schemas. For a specific queryable
object, the data that is visible depends on the current user’s permissions in a given folder. Function arguments
identify the location of the server and the folder path.
Value
The available queries are returned as a three-column data frame containing one row for each field
for each query in the specified schema. The three columns are
queryName the name of the query object, repeated once for every field defined as output of the query.
fieldName the name of a query output field
caption the caption of the named field as shown in the column header of a data grid, also known as
a label
Author(s)
<NAME>, <EMAIL>
References
http://www.omegahat.net/RCurl/,
https://www.labkey.org/project/home/begin.view
See Also
Retrieve data: labkey.selectRows, makeFilter, labkey.executeSql
Modify data: labkey.updateRows, labkey.insertRows, labkey.importRows, labkey.deleteRows
List available data: labkey.getSchemas, labkey.getQueryViews, labkey.getQueryDetails,
labkey.getDefaultViewDetails, labkey.getLookupDetails
Examples
## Not run:
## List of queries in a schema
# library(Rlabkey)
queriesDF <- labkey.getQueries(
baseUrl="https://www.labkey.org",
folderPath="/home",
schemaName="lists"
)
queriesDF
## End(Not run)
labkey.getQueryDetails
Retrieve detailed information on a LabKey query
Description
Fetch a list of output columns and their attributes that are available from a given query
Usage
labkey.getQueryDetails(baseUrl, folderPath, schemaName, queryName)
Arguments
baseUrl a string specifying the address of the LabKey Server, including the context root
folderPath a string specifying the hierarchy of folders to the current folder (container) for
the operation, starting with the project folder
schemaName a string specifying the schema name in which the query object is defined
queryName a string specifying the name of the query
Details
Queries have a default output list of fields defined by the "default view" of the query. To retrieve that
set of fields with their detailed properties such as type and nullability, use the labkey.getQueryDetails
function. Function arguments are the components of the url that identify the location of the server,
the folder path, the schema, and the name of the query.
The results from getQueryDetails describe the “field names” that are used to build the colSelect,
colFilter and colSort parameters to selectRows. Each column in the data frame returned from
selectRows corresponds to a field in the colSelect list.
There are two types of fieldNames that will be reported by the server in the output of this function.
For fields that are directly defined in the query corresponding to the queryName parameter for this
function, the fieldName is simply the name assigned by the query. Because selectRows returns the
results specified by the default view, however, there may be cases where this default view incor-
porates data from other queries that have a defined 1-M relationship with the table designated by
the queryName. Such fields in related tables are referred to as “lookup” fields. Lookup fields have
multi-part names using a forward slash as the delimiter. For example, in a samples data set, if the
ParticipantId identifies the source of the sample, ParticipantId/CohortId/CohortName could be
a reference to a CohortName field in a Cohorts data set.
These lookup fieldNames can appear in the default view and show up in the selectRows result. If
a field from a lookup table is not in the default view, it can still be added to the output column list
of labkey.selectRows. Use labkey.getLookupDetails to discover what additional fields are available
via lookups, and then put their multipart fieldName values into the colSelect list. Lookup fields have
the semantics of a LEFT JOIN in SQL, such that every record from the target queryName appears
in the output whether or not there is a matching lookup field value.
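As a brief sketch of this usage (the field names below mirror the AllTypes list used in other examples
in this manual and are assumptions about that sample list, not values guaranteed to be returned by
getQueryDetails), a multi-part lookup fieldName can be passed straight into colSelect:
## Not run:
# library(Rlabkey)
fieldsDF <- labkey.getQueryDetails(
    baseUrl="http://localhost:8080/labkey",
    folderPath="/apisamples",
    schemaName="lists",
    queryName="AllTypes")
# lookup fields use multi-part names such as "Category/Category"
rows <- labkey.selectRows(
    baseUrl="http://localhost:8080/labkey",
    folderPath="/apisamples",
    schemaName="lists",
    queryName="AllTypes",
    colSelect=c("DisplayFld", "Category/Category"))
## End(Not run)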
Value
The available fields are returned as a data frame,
queryName the name of the query, repeated n times, where n is the number of output fields from the
query
fieldName the fully qualified name of the field, relative to the specified queryName.
caption a more readable label for the data field, appears as a column header in grids
fieldKey the name part that identifies this field within its containing table, independent of its use
as a lookup target.
type a string specifying the field type, e.g. Text, Number, Date, Integer
isNullable TRUE if the field can be left empty (null)
isKeyField TRUE if the field is part of the primary key
isAutoIncrement TRUE if the system will automatically assign a sequential integer to this field on
inserting a record
isVersionField TRUE if the field is used to detect changes since last read
isHidden TRUE if the field is not displayed by default
isSelectable reserved for future use.
isUserEditable reserved for future use.
isReadOnly reserved for future use
isMvEnabled reserved for future use
lookupKeyField for a field defined as a lookup the primary key column of the query referenced by
the lookup field; NA for non-lookup fields
lookupSchemaName the schema of the query referenced by the lookup field; NA for non-lookup
fields
lookupDisplayField the field from the query referenced by the lookup field that is shown by de-
fault in place of the lookup field; NA for non-lookup fields
lookupQueryName the query referenced by the lookup field; NA for non-lookup fields. A non-NA
value indicates that you can use this field in a call to getLookups
lookupIsPublic reserved for future use
Author(s)
<NAME>, <EMAIL>
See Also
Retrieve data: labkey.selectRows, makeFilter, labkey.executeSql
Modify data: labkey.updateRows, labkey.insertRows, labkey.importRows, labkey.deleteRows
List available data: labkey.getSchemas, labkey.getQueries, labkey.getQueryViews, labkey.getDefaultViewDetails,
labkey.getLookupDetails
Examples
## Not run:
## Details of fields of a query
# library(Rlabkey)
queryDF<-labkey.getQueryDetails(
baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples",
schemaName="lists",
queryName="AllTypes")
## End(Not run)
labkey.getQueryViews Retrieve a list of available named views defined on a query in a schema
Description
Fetch a list of named query views available to the current user in a specified folder context, schema
and query
Usage
labkey.getQueryViews(baseUrl, folderPath, schemaName, queryName)
Arguments
baseUrl a string specifying the address of the LabKey Server, including the context root
folderPath a string specifying the hierarchy of folders to the current folder (container) for
the operation, starting with the project folder
schemaName a string specifying the schema name in which the query object is defined
queryName a string specifying the name of the query
Details
Queries have a default “view” associated with them, and can also have any number of named
views. A named query view is created by using the “Customize View” button option on a LabKey
data grid page. Use getDefaultViewDetails to get information about the default (unnamed) view.
Value
The available views for a query are returned as a three-column data frame, with one row per view
output field.
viewName The name of the view, or NA for the default view.
fieldName The name of a field within the view, as defined in the query object to which the field
belongs
key The name of the field relative to the base query. Use this value in the colSelect parameter of
labkey.selectRows().
Author(s)
<NAME>, <EMAIL>
References
https://www.labkey.org/Documentation/wiki-page.view?name=savingViews
See Also
Retrieve data: labkey.selectRows,makeFilter, labkey.executeSql
Modify data: labkey.updateRows, labkey.insertRows, labkey.importRows, labkey.deleteRows
List available data: labkey.getSchemas, labkey.getQueries, labkey.getQueryDetails, labkey.getDefaultViewDetails
Examples
## Not run:
## List of views defined for a query in a schema
# library(Rlabkey)
viewsDF <- labkey.getQueryViews(
baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples",
schemaName="lists",
queryName="AllTypes"
)
## End(Not run)
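As a hedged follow-on to the example above (relying only on the viewName, fieldName and key
columns described under Value, and the AllTypes sample list), the key values of the default view
could be fed to the colSelect parameter of labkey.selectRows:
## Not run:
# keys of the default (unnamed) view; viewName is NA for the default view
defaultKeys <- viewsDF$key[is.na(viewsDF$viewName)]
rows <- labkey.selectRows(
    baseUrl="http://localhost:8080/labkey",
    folderPath="/apisamples",
    schemaName="lists",
    queryName="AllTypes",
    colSelect=defaultKeys)
## End(Not run)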
labkey.getRequestOptions
Helper function to get the HTTP request options for a specific method
type.
Description
The internal functions for labkey.get() and labkey.post() use this labkey.getRequestOptions() helper
to build up the HTTP request options for things like CSRF, CURL options, and authentication
properties. This function is also exposed for general use if you would like to make your own HTTP
request but need to use those request options as set in your session context.
Usage
labkey.getRequestOptions(method = 'GET', encoding = NULL)
Arguments
method a string specifying the HTTP method for the request options you want to get
encoding a string specifying the type of encoding to add to the header properties, defaults
to UTF-8 when NULL
Author(s)
<NAME>
Examples
## Not run:
library(Rlabkey)
labkey.getRequestOptions()
## End(Not run)
labkey.getSchemas Retrieve a list of available schemas from a labkey database
Description
Fetch a list of schemas available to the current user in a specified folder context
Usage
labkey.getSchemas(baseUrl, folderPath)
Arguments
baseUrl a string specifying the address of the LabKey Server, including the context root
folderPath a string specifying the hierarchy of folders to the current folder (container) for
the operation, starting with the project folder
Details
Schemas act as the name space for query objects in LabKey Server. Schemas are generally
associated with a LabKey Server "module" that provides some specific functionality. Within a
queryable object, the specific data that is visible depends on the current user’s permissions in a given folder.
Function arguments are the components of the url that identify the location of the server and the
folder path.
Value
The available schemas are returned as a single-column data frame.
Author(s)
<NAME>, <EMAIL>
References
http://www.omegahat.net/RCurl/,
https://www.labkey.org/project/home/begin.view
See Also
Retrieve data: labkey.selectRows, makeFilter, labkey.executeSql
Modify data: labkey.updateRows, labkey.insertRows, labkey.importRows, labkey.deleteRows
List available data: labkey.getQueries, labkey.getQueryViews, labkey.getQueryDetails,
labkey.getDefaultViewDetails, labkey.getLookupDetails,
Examples
## Not run:
## List of schemas
# library(Rlabkey)
schemasDF <- labkey.getSchemas(
baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples"
)
## End(Not run)
labkey.importRows Import rows of data into a LabKey Server
Description
Bulk import rows of data into the database.
Usage
labkey.importRows(baseUrl, folderPath,
schemaName, queryName, toImport, na)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
schemaName a string specifying the schemaName for the query
queryName a string specifying the queryName
toImport a data frame containing rows of data to be imported
na (optional) the value to convert NA’s to, defaults to NULL
Details
Multiple rows of data can be imported in bulk. The toImport data frame must contain values for
each column in the dataset and must be created with the stringsAsFactors option set to FALSE.
The names of the data in the data frame must be the column names from the LabKey Server. To im-
port a value of NULL, use an empty string ("") in the data frame (regardless of the database column
type). Also, when importing data into a study dataset, the sequence number must be specified.
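As a minimal sketch of these rules (the study dataset, its column names, and the SequenceNum
column standing in for the sequence number are illustrative assumptions, not objects that ship with
LabKey):
## Not run:
# "" stands in for NULL; SequenceNum illustrates the required study sequence number
toImport <- data.frame(
    ParticipantId=c("P1", "P2"),
    SequenceNum=c(100, 100),
    Comments=c("baseline", ""),
    stringsAsFactors=FALSE)
importedInfo <- labkey.importRows("http://localhost:8080/labkey",
    folderPath="/mystudy", schemaName="study", queryName="Demographics",
    toImport=toImport)
## End(Not run)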
Value
A list is returned with named categories of command, rowsAffected, queryName, containerPath
and schemaName. The schemaName, queryName and containerPath properties contain
the same schema, query and folder path used in the request. The rowsAffected property indicates
the number of rows affected by the API action. This will typically be the same number as passed in
the request.
Author(s)
<NAME>
See Also
labkey.selectRows, labkey.executeSql, makeFilter, labkey.insertRows, labkey.updateRows,
labkey.deleteRows, labkey.query.import
Examples
## Not run:
## Note that users must have the necessary permissions in the database
## to be able to modify data through the use of these functions
# library(Rlabkey)
newrows <- data.frame(
DisplayFld="Imported from R"
, RequiredText="abc"
, RequiredInt=1
, stringsAsFactors=FALSE)
newrows = newrows[rep(1:nrow(newrows),each=5),]
importedInfo <- labkey.importRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
toImport=newrows)
importedInfo$rowsAffected
## End(Not run)
labkey.insertRows Insert new rows of data into a LabKey Server
Description
Insert new rows of data into the database.
Usage
labkey.insertRows(baseUrl, folderPath,
schemaName, queryName, toInsert, na, provenanceParams=NULL)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
schemaName a string specifying the schemaName for the query
queryName a string specifying the queryName
toInsert a data frame containing rows of data to be inserted
na (optional) the value to convert NA’s to, defaults to NULL
provenanceParams
the provenance parameter object which contains the options to include as part of
a provenance recording. This is a premium feature and requires the Provenance
LabKey module to function correctly, if it is not present this parameter will be
ignored.
Details
A single row or multiple rows of data can be inserted. The toInsert data frame must contain
values for each column in the dataset and must be created with the stringsAsFactors option set
to FALSE. The names of the data in the data frame must be the column names from the LabKey
Server. To insert a value of NULL, use an empty string ("") in the data frame (regardless of the
database column type). Also, when inserting data into a study dataset, the sequence number must
be specified.
Value
A list is returned with named categories of command, rowsAffected, rows, queryName, containerPath
and schemaName. The schemaName, queryName and containerPath properties contain
the same schema, query and folder path used in the request. The rowsAffected property indicates
the number of rows affected by the API action. This will typically be the same number as passed in
the request. The rows property contains a list of row objects corresponding to the rows inserted.
Author(s)
<NAME>
See Also
labkey.selectRows, labkey.executeSql, makeFilter, labkey.importRows, labkey.updateRows,
labkey.deleteRows, labkey.query.import, labkey.provenance.createProvenanceParams,
labkey.provenance.startRecording, labkey.provenance.addRecordingStep, labkey.provenance.stopRecording
Examples
## Not run:
## Insert, update and delete
## Note that users must have the necessary permissions in the database
## to be able to modify data through the use of these functions
# library(Rlabkey)
newrow <- data.frame(
DisplayFld="Inserted from R"
, TextFld="how its done"
, IntFld= 98
, DoubleFld = 12.345
, DateTimeFld = "03/01/2010"
, BooleanFld= FALSE
, LongTextFld = "Four score and seven years ago"
# , AttachmentFld = NA #attachment fields not supported
, RequiredText = "Veni, vidi, vici"
, RequiredInt = 0
, Category = "LOOKUP2"
, stringsAsFactors=FALSE)
insertedRow <- labkey.insertRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
toInsert=newrow)
newRowId <- insertedRow$rows[[1]]$RowId
selectedRow<-labkey.selectRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
colFilter=makeFilter(c("RowId", "EQUALS", newRowId)))
updaterow=data.frame(
RowId=newRowId
, DisplayFld="Updated from R"
, TextFld="how to update"
, IntFld= 777
, stringsAsFactors=FALSE)
updatedRow <- labkey.updateRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
toUpdate=updaterow)
selectedRow<-labkey.selectRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
colFilter=makeFilter(c("RowId", "EQUALS", newRowId)))
deleterow <- data.frame(RowId=newRowId, stringsAsFactors=FALSE)
result <- labkey.deleteRows(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
toDelete=deleterow)
## Example of creating a provenance run with an initial step with material inputs, a second step
## with provenance mapping to link existing samples with newly inserted samples, and a final step
## with a data output
##
mi <- data.frame(lsid=c("urn:lsid:labkey.com:Sample.251.MySamples:sample1",
"urn:lsid:labkey.com:Sample.251.MySamples:sample2"))
p <- labkey.provenance.createProvenanceParams(name="step1", description="initial step",
materialInputs=mi)
ra <- labkey.provenance.startRecording(baseUrl="https://labkey.org/labkey/",
folderPath = "Provenance", provenanceParams=p)
rows <- fromJSON(txt='[{
"name" : "sample3",
"protein" : "p3",
"prov:objectInputs" : [
"urn:lsid:labkey.com:Sample.251.MySamples:sample21",
"urn:lsid:labkey.com:Sample.251.MySamples:sample22"
]
},{
"name" : "sample4",
"protein" : "p4",
"prov:objectInputs" : [
"urn:lsid:labkey.com:Sample.251.MySamples:sample21",
"urn:lsid:labkey.com:Sample.251.MySamples:sample22"
]
}
]')
labkey.insertRows(baseUrl="https://labkey.org/labkey/", folderPath = "Provenance",
schemaName="samples", queryName="MySamples", toInsert=rows,
provenanceParams=labkey.provenance.createProvenanceParams(name="query step",
recordingId=ra$recordingId))
labkey.provenance.stopRecording(baseUrl="https://labkey.org/labkey/", folderPath = "Provenance",
provenanceParams=labkey.provenance.createProvenanceParams(name="final step",
recordingId=ra$recordingId, dataOutputs=do))
## End(Not run)
labkey.makeRemotePath Build a file path to data on a remote machine
Description
Replaces a local root with a remote root given a full path
Usage
labkey.makeRemotePath(localRoot, remoteRoot, fullPath)
Arguments
localRoot local root part of the fullPath
remoteRoot remote root that will replace the local root of the fullPath
fullPath the full path to make remote
Details
A helper function to translate a file path on a LabKey web server to a path accessible by a remote
machine. For example, if an R script is run on an R server that is a different machine than the
LabKey server and that script references data files on the LabKey server, a remote path needs to be
created to correctly reference these files. The local and remote roots of the data pipeline are included
by LabKey in the prolog of an R View report script. Note that the data pipeline root references
are only included if an administrator has enabled the Rserve Reports experimental feature on the
LabKey server. If the remoteRoot is empty or the fullPath does not contain the localRoot then the
fullPath is returned without its root being changed.
Value
A character array containing the full path.
Author(s)
<NAME>
Examples
# library(Rlabkey)
labkey.pipeline.root <- "c:/data/fcs"
labkey.remote.pipeline.root <- "/volumes/fcs"
fcsFile <- "c:/data/fcs/runA/aaa.fcs"
# returns "/volumes/fcs/runA/aaa.fcs"
labkey.makeRemotePath(
localRoot=labkey.pipeline.root,
remoteRoot=labkey.remote.pipeline.root,
fullPath=fcsFile);
labkey.pipeline.getFileStatus
Gets the protocol file status for a pipeline
Description
Gets the status of analysis using a particular protocol for a particular pipeline.
Usage
labkey.pipeline.getFileStatus(baseUrl=NULL, folderPath,
taskId, protocolName, path, files)
Arguments
baseUrl a string specifying the baseUrl for the LabKey server
folderPath a string specifying the folderPath
taskId a string identifier for the pipeline
protocolName a string name of the analysis protocol
path a string for the relative path from the folder’s pipeline root
files a list of names of the files within the subdirectory described by the path property
Value
The response will contain a list of file status objects, i.e. files, each of which will have the following
properties:
• "name": name of the file
• "status": status of the file
The response will also include the name of the action that would be performed on the files if the
user initiated processing, i.e. submitType.
Author(s)
<NAME>
See Also
labkey.pipeline.getPipelineContainer, labkey.pipeline.getProtocols, labkey.pipeline.startAnalysis
Examples
## Not run:
labkey.pipeline.getFileStatus(
baseUrl="http://labkey/",
folderPath="home",
taskId = "pipelinetest:pipeline:r-copy",
path = "r-copy",
protocolName = "Test protocol name",
files = list("sample.txt", "result.txt")
)
## End(Not run)
labkey.pipeline.getPipelineContainer
Gets the container in which the pipeline is defined
Description
Gets the container in which the pipeline for this container is defined. This may be the container in
which the request was made, or a parent container if the pipeline was defined there.
Usage
labkey.pipeline.getPipelineContainer(baseUrl=NULL, folderPath)
Arguments
baseUrl a string specifying the baseUrl for the LabKey server
folderPath a string specifying the folderPath
Value
The response will contain the following:
• "containerPath": The container path in which the pipeline is defined. If no pipeline has been
defined in this container hierarchy, then the value of this property will be null.
• "webDavURL": The WebDavURL for the pipeline root.
Author(s)
<NAME>
See Also
labkey.pipeline.getProtocols, labkey.pipeline.getFileStatus, labkey.pipeline.startAnalysis
Examples
## Not run:
labkey.pipeline.getPipelineContainer(
baseUrl="http://labkey/",
folderPath="home"
)
## End(Not run)
labkey.pipeline.getProtocols
Gets the protocols that have been saved for a particular pipeline
Description
Gets the protocols that have been saved for a particular pipeline.
Usage
labkey.pipeline.getProtocols(baseUrl=NULL, folderPath,
taskId, path, includeWorkbooks = FALSE)
Arguments
baseUrl a string specifying the baseUrl for the LabKey server
folderPath a string specifying the folderPath
taskId a string identifier for the pipeline
path a string for the relative path from the folder’s pipeline root
includeWorkbooks
(optional) If true, protocols from workbooks under the selected container will
also be included. Defaults to FALSE.
Value
The response will contain a list of protocol objects, each of which will have the following properties:
• "name": Name of the saved protocol.
• "description": Description of the saved protocol, if provided.
• "xmlParameters": Bioml representation of the parameters defined by this protocol.
• "jsonParameters": A list representation of the parameters defined by this protocol.
• "containerPath": The container path where this protocol was saved.
The response will also include a defaultProtocolName property representing which of the protocol
names is the default.
Author(s)
<NAME>
See Also
labkey.pipeline.getPipelineContainer, labkey.pipeline.getFileStatus, labkey.pipeline.startAnalysis
Examples
## Not run:
labkey.pipeline.getProtocols(
baseUrl="http://labkey/",
folderPath="home",
taskId = "pipelinetest:pipeline:r-copy",
path = "r-copy",
includeWorkbooks = FALSE
)
## End(Not run)
labkey.pipeline.startAnalysis
Start an analysis of a set of files using a pipeline
Description
Starts analysis of a set of files using a particular protocol definition with a particular pipeline.
Usage
labkey.pipeline.startAnalysis(baseUrl=NULL, folderPath,
taskId, protocolName, path, files, fileIds = list(),
pipelineDescription = NULL, protocolDescription = NULL,
jsonParameters = NULL, xmlParameters = NULL,
allowNonExistentFiles = FALSE, saveProtocol = TRUE)
Arguments
baseUrl a string specifying the baseUrl for the LabKey server
folderPath a string specifying the folderPath
taskId a string identifier for the pipeline
protocolName a string name of the analysis protocol
path a string for the relative path from the folder’s pipeline root
files a list of names of the files within the subdirectory described by the path property
fileIds (optional) list of data IDs of files to be used as inputs for this pipeline. These
correspond to the rowIds from the table exp.data. They do not need to be located
within the file path provided. The user does need read access to the container
associated with each file.
pipelineDescription
(optional) a string description displayed in the pipeline
protocolDescription
(optional) a string description of the analysis protocol
jsonParameters (optional) a list of key / value pairs, or a JSON string representation, for the
protocol description. Not allowed if a protocol with the same name has already
been saved. If no protocol with the same name exists, either this property or
xmlParameters must be specified.
xmlParameters (optional) a string XML representation of the protocol description. Not allowed
if a protocol with the same name has already been saved. If no protocol with the
same name exists, either this property or jsonParameters must be specified.
allowNonExistentFiles
(optional) a boolean indicating if the pipeline should allow non-existent files.
Defaults to false.
saveProtocol (optional) a boolean indicating whether to save this protocol definition for future use
when no protocol with this name already exists. Defaults to true.
Value
On success, the response will contain the jobGUID string value for the newly created pipeline job.
Author(s)
<NAME>
See Also
labkey.pipeline.getPipelineContainer, labkey.pipeline.getProtocols, labkey.pipeline.getFileStatus
Examples
## Not run:
labkey.pipeline.startAnalysis(
baseUrl="http://labkey/",
folderPath="home",
taskId = "pipelinetest:pipeline:r-copy",
protocolName = "Test protocol name",
path="r-copy",
files = list("sample.txt", "result.txt"),
protocolDescription = "Test protocol description",
pipelineDescription = "test pipeline description",
jsonParameters = list(assay = "Test assay name", comment = "Test assay comment"),
saveProtocol = TRUE
)
## End(Not run)
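A protocol can alternatively be described with the xmlParameters argument instead of jsonParameters.
The following sketch assumes a bioml-style parameter document; the note label used here ("assay") is a
hypothetical placeholder, since the real labels depend on the pipeline task definition.
## Not run:
## illustrative sketch only: start an analysis with an XML (bioml) protocol description;
## the note label "assay" is a hypothetical placeholder
xml <- paste0('<?xml version="1.0"?>',
'<bioml>',
'<note label="assay" type="input">Test assay name</note>',
'</bioml>')
labkey.pipeline.startAnalysis(
baseUrl="http://labkey/",
folderPath="home",
taskId = "pipelinetest:pipeline:r-copy",
protocolName = "XML protocol example",
path = "r-copy",
files = list("sample.txt"),
xmlParameters = xml
)
## End(Not run)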
labkey.provenance.addRecordingStep
Add a step to a provenance recording
Description
Function to add a step to a previously created provenance recording session. Note: this function is
in beta and not yet final; changes should be expected, so exercise caution when using it.
Usage
labkey.provenance.addRecordingStep(baseUrl=NULL, folderPath, provenanceParams = NULL)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
provenanceParams
the provenance parameter object which contains the options to include in this
recording step
Details
Function to add a step to a previously created provenance recording. The recording ID that was
obtained from a previous startRecording function call must be passed into the provenanceParams
config. This is a premium feature and requires the Provenance LabKey module to function correctly.
Value
The generated recording ID which can be used in subsequent steps (or queries that support prove-
nance).
Author(s)
<NAME>
See Also
labkey.provenance.createProvenanceParams, labkey.provenance.startRecording, labkey.provenance.stopRecording
Examples
## Not run:
## start a provenance recording and add a recording step
library(Rlabkey)
mi <- data.frame(lsid=c("urn:lsid:labkey.com:Sample.251.MySamples:sample1",
"urn:lsid:labkey.com:Sample.251.MySamples:sample2"))
p <- labkey.provenance.createProvenanceParams(name="step1", description="initial step",
materialInputs=mi)
r <- labkey.provenance.startRecording(baseUrl="https://labkey.org/labkey/",
folderPath = "Provenance", provenanceParams=p)
do <- data.frame(
lsid="urn:lsid:labkey.com:AssayRunTSVData.Folder-251:12c70994-7ce5-1038-82f0-9c1487dbd334")
labkey.provenance.addRecordingStep(baseUrl="https://labkey.org/labkey/", folderPath = "Provenance",
provenanceParams=labkey.provenance.createProvenanceParams(name="additional step",
recordingId=r$recordingId, dataOutputs=do))
## End(Not run)
labkey.provenance.createProvenanceParams
Create provenance parameter object
Description
Helper function to create the data structure that can be used in provenance related APIs. Note: this
function is in beta and not yet final; changes should be expected, so exercise caution when using it.
Usage
labkey.provenance.createProvenanceParams(recordingId=NULL, name=NULL, description=NULL,
runName=NULL, materialInputs=NULL, materialOutputs=NULL, dataInputs=NULL,
dataOutputs=NULL, inputObjectUriProperty=NULL, outputObjectUriProperty=NULL,
objectInputs=NULL, objectOutputs=NULL, provenanceMap=NULL,
params=NULL, properties=NULL)
Arguments
recordingId (optional) the recording ID to associate with other steps using the same ID
name (optional) the name of this provenance step
description (optional) the description of this provenance step
runName (optional) the name of the provenance run; if none is specified, a default run name
will be created
materialInputs (optional) the list of materials (samples) to be used as the provenance run input.
The data structure should be a dataframe with the column name describing the
data type (lsid, id)
materialOutputs
(optional) the list of materials (samples) to be used as the provenance run output.
The data structure should be a dataframe with the column name describing the
data type (lsid, id)
dataInputs (optional) the list of data inputs to be used for the run provenance map
dataOutputs (optional) the list of data outputs to be used for the run provenance map
inputObjectUriProperty
(optional) for incoming data rows, the column name to interpret as the input to
the provenance map. Defaults to : ’prov:objectInputs’
outputObjectUriProperty
(optional) for provenance mapping, the column name to interpret as the output
to the provenance map. Defaults to : ’lsid’
objectInputs (optional) the list of object inputs to be used for the run provenance map
objectOutputs (optional) the list of object outputs to be used for the run provenance map
provenanceMap (optional) the provenance map to be used directly for the run step. The data
structure should be a dataframe with the column names of ’from’ and ’to’ to
indicate which sides of the mapping the identifiers refer to
params (optional) the list of initial run step parameters. Parameters supported in the
parameter list, such as name, description, and runName, can be specified in this data
structure, as well as other run step parameters not available in the parameter list
properties (optional) custom property values to associate with the run step. The data struc-
ture should be a dataframe with the property URIs as column names and the
column value to associate with the property. The Vocabulary domain and fields
must have been created prior to use.
Details
This function can be used to generate a provenance parameter object which can then be used as an
argument in the other provenance related functions to assemble provenance runs. This is a premium
feature and requires the Provenance LabKey module to function correctly.
Value
A list containing elements describing the passed in provenance parameters.
Author(s)
<NAME>
See Also
labkey.provenance.startRecording, labkey.provenance.addRecordingStep, labkey.provenance.stopRecording
Examples
## Not run:
## create provenance params with material inputs and data outputs
library(Rlabkey)
mi <- data.frame(lsid=c("urn:lsid:labkey.com:Sample.251.MySamples:sample1",
"urn:lsid:labkey.com:Sample.251.MySamples:sample2"))
do <- data.frame(
lsid="urn:lsid:labkey.com:AssayRunTSVData.Folder-251:12c70994-7ce5-1038-82f0-9c1487dbd334")
p <- labkey.provenance.createProvenanceParams(name="step1", description="initial step",
materialInputs=mi, dataOutputs=do)
## create provenance params with object inputs (from an assay run)
oi <- labkey.selectRows(baseUrl="https://labkey.org/labkey/", folderPath = "Provenance",
schemaName="assay.General.titer",
queryName="Data",
colSelect= c("LSID"),
colFilter=makeFilter(c("Run/RowId","EQUAL","253")))
mi <- data.frame(lsid=c("urn:lsid:labkey.com:Sample.251.MySamples:sample1",
"urn:lsid:labkey.com:Sample.251.MySamples:sample2"))
p <- labkey.provenance.createProvenanceParams(name="step1", description="initial step",
objectInputs=oi[["LSID"]], materialInputs=mi)
## add run step properties and custom properties to the provenance params
props <- data.frame(
"urn:lsid:labkey.com:Vocabulary.Folder-996:ProvenanceDomain#version"=c(22.3),
"urn:lsid:labkey.com:Vocabulary.Folder-996:ProvenanceDomain#instrumentName"=c("NAb reader"),
check.names=FALSE)
params <- list()
params$comments <- "adding additional step properties"
params$activityDate <- "2022-3-21"
params$startTime <- "2022-3-21 12:35:00"
params$endTime <- "2022-3-22 02:15:30"
params$recordCount <- 2
p <- labkey.provenance.createProvenanceParams(recordingId=ra$recordingId, name="step2",
properties=props, params=params)
## End(Not run)
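The provenanceMap argument can also be supplied directly as a data frame with 'from' and 'to' columns.
This is a minimal sketch that reuses the sample and data LSIDs from the examples above; it is illustrative
only.
## Not run:
## sketch: supply the run provenance map directly as a from/to data frame
pm <- data.frame(
from="urn:lsid:labkey.com:Sample.251.MySamples:sample1",
to="urn:lsid:labkey.com:AssayRunTSVData.Folder-251:12c70994-7ce5-1038-82f0-9c1487dbd334",
stringsAsFactors=FALSE)
p <- labkey.provenance.createProvenanceParams(name="mapped step", provenanceMap=pm)
## End(Not run)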
labkey.provenance.startRecording
Start a provenance recording
Description
Function to start a provenance recording session. If successful, a provenance recording ID is returned
which can be used to add additional steps to the provenance run. Note: this function is in beta and
not yet final; changes should be expected, so exercise caution when using it.
Usage
labkey.provenance.startRecording(baseUrl=NULL, folderPath, provenanceParams = NULL)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
provenanceParams
the provenance parameter object which contains the options to include in this
recording step
Details
Function to start a provenance recording. A provenance recording can contain an arbitrary number
of steps to create a provenance run, but stopRecording must be called to finish the recording and
create the run. If successful this will return a recording ID which is needed for subsequent steps.
This is a premium feature and requires the Provenance LabKey module to function correctly.
Value
The generated recording ID which can be used in subsequent steps (or queries that support prove-
nance).
Author(s)
<NAME>
See Also
labkey.provenance.createProvenanceParams, labkey.provenance.addRecordingStep, labkey.provenance.stopRecording
Examples
## Not run:
## create provenance params with material inputs and data outputs and start a recording
library(Rlabkey)
mi <- data.frame(lsid=c("urn:lsid:labkey.com:Sample.251.MySamples:sample1",
"urn:lsid:labkey.com:Sample.251.MySamples:sample2"))
do <- data.frame(
lsid="urn:lsid:labkey.com:AssayRunTSVData.Folder-251:12c70994-7ce5-1038-82f0-9c1487dbd334")
p <- labkey.provenance.createProvenanceParams(name="step1", description="initial step",
materialInputs=mi, dataOutputs=do)
labkey.provenance.startRecording(baseUrl="https://labkey.org/labkey/",
folderPath = "Provenance", provenanceParams=p)
## End(Not run)
labkey.provenance.stopRecording
Stop a provenance recording
Description
Function to end a provenance recording and create and save the provenance run on the server. Note:
this function is in beta and not yet final; changes should be expected, so exercise caution when
using it.
Usage
labkey.provenance.stopRecording(baseUrl=NULL, folderPath, provenanceParams = NULL)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
provenanceParams
the provenance parameter object which contains the options to include in this
recording step, including the recording ID
Details
Function to stop the provenance recording associated with the recording ID, this will create a prove-
nance run using all the steps (with inputs and outputs) associated with the recording ID. The record-
ing ID that was obtained from a previous startRecording function call must be passed into the
provenanceParams config. This is a premium feature and requires the Provenance LabKey module
to function correctly.
Value
The serialized provenance run that was created.
Author(s)
<NAME>
See Also
labkey.provenance.createProvenanceParams, labkey.provenance.startRecording, labkey.provenance.addRecordingStep
Examples
## Not run:
library(Rlabkey)
## object inputs (from an assay run) and material inputs
##
oi <- labkey.selectRows(baseUrl="https://labkey.org/labkey/", folderPath = "Provenance",
schemaName="assay.General.titer",
queryName="Data",
colSelect= c("LSID"),
colFilter=makeFilter(c("Run/RowId","EQUAL","253")))
mi <- data.frame(lsid=c("urn:lsid:labkey.com:Sample.251.MySamples:sample1",
"urn:lsid:labkey.com:Sample.251.MySamples:sample2"))
p <- labkey.provenance.createProvenanceParams(name="step1", description="initial step",
objectInputs=oi[["LSID"]], materialInputs=mi)
r <- labkey.provenance.startRecording(baseUrl="https://labkey.org/labkey/",
folderPath = "Provenance", provenanceParams=p)
run <- labkey.provenance.stopRecording(baseUrl="https://labkey.org/labkey/",
folderPath = "Provenance",
provenanceParams=labkey.provenance.createProvenanceParams(name="final step",
recordingId=r$recordingId))
## End(Not run)
labkey.query.import Bulk import an R data frame into a LabKey Server table using file
import.
Description
Bulk import an R data frame into a LabKey Server table using file import.
Usage
labkey.query.import(baseUrl, folderPath,
schemaName, queryName, toImport, options = NULL)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
schemaName a string specifying the schemaName for the query
queryName a string specifying the queryName
toImport a data frame containing rows of data to be imported
options (optional) a list containing options specific to the import action of the query
Details
This command mimics the "Import bulk data" option that you see in the LabKey server UI for a ta-
ble/query. It takes the passed in toImport data frame and writes it to a temp file to be posted to the
import action for the given LabKey query. It is very similar to the labkey.importRows command
but will be much more performant.
Multiple rows of data can be imported in bulk using the toImport data frame. The names of
the data in the data frame must be the column names from the LabKey Server.
LabKey data types support different import options. The list of valid options for each query will
vary, but some common examples include:
• insertOption (string) : Whether the import action should be done as an insert, creating
new rows for each provided row of the data frame, or a merge. When merging during import,
any data you provide for the rows representing records that already exist will replace the
previous values. Note that when updating an existing record, you only need to provide the
columns you wish to update, existing data for other columns will be left as is. Available
options are "INSERT" and "MERGE". Defaults to "INSERT".
• auditBehavior (string) : Set the level of auditing details for this import action. Available
options are "SUMMARY" and "DETAILED". SUMMARY - Audit log reflects that a change
was made, but does not mention the nature of the change. DETAILED - Provides full details
on what change was made, including values before and after the change. Defaults to the setting
as specified by the LabKey query.
• importLookupByAlternateKey (boolean) : Allows lookup target rows to be resolved by
values rather than the target’s primary key. This option will only be available for lookups that
are configured with unique column information. Defaults to FALSE.
Value
A list is returned with the row count for the number of affected rows. If options are provided,
additional details may be included in the response object related to those options.
Author(s)
<NAME>
See Also
labkey.insertRows, labkey.updateRows, labkey.importRows
Examples
## Not run:
## Note that users must have the necessary permissions in the database
## to be able to modify data through the use of these functions
# library(Rlabkey)
df <- data.frame(
name=c("test1","test2","test3"),
customInt=c(1:3),
customString=c("aaa", "bbb", "ccc")
)
importedInfo <- labkey.query.import(
"http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="samples", queryName="SampleType1",
toImport=df, options=list(insertOption = "MERGE", auditBehavior = "DETAILED")
)
importedInfo$rowCount
## End(Not run)
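The importLookupByAlternateKey option described above can be passed in the same options list. The
sketch below assumes the SampleType1 query from the example above has a lookup column configured with
unique column information; it is illustrative only.
## Not run:
## sketch: resolve lookup target rows by value rather than primary key during import
df2 <- data.frame(
name=c("test4"),
customInt=c(4),
customString=c("ddd")
)
importedInfo <- labkey.query.import(
"http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="samples", queryName="SampleType1",
toImport=df2, options=list(importLookupByAlternateKey = TRUE)
)
## End(Not run)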
labkey.rstudio.initReport
Initialize a RStudio session for LabKey R report source editing
Description
LabKey-RStudio integration helper. Not intended for use outside RStudio.
Usage
labkey.rstudio.initReport(apiKey = "", baseUrl = "", folderPath,
reportEntityId, skipViewer = FALSE, skipEdit = FALSE)
Arguments
apiKey session key from your server
baseUrl server location including context path, if any. e.g. https://www.labkey.org/
folderPath a string specifying the folderPath
reportEntityId LabKey report’s entityId
skipViewer (TRUE | FALSE) TRUE to skip setting up LabKey schema viewer in RStudio
skipEdit (TRUE | FALSE) TRUE to skip opening the file in the editor
Examples
## Not run:
## RStudio console only
library(Rlabkey)
labkey.rstudio.initReport(apiKey="<KEY>",
baseUrl="http://labkey/", folderPath="home",
reportEntityId="0123456a-789b-1000-abcd-01234567abcde")
## End(Not run)
labkey.rstudio.initRStudio
Initialize a RStudio session for LabKey integration
Description
LabKey-RStudio integration helper. Not intended for use outside RStudio.
Usage
labkey.rstudio.initRStudio(apiKey = "", baseUrl = "", folderPath, skipViewer = FALSE)
Arguments
apiKey session key from your server
baseUrl server location including context path, if any. e.g. https://www.labkey.org/
folderPath a string specifying the folderPath
skipViewer (TRUE | FALSE) TRUE to skip setting up LabKey schema viewer in RStudio
Examples
## Not run:
## RStudio console only
library(Rlabkey)
labkey.rstudio.initRStudio(apiKey="<KEY>",
baseUrl="http://labkey/", folderPath="home")
## End(Not run)
labkey.rstudio.initSession
Initialize a RStudio session for LabKey integration using a one time
request id
Description
LabKey-RStudio integration helper. Not intended for use outside RStudio.
Usage
labkey.rstudio.initSession(requestId, baseUrl)
Arguments
requestId A one time request id generated by LabKey server for initializing RStudio
baseUrl server location including context path, if any. e.g. https://www.labkey.org/
Examples
## Not run:
## RStudio console only
library(Rlabkey)
labkey.rstudio.initSession(requestId="a60228c8-9448-1036-a7c5-ab541dc15ee9",
baseUrl="http://labkey/")
## End(Not run)
labkey.rstudio.isInitialized
Check valid rlabkey session
Description
LabKey-RStudio integration helper. Not intended for use outside RStudio.
Usage
labkey.rstudio.isInitialized()
Examples
## Not run:
## RStudio console only
library(Rlabkey)
labkey.rstudio.isInitialized()
## End(Not run)
labkey.rstudio.saveReport
Update RStudio report source back to LabKey
Description
LabKey-RStudio integration helper. Not intended for use outside RStudio.
Usage
labkey.rstudio.saveReport(folderPath, reportEntityId, reportFilename,
useWarning = FALSE)
Arguments
folderPath a string specifying the folderPath
reportEntityId LabKey report’s entityId
reportFilename The filename to save
useWarning (TRUE | FALSE) TRUE to prompt user choices to save
Examples
## Not run:
## RStudio console only
library(Rlabkey)
labkey.rstudio.saveReport(folderPath="home",
reportEntityId="0123456a-789b-1000-abcd-01234567abcde",
reportFilename="knitrReport.Rhtml", useWarning=TRUE)
## End(Not run)
labkey.saveBatch Save an assay batch object to a labkey database
Description
Save an assay batch object to a labkey database
Usage
labkey.saveBatch(baseUrl, folderPath, assayName, resultDataFrame,
batchPropertyList=NULL, runPropertyList=NULL)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
assayName a string specifying the name of the assay instance
resultDataFrame
a data frame containing rows of data to be inserted
batchPropertyList
a list of batch Properties
runPropertyList
a list of run Properties
Details
This function has been deprecated and will be removed in a future release; please use labkey.experiment.saveBatch
instead, as it supports the newer options for saving batch objects.
To save an R data.frame to an assay result set, you must create a named assay using the "General"
assay provider. Note that saveBatch currently supports only a single run with one result set per
batch.
Value
Returns the object representation of the Assay batch.
Author(s)
<NAME>
References
https://www.labkey.org/Documentation/wiki-page.view?name=createDatasetViaAssay
See Also
labkey.selectRows, labkey.executeSql, makeFilter, labkey.updateRows,
labkey.deleteRows, labkey.experiment.saveBatch
Examples
## Not run:
## Very simple example of an analysis flow: query some data, calculate
## some stats, then save the calculations as an assay result set in
## LabKey Server
## Note this example expects to find an assay named "SimpleMeans" in
## the apisamples project
# library(Rlabkey)
simpledf <- labkey.selectRows(
baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples",
schemaName="lists",
queryName="AllTypes")
## some dummy calculations to produce an example analysis result
testtable <- simpledf[,3:4]
colnames(testtable) <- c("IntFld", "DoubleFld")
row <- c(list("Measure"="colMeans"), colMeans(testtable, na.rm=TRUE))
results <- data.frame(row, row.names=NULL, stringsAsFactors=FALSE)
row <- c(list("Measure"="colSums"), colSums(testtable, na.rm=TRUE))
results <- rbind(results, as.vector(row))
bprops <- list(LabNotes="this is a simple demo")
bpl <- list(name=paste("Batch ", as.character(date())),properties=bprops)
rpl <- list(name=paste("Assay Run ", as.character(date())))
assayInfo<- labkey.saveBatch(
baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples",
"SimpleMeans",
results,
batchPropertyList=bpl,
runPropertyList=rpl
)
## End(Not run)
labkey.security.createContainer
Creates a new container, which may be a project, folder, or workbook,
on the server
Description
Create a new container, which may be a project, folder, or workbook, on the LabKey server with
parameters to control the container's name, title, description, and folder type.
Usage
labkey.security.createContainer(baseUrl=NULL, parentPath, name = NULL, title = NULL,
description = NULL, folderType = NULL, isWorkbook = FALSE)
Arguments
baseUrl A string specifying the baseUrl for the labkey server.
parentPath A string specifying the parentPath for the new container.
name The name of the container, required for projects or folders.
title The title of the container, used primarily for workbooks.
description The description of the container, used primarily for workbooks.
folderType The name of the folder type to be applied (e.g. Study or Collaboration).
isWorkbook Whether a workbook should be created. Defaults to false.
Details
This function allows for users with proper permissions to create a new container, which may be
a project, folder, or workbook, on the LabKey server with parameters to control the container's
name, title, description, and folder type. If the container already exists or the user does not have
permissions, an error message will be returned.
Value
Returns information about the newly created container.
Author(s)
<NAME>
See Also
labkey.getFolders, labkey.security.getContainers, labkey.security.deleteContainer,
labkey.security.moveContainer labkey.security.renameContainer
Examples
## Not run:
library(Rlabkey)
labkey.security.createContainer(baseUrl="http://labkey/", parentPath = "/home",
name = "NewFolder", description = "My new folder has this description",
folderType = "Collaboration"
)
## End(Not run)
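Workbooks can be created the same way by setting isWorkbook and providing a title. The following is a
sketch assuming the parent folder supports workbooks.
## Not run:
library(Rlabkey)
## sketch: create a workbook under an existing folder
labkey.security.createContainer(baseUrl="http://labkey/", parentPath = "/home/NewFolder",
title = "My workbook title", description = "A workbook created from R",
isWorkbook = TRUE
)
## End(Not run)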
labkey.security.deleteContainer
Deletes an existing container, which may be a project, folder, or work-
book
Description
Deletes an existing container, which may be a project, folder, or workbook, and all of its children
from the LabKey server.
Usage
labkey.security.deleteContainer(baseUrl=NULL, folderPath)
Arguments
baseUrl A string specifying the baseUrl for the labkey server.
folderPath A string specifying the folderPath to be deleted.
Details
This function allows for users with proper permissions to delete an existing container, which may
be a project, folder, or workbook, from the LabKey server. This will also remove all subfolders of
the container being deleted. If the container does not exist or the user does not have permissions,
an error message will be returned.
Value
Returns a success message for the container deletion action.
Author(s)
<NAME>
See Also
labkey.getFolders, labkey.security.getContainers, labkey.security.createContainer,
labkey.security.moveContainer labkey.security.renameContainer
Examples
## Not run:
library(Rlabkey)
labkey.security.deleteContainer(baseUrl="http://labkey/", folderPath = "/home/FolderToDelete")
## End(Not run)
labkey.security.getContainers
Returns information about the specified container
Description
Returns information about the specified container, including the user’s current permissions within
that container. If the includeSubfolders config option is set to true, it will also return information
about all descendants the user is allowed to see.
Usage
labkey.security.getContainers(baseUrl=NULL, folderPath,
includeEffectivePermissions=TRUE, includeSubfolders=FALSE, depth=50,
includeChildWorkbooks=TRUE, includeStandardProperties = TRUE)
Arguments
baseUrl A string specifying the baseUrl for the labkey server.
folderPath A string specifying the folderPath.
includeEffectivePermissions
If set to false, the effective permissions for this container resource will not be
included (defaults to true).
includeSubfolders
If set to true, the entire branch of containers will be returned. If false, only the
immediate children of the starting container will be returned (defaults to false).
depth May be used to control the depth of recursion if includeSubfolders is set to true.
includeChildWorkbooks
If true, include child containers of type workbook in the response (defaults to
TRUE).
includeStandardProperties
If true, include the standard container properties like title, formats, etc. in the
response (defaults to TRUE).
Details
This function returns information about the specified container, including the user’s current per-
missions within that container. If the includeSubfolders config option is set to true, it will also
return information about all descendants the user is allowed to see. The depth of the results for the
included subfolders can be controlled with the depth parameter.
Value
The data frame containing the container properties for the current folder and subfolders, including
name, title, id, path, type, folderType, and effectivePermissions.
Author(s)
<NAME>
See Also
labkey.getFolders, labkey.security.createContainer, labkey.security.deleteContainer,
labkey.security.moveContainer labkey.security.renameContainer
Examples
## Not run:
library(Rlabkey)
labkey.security.getContainers(
baseUrl="http://labkey/", folderPath = "home",
includeEffectivePermissions = FALSE, includeSubfolders = TRUE, depth = 2,
includeChildWorkbooks = FALSE, includeStandardProperties = FALSE
)
## End(Not run)
labkey.security.impersonateUser
Start impersonating a user
Description
For site-admins or project-admins only, start impersonating a user based on the userId or email
address.
Usage
labkey.security.impersonateUser(baseUrl=NULL, folderPath,
userId=NULL, email=NULL)
Arguments
baseUrl A string specifying the baseUrl for the LabKey server.
folderPath A string specifying the folderPath in which to impersonate the user.
userId The id of the user to be impersonated. Either this or email is required.
email The email of the user to be impersonated. Either this or userId is required.
Details
Admins may impersonate other users to perform actions on their behalf. Site admins may imper-
sonate any user in any project. Project admins must execute this command in a project in which
they have admin permission and may impersonate only users that have access to the project.
To finish an impersonation session use labkey.security.stopImpersonating.
Value
Returns a success message based on a call to labkey.whoAmI.
Author(s)
<NAME>
See Also
labkey.whoAmI, labkey.security.stopImpersonating
Examples
## Not run:
library(Rlabkey)
labkey.security.impersonateUser(baseUrl="http://labkey/", folderPath = "/home",
email = "<EMAIL>"
)
## End(Not run)
labkey.security.moveContainer
Moves an existing container, which may be a folder or workbook
Description
Moves an existing container, which may be a folder or workbook, to be the subfolder of another
folder and/or project on the LabKey server.
Usage
labkey.security.moveContainer(baseUrl=NULL, folderPath,
destinationParent, addAlias = TRUE)
Arguments
baseUrl A string specifying the baseUrl for the labkey server.
folderPath A string specifying the folderPath to be moved. Additionally, the container
entity id is also valid.
destinationParent
The container path of destination parent. Additionally, the destination parent
entity id is also valid.
addAlias Add alias of current container path to container that is being moved (defaults to
true).
Details
This function moves an existing container, which may be a folder or workbook, to be the subfolder
of another folder and/or project on the LabKey server. Projects and the root container can not be
moved. If the target or destination container does not exist or the user does not have permissions,
an error message will be returned.
Value
Returns a success message for the container move action with the new path.
Author(s)
<NAME>
See Also
labkey.getFolders, labkey.security.getContainers, labkey.security.createContainer,
labkey.security.deleteContainer labkey.security.renameContainer
Examples
## Not run:
library(Rlabkey)
labkey.security.moveContainer(baseUrl="http://labkey/", folderPath = "/home/FolderToMove",
destinationParent = "/OtherProject", addAlias = TRUE
)
## End(Not run)
labkey.security.renameContainer
Rename an existing container at the given container path
Description
Renames an existing container at the given container path. This action allows for updating the
container name, title, or both.
Usage
labkey.security.renameContainer(baseUrl=NULL, folderPath,
name=NULL, title=NULL, addAlias=TRUE)
Arguments
baseUrl A string specifying the baseUrl for the labkey server.
folderPath A string specifying the folderPath to be renamed. Additionally, the container
entity id is also valid.
name The new container name. If not specified, the container name will not be changed.
title The new container title. If not specified, the container name will be used.
addAlias Add alias of current container path for the current container name (defaults to
true).
Details
This function renames an existing container at the given container path on the LabKey server. A
new container name and/or title must be specified. If a new name is provided but not a title, the
name will also be set as the container title.
Value
Returns a success message for the container rename action.
Author(s)
<NAME>
See Also
labkey.getFolders, labkey.security.getContainers, labkey.security.createContainer,
labkey.security.deleteContainer labkey.security.moveContainer
Examples
## Not run:
library(Rlabkey)
labkey.security.renameContainer(baseUrl="http://labkey/", folderPath = "/home/OriginalFolder",
name = "NewFolderName", title = "New Folder Title", addAlias = TRUE
)
## End(Not run)
labkey.security.stopImpersonating
Stop impersonating a user
Description
Stop impersonating a user while keeping the original user logged in.
Usage
labkey.security.stopImpersonating(baseUrl=NULL)
Arguments
baseUrl A string specifying the baseUrl for the LabKey server.
Details
If you are currently impersonating a user in this session, you can use this function to stop the
impersonation and return back to the original user logged in.
To start an impersonation session use labkey.security.impersonateUser.
Value
Returns a success message based on a call to labkey.whoAmI.
Author(s)
<NAME>
See Also
labkey.whoAmI, labkey.security.impersonateUser
Examples
## Not run:
library(Rlabkey)
labkey.security.stopImpersonating(baseUrl="http://labkey/")
## End(Not run)
labkey.selectRows Retrieve data from a labkey database
Description
Import full datasets or selected rows into R. The data can be sorted and filtered prior to import.
Usage
labkey.selectRows(baseUrl = NULL, folderPath, schemaName, queryName,
viewName = NULL, colSelect = NULL, maxRows = NULL,
rowOffset = NULL, colSort = NULL, colFilter = NULL,
showHidden = FALSE, colNameOpt="caption",
containerFilter = NULL, parameters = NULL,
includeDisplayValues = FALSE, method = "POST")
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
schemaName a string specifying the schemaName for the query
queryName a string specifying the queryName
viewName (optional) a string specifying the viewName associated with the query. If not
specified, the default view determines the rowset returned.
colSelect (optional) a vector of strings specifying which columns of a dataset or view to
import.
• The wildcard character ("*") may also be used here to get all columns in-
cluding those not in the default view.
• If you include a column that is a lookup (foreign key) the value of the
primary key for that target will be returned.
• Use a forward slash character ("/") to include non-primary key columns from a
lookup target (foreign key), e.g. "LookupColumnName/targetColumn".
• When using a string to specify the colSelect set, the column names must be
separated by a comma and not include spaces between the names.
maxRows (optional) an integer specifying how many rows of data to return. If no value is
specified, all rows are returned.
rowOffset (optional) an integer specifying which row of data should be the first row in the
retrieval. If no value is specified, the retrieval starts with the first row.
colSort (optional) a string including the name of the column to sort preceded by a “+”
or “-” to indicate sort direction
colFilter (optional) a vector or array object created by the makeFilter function which
contains the column name, operator and value of the filter(s) to be applied to the
retrieved data.
showHidden (optional) a logical value indicating whether or not to return data columns that
would normally be hidden from user view. Defaults to FALSE if no value pro-
vided.
colNameOpt (optional) controls the name source for the columns of the output dataframe,
with valid values of ’caption’, ’fieldname’, and ’rname’
containerFilter
(optional) Specifies the containers to include in the scope of selectRows request.
A value of NULL is equivalent to "Current". Valid values are
• "Current": Include the current folder only
• "CurrentAndSubfolders": Include the current folder and all subfolders
• "CurrentPlusProject": Include the current folder and the project that con-
tains it
• "CurrentAndParents": Include the current folder and its parent folders
• "CurrentPlusProjectAndShared": Include the current folder plus its project
plus any shared folders
• "AllFolders": Include all folders for which the user has read permission
parameters (optional) List of name/value pairs for the parameters if the SQL references
underlying queries that are parameterized. For example, parameters=c("X=1",
"Y=2").
includeDisplayValues
(optional) a logical value indicating if display values should be included in the
response object for lookup fields.
method (optional) HTTP method to use for the request, defaults to POST.
Details
A full dataset or any portion of a dataset can be downloaded into an R data frame using the
labkey.selectRows function. Function arguments are the components of the url that identify
the location of the data and what actions should be taken on the data prior to import (ie, sorting,
selecting particular columns or maximum number of rows, etc.).
Stored queries in LabKey Server have an associated default view and may have one or more named
views. Views determine the column set of the return data frame. View columns can be a subset or
superset of the columns of the underlying query (a subset if columns from the query are left out
of the view, and a superset if lookup columns in the underlying query are used to include columns
from related queries). Views can also include filter and sort properties that will make their result
set different from the underlying query. If no view is specified, the columns and rows returned are
determined by the default view, which may not be the same as the result rows of the underlying
query. Please see the topic on Saving Views in the LabKey online documentation.
In the returned data frame, there are three different ways to have the columns named: colNameOpt='caption'
uses the caption value, and is the default option for backward compatibility. It may be the best option
for displaying to another user, but may make scripting more difficult. colNameOpt='fieldname'
uses the field name value, so that the data frame colnames are the same names that are used as
arguments to labkey function calls. It is the default for the new getRows session-based function.
colNameOpt='rname' transforms the field name value into valid R names by substituting an under-
score for both spaces and forward slash (/) characters and lower casing the entire name. This option
is the way a data frame is passed to a script running in a LabKey server in the R View feature of the
data grid. If you are writing scripts for running in an R view on the server, or if you prefer to work
with legal r names in the returned grid, this option may be useful.
For backward compatibility, column names returned by labkey.executeSql and labkey.selectRows
are field captions by default. The getRows function has the same colNameOpt parameter but de-
faults to field names instead of captions.
Value
The requested data are returned in a data frame with stringsAsFactors set to FALSE. Column names
are set as determined by the colNameOpt parameter.
Author(s)
<NAME>
References
https://www.labkey.org/Documentation/wiki-page.view?name=savingViews
See Also
Retrieve data: makeFilter, labkey.executeSql
Modify data: labkey.updateRows, labkey.insertRows, labkey.importRows, labkey.deleteRows
List available data: labkey.getSchemas, labkey.getQueries, labkey.getQueryViews, labkey.getQueryDetails,
labkey.getDefaultViewDetails, labkey.getLookupDetails
Examples
## Not run:
## select from a list named AllTypes
# library(Rlabkey)
rows <- labkey.selectRows(
baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples",
schemaName="lists",
queryName="AllTypes")
## select from a view on that list
viewrows <- labkey.selectRows(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="Lists", queryName="AllTypes",
viewName="rowbyrow")
## select a subset of columns
colSelect=c("TextFld", "IntFld")
subsetcols <- labkey.selectRows(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
colSelect=colSelect)
## including columns from a lookup (foreign key) field
lookupcols <- labkey.selectRows(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
colSelect="TextFld,IntFld,IntFld/LookupValue")
## End(Not run)
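To get data frame column names that are legal R names rather than field captions, set colNameOpt as
described above; a brief sketch:
## Not run:
## select rows using R-friendly column names instead of captions
rnamerows <- labkey.selectRows(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
colNameOpt="rname")
## End(Not run)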
labkey.setCurlOptions Modify the current set of Curl options that are being used in the exist-
ing session
Description
Rlabkey uses the package httr to connect to the LabKey Server.
Arguments
options a variable list of arguments to set the Curl options
ssl_verifyhost check the existence of a common name and also verify that it matches the host-
name provided
ssl_verifypeer specifies whether curl will verify the peer’s certificate
followlocation specify if curl should follow any location header that is sent in the HTTP request
sslversion the SSL version to use
Details
This topic explains how to configure Rlabkey to work with a LabKey Server running SSL.
Rlabkey uses the package httr to connect to the LabKey Server. On Windows, the httr package is
not configured for SSL by default. In order to connect to a HTTPS enabled LabKey Server, you
will need to perform the following steps:
1. Create or download a "ca-bundle" file.
We recommend using the ca-bundle file that is published by Mozilla. See http://curl.haxx.se/docs/caextract.html.
You have two options:
Download the ca-bundle.crt file from the link named "HTTPS from github:" on http://curl.haxx.se/docs/caextract.html
Create your own ca-bundle.crt file using the instructions provided on http://curl.haxx.se/docs/caextract.html
2. Copy the ca-bundle.crt file to a location on your hard-drive.
If you will be the only person using the Rlabkey package on your computer, we recommend that you
create a directory named ‘labkey‘ in your home directory
copy the ca-bundle.crt into the ‘labkey‘ directory
If you are installing this file on a server where multiple users may use the Rlabkey package,
we recommend that you create a directory named ‘c:\labkey‘
copy the ca-bundle.crt into the ‘c:\labkey‘ directory
3. Create a new Environment variable named ‘RLABKEY_CAINFO_FILE‘
On Windows 7, Windows Server 2008 and earlier
Select Computer from the Start menu.
Choose System Properties from the context menu.
Click Advanced system settings > Advanced tab.
Click on Environment Variables.
Under System Variables click on the new button.
For Variable Name: enter RLABKEY_CAINFO_FILE
For Variable Value: enter the path of the ca-bundle.crt you created above.
Hit the Ok buttons to close all the windows.
On Windows 8, Windows 2012 and above
Drag the Mouse pointer to the Right bottom corner of the screen.
Click on the Search icon and type: Control Panel.
Click on -> Control Panel -> System and Security.
Click on System -> Advanced system settings > Advanced tab.
In the System Properties Window, click on Environment Variables.
Under System Variables click on the new button.
For Variable Name: enter RLABKEY_CAINFO_FILE
For Variable Value: enter the path of the ca-bundle.crt you created above.
Hit the Ok buttons to close all the windows.
Now you can start R and begin working.
This command can also be used to provide an alternate location / path to your .netrc file. Example:
labkey.setCurlOptions(NETRC_FILE = '/path/to/alternate/_netrc')
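A minimal sketch of adjusting these options in a session follows; the option values shown are illustrative
and should be replaced with whatever your server configuration requires.
## Not run:
library(Rlabkey)
## verify the peer certificate and that it matches the hostname when connecting over SSL
labkey.setCurlOptions(ssl_verifyhost=2, ssl_verifypeer=TRUE)
## point Rlabkey at an alternate .netrc / _netrc file
labkey.setCurlOptions(NETRC_FILE = '/path/to/alternate/_netrc')
## End(Not run)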
labkey.setDebugMode Helper function to enable/disable debug mode.
Description
When debug mode is enabled, the GET/POST calls will output information about the request being
made and a raw string version of the response object.
Usage
labkey.setDebugMode(debug = FALSE)
Arguments
debug a boolean specifying if debug mode is enabled or disabled
Author(s)
<NAME>
Examples
## Not run:
library(Rlabkey)
labkey.setDebugMode(TRUE)
labkey.executeSql(
baseUrl="http://localhost:8080/labkey",
folderPath="/home",
schemaName="core",
sql = "select * from containers")
## End(Not run)
labkey.setDefaults Set the default parameters used for all http or https requests
Description
Use this function to set the default baseUrl and authentication parameters as package environment
variables to be used for all http or https requests. You can also use labkey.setDefaults() without any
parameters to reset/clear these settings.
Usage
labkey.setDefaults(apiKey="", baseUrl="", email="", password="")
Arguments
apiKey api or session key from your server
baseUrl server location including context path, if any. e.g. https://www.labkey.org/
email user email address
password user password
Details
An API key can be used to authorize Rlabkey functions that access secure content on LabKey
Server. Using an API key avoids copying and storing credentials on the client machine. An API
key can be revoked and set to expire. It also acts as a credential for those who sign in using a single
sign-on authentication mechanism such as CAS or SAML.
A site administrator must first enable the use of API keys on that LabKey Server. Once enabled, any
logged in user can generate an API key by clicking their display name (upper right) and selecting
"External Tool Access". The API Key page creates and displays keys that can be copied and pasted
into a labkey.setDefaults() statement to give an Rlabkey session the permissions of the correspond-
ing user.
If an API key is not provided, you can also use this function for basic authentication via email and
password. Note that both email and password must be set via a labkey.setDefaults() call. If an API
key is also set, that will be given preference and the email/password will not be used for authenti-
cation.
On servers that enable them, a session key can be used in place of an API key. A session key ties all
Rlabkey access to a user’s current browser session, which means the code runs in the same context
as the browser (e.g. same user, same authorizations, same declared terms of use and PHI level, same
impersonation state, etc.). Session keys can be useful in certain compliance scenarios.
Once valid credentials are provided to labkey.setDefaults(), subsequent labkey.get or labkey.post
API calls will authenticate using those credentials.
Examples
## Example of setting and clearing email/password, API key, and Session key
# library(Rlabkey)
labkey.setDefaults(email="<EMAIL>", password="password")
## Functions invoked at this point respect the role assignments and
## other authorizations of the specified user
## A user can create an API key via the LabKey UI and set it as follows:
labkey.setDefaults(apiKey="<KEY>")
## Functions invoked at this point respect the role assignments and
## other authorizations of the user who created the API key
## A user can create a session key via the LabKey UI and set it as follows:
labkey.setDefaults(apiKey="<KEY>")
## Functions invoked at this point share authorization
## and session information with the user's browser session
labkey.setDefaults() # called without any parameters will reset/clear the environment variables
labkey.setModuleProperty
Set module property value
Description
Set module property value for a specific folder or site-wide (with folderPath '/')
Usage
labkey.setModuleProperty(baseUrl=NULL, folderPath, moduleName, propName, propValue)
Arguments
baseUrl server location including context path, if any. e.g. https://www.labkey.org/
folderPath a string specifying the folderPath
moduleName name of the module
propName The module property name
propValue The module property value to save
Examples
## Not run:
library(Rlabkey)
labkey.setModuleProperty(baseUrl="http://labkey/", folderPath="flowProject",
moduleName="flow", propName="ExportToScriptFormat", propValue="zip")
## End(Not run)
labkey.storage.create Create a new LabKey Freezer Manager storage item
Description
Create a new LabKey Freezer Manager storage item that can be used in the creation of a storage
hierarchy. Storage items can be of the following types: Physical Location, Freezer, Primary Storage,
Shelf, Rack, Canister, Storage Unit Type, or Terminal Storage Location.
Usage
labkey.storage.create(baseUrl=NULL, folderPath, type, props)
Arguments
baseUrl a string specifying the baseUrl for the LabKey server
folderPath a string specifying the folderPath
type a string specifying the type of storage item to create
props a list of properties for the storage item (i.e. name, description, etc.)
Value
A list containing a data element with the property values for the newly created storage item.
Author(s)
<NAME>
See Also
labkey.storage.update, labkey.storage.delete
Examples
## Not run:
library(Rlabkey)
## create a storage Freezer with a Shelf and 2 Plates on that Shelf
freezer <- labkey.storage.create(
baseUrl="http://labkey/",
folderPath="home",
type="Freezer",
props=list(name="Test Freezer", description="My example storage freezer")
)
shelf = labkey.storage.create(
baseUrl="http://labkey/",
folderPath="home",
type="Shelf",
props=list(name="Test Shelf", locationId=freezer$data$rowId )
)
plateType = labkey.storage.create(
baseUrl="http://labkey/",
folderPath="home",
type="Storage Unit Type",
props=list(name="Test 8X12 Well Plate", unitType="Plate", rows=8, cols=12 )
)
plate1 = labkey.storage.create(
baseUrl="http://labkey/",
folderPath="home",
type="Terminal Storage Location",
props=list(name="Plate #1", typeId=plateType$data$rowId, locationId=shelf$data$rowId )
)
plate2 = labkey.storage.create(
baseUrl="http://labkey/",
folderPath="home",
type="Terminal Storage Location",
props=list(name="Plate #2", typeId=plateType$data$rowId, locationId=shelf$data$rowId )
)
## End(Not run)
labkey.storage.delete Delete a LabKey Freezer Manager storage item
Description
Delete an existing LabKey Freezer Manager storage item. Note that deletion of freezers, primary
storage, or locations within the storage hierarchy will cascade the delete down the hierarchy to
remove child locations and terminal storage locations. Samples in the deleted storage location(s)
will not be deleted but will be removed from storage. Storage items can be of the following types:
Physical Location, Freezer, Primary Storage, Shelf, Rack, Canister, Storage Unit Type, or Terminal
Storage Location.
Usage
labkey.storage.delete(baseUrl=NULL, folderPath, type, rowId)
Arguments
baseUrl a string specifying the baseUrl for the LabKey server
folderPath a string specifying the folderPath
type a string specifying the type of storage item to delete
rowId the primary key of the storage item to delete
Value
A list containing a data element with the property values for the deleted storage item.
Author(s)
<NAME>
See Also
labkey.storage.create, labkey.storage.update
Examples
## Not run:
library(Rlabkey)
## delete a freezer and its child locations and terminal storage locations
freezer <- labkey.storage.create(
baseUrl="http://labkey/",
folderPath="home",
type="Freezer",
props=list(name="Test Freezer", description="My example storage freezer")
)
shelf = labkey.storage.create(
baseUrl="http://labkey/",
folderPath="home",
type="Shelf",
props=list(name="Test Shelf", locationId=freezer$data$rowId )
)
plateType = labkey.storage.create(
baseUrl="http://labkey/",
folderPath="home",
type="Storage Unit Type",
props=list(name="Test 8X12 Well Plate", unitType="Plate", rows=8, cols=12 )
)
plate1 = labkey.storage.create(
baseUrl="http://labkey/",
folderPath="home",
type="Terminal Storage Location",
props=list(name="Plate #1", typeId=plateType$data$rowId, locationId=shelf$data$rowId )
)
plate2 = labkey.storage.create(
baseUrl="http://labkey/",
folderPath="home",
type="Terminal Storage Location",
props=list(name="Plate #2", typeId=plateType$data$rowId, locationId=shelf$data$rowId )
)
# NOTE: this will delete freezer, shelf, plate1 and plate2 but it will not delete
# the plateType as that is not a part of the freezer hierarchy
freezer <- labkey.storage.delete(
baseUrl="http://labkey/",
folderPath="home",
type="Freezer",
rowId=freezer$data$rowId
)
## End(Not run)
labkey.storage.update Update a LabKey Freezer Manager storage item
Description
Update an existing LabKey Freezer Manager storage item to change its properties or location within
the storage hierarchy. Storage items can be of the following types: Physical Location, Freezer,
Primary Storage, Shelf, Rack, Canister, Storage Unit Type, or Terminal Storage Location.
Usage
labkey.storage.update(baseUrl=NULL, folderPath, type, props)
Arguments
baseUrl a string specifying the baseUrl for the LabKey server
folderPath a string specifying the folderPath
type a string specifying the type of storage item to update
props a list of properties for the storage item (i.e. name, description, etc.); must include
the RowId primary key
Value
A list containing a data element with the property values for the updated storage item.
Author(s)
<NAME>
See Also
labkey.storage.create, labkey.storage.delete
Examples
## Not run:
library(Rlabkey)
## create a storage unit type and then update it to change some properties
plateType = labkey.storage.create(
baseUrl="http://labkey/",
folderPath="home",
type="Storage Unit Type",
props=list(name="Test 8X12 Well Plate", unitType="Plate", rows=8, cols=12 )
)
plateType = labkey.storage.update(
baseUrl="http://labkey/",
folderPath="home",
type="Storage Unit Type",
props=list(rowId=plateType$data$rowId, positionFormat="NumAlpha", positionOrder="ColumnRow" )
)
## End(Not run)
labkey.transform.getRunPropertyValue
Assay transform script helper function to get a run property value from
a data.frame
Description
A function that takes in data.frame of the run properties info for a given assay transform script
execution and returns the value for a given property name.
Usage
labkey.transform.getRunPropertyValue(runProps, propName)
Arguments
runProps the data.frame of the run property key/value pairs
propName the name of the property to get the value of within the runProps data.frame
Details
This helper function will most likely be used within an assay transform script after the
labkey.transform.readRunPropertiesFile function has been called to load the full set of run properties.
Examples
## Not run:
# library(Rlabkey)
run.props = labkey.transform.readRunPropertiesFile("${runInfo}");
run.data.file = labkey.transform.getRunPropertyValue(run.props, "runDataFile");
## End(Not run)
labkey.transform.readRunPropertiesFile
Assay transform script helper function to read a run properties file
Description
A function that takes in the full path to the LabKey generated run properties file and returns a
data.frame of the key value pairs for the lines within that file. This helper function would be used
as part of an assay transform script written in R and associated with an assay design.
Usage
labkey.transform.readRunPropertiesFile(runInfoPath)
Arguments
runInfoPath the full file system path to the generated run properties file
Details
The most common scenario is that the assay transform script will get the run properties file path
added into the running script as a replacement variable. To use that replacement variable for this
helper function, you can pass in the runInfoPath parameter as "${runInfo}".
Examples
## Not run:
# library(Rlabkey)
labkey.transform.readRunPropertiesFile("${runInfo}")
## End(Not run)
labkey.truncateTable Delete all rows from a table
Description
Delete all rows from the specified table.
Usage
labkey.truncateTable(baseUrl = NULL, folderPath, schemaName, queryName)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
schemaName a string specifying the name of the schema of the domain
queryName a string specifying the query name
Details
Deletes all rows in the table in a single transaction and will also log a single audit event for the
action. Not all tables support truncation, if a particular table doesn’t support the action, an error
will be returned. The current list of tables supporting truncation includes: lists, datasets, issues,
sample sets, and data classes.
Value
Returns the count of the number of rows deleted.
Author(s)
<NAME>
See Also
labkey.deleteRows
Examples
## Not run:
## delete all rows from the people list
library(Rlabkey)
labkey.truncateTable(baseUrl="http://labkey/", folderPath="home",
schemaName="lists", queryName="people")
## End(Not run)
labkey.updateRows Update existing rows of data in a labkey database
Description
Send data from an R session to update existing rows of data in the database.
Usage
labkey.updateRows(baseUrl, folderPath,
schemaName, queryName, toUpdate, provenanceParams=NULL)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
schemaName a string specifying the schemaName for the query
queryName a string specifying the queryName
toUpdate a data frame containing the row(s) of data to be updated
provenanceParams
the provenance parameter object which contains the options to include as part of
a provenance recording. This is a premium feature and requires the Provenance
LabKey module to function correctly, if it is not present this parameter will be
ignored.
Details
A single row or multiple rows of data can be updated. The toUpdate data frame should contain the
rows of data to be updated and must be created with the stringsAsFactors option set to FALSE.
The names of the data in the data frame must be the column names from the labkey database. To
update a row/column to a value of NULL, use an empty string ("") in the data frame (regardless of
the database column type).
Value
A list is returned with named categories of command, rowsAffected, rows, queryName, contain-
erPath and schemaName. The schemaName, queryName and containerPath properties contain
the same schema, query and folder path used in the request. The rowsAffected property indicates
the number of rows affected by the API action. This will typically be the same number as passed in
the request. The rows property contains a list of row objects corresponding to the rows updated.
Author(s)
<NAME>
See Also
labkey.selectRows, labkey.executeSql, makeFilter, labkey.insertRows, labkey.importRows,
labkey.deleteRows, labkey.query.import, labkey.provenance.createProvenanceParams,
labkey.provenance.startRecording, labkey.provenance.addRecordingStep, labkey.provenance.stopRecording
Examples
## Not run:
## Insert, update and delete
## Note that users must have the necessary permissions in the database
## to be able to modify data through the use of these functions
# library(Rlabkey)
newrow <- data.frame(
DisplayFld="Inserted from R"
, TextFld="how its done"
, IntFld= 98
, DoubleFld = 12.345
, DateTimeFld = "03/01/2010"
, BooleanFld= FALSE
, LongTextFld = "Four score and seven years ago"
# , AttachmentFld = NA #attachment fields not supported
, RequiredText = "Veni, vidi, vici"
, RequiredInt = 0
, Category = "LOOKUP2"
, stringsAsFactors=FALSE)
insertedRow <- labkey.insertRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
toInsert=newrow)
newRowId <- insertedRow$rows[[1]]$RowId
selectedRow<-labkey.selectRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
colFilter=makeFilter(c("RowId", "EQUALS", newRowId)))
selectedRow
updaterow=data.frame(
RowId=newRowId
, DisplayFld="Updated from R"
, TextFld="how to update"
, IntFld= 777
, stringsAsFactors=FALSE)
updatedRow <- labkey.updateRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
toUpdate=updaterow)
selectedRow<-labkey.selectRows("http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
colFilter=makeFilter(c("RowId", "EQUALS", newRowId)))
selectedRow
deleterow <- data.frame(RowId=newRowId, stringsAsFactors=FALSE)
result <- labkey.deleteRows(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
toDelete=deleterow)
str(result)
## End(Not run)
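As noted in Details, an empty string clears a column to NULL; a short hedged follow-up to the example
above (reusing the hypothetical AllTypes list and the newRowId created there):
## Not run:
# set TextFld back to NULL for the row updated above ("" maps to NULL)
clearrow <- data.frame(RowId = newRowId, TextFld = "", stringsAsFactors = FALSE)
labkey.updateRows("http://localhost:8080/labkey",
    folderPath="/apisamples", schemaName="lists", queryName="AllTypes",
    toUpdate=clearrow)
## End(Not run)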
labkey.webdav.delete Deletes the provided file/folder on a LabKey Server via WebDAV
Description
This will delete the supplied file or folder under the specified LabKey Server project using WebDAV.
Usage
labkey.webdav.delete(
baseUrl=NULL,
folderPath,
remoteFilePath,
fileSet='@files'
)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
remoteFilePath the path to delete, relative to the LabKey folder root.
fileSet (optional) the name of file server fileSet, which is typically "@files" (the default
value for this argument). In some cases this might be "@pipeline" or "@fileset".
Details
This will delete the supplied file or folder under the specified LabKey Server project using WebDAV.
Note: if a folder is provided, it will delete that folder and contents.
Value
TRUE if the folder was deleted successfully
Author(s)
<NAME>, Ph.D.
See Also
labkey.webdav.get, labkey.webdav.put, labkey.webdav.mkDir, labkey.webdav.mkDirs, labkey.webdav.listDir,
labkey.webdav.pathExists, labkey.webdav.downloadFolder
Examples
## Not run:
library(Rlabkey)
#delete an entire directory and contents
labkey.webdav.delete(baseUrl="http://labkey/", folderPath="home", remoteFilePath="folder1")
#delete single file
labkey.webdav.delete(baseUrl="http://labkey/", folderPath="home", remoteFilePath="folder/file.txt")
## End(Not run)
labkey.webdav.downloadFolder
Recursively download a folder via WebDAV
Description
This will recursively download a folder from a LabKey Server using WebDAV.
Usage
labkey.webdav.downloadFolder(
localBaseDir,
baseUrl=NULL,
folderPath,
remoteFilePath,
overwriteFiles=TRUE,
mergeFolders=TRUE,
fileSet='@files'
)
Arguments
localBaseDir the local filepath where this directory will be saved. A subfolder with the remote
directory name will be created.
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
remoteFilePath the path of this folder on the remote server, relative to the folder root.
overwriteFiles (optional) if true, any pre-existing file at this location will be overwritten. De-
faults to TRUE
mergeFolders (optional) if false, any pre-existing local folders in the target location will be
deleted if there is an incoming folder of the same name. If true, these existing
folders will be left alone, and remote files downloaded into them. Existing file
conflicts will be handled based on the overwriteFiles parameter. Defaults to
TRUE
fileSet (optional) the name of file server fileSet, which is typically "@files" (the default
value for this argument). In some cases this might be "@pipeline" or "@fileset".
Details
This will recursively download a folder from a LabKey Server using WebDAV. This is essentially a
wrapper that recursively calls labkey.webdav.get to download all files in the remote folder.
Value
TRUE or FALSE, depending on whether this folder was successfully downloaded
Author(s)
<NAME>, Ph.D.
See Also
labkey.webdav.get, labkey.webdav.put, labkey.webdav.mkDir, labkey.webdav.mkDirs, labkey.webdav.pathExists,
labkey.webdav.listDir, labkey.webdav.delete
Examples
## Not run:
## download folder from a LabKey Server
library(Rlabkey)
labkey.webdav.downloadFolder(baseUrl="http://labkey/",
folderPath="home",
remoteFilePath="folder1",
localBaseDir="destFolder",
overwriteFiles=TRUE
)
## End(Not run)
labkey.webdav.get Download a file via WebDAV
Description
This will download a file from a LabKey Server using WebDAV.
Usage
labkey.webdav.get(
baseUrl=NULL,
folderPath,
remoteFilePath,
localFilePath,
overwrite=TRUE,
fileSet='@files'
)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
remoteFilePath the path of this file on the remote server, relative to the folder root.
localFilePath the local filepath where this file will be saved
overwrite (optional) if true, any pre-existing file at this location will be overwritten. De-
faults to TRUE
fileSet (optional) the name of file server fileSet, which is typically "@files" (the default
value for this argument). In some cases this might be "@pipeline" or "@fileset".
Details
Download a single file from a LabKey Server to the local machine using WebDAV.
Value
TRUE or FALSE, depending on whether this file was downloaded and exists locally. Will return FALSE
if the file already exists and overwrite=FALSE.
Author(s)
<NAME>, Ph.D.
See Also
labkey.webdav.put, labkey.webdav.mkDir, labkey.webdav.mkDirs, labkey.webdav.pathExists,
labkey.webdav.listDir, labkey.webdav.delete, labkey.webdav.downloadFolder
Examples
## Not run:
## download a single file from a LabKey Server
library(Rlabkey)
labkey.webdav.get(
baseUrl="http://labkey/",
folderPath="home",
remoteFilePath="folder/myFile.txt",
localFilePath="myDownloadedFile.txt",
overwrite=TRUE
)
## End(Not run)
labkey.webdav.listDir List the contents of a LabKey Server folder via WebDAV
Description
This will list the contents of a LabKey Server folder using WebDAV.
Usage
labkey.webdav.listDir(
baseUrl=NULL,
folderPath,
remoteFilePath,
fileSet='@files',
haltOnError=TRUE
)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
remoteFilePath path of the folder on the remote server, relative to the folder root.
fileSet (optional) the name of file server fileSet, which is typically "@files" (the default
value for this argument). In some cases this might be "@pipeline" or "@fileset".
haltOnError (optional) Specifies whether this request should fail if the requested path does
not exist. Defaults to TRUE
Details
Lists the contents of a folder on a LabKey Server using WebDAV.
Value
A list with each item under this folder. Each item (file or directory) is a list with the following
attributes:
• "files": A list of the files, where each has the following attributes:
– "id": The relative path to this item, not encoded
– "href": The relative URL to this item, HTML encoded
– "text": A dataset in a date based study
– "creationdate": The date this item was created
– "createdby": The user that created this file
– "lastmodified": The last modification time
– "contentlength": The content length
– "size": The file size
– "isdirectory": TRUE or FALSE, depending on whether this item is a directory
• "fileCount": If this item is a directory, this property will be present, listing the the total files in
this location
Author(s)
<NAME>, Ph.D.
See Also
labkey.webdav.get, labkey.webdav.put, labkey.webdav.mkDir, labkey.webdav.mkDirs, labkey.webdav.pathExists,
labkey.webdav.delete, labkey.webdav.downloadFolder
Examples
## Not run:
library(Rlabkey)
json <- labkey.webdav.listDir(
baseUrl="http://labkey/",
folderPath="home",
remoteFilePath="myFolder"
)
## End(Not run)
labkey.webdav.mkDir Create a folder via WebDAV
Description
This will create a folder under the specified LabKey Server project using WebDAV.
Usage
labkey.webdav.mkDir(
baseUrl=NULL,
folderPath,
remoteFilePath,
fileSet='@files'
)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
remoteFilePath the folder path to create, relative to the LabKey folder root.
fileSet (optional) the name of file server fileSet, which is typically "@files" (the default
value for this argument). In some cases this might be "@pipeline" or "@fileset".
Details
Creates a folder on a LabKey Server using WebDAV. If the parent directory does not exist, this will
fail (similar to mkdir on Linux).
Value
TRUE if the folder was created successfully
Author(s)
<NAME>, Ph.D.
See Also
labkey.webdav.get, labkey.webdav.put, labkey.webdav.mkDirs, labkey.webdav.pathExists,
labkey.webdav.listDir, labkey.webdav.delete, labkey.webdav.downloadFolder
Examples
## Not run:
library(Rlabkey)
labkey.webdav.mkDir(
baseUrl="http://labkey/",
folderPath="home",
remoteFilePath="toCreate"
)
## End(Not run)
labkey.webdav.mkDirs Create a folder via WebDAV
Description
This will create folder(s) under the specified LabKey Server project using WebDAV.
Usage
labkey.webdav.mkDirs(
baseUrl=NULL,
folderPath,
remoteFilePath,
fileSet='@files'
)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
remoteFilePath the folder path to create, relative to the LabKey folder root.
fileSet (optional) the name of file server fileSet, which is typically "@files" (the default
value for this argument). In some cases this might be "@pipeline" or "@fileset".
Details
Creates a folder on a LabKey Server using WebDAV. If the parent directory or directories do not
exist, these will also be created (similar to mkdir -p on Linux).
Value
TRUE if the folder was created successfully
Author(s)
<NAME>, Ph.D.
See Also
labkey.webdav.get, labkey.webdav.put, labkey.webdav.mkDir, labkey.webdav.pathExists,
labkey.webdav.listDir, labkey.webdav.delete, labkey.webdav.downloadFolder
Examples
## Not run:
library(Rlabkey)
labkey.webdav.mkDirs(
baseUrl="http://labkey/",
folderPath="home",
remoteFilePath="folder1/folder2/toCreate"
)
## End(Not run)
labkey.webdav.pathExists
Tests if a path exists on a LabKey Server via WebDAV
Description
This will test whether the supplied file/folder exists under the specified LabKey Server project using
WebDAV.
Usage
labkey.webdav.pathExists(
baseUrl=NULL,
folderPath,
remoteFilePath,
fileSet='@files'
)
Arguments
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
remoteFilePath the path to test, relative to the LabKey folder root.
fileSet (optional) the name of file server fileSet, which is typically "@files" (the default
value for this argument). In some cases this might be "@pipeline" or "@fileset".
Details
This will test whether the supplied file/folder exists under the specified LabKey Server project using
WebDAV.
Value
TRUE if the file or folder exists
Author(s)
<NAME>, Ph.D.
See Also
labkey.webdav.get, labkey.webdav.put, labkey.webdav.mkDir, labkey.webdav.mkDirs, labkey.webdav.listDir,
labkey.webdav.delete, labkey.webdav.downloadFolder
Examples
## Not run:
library(Rlabkey)
# Test folder
labkey.webdav.pathExists(
baseUrl="http://labkey/",
folderPath="home",
remoteFilePath="pathToTest"
)
# Test file
labkey.webdav.pathExists(
baseUrl="http://labkey/",
folderPath="home",
remoteFilePath="folder/fileToTest.txt"
)
## End(Not run)
labkey.webdav.put Upload a file via WebDAV
Description
This will upload a file to a LabKey Server using WebDAV.
Usage
labkey.webdav.put(
localFile,
baseUrl=NULL,
folderPath,
remoteFilePath,
fileSet='@files',
description=NULL
)
Arguments
localFile the local filepath to upload
baseUrl a string specifying the baseUrl for the labkey server
folderPath a string specifying the folderPath
remoteFilePath the destination path of this file on the remote server, relative to the folder root.
fileSet (optional) the name of file server fileSet, which is typically "@files" (the default
value for this argument). In some cases this might be "@pipeline" or "@fileset".
description (optional) the description to attach to this file on the remote server.
Details
Upload a single file from the local machine to a LabKey Server using WebDAV.
Value
TRUE if the file was uploaded successfully
Author(s)
<NAME>, Ph.D.
See Also
labkey.webdav.get, labkey.webdav.mkDir, labkey.webdav.mkDirs, labkey.webdav.pathExists,
labkey.webdav.listDir, labkey.webdav.delete, labkey.webdav.downloadFolder
Examples
## Not run:
## upload a single file to a LabKey Server
library(Rlabkey)
labkey.webdav.put(
localFile="myFileToUpload.txt",
baseUrl="http://labkey/",
folderPath="home",
remoteFilePath="myFileToUpload.txt"
)
## End(Not run)
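The WebDAV helpers are typically combined into a small workflow; the sketch below only uses the
functions documented in this manual, with an illustrative server URL and file paths:
## Not run:
library(Rlabkey)
# create a nested folder, upload a file, inspect it, download a copy, then clean up
labkey.webdav.mkDirs(baseUrl="http://labkey/", folderPath="home",
    remoteFilePath="demo/uploads")
labkey.webdav.put(localFile="report.txt", baseUrl="http://labkey/",
    folderPath="home", remoteFilePath="demo/uploads/report.txt")
labkey.webdav.listDir(baseUrl="http://labkey/", folderPath="home",
    remoteFilePath="demo/uploads")
labkey.webdav.get(baseUrl="http://labkey/", folderPath="home",
    remoteFilePath="demo/uploads/report.txt", localFilePath="report_copy.txt")
labkey.webdav.delete(baseUrl="http://labkey/", folderPath="home",
    remoteFilePath="demo")
## End(Not run)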
labkey.whoAmI Call the whoami API
Description
Call the whoami API to get information about the current LabKey user.
Usage
labkey.whoAmI(baseUrl=NULL)
Arguments
baseUrl A string specifying the baseUrl for the LabKey server.
Value
Returns information about the logged in user including: displayName, id, email, and whether or not
the user is impersonated.
Author(s)
<NAME>
See Also
labkey.security.impersonateUser, labkey.security.stopImpersonating
Examples
## Not run:
library(Rlabkey)
labkey.whoAmI(baseUrl="http://labkey/")
## End(Not run)
lsFolders List the available folder paths
Description
Lists the available folder paths relative to the current folder path for a LabKey session
Usage
lsFolders(session)
Arguments
session the session key returned from getSession
Details
Lists the available folder paths relative to the current folder path for a LabKey session
Value
A character array containing the available folder paths, relative to the project root. These values can
be set on a session using curFolder<-
Author(s)
<NAME>
References
https://www.labkey.org/Documentation/wiki-page.view?name=projects
See Also
getSession, lsProjects, lsSchemas
Examples
## Not run:
## get a list of projects and folders
# library(Rlabkey)
lks<- getSession("https://www.labkey.org", "/home")
#returns values "/home" , "/home/_menus" , ...
lsFolders(lks)
## End(Not run)
lsProjects List the projects available at a given LabKey Server address
Description
Lists the projects available. Takes a string URL instead of a session, as it is intended for use before
creating a session.
Usage
lsProjects(baseUrl)
Arguments
baseUrl a string specifying the baseUrl for the LabKey Server, of the form http://<server
dns name>/<contextroot>
Details
List the projects available at a given LabKey Server address.
Value
A character array containing the available projects, relative to the root. These values can be set on
a session using curFolder<-
Author(s)
<NAME>
References
https://www.labkey.org/project/home/begin.view
See Also
getSession, lsFolders, lsSchemas
Examples
## Not run:
## get list of projects on server, connect a session in one project,
## then list the folders in that project
# library(Rlabkey)
lsProjects("https://www.labkey.org")
lkorg <- getSession("https://www.labkey.org", "/home")
lsFolders(lkorg)
lkorg <- getSession("https://www.labkey.org", "/home/Study/ListDemo")
lsSchemas(lkorg)
## End(Not run)
lsSchemas List the available schemas
Description
Lists the available schemas given the current folder path for a LabKey session
Usage
lsSchemas(session)
Arguments
session the session key returned from getSession
Details
Lists the available schemas given the current folder path for a LabKey session
Value
A character array containing the available schema names
Author(s)
<NAME>
See Also
getSession, lsFolders, lsProjects
Examples
## Not run:
## get a list of schemas available in the current session context
# library(Rlabkey)
lks<- getSession(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples")
#returns several schema names, e.g. "lists", "core", "MS1", etc.
lsSchemas(lks)
## End(Not run)
makeFilter Builds filters to be used in labkey.selectRows and getRows
Description
This function takes inputs of column name, filter value and filter operator and returns an array of
filters to be used in labkey.selectRows and getRows.
Usage
makeFilter(...)
Arguments
... Arguments in c("colname","operator","value") form, used to create a filter.
Details
These filters are applied to the data prior to import into R. The user can specify as many filters
as desired. The format for specifying a filter is a vector of characters including the column name,
operator and value.
colname a string specifying the name of the column to be filtered
operator a string specifying what operator should be used in the filter (see options below)
value an integer or string specifying the value the columns should be filtered on
Operator values:
EQUAL
DATE_EQUAL
NOT_EQUAL
DATE_NOT_EQUAL
NOT_EQUAL_OR_MISSING
GREATER_THAN
DATE_GREATER_THAN
LESS_THAN
DATE_LESS_THAN
GREATER_THAN_OR_EQUAL
DATE_GREATER_THAN_OR_EQUAL
LESS_THAN_OR_EQUAL
DATE_LESS_THAN_OR_EQUAL
STARTS_WITH
DOES_NOT_START_WITH
CONTAINS
DOES_NOT_CONTAIN
CONTAINS_ONE_OF
CONTAINS_NONE_OF
IN
NOT_IN
BETWEEN
NOT_BETWEEN
MEMBER_OF
MISSING
NOT_MISSING
MV_INDICATOR
NO_MV_INDICATOR
Q
ONTOLOGY_IN_SUBTREE
ONTOLOGY_NOT_IN_SUBTREE
EXP_CHILD_OF
EXP_PARENT_OF
EXP_LINEAGE_OF
When using the MISSING, NOT_MISSING, MV_INDICATOR, or NO_MV_INDICATOR opera-
tors, an empty string should be supplied as the value. See example below.
Value
The function returns either a single string or an array of strings to be used in the colFilter argument
of the labkey.selectRows function.
Author(s)
<NAME>
References
http://www.omegahat.net/RCurl/,
https://www.labkey.org/project/home/begin.view
See Also
labkey.selectRows
Examples
# library(Rlabkey)
## Two filters, ANDed together
makeFilter(c("TextFld","CONTAINS","h"),
c("BooleanFld","EQUAL","TRUE"))
## Using "in" operator:
makeFilter(c("RowId","IN","2;3;6"))
## Using "missing" operator:
makeFilter(c("IntFld","MISSING",""))
saveResults Returns an object representing a LabKey schema
Description
A wrapper function to labkey.saveBatch which uses a session object and provides defaults for the
Batch/Run names.
Usage
saveResults(session, assayName, resultDataFrame,
batchPropertyList= list(name=paste("Batch ", as.character(date()))),
runPropertyList= list(name=paste("Assay Run ", as.character(date()))))
Arguments
session the session key returned from getSession
assayName a string specifying the name of the assay instance
resultDataFrame
a data frame containing rows of data to be inserted
batchPropertyList
a list of batch Properties
runPropertyList
a list of run Properties
Details
saveResults is a wrapper function to labkey.saveBatch with two changes: First, it uses a session
object in place of the separate baseUrl and folderPath arguments. Second, it provides defaults for
generating Batch and Run names based on a current timestamp.
To see the saved result on the LabKey server, click on the "SimpleMeans" assay in the Assay List web
part.
Value
an object representing the assay.
Author(s)
<NAME>
References
https://www.labkey.org/project/home/begin.view
See Also
getSession, getSchema, getLookups, getRows
Examples
## Not run:
## Very simple example of an analysis flow: query some data,
## calculate some stats, then save the calculations as an assay
## result set in LabKey Server
# library(Rlabkey)
s<- getSession(baseUrl="http://localhost:8080/labkey",
folderPath="/apisamples")
scobj <- getSchema(s, "lists")
simpledf <- getRows(s, scobj$AllTypes)
## some dummy calculations to produce an example analysis result
testtable <- simpledf[,3:4]
colnames(testtable) <- c("IntFld", "DoubleFld")
row <- c(list("Measure"="colMeans"), colMeans(testtable, na.rm=TRUE))
results <- data.frame(row, row.names=NULL, stringsAsFactors=FALSE)
row <- c(list("Measure"="colSums"), colSums(testtable, na.rm=TRUE))
results <- rbind(results, as.vector(row))
bprops <- list(LabNotes="this is a simple demo")
bpl<- list(name=paste("Batch ", as.character(date())),properties=bprops)
rpl<- list(name=paste("Assay Run ", as.character(date())))
assayInfo<- saveResults(s, "SimpleMeans", results,
batchPropertyList=bpl, runPropertyList=rpl)
## End(Not run)
Package ‘DBfit’
October 12, 2022
Type Package
Title A Double Bootstrap Method for Analyzing Linear Models with
Autoregressive Errors
Version 2.0
Date 2021-04-30
Author <NAME> and <NAME>
Maintainer <NAME> <<EMAIL>>
Description Computes the double bootstrap as discussed in McKnight, McKean, and Huitema (2000)
<doi:10.1037/1082-989X.5.1.87>. The double bootstrap method provides a better fit for a linear
model with autoregressive errors than ARIMA when the sample size is small.
License GPL (>= 2)
Depends Rfit
NeedsCompilation no
Repository CRAN
Date/Publication 2021-04-30 20:30:02 UTC
R topics documented:
DBfit-packag... 2
boot... 3
boot... 4
dbfi... 5
durbin1fi... 7
durbin1x... 8
durbin2fi... 8
full... 9
hmdesign... 10
hmma... 10
hypothma... 11
lag... 12
nurh... 13
print.dbfi... 13
rhoci... 14
simpgen1hm... 15
simul... 16
simulacorrectio... 17
summary.dbfi... 18
testdat... 18
wrh... 19
DBfit-package A Double Bootstrap Method for Analyzing Linear Models With Au-
toregressive Errors
Description
Computes the double bootstrap as discussed in McKnight, McKean, and Huitema (2000)
<doi:10.1037/1082-989X.5.1.87>. The double bootstrap method provides a better fit for a linear
model with autoregressive errors than ARIMA when the sample size is small.
Details
The DESCRIPTION file:
Package: DBfit
Type: Package
Title: A Double Bootstrap Method for Analyzing Linear Models with Autoregressive Errors
Version: 2.0
Date: 2021-04-30
Author: <NAME> and <NAME>
Maintainer: <NAME> <<EMAIL>>
Description: Computes the double bootstrap as discussed in McKnight, McKean, and Huitema (2000) <doi:10.1037/1082-989X.5.1.87>. The double bootstrap method provides a better fit for a linear model with autoregressive errors than ARIMA when the sample size is small.
License: GPL (>= 2)
Depends: Rfit
Index of help topics:
DBfit-package A Double Bootstrap Method for Analyzing Linear
Models With Autoregressive Errors
boot1 First Bootstrap Procedure For parameter
estimations
boot2 Second Bootstrap Procedure For parameter
estimations
dbfit The main function for the double bootstrap
method
durbin1fit Durbin stage 1 fit
durbin1xy Creating New X and Y for Durbin Stage 1
durbin2fit Durbin stage 2 fit
fullr QR decomposition for non-full rank design
matrix for Rfit.
hmdesign2 the Two-Phase Design Matrix
hmmat K-Phase Design Matrix
hypothmat General Linear Tests of the regression
coefficients
lagx Lag Functions
nurho Creating a new response variable for Durbin
stage 2
print.dbfit DBfit Internal Print Functions
rhoci2 A fisher type CI of the autoregressive
parameter rho
simpgen1hm2 Simulation Data Generating Function
simula Work Horse Function to implement the Double
Bootstrap method
simulacorrection Work Horse Function to Implement the Double
Bootstrap Method For .99 Cases
summary.dbfit Summarize the double bootstrap (DB) fit
testdata testdata
wrho Creating a new design matrix for Durbin stage 2
Author(s)
<NAME> and <NAME>
Maintainer: <NAME> <<EMAIL>>
References
McKnight, <NAME>., McKean, <NAME>., and Huitema, <NAME>. (2000). A double bootstrap method to analyze
linear models with autoregressive error terms. Psychological methods, 5 (1), 87. <NAME>
(2017). Ph.D. Dissertation.
boot1 First Bootstrap Procedure For parameter estimations
Description
Function performing the first bootstrap procedure to yield the parameter estimates
Usage
boot1(y, phi1, arp, nbs, x, allb, method, scores)
Arguments
y the response variable
phi1 the Durbin two-stage estimate of the autoregressive parameter rho
arp the order of autoregressive errors
nbs the bootstrap size
x the original design matrix (including intercept), without centering
allb all the Durbin two-stage estimates of the regression coefficients
method If "OLS", uses the ordinary least square; If "RANK", uses the rank-based fit
scores Default is Wilcoxon scores
Value
An estimate of the bias is returned
Note
This function is for internal use. The main function for users is dbfit.
boot2 Second Bootstrap Procedure For parameter estimations
Description
Function performing the second bootstrap procedure to yield the inference of the regression coeffi-
cients
Usage
boot2(y, xcopy, phi1, beta, nbs, method, scores)
Arguments
y the response variable
xcopy the original design matrix (including intercept), without centering
phi1 the estimate of the autoregressive parameter rho from the first bootstrap proce-
dure
beta the estimates of the regression coefficients from the first bootstrap procedure
nbs the bootstrap size
method If "OLS", uses the ordinary least square; If "RANK", uses rank-based fit
scores Default is Wilcoxon scores
Value
betacov the estimate of var-cov matrix of betas
allbeta the estimates of betas inside of the second bootstrap, not the final estimates of
betas. The final estimates of betas are still from boot1.
rhostar the estimates of rho inside of the second bootstrap, not the final estimates of rho.
The final estimate(s) of rho are still from boot1.
MSEstar MSE used inside of the second bootstrap.
Note
This function is for internal use. The main function for users is dbfit
dbfit The main function for the double bootstrap method
Description
This function is used to implement the double bootstrap method. It is used to yield estimates of
both regression coefficients and autoregressive parameters(rho), and also the inference of them.
Usage
## Default S3 method:
dbfit(x, y, arp, nbs = 500, nbscov = 500,
conf = 0.95, correction = TRUE, method = "OLS", scores, ...)
Arguments
x the design matrix, including intercept, i.e. the first column being ones.
y the response variable.
arp the order of autoregressive errors.
nbs the bootstrap size for the first bootstrap procedure. Default is 500.
nbscov the bootstrap size for the second bootstrap procedure. Default is 500.
conf the confidence level of CI for rho, default is 0.95.
correction logical. Currently, ONLY works for order 1, i.e. for order > 1, this correction
will not get involved. If TRUE, uses the correction for cases that the estimate of
rho is 0.99. Default is TRUE.
method the method to be used for fitting. If "OLS", uses the ordinary least square lm; If
"RANK", uses the rank-based fit rfit.
scores Default is Wilcoxon scores
... additional arguments to be passed to fitting routines
Details
Computes the double bootstrap as discussed in McKnight, McKean, and Huitema (2000). For
details, see the references.
Value
coefficients the estimates of regression coefficients based on the first bootstrap procedure
rho1 the Durbin two-stage estimate of the autoregressive parameter rho
adjar the DB (final) estimate(s) of the autoregressive parameter rho, adjusted by the first bootstrap procedure
mse the mean square error
rho_CI_1 the first type of CI for rho, see the second reference for details.
rho_CI_2 the second type of CI for rho, see the second reference for details.
rho_CI_3 the third type of CI for rho, see the second reference for details.
betacov the estimate of the variance-covariance matrix of betas
tabbeta a table of point estimates, SE’s, test statistics and p-values.
flag99 an indicator; if 1, it indicates the original fit yields an estimate of rho to be 0.99.
When the correction is requested (default), the correction procedure kicks in,
and the final estimates of rho is corrected. Only valid if order 1 is specified.
residuals the residuals, that is response minus fitted values.
fitted.values the fitted mean values.
Author(s)
<NAME>. McKean and <NAME>
References
McKnight, <NAME>., <NAME>., and <NAME>. (2000). A double bootstrap method to analyze
linear models with autoregressive error terms. Psychological methods, 5 (1), 87.
<NAME> (2017). Ph.D. Dissertation.
See Also
dbfit.formula
Examples
# make sure the dependent package Rfit is installed
# To save users time, we set both bootstrap sizes to be 100 in this example.
# Defaults are both 500.
# data(testdata)
# This data is generated by a two-phase design, with autoregressive order being one,
# autoregressive coefficient being 0.6 and all regression coefficients being 0.
# Both the first and second phase have 20 observations.
# y <- testdata[,5]
# x <- testdata[,1:4]
# fit1 <- dbfit(x,y,1, nbs = 100, nbscov = 100) # OLS fit, default
# summary(fit1)
# Note that the CI's of autoregressive coef are not shown in the summary.
# Instead, they are attributes of model fit.
# fit1$rho_CI_1
# fit2 <- dbfit(x,y,1, nbs = 100, nbscov = 100 ,method="RANK") # rank-based fit
# When fitting with autoregressive order 2,
# the estimate of the second order autoregressive coefficient should not be significant,
# since this data is generated with order 1.
# fit3 <- dbfit(x,y,2, nbs = 100, nbscov = 100)
# fit3$rho_CI_1 # The first row is lower bounds, and second row is upper bounds
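A self-contained sketch using simpgen1hm2 from this package to generate data, with deliberately small
bootstrap sizes; treat it as an illustration rather than a recommended analysis:
## Not run:
# library(DBfit)
set.seed(2021)
dat <- simpgen1hm2(n1 = 15, n2 = 15, rho = 0.6, beta = c(0, 0, 0, 0))
y <- dat[, ncol(dat)]   # last column is the response
x <- dat[, -ncol(dat)]  # remaining columns form the design matrix
fit <- dbfit(x, y, arp = 1, nbs = 100, nbscov = 100)
summary(fit)
fit$rho_CI_1
## End(Not run)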
durbin1fit Durbin stage 1 fit
Description
Function implements the Durbin stage 1 fit
Usage
durbin1fit(y, x, arp, method, scores)
Arguments
y the response variable in stage 1, not the original response variable
x the model matrix in stage 1, not the original design matrix
arp the order of autoregressive errors.
method the method to be used for fitting. If "OLS", uses the ordinary least square; If
"RANK", uses the rank-based fit.
scores Default is Wilcoxon scores
Note
This function is for internal use. The main function for users is dbfit.
References
<NAME>., <NAME>., and <NAME>. (2000). A double bootstrap method to analyze
linear models with autoregressive error terms. Psychological methods, 5 (1), 87. Shaofeng Zhang
(2017). Ph.D. Dissertation.
durbin1xy Creating New X and Y for Durbin Stage 1
Description
Functions provides the tranformed reponse variable and model matrix for Durbin stage 1 fit. For
details of the transformation, see the reference.
Usage
durbin1xy(y, x, arp)
Arguments
y the orginal response variable
x the orginal design matrix with first column of all one’s (corresponding to the
intercept)
arp the order of autoregressive errors.
References
<NAME>., <NAME>., and <NAME>. (2000). A double bootstrap method to analyze
linear models with autoregressive error terms. Psychological methods, 5 (1), 87. Shaofeng Zhang
(2017). Ph.D. Dissertation.
durbin2fit Durbin stage 2 fit
Description
Function implements the Durbin stage 2 fit
Usage
durbin2fit(yc, xc, adjphi, method, scores)
Arguments
yc a transformed reponse variable
xc a transformed design matrix
adjphi the Durbin stage 1 estimate(s) of the autoregressive parameters rho
method the method to be used for fitting. If "OLS", uses the ordinary least square; If
"RANK", uses the rank-based fit.
scores Default is Wilcoxon scores
Value
beta the estimates of regression coefficients
sigma the estimate of standard deviation of the white noise
Note
This function is for internal use. The main function for users is dbfit.
References
McKnight, <NAME>., <NAME>., and <NAME>. (2000). A double bootstrap method to analyze
linear models with autoregressive error terms. Psychological methods, 5 (1), 87. <NAME>
(2017). Ph.D. Dissertation.
fullr QR decomposition for non-full rank design matrix for Rfit.
Description
With the recent Rfit update, it cannot return partial results with a singular design matrix (as opposed to
lm). This function uses a QR decomposition for Rfit to resolve this issue, so that dbfit can run the
robust version.
Usage
fullr(x, p1)
Arguments
x design matrix, including intercept, i.e. the first column being ones.
p1 the number of the first few columns of x that are linearly independent.
Note
This function is for internal use.
hmdesign2 the Two-Phase Design Matrix
Description
Returns the design matrix for a two-phase intervention model.
Usage
hmdesign2(n1, n2)
Arguments
n1 number of obs in phase 1
n2 number of obs in phase 2
Details
It returns a matrix of 4 columns. As discussed in Huitema, Mckean, & Mcknight (1999), in two-
phase design: beta0 = intercept, beta1 = slope for Phase 1, beta2 = level change from Phase 1 to
Phase 2, and beta3 slope change from Phase 1 to Phase 2.
References
<NAME>., <NAME>., & <NAME>. (1999). Autocorrelation effects on least- squares
intervention analysis of short time series. Educational and Psychological Measurement, 59 (5),
767-786.
Examples
n1 <- 15
n2 <- 15
hmdesign2(n1, n2)
hmmat K-Phase Design Matrix
Description
Returns the design matrix for a general k-phase intervention model
Usage
hmmat(vecss, k)
Arguments
vecss a vector of length k with each element being the number of observations in each
phase
k number of phases
Details
It returns a matrix of 2*k columns. The design can be unbalanced, i.e. each phase has different
observations.
References
<NAME>., <NAME>., & <NAME>. (1999). Autocorrelation effects on least- squares
intervention analysis of short time series. Educational and Psychological Measurement, 59 (5),
767-786.
See Also
hmdesign2
Examples
# a three-phase design matrix
hmmat(c(10,10,10),3)
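The phases need not be balanced; for example:
# an unbalanced three-phase design matrix
hmmat(c(8,12,10),3)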
hypothmat General Linear Tests of the regression coefficients
Description
Performs general linear tests of the regression coefficients.
Usage
hypothmat(sfit, mmat, n, p)
Arguments
sfit the result of a call to dbfit.
mmat a full row rank q*(p+1) matrix, where q is the row number of the matrix and p
is number of independent variables.
n total number of observations.
p number of independent variables.
Details
This function performs the general linear F-test of the form H0: Mb = 0 vs HA: Mb != 0.
Value
tst the test statistic
pvf the p-value of the F-test
References
<NAME>., <NAME>., and <NAME>. (2000). A double bootstrap method to analyze
linear models with autoregressive error terms. Psychological methods, 5 (1), 87. <NAME>
(2017). Ph.D. Dissertation.
Examples
# data(testdata)
# y<-testdata[,5]
# x<-testdata[,1:4]
# fit1<-dbfit(x,y,1) # OLS fit, default
# a test that H0: b1 = b3 vs HA: b1 != b3
# mat<-matrix(c(1,0,0,-1),nrow=1)
# hypothmat(sfit=fit1,mmat=mat,n=40,p=4)
lagx Lag Functions
Description
For preparing the transformed x and y in the Durbin stage 1 fit
Usage
lagx(x, s1, s2)
lagmat(x, p)
Arguments
x a vector or the design matrix, including intercept, i.e. the first column being
ones.
s1 starting index of the slice.
s2 end index of the slice.
p the order of autoregressive errors.
Note
These function are for internal use.
nurho Creating a new response variable for Durbin stage 2
Description
It returns a new response variable (vector) for Durbin stage 2.
Usage
nurho(yc, adjphi)
Arguments
yc the centered response variable y
adjphi (initial) estimate of rho in Durbin stage 1
Details
see reference.
Note
This function is for internal use. The main function for users is dbfit.
References
<NAME>., <NAME>., and <NAME>. (2000). A double bootstrap method to analyze
linear models with autoregressive error terms. Psychological methods, 5 (1), 87. <NAME>
(2017). Ph.D. Dissertation.
print.dbfit DBfit Internal Print Functions
Description
These functions print the output in a user-friendly manner using the internal R function print.
Usage
## S3 method for class 'dbfit'
print(x, ...)
## S3 method for class 'summary.dbfit'
print(x, ...)
Arguments
x An object to be printed
... additional arguments to be passed to print
See Also
dbfit, summary.dbfit
rhoci2 A fisher type CI of the autoregressive parameter rho
Description
This function returns a Fisher type CI for rho, which is then used to correct the .99 cases.
Usage
rhoci2(n, rho, cv)
Arguments
n total number of observations
rho final estimate of rho, usually .99.
cv critical value for CI
Details
see reference.
Note
This function is for internal use.
References
<NAME> (2017). Ph.D. Dissertation. <NAME>. (1952). Advanced statistical methods in
biometric research. p. 231
simpgen1hm2 Simulation Data Generating Function
Description
Generates the simulation data for a two-phase intervention model.
Usage
simpgen1hm2(n1, n2, rho, beta = c(0, 0, 0, 0))
Arguments
n1 number of obs in phase 1
n2 number of obs in phase 2
rho pre-defined autoregressive parameter(s)
beta pre-defined regression coefficients
Details
This function is used for simulations when developing the package. With pre-defined sample sizes
in both phases and parameters, it returns a simulated data.
Value
mat a matrix containing the simulation data. The last column is the response vari-
able. All other columns make up the design matrix.
See Also
hmdesign2
Examples
n1 <- 15
n2 <- 15
rho <- 0.6
beta <- c(0,0,0,0)
dat <- simpgen1hm2(n1, n2, rho, beta)
dat
simula Work Horse Function to implement the Double Bootstrap method
Description
simula is the original work horse function to implement the DB method. However, when this
function returns an estimate of rho to be .99, another work horse function simulacorrection kicks
in.
Usage
simula(x, y, arp, nbs, nbscov, conf, method, scores)
Arguments
x the design matrix, including intercept, i.e. the first column being ones.
y the response variable.
arp the order of autoregressive errors.
nbs the bootstrap size for the first bootstrap procedure. Default is 500.
nbscov the bootstrap size for the second bootstrap procedure. Default is 500.
conf the confidence level of CI for rho, default is 0.95.
method the method to be used for fitting. If "OLS", uses the ordinary least square lm; If
"RANK", uses the rank-based fit rfit.
scores Default is Wilcoxon scores
Details
see dbfit.
Note
Users should use dbfit to perform the analysis.
References
<NAME>., <NAME>., and <NAME>. (2000). A double bootstrap method to analyze
linear models with autoregressive error terms. Psychological methods, 5 (1), 87. <NAME>
(2017). Ph.D. Dissertation.
See Also
dbfit.
simulacorrection Work Horse Function to Implement the Double Bootstrap Method For
.99 Cases
Description
When the function simula returns an estimate of rho of .99, this function kicks in and outputs a
corrected estimate of rho. Currently, this only works for order 1, i.e. for order > 1, this correction
will not get involved.
Usage
simulacorrection(x, y, arp, nbs, nbscov, method, scores)
Arguments
x the design matrix, including intercept, i.e. the first column being ones.
y the response variable.
arp the order of autoregressive errors.
nbs the bootstrap size for the first bootstrap procedure. Default is 500.
nbscov the bootstrap size for the second bootstrap procedure. Default is 500.
method the method to be used for fitting. If "OLS", uses the ordinary least square lm; If
"RANK", uses the rank-based fit rfit.
scores Default is Wilcoxon scores
Details
If the 0.99 problem is detected, then a Fisher CI is constructed for both the initial estimate (in Durbin
stage 1) and the first bias-corrected estimate (performing only one bootstrap, instead of a loop); if the
midpoint of the latter is smaller than 0.95, then this midpoint is the final estimate for rho; otherwise
the midpoint of the former CI is the final estimate.
By default, when the function simula returns an estimate of rho of .99, this function kicks in and
outputs a corrected estimate of rho. However, users can turn the auto correction off by setting
correction="FALSE" in dbfit. Users are encouraged to investigate why the stationarity assumption
is violated based on their experience of time series analysis and knowledge of the data.
Note
Users should use dbfit to perform the analysis.
References
<NAME> (2017). Ph.D. Dissertation.
See Also
dbfit.
summary.dbfit Summarize the double bootstrap (DB) fit
Description
It summarizes the DB fit in a way that is similar to OLS lm.
Usage
## S3 method for class 'dbfit'
summary(object, ...)
Arguments
object a result of a call to dbfit
... additional arguments to be passed
Value
call the call to dbfit
tab a table of point estimates, standard errors, t-ratios and p-values
rho1 the Durbin two-stage estimate of rho
adjar the DB (final) estimate of rho
flag99 an indicator; if 1, it indicates the original fit yields an estimate of rho to be 0.99.
Only valid if order 1 is specified.
Examples
# data(testdata)
# y<-testdata[,5]
# x<-testdata[,1:4]
# fit1<-dbfit(x,y,1) # OLS fit, default
# summary(fit1)
testdata testdata
Description
This data serves as a test data.
Usage
data("testdata")
Format
A data frame with 40 observations. First 4 columns make up the design matrix, while the last
column is the response variable. This data is generated by a two-phase design, with autoregressive
order being one, autoregressive coefficient being 0.6 and all regression coefficients being 0. Both
the first and second phase have 20 observations.
Examples
data(testdata)
wrho Creating a new design matrix for Durbin stage 2
Description
It returns a new design matrix for Durbin stage 2.
Usage
wrho(xc, adjphi)
Arguments
xc centered design matrix, no column of ones
adjphi (initial) estimate of rho in Durbin stage 1
Details
see reference.
Note
This function is for internal use. The main function for users is dbfit.
References
McKnight, <NAME>., <NAME>., and <NAME>. (2000). A double bootstrap method to analyze
linear models with autoregressive error terms. Psychological methods, 5 (1), 87. <NAME>
(2017). Ph.D. Dissertation.
Package ‘correlbinom’
October 12, 2022
Title Correlated Binomial Probabilities
Version 0.0.1
Date 2017-06-15
Author <NAME> [aut], <NAME> [aut,cre]
Maintainer <NAME> <<EMAIL>>
Description Calculates the probabilities of k successes given n trials of a binomial random variable
with non-negative correlation across trials. The function takes as inputs the scalar values the level of
correlation or association between trials, the success probability, the number of trials, an optional
input specifying the number of bits of precision used in the calculation, and an optional input
specifying whether the calculation approach to be used is from Witt (2014)
<doi:10.1080/03610926.2012.725148> or from Kuk (2004) <doi:10.1046/j.1467-9876.2003.05369.x>.
The output is a (trials+1)-dimensional vector containing the likelihoods of 0, 1, ..., trials successes.
Depends R (>= 3.2.3), Rmpfr, methods
License GPL (>= 3)
Encoding UTF-8
LazyData true
RoxygenNote 6.0.1.9000
NeedsCompilation no
Repository CRAN
Date/Publication 2017-07-06 10:07:54 UTC
R topics documented:
correlbino... 2
correlbinom Correlated Binomial Probabilities
Description
This function reports the likelihoods of 0, 1, ..., n successes given n trials of a binomial with a
specified correlation or association between trials and success probability
Usage
correlbinom(rho, successprob, trials, precision = 1024, model = "witt")
Arguments
rho The level of correlation or association between trials. In the Witt (2014) model,
this parameter is the level of correlation between trials. In the Kuk (2004) model,
it is the equivalent of one minus gamma from that paper, where a value of zero
indicates independence. In both cases, this parameter must fall within the unit
interval.
successprob The likelihood of success in one trial.
trials The number of trials.
precision Number of bits of precision. Defaults to 1024.
model Specify whether the ’kuk’ or ’witt’ model is to be used for calculation. Defaults
to ’witt’.
References
Kuk, <NAME>., 2004. A litter-based approach to risk assessment in developmental toxicity via
a power family of completely monotone functions. Journal of the Royal Statistical Society, Series
C (Applied Statistics), 53(2): 369-86.
<NAME>, 2014. A simple distribution for the sum of correlated, exchangeable binary data. Com-
munications in Statistics - Theory and Methods, 43(20): 4265-80.
Examples
correlbinom(0.5,0.1,5)
correlbinom(0.9,0.3,12,256)
correlbinom(0.9,0.6,12,model="kuk")
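Since the returned vector covers 0 to trials successes, a quick sanity check is that it sums to one; with
rho equal to zero one would also expect it to match the independent binomial probabilities. The use of
as.numeric() below assumes the high-precision result coerces cleanly to a double vector:
p <- correlbinom(0, 0.3, 5)
sum(as.numeric(p))              # should be 1 (up to numerical error)
round(as.numeric(p), 6)
round(dbinom(0:5, 5, 0.3), 6)   # independent binomial for comparison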
Package ‘ggchangepoint’
October 13, 2022
Type Package
Title Combines Changepoint Analysis with 'ggplot2'
Version 0.1.0
Description R provides fantastic tools for changepoint
analysis, but plots generated by the tools do
not have the 'ggplot2' style. This tool, however,
combines 'changepoint', 'changepoint.np' and 'ecp'
together, and uses 'ggplot2' to visualize changepoints.
License GPL (>= 3)
Encoding UTF-8
Imports changepoint, changepoint.np, dplyr, ecp, ggplot2, Rdpack,
tibble, utils
RdMacros Rdpack
RoxygenNote 7.1.2
Suggests rmarkdown, knitr, gstat, datasets
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-02-24 08:20:04 UTC
R topics documented:
cpt_wrappe... 2
ecp_wrappe... 3
ggchangepoin... 4
ggcptplo... 4
ggecpplo... 5
cpt_wrapper Changepoint wrapper
Description
This function wraps a number of cpt functions from the changepoint package and the cpt.np()
function from the changepoint.np package. It is handy that users can use this function to get the
same changepoint results as these functions output individually. Moreover, it returns a tibble that
inherits the tidyverse sytle. Functions from the changepoint package do require data normality
assumption by default, yet changepoint.np is a non-parametric way to detect changepoints and let
data speak by itself. If user sets change_in as cpt_np, a seed should be set before using the function
for the sake of reproducibility. For more details on the changepoint and changepoint.np packages,
please refer to their documentation.
Usage
cpt_wrapper(data, change_in = "mean_var", cp_method = "PELT", ...)
Arguments
data A vector.
change_in Choice of mean_var, mean, var, and cpt_np. Each choice corresponds to
cpt.meanvar(), cpt.mean(), cpt.var() and cpt.np() respectively. The de-
fault is mean_var.
cp_method A wide range of choices (i.e., AMOC, PELT, SegNeigh or BinSeg). Please note
when change_in is cpt_np, PELT is the only option.
... Extra arguments for each cpt function mentioned in the change_in section.
Value
A tibble indicating which point(s) is/are the changepoint(s), along with the raw data value corresponding
to each changepoint.
References
<NAME>, <NAME> (2014). “changepoint: An R package for changepoint analysis.” Journal of
statistical software, 58(3), 1–19.
Examples
set.seed(2022)
cpt_wrapper(c(rnorm(100,0,1),rnorm(100,0,10)))
cpt_wrapper(c(rnorm(100,0,1),rnorm(100,10,1)))
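Because cpt_np requires a seed for reproducibility, the non-parametric option might be called as follows:
set.seed(2022)
cpt_wrapper(c(rnorm(100,0,1),rnorm(100,0,10)), change_in = "cpt_np")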
ecp_wrapper ecp wrapper
Description
The ecp package provides a non-parametric way to detect changepoints. Unlike the changepoint
package, it does not assume raw data to have any formal distribution. This wrapper function wraps
two functions from the ecp package, i.e., e.divisive() and e.agglo(). Users can use either
function by switching the algorithm argument. Before using the wrapper function, a seed should
be set for the sake of reproducibility.
Usage
ecp_wrapper(data, algorithm = "divisive", min_size = 2, ...)
Arguments
data A vector.
algorithm Either divisive or agglo. divisive is the default.
min_size Minimum number of observations between change points. By default is 2. This
argument is only applied when algorithm = "divisive".
... Extra arguments to pass on either from e.divisive() or e.agglo().
Value
A tibble indicating which point(s) is/are the changepoint(s), along with the raw data value corresponding
to each changepoint.
References
<NAME>, <NAME> (2013). “ecp: An R package for nonparametric multiple change point
analysis of multivariate data.” arXiv preprint arXiv:1309.3295.
Examples
set.seed(2022)
ecp_wrapper(c(rnorm(100,0,1),rnorm(100,0,10)))
ecp_wrapper(c(rnorm(100,0,1),rnorm(100,10,1)))
ggchangepoint ggchangepoint package
Description
Combines Changepoint Analysis with ’ggplot2’.
Details
ggchangepoint tries to offer several changepoint R packages in a tidy format and outputs ggplot2
plots so that tidyverse users can gain some familiarity with changepoint analysis.
For the moment, I only include three changepoint packages (’changepoint’, ’changepoint.np’ and
’ecp’). More changepoint packages will be included as time progresses.
ggcptplot Plot for the changepoint package
Description
The plot for changepoints detected by the changepoint package is a line plot for the raw data and
the vertical lines representing each changepoint. The x-axis is the row number of the raw data in
the original data vector. The plot inherits ggplot2, meaning users can add ggplot2 functions on top
of the changepoint plot for customization.
Usage
ggcptplot(
data,
change_in = "mean_var",
cp_method = "PELT",
...,
cptline_alpha = 1,
cptline_color = "blue",
cptline_type = "solid",
cptline_size = 0.5
)
Arguments
data A vector.
change_in Choice of mean_var, mean, var, and cpt_np. Each choice corresponds to
cpt.meanvar(), cpt.mean(), cpt.var() and cpt.np() respectively. The de-
fault is mean_var.
cp_method A wide range of choices (i.e., AMOC, PELT, SegNeigh or BinSeg). Please note
when change_in is cpt_np, PELT is the only option.
... Extra arguments for each cpt function mentioned in the change_in section.
cptline_alpha The value of alpha for the vertical changepoint line(s), default is 1, meaning no
transparency.
cptline_color The color for the vertical changepoint line(s), default is blue.
cptline_type The linetype for the vertical changepoint line(s), default is solid.
cptline_size The size for the vertical changepoint line(s), default is 0.5.
Value
A line plot with data points along with the vertical lines representing changepoints.
Examples
ggcptplot(c(rnorm(100,0,1),rnorm(100,0,10)))
ggcptplot(c(rnorm(100,0,1),rnorm(100,10,1)))
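Since the returned plot is a ggplot2 object, further layers can be added on top; a small sketch assuming
ggplot2 is loaded:
library(ggplot2)
set.seed(2022)
ggcptplot(c(rnorm(100,0,1),rnorm(100,10,1)),
          cptline_color = "red", cptline_type = "dashed") +
  labs(title = "Changepoint in mean", x = "Index", y = "Value") +
  theme_minimal()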
ggecpplot Plot for the ecp package
Description
The plot for changepoints detected by the ecp package is a line plot for the raw data and the ver-
tical lines representing each changepoint. The x-axis is the row number of the raw data in the
original data vector. The plot inherits ggplot2, meaning users can add ggplot2 functions on top of the
changepoint plot for customization.
Usage
ggecpplot(
data,
algorithm = "divisive",
min_size = 2,
...,
cptline_alpha = 1,
cptline_color = "blue",
cptline_type = "solid",
cptline_size = 0.5
)
Arguments
data A vector.
algorithm Either divisive or agglo. divisive is the default.
min_size Minimum number of observations between change points. By default is 2. This
argument is only applied when algorithm = "divisive".
... Extra arguments to pass on either from e.divisive() or e.agglo().
cptline_alpha The value of alpha for the vertical changepoint line(s), default is 1, meaning no
transparency.
cptline_color The color for the vertical changepoint line(s), default is blue.
cptline_type The linetype for the vertical changepoint line(s), default is solid.
cptline_size The size for the vertical changepoint line(s), default is 0.5.
Value
A line plot with data points along with the vertical lines representing changepoints.
Examples
ggecpplot(c(rnorm(100,0,1),rnorm(100,0,10)))
ggecpplot(c(rnorm(100,0,1),rnorm(100,10,1)))
Struct snarkos_storage::BlockLocators
===
```
pub struct BlockLocators<N: Network> { /* private fields */ }
```
A helper struct to represent block locators from the ledger.
The current format of block locators is [(block_height, block_hash, block_header)].
Implementations
---
### impl<N: Network> BlockLocators<N>
#### pub fn from(block_locators: BTreeMap<u32, (N::BlockHash, Option<BlockHeader<N>>)>) -> Result<Self>
#### pub fn is_empty(&self) -> bool
#### pub fn len(&self) -> usize
#### pub fn get_block_hash(&self, block_height: u32) -> Option<N::BlockHash>
#### pub fn get_cumulative_weight(&self, block_height: u32) -> Option<u128>
Methods from Deref<Target = BTreeMap<u32, (N::BlockHash, Option<BlockHeader<N>>)>>
---
1.0.0 · source#### pub fn get<Q>(&self, key: &Q) -> Option<&V> where K: Borrow<Q> + Ord, Q: Ord + ?Sized,
Returns a reference to the value corresponding to the key.
The key may be any borrowed form of the map’s key type, but the ordering on the borrowed form *must* match the ordering on the key type.
##### Examples
Basic usage:
```
use std::collections::BTreeMap;
let mut map = BTreeMap::new();
map.insert(1, "a");
assert_eq!(map.get(&1), Some(&"a"));
assert_eq!(map.get(&2), None);
```
1.40.0 · source#### pub fn get_key_value<Q>(&self, k: &Q) -> Option<(&K, &V)> where K: Borrow<Q> + Ord, Q: Ord + ?Sized,
Returns the key-value pair corresponding to the supplied key.
The supplied key may be any borrowed form of the map’s key type, but the ordering on the borrowed form *must* match the ordering on the key type.
##### Examples
```
use std::collections::BTreeMap;
let mut map = BTreeMap::new();
map.insert(1, "a");
assert_eq!(map.get_key_value(&1), Some((&1, &"a")));
assert_eq!(map.get_key_value(&2), None);
```
source#### pub fn first_key_value(&self) -> Option<(&K, &V)> where K: Ord,
🔬 This is a nightly-only experimental API. (`map_first_last`)Returns the first key-value pair in the map.
The key in this pair is the minimum key in the map.
##### Examples
Basic usage:
```
#![feature(map_first_last)]
use std::collections::BTreeMap;
let mut map = BTreeMap::new();
assert_eq!(map.first_key_value(), None);
map.insert(1, "b");
map.insert(2, "a");
assert_eq!(map.first_key_value(), Some((&1, &"b")));
```
source#### pub fn last_key_value(&self) -> Option<(&K, &V)> where K: Ord,
🔬 This is a nightly-only experimental API. (`map_first_last`)Returns the last key-value pair in the map.
The key in this pair is the maximum key in the map.
##### Examples
Basic usage:
```
#![feature(map_first_last)]
use std::collections::BTreeMap;
let mut map = BTreeMap::new();
map.insert(1, "b");
map.insert(2, "a");
assert_eq!(map.last_key_value(), Some((&2, &"a")));
```
1.0.0 · source#### pub fn contains_key<Q>(&self, key: &Q) -> bool where K: Borrow<Q> + Ord, Q: Ord + ?Sized,
Returns `true` if the map contains a value for the specified key.
The key may be any borrowed form of the map’s key type, but the ordering on the borrowed form *must* match the ordering on the key type.
##### Examples
Basic usage:
```
use std::collections::BTreeMap;
let mut map = BTreeMap::new();
map.insert(1, "a");
assert_eq!(map.contains_key(&1), true);
assert_eq!(map.contains_key(&2), false);
```
1.17.0 · source#### pub fn range<T, R>(&self, range: R) -> Range<'_, K, V> where T: Ord + ?Sized, K: Borrow<T> + Ord, R: RangeBounds<T>,
Constructs a double-ended iterator over a sub-range of elements in the map.
The simplest way is to use the range syntax `min..max`, thus `range(min..max)` will yield elements from min (inclusive) to max (exclusive).
The range may also be entered as `(Bound<T>, Bound<T>)`, so for example
`range((Excluded(4), Included(10)))` will yield a left-exclusive, right-inclusive range from 4 to 10.
##### Panics
Panics if range `start > end`.
Panics if range `start == end` and both bounds are `Excluded`.
##### Examples
Basic usage:
```
use std::collections::BTreeMap;
use std::ops::Bound::Included;
let mut map = BTreeMap::new();
map.insert(3, "a");
map.insert(5, "b");
map.insert(8, "c");
for (&key, &value) in map.range((Included(&4), Included(&8))) {
println!("{}: {}", key, value);
}
assert_eq!(Some((&5, &"b")), map.range(4..).next());
```
1.0.0 · source#### pub fn iter(&self) -> Iter<'_, K, VGets an iterator over the entries of the map, sorted by key.
##### Examples
Basic usage:
```
use std::collections::BTreeMap;
let mut map = BTreeMap::new();
map.insert(3, "c");
map.insert(2, "b");
map.insert(1, "a");
for (key, value) in map.iter() {
println!("{}: {}", key, value);
}
let (first_key, first_value) = map.iter().next().unwrap();
assert_eq!((*first_key, *first_value), (1, "a"));
```
1.0.0 · source#### pub fn keys(&self) -> Keys<'_, K, VGets an iterator over the keys of the map, in sorted order.
##### Examples
Basic usage:
```
use std::collections::BTreeMap;
let mut a = BTreeMap::new();
a.insert(2, "b");
a.insert(1, "a");
let keys: Vec<_> = a.keys().cloned().collect();
assert_eq!(keys, [1, 2]);
```
1.0.0 · source#### pub fn values(&self) -> Values<'_, K, VGets an iterator over the values of the map, in order by key.
##### Examples
Basic usage:
```
use std::collections::BTreeMap;
let mut a = BTreeMap::new();
a.insert(1, "hello");
a.insert(2, "goodbye");
let values: Vec<&str> = a.values().cloned().collect();
assert_eq!(values, ["hello", "goodbye"]);
```
1.0.0 · source#### pub fn len(&self) -> usize
Returns the number of elements in the map.
##### Examples
Basic usage:
```
use std::collections::BTreeMap;
let mut a = BTreeMap::new();
assert_eq!(a.len(), 0);
a.insert(1, "a");
assert_eq!(a.len(), 1);
```
1.0.0 · source#### pub fn is_empty(&self) -> bool
Returns `true` if the map contains no elements.
##### Examples
Basic usage:
```
use std::collections::BTreeMap;
let mut a = BTreeMap::new();
assert!(a.is_empty());
a.insert(1, "a");
assert!(!a.is_empty());
```
Trait Implementations
---
source### impl<N: Clone + Network> Clone for BlockLocators<N> where N::BlockHash: Clone,
source#### fn clone(&self) -> BlockLocators<N>
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl<N: Debug + Network> Debug for BlockLocators<N> where N::BlockHash: Debug,
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl<N: Network> Default for BlockLocators<N>
source#### fn default() -> Self
Returns the “default value” for a type. Read more
source### impl<N: Network> Deref for BlockLocators<N>
#### type Target = BTreeMap<u32, (N::BlockHash, Option<BlockHeader<N>>)>
The resulting type after dereferencing.
source#### fn deref(&self) -> &Self::Target
Dereferences the value.
source### impl<'de, N: Network> Deserialize<'de> for BlockLocators<N>
source#### fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error>
Deserialize this value from the given Serde deserializer. Read more
source### impl<N: Network> Display for BlockLocators<N>
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl<N: Network> FromBytes for BlockLocators<N>
source#### fn read_le<R: Read>(reader: R) -> IoResult<Self>
Reads `Self` from `reader` as little-endian bytes.
#### fn from_bytes_le(bytes: &[u8]) -> Result<Self, Error>
Returns `Self` from a byte array in little-endian order.
source### impl<N: Network> FromStr for BlockLocators<N>
#### type Err = Error
The associated error which can be returned from parsing.
source#### fn from_str(block_locators: &str) -> Result<Self, Self::Err>
Parses a string `s` to return a value of this type. Read more
source### impl<N: PartialEq + Network> PartialEq<BlockLocators<N>> for BlockLocators<N> where N::BlockHash: PartialEq,
source#### fn eq(&self, other: &BlockLocators<N>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &BlockLocators<N>) -> bool
This method tests for `!=`.
source### impl<N: Network> Serialize for BlockLocators<N>
source#### fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error>
Serialize this value into the given Serde serializer. Read more
source### impl<N: Network> ToBytes for BlockLocators<N>
source#### fn write_le<W: Write>(&self, writer: W) -> IoResult<()>
Writes `self` into `writer` as little-endian bytes.
#### fn to_bytes_le(&self) -> Result<Vec<u8, Global>, Error>
Returns `self` as a byte array in little-endian order.
source### impl<N: Eq + Network> Eq for BlockLocators<N> where N::BlockHash: Eq,
source### impl<N: Network> StructuralEq for BlockLocators<N>
source### impl<N: Network> StructuralPartialEq for BlockLocators<N>
Auto Trait Implementations
---
### impl<N> RefUnwindSafe for BlockLocators<N> where <N as Network>::BlockHash: RefUnwindSafe, <<N as Network>::InnerCurve as PairingEngine>::G1Affine: RefUnwindSafe, <N as Network>::InnerCurve: PairingEngine<Fq = <N as Network>::OuterScalarField> + PairingEngine<Fr = <N as Network>::InnerScalarField>, <N as Network>::InnerScalarField: RefUnwindSafe, <N as Network>::LedgerRoot: RefUnwindSafe, <N as Network>::PoSWNonce: RefUnwindSafe, <N as Network>::PoSWProof: RefUnwindSafe, <N as Network>::TransactionsRoot: RefUnwindSafe,
### impl<N> Send for BlockLocators<N> where <N as Network>::InnerCurve: PairingEngine<Fq = <N as Network>::OuterScalarField> + PairingEngine<Fr = <N as Network>::InnerScalarField>,
### impl<N> Sync for BlockLocators<N> where <N as Network>::InnerCurve: PairingEngine<Fq = <N as Network>::OuterScalarField> + PairingEngine<Fr = <N as Network>::InnerScalarField>,
### impl<N> Unpin for BlockLocators<N>
### impl<N> UnwindSafe for BlockLocators<N> where <N as Network>::BlockHash: RefUnwindSafe, <<N as Network>::InnerCurve as PairingEngine>::G1Affine: RefUnwindSafe, <N as Network>::InnerCurve: PairingEngine<Fq = <N as Network>::OuterScalarField> + PairingEngine<Fr = <N as Network>::InnerScalarField>, <N as Network>::InnerScalarField: RefUnwindSafe, <N as Network>::LedgerRoot: RefUnwindSafe, <N as Network>::PoSWNonce: RefUnwindSafe, <N as Network>::PoSWProof: RefUnwindSafe, <N as Network>::TransactionsRoot: RefUnwindSafe,
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### pub fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### pub fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### pub fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<Q, K> Equivalent<K> for Q where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized,
source#### pub fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
source### impl<T> From<T> for T
const: unstable · source#### pub fn from(t: T) -> T
Performs the conversion.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### pub fn into(self) -> U
Performs the conversion.
### impl<T> Pointable for T
#### pub const ALIGN: usize
The alignment of pointer.
#### type Init = T
The type for initializers.
#### pub unsafe fn init(init: <T as Pointable>::Init) -> usize
Initializes a with the given initializer. Read more
#### pub unsafe fn deref<'a>(ptr: usize) -> &'aT
Dereferences the given pointer. Read more
#### pub unsafe fn deref_mut<'a>(ptr: usize) -> &'a mutT
Mutably dereferences the given pointer. Read more
#### pub unsafe fn drop(ptr: usize)
Drops the object pointed to by the given pointer. Read more
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### pub fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### pub fn clone_into(&self, target: &mutT)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T> ToString for T where T: Display + ?Sized,
source#### pub default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
### impl<V, T> VZip<V> for T where V: MultiLane<T>,
#### pub fn vzip(self) -> V
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct snarkos_storage::LedgerState
===
```
pub struct LedgerState<N: Network> { /* private fields */ }
```
Implementations
---
source### impl<N: Network> LedgerState<N>
source#### pub fn open_writer<S: Storage, P: AsRef<Path>>(path: P) -> Result<Self>
Opens a new writable instance of `LedgerState` from the given storage path.
For a read-only instance of `LedgerState`, use `LedgerState::open_reader`.
A writable instance of `LedgerState` possesses full functionality, whereas a read-only instance of `LedgerState` may only call immutable methods.
source#### pub fn open_reader<S: Storage, P: AsRef<Path>>(path: P) -> Result<Arc<Self>>
Opens a read-only instance of `LedgerState` from the given storage path.
For a writable instance of `LedgerState`, use `LedgerState::open_writer`.
A writable instance of `LedgerState` possesses full functionality, whereas a read-only instance of `LedgerState` may only call immutable methods.
source#### pub fn is_read_only(&self) -> bool
Returns `true` if the ledger is in read-only mode.
source#### pub fn latest_block(&self) -> Block<N>
Returns the latest block.
source#### pub fn latest_block_height(&self) -> u32
Returns the latest block height.
source#### pub fn latest_block_hash(&self) -> N::BlockHash
Returns the latest block hash.
source#### pub fn latest_block_timestamp(&self) -> i64
Returns the latest block timestamp.
source#### pub fn latest_block_difficulty_target(&self) -> u64
Returns the latest block difficulty target.
source#### pub fn latest_cumulative_weight(&self) -> u128
Returns the latest cumulative weight.
source#### pub fn latest_block_header(&self) -> BlockHeader<N>
Returns the latest block header.
source#### pub fn latest_block_transactions(&self) -> Transactions<N>
Returns the transactions from the latest block.
source#### pub fn latest_block_locators(&self) -> BlockLocators<N>
Returns the latest block locators.
source#### pub fn latest_ledger_root(&self) -> N::LedgerRoot
Returns the latest ledger root.
source#### pub fn contains_ledger_root(&self, ledger_root: &N::LedgerRoot) -> Result<bool>
Returns `true` if the given ledger root exists in storage.
source#### pub fn contains_block_height(&self, block_height: u32) -> Result<bool>
Returns `true` if the given block height exists in storage.
source#### pub fn contains_block_hash(&self, block_hash: &N::BlockHash) -> Result<bool>
Returns `true` if the given block hash exists in storage.
source#### pub fn contains_transaction(&self, transaction_id: &N::TransactionID) -> Result<bool>
Returns `true` if the given transaction ID exists in storage.
source#### pub fn contains_serial_number(&self, serial_number: &N::SerialNumber) -> Result<bool>
Returns `true` if the given serial number exists in storage.
source#### pub fn contains_commitment(&self, commitment: &N::Commitment) -> Result<bool>
Returns `true` if the given commitment exists in storage.
source#### pub fn get_ciphertext(&self, commitment: &N::Commitment) -> Result<N::RecordCiphertext>
Returns the record ciphertext for a given commitment.
source#### pub fn get_transition(&self, transition_id: &N::TransitionID) -> Result<Transition<N>>
Returns the transition for a given transition ID.
source#### pub fn get_transaction(&self, transaction_id: &N::TransactionID) -> Result<Transaction<N>>
Returns the transaction for a given transaction ID.
source#### pub fn get_transaction_metadata(&self, transaction_id: &N::TransactionID) -> Result<Metadata<N>>
Returns the transaction metadata for a given transaction ID.
source#### pub fn get_cumulative_weight(&self, block_height: u32) -> Result<u128>
Returns the cumulative weight up to a given block height (inclusive) for the canonical chain.
source#### pub fn get_block_height(&self, block_hash: &N::BlockHash) -> Result<u32>
Returns the block height for the given block hash.
source#### pub fn get_block_hash(&self, block_height: u32) -> Result<N::BlockHash>
Returns the block hash for the given block height.
source#### pub fn get_block_hashes(&self, start_block_height: u32, end_block_height: u32) -> Result<Vec<N::BlockHash>>
Returns the block hashes from the given `start_block_height` to `end_block_height` (inclusive).
source#### pub fn get_previous_block_hash(&self, block_height: u32) -> Result<N::BlockHash>
Returns the previous block hash for the given block height.
source#### pub fn get_block_header(&self, block_height: u32) -> Result<BlockHeader<N>>
Returns the block header for the given block height.
source#### pub fn get_block_headers(&self, start_block_height: u32, end_block_height: u32) -> Result<Vec<BlockHeader<N>>>
Returns the block headers from the given `start_block_height` to `end_block_height` (inclusive).
source#### pub fn get_block_transactions(&self, block_height: u32) -> Result<Transactions<N>>
Returns the transactions from the block of the given block height.
source#### pub fn get_block(&self, block_height: u32) -> Result<Block<N>>
Returns the block for a given block height.
source#### pub fn get_blocks(&self, start_block_height: u32, end_block_height: u32) -> Result<Vec<Block<N>>>
Returns the blocks from the given `start_block_height` to `end_block_height` (inclusive).
source#### pub fn get_previous_ledger_root(&self, block_height: u32) -> Result<N::LedgerRoot>
Returns the ledger root in the block header of the given block height.
source#### pub fn get_block_locators(&self, block_height: u32) -> Result<BlockLocators<N>>
Returns the block locators of the current ledger, from the given block height.
source#### pub fn check_block_locators(&self, block_locators: &BlockLocators<N>) -> Result<bool>
Check that the block locators are well formed.
source#### pub fn get_block_template<R: Rng + CryptoRng>(&self, recipient: Address<N>, is_public: bool, transactions: &[Transaction<N>], rng: &mut R) -> Result<BlockTemplate<N>>
Returns a block template based on the latest state of the ledger.
source#### pub fn mine_next_block<R: Rng + CryptoRng>(&self, recipient: Address<N>, is_public: bool, transactions: &[Transaction<N>], terminator: &AtomicBool, rng: &mut R) -> Result<(Block<N>, Record<N>)>
Mines a new block using the latest state of the given ledger.
source#### pub fn add_next_block(&self, block: &Block<N>) -> Result<()>
Adds the given block as the next block in the ledger to storage.
source#### pub fn revert_to_block_height(&self, block_height: u32) -> Result<Vec<Block<N>>>
Reverts the ledger state back to the given block height, returning the removed blocks on success.
source#### pub fn get_ledger_inclusion_proof(&self, commitment: N::Commitment) -> Result<LedgerProof<N>>
Returns a ledger proof for the given commitment.
source#### pub fn shut_down(&self) -> Arc<RwLock<()>>
Gracefully shuts down the ledger state.
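Taken together, these accessors support a simple read-only workflow. The sketch below is illustrative only and is not part of the crate documentation: the `Testnet2` network type, the `RocksStorage` backend, and the `anyhow::Result` alias are assumptions standing in for whatever concrete types a caller actually uses.
```
use std::path::Path;
use std::sync::Arc;

// Hypothetical helper: open a read-only ledger and report the chain tip.
// `Testnet2` and `RocksStorage` are placeholder type names (assumptions),
// chosen only to make the generic parameters concrete.
fn report_chain_tip(path: &Path) -> anyhow::Result<()> {
    let ledger: Arc<LedgerState<Testnet2>> =
        LedgerState::open_reader::<RocksStorage, _>(path)?;

    // Immutable queries are available on both reader and writer instances.
    let height = ledger.latest_block_height();
    let locators = ledger.get_block_locators(height)?;
    assert!(ledger.check_block_locators(&locators)?);

    println!(
        "tip height = {}, cumulative weight = {}",
        height,
        ledger.latest_cumulative_weight()
    );
    Ok(())
}
```
A writer opened with `open_writer` would additionally be able to call mutating methods such as `add_next_block` and `revert_to_block_height`.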
Trait Implementations
---
source### impl<N: Debug + Network> Debug for LedgerState<N> where N::BlockHash: Debug, N::LedgerRoot: Debug,
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
Auto Trait Implementations
---
### impl<N> !RefUnwindSafe for LedgerState<N>
### impl<N> Send for LedgerState<N> where <N as Network>::InnerCurve: PairingEngine<Fq = <N as Network>::OuterScalarField> + PairingEngine<Fr = <N as Network>::InnerScalarField>,
### impl<N> Sync for LedgerState<N> where <N as Network>::InnerCurve: PairingEngine<Fq = <N as Network>::OuterScalarField> + PairingEngine<Fr = <N as Network>::InnerScalarField>,
### impl<N> Unpin for LedgerState<N> where N: Unpin, <N as Network>::BlockHash: Unpin, <N as Network>::Commitment: Unpin, <N as Network>::FunctionID: Unpin, <<N as Network>::InnerCurve as PairingEngine>::G1Affine: Unpin, <N as Network>::InnerCircuitID: Unpin, <N as Network>::InnerCurve: PairingEngine<Fq = <N as Network>::OuterScalarField> + PairingEngine<Fr = <N as Network>::InnerScalarField>, <N as Network>::InnerScalarField: Unpin, <N as Network>::LedgerRoot: Unpin, <N as Network>::OuterProof: Unpin, <N as Network>::PoSWNonce: Unpin, <N as Network>::PoSWProof: Unpin, <N as Network>::ProgramAffineCurve: Unpin, <N as Network>::RecordCiphertext: Unpin, <N as Network>::RecordViewKey: Unpin, <N as Network>::SerialNumber: Unpin, <N as Network>::TransactionID: Unpin, <N as Network>::TransactionsRoot: Unpin, <N as Network>::TransitionID: Unpin,
### impl<N> !UnwindSafe for LedgerState<N>
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### pub fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### pub fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### pub fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### pub fn from(t: T) -> T
Performs the conversion.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### pub fn into(self) -> U
Performs the conversion.
### impl<T> Pointable for T
#### pub const ALIGN: usize
The alignment of pointer.
#### type Init = T
The type for initializers.
#### pub unsafe fn init(init: <T as Pointable>::Init) -> usize
Initializes a with the given initializer. Read more
#### pub unsafe fn deref<'a>(ptr: usize) -> &'aT
Dereferences the given pointer. Read more
#### pub unsafe fn deref_mut<'a>(ptr: usize) -> &'a mutT
Mutably dereferences the given pointer. Read more
#### pub unsafe fn drop(ptr: usize)
Drops the object pointed to by the given pointer. Read more
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
### impl<V, T> VZip<V> for T where V: MultiLane<T>,
#### pub fn vzip(self) -> V
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct snarkos_storage::Metadata
===
```
pub struct Metadata<N: Network> { /* private fields */ }
```
A helper struct containing transaction metadata.
*Attention*: This data structure is intended for usage in storage only.
Modifications to its layout will impact how metadata is represented in storage.
Implementations
---
source### impl<N: Network> Metadata<N>
source#### pub fn new(block_height: u32, block_hash: N::BlockHash, block_timestamp: i64, transaction_index: u16) -> Self
Initializes a new instance of `Metadata`.
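As a minimal illustration of the argument order (the numeric values and the `block_hash` parameter below are placeholders, not real chain data):
```
// Hypothetical: metadata for the third transaction of block 1200.
// `N: Network` is the usual network trait bound; the hash is passed in
// because this sketch has no real block to draw it from.
fn example_metadata<N: Network>(block_hash: N::BlockHash) -> Metadata<N> {
    Metadata::new(
        1200,          // block_height: u32
        block_hash,    // block_hash: N::BlockHash
        1_640_000_000, // block_timestamp: i64 (Unix seconds)
        2,             // transaction_index: u16
    )
}
```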
Trait Implementations
---
source### impl<N: Clone + Network> Clone for Metadata<N> where N::BlockHash: Clone,
source#### fn clone(&self) -> Metadata<N>
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl<N: Debug + Network> Debug for Metadata<N> where N::BlockHash: Debug,
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl<'de, N: Network> Deserialize<'de> for Metadata<N> where N::BlockHash: Deserialize<'de>,
source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
source### impl<N: PartialEq + Network> PartialEq<Metadata<N>> for Metadata<N> where N::BlockHash: PartialEq,
source#### fn eq(&self, other: &Metadata<N>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &Metadata<N>) -> bool
This method tests for `!=`.
source### impl<N: Network> Serialize for Metadata<N> where N::BlockHash: Serialize,
source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer. Read more
source### impl<N: Eq + Network> Eq for Metadata<N> where N::BlockHash: Eq,
source### impl<N: Network> StructuralEq for Metadata<N>
source### impl<N: Network> StructuralPartialEq for Metadata<N>
Auto Trait Implementations
---
### impl<N> RefUnwindSafe for Metadata<N> where <N as Network>::BlockHash: RefUnwindSafe,
### impl<N> Send for Metadata<N>
### impl<N> Sync for Metadata<N>
### impl<N> Unpin for Metadata<N> where <N as Network>::BlockHash: Unpin,
### impl<N> UnwindSafe for Metadata<N> where <N as Network>::BlockHash: UnwindSafe,
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### pub fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### pub fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### pub fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<Q, K> Equivalent<K> for Q where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized,
source#### pub fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
source### impl<T> From<T> for T
const: unstable · source#### pub fn from(t: T) -> T
Performs the conversion.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### pub fn into(self) -> U
Performs the conversion.
### impl<T> Pointable for T
#### pub const ALIGN: usize
The alignment of pointer.
#### type Init = T
The type for initializers.
#### pub unsafe fn init(init: <T as Pointable>::Init) -> usize
Initializes a with the given initializer. Read more
#### pub unsafe fn deref<'a>(ptr: usize) -> &'aT
Dereferences the given pointer. Read more
#### pub unsafe fn deref_mut<'a>(ptr: usize) -> &'a mutT
Mutably dereferences the given pointer. Read more
#### pub unsafe fn drop(ptr: usize)
Drops the object pointed to by the given pointer. Read more
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### pub fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### pub fn clone_into(&self, target: &mutT)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
### impl<V, T> VZip<V> for T where V: MultiLane<T>,
#### pub fn vzip(self) -> V
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct snarkos_storage::OperatorState
===
```
pub struct OperatorState<N: Network> { /* private fields */ }
```
Implementations
---
source### impl<N: Network> OperatorState<N>
source#### pub fn open_writer<S: Storage, P: AsRef<Path>>(path: P) -> Result<Self>
Opens a new writable instance of `OperatorState` from the given storage path.
source#### pub fn to_shares(&self) -> Vec<((u32, Record<N>), HashMap<Address<N>, u64>)>
Returns all the shares in storage.
source#### pub fn to_coinbase_records(&self) -> Vec<(u32, Record<N>)>
Returns all coinbase records in storage.
source#### pub fn get_shares_for_block(&self, block_height: u32, coinbase_record: Record<N>) -> Result<HashMap<Address<N>, u64>>
Returns the shares for a specific block, given the block height and coinbase record.
source#### pub fn get_shares_for_prover(&self, prover: &Address<N>) -> u64
Returns the shares for a specific prover, given the prover address.
source#### pub fn increment_share(&self, block_height: u32, coinbase_record: Record<N>, prover: &Address<N>) -> Result<()>
Increments the share count by one for a given block height, coinbase record and prover address.
source#### pub fn remove_shares(&self, block_height: u32, coinbase_record: Record<N>) -> Result<()>
Removes the shares for a given block height and coinbase record in storage.
source#### pub fn get_provers(&self) -> Vec<Address<N>>
Returns a list of provers which have submitted shares to an operator.
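A hedged sketch of how an operator might record a share and read a prover's running total; the surrounding setup (an opened `OperatorState`, an existing coinbase record and prover address) is assumed rather than shown, and `anyhow::Result` stands in for the crate's error type.
```
// Hypothetical share-accounting step for a mining-pool operator.
fn credit_share<N: Network>(
    operator: &OperatorState<N>,
    block_height: u32,
    coinbase_record: Record<N>,
    prover: &Address<N>,
) -> anyhow::Result<()> {
    // Record one accepted share from `prover` against this block's coinbase record.
    operator.increment_share(block_height, coinbase_record, prover)?;

    // Running total of shares this prover has submitted to the operator.
    let total = operator.get_shares_for_prover(prover);
    println!("prover now has {total} share(s)");
    Ok(())
}
```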
Trait Implementations
---
source### impl<N: Debug + Network> Debug for OperatorState<N>
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
Auto Trait Implementations
---
### impl<N> RefUnwindSafe for OperatorState<N> where N: RefUnwindSafe, <N as Network>::ProgramAffineCurve: RefUnwindSafe, <N as Network>::ProgramID: RefUnwindSafe, <N as Network>::RecordCiphertext: RefUnwindSafe, <N as Network>::RecordViewKey: RefUnwindSafe,
### impl<N> Send for OperatorState<N>
### impl<N> Sync for OperatorState<N>
### impl<N> Unpin for OperatorState<N> where N: Unpin, <N as Network>::ProgramAffineCurve: Unpin, <N as Network>::ProgramID: Unpin, <N as Network>::RecordCiphertext: Unpin, <N as Network>::RecordViewKey: Unpin,
### impl<N> UnwindSafe for OperatorState<N> where N: UnwindSafe, <N as Network>::ProgramAffineCurve: UnwindSafe, <N as Network>::ProgramID: UnwindSafe, <N as Network>::RecordCiphertext: UnwindSafe, <N as Network>::RecordViewKey: UnwindSafe,
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### pub fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### pub fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### pub fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### pub fn from(t: T) -> T
Performs the conversion.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### pub fn into(self) -> U
Performs the conversion.
### impl<T> Pointable for T
#### pub const ALIGN: usize
The alignment of pointer.
#### type Init = T
The type for initializers.
#### pub unsafe fn init(init: <T as Pointable>::Init) -> usize
Initializes a with the given initializer. Read more
#### pub unsafe fn deref<'a>(ptr: usize) -> &'aT
Dereferences the given pointer. Read more
#### pub unsafe fn deref_mut<'a>(ptr: usize) -> &'a mutT
Mutably dereferences the given pointer. Read more
#### pub unsafe fn drop(ptr: usize)
Drops the object pointed to by the given pointer. Read more
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
### impl<V, T> VZip<V> for T where V: MultiLane<T>,
#### pub fn vzip(self) -> V
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct snarkos_storage::ProverState
===
```
pub struct ProverState<N: Network> { /* private fields */ }
```
Implementations
---
source### impl<N: Network> ProverState<N>
source#### pub fn open_writer<S: Storage, P: AsRef<Path>>(path: P) -> Result<Self>
Opens a new writable instance of `ProverState` from the given storage path.
source#### pub fn contains_coinbase_record(&self, commitment: &N::Commitment) -> Result<bool>
Returns `true` if the given commitment exists in storage.
source#### pub fn to_coinbase_records(&self) -> Vec<(u32, Record<N>)>
Returns all coinbase records in storage.
source#### pub fn get_coinbase_record(&self, commitment: &N::Commitment) -> Result<(u32, Record<N>)>
Returns the coinbase record for a given commitment.
source#### pub fn add_coinbase_record(&self, block_height: u32, record: Record<N>) -> Result<()>
Adds the given coinbase record to storage.
source#### pub fn remove_coinbase_record(&self, commitment: &N::Commitment) -> Result<()>
Removes the given record from storage.
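A small, hypothetical round trip through these methods; `record.commitment()` is assumed to exist on `Record<N>` (it is not documented on this page), and `anyhow::Result` is again a stand-in error alias.
```
// Hypothetical: remember a freshly mined coinbase record, then look it up again.
fn track_coinbase<N: Network>(
    prover: &ProverState<N>,
    block_height: u32,
    record: Record<N>,
) -> anyhow::Result<()> {
    let commitment = record.commitment(); // assumed accessor on Record<N>
    prover.add_coinbase_record(block_height, record)?;

    if prover.contains_coinbase_record(&commitment)? {
        let (height, _record) = prover.get_coinbase_record(&commitment)?;
        assert_eq!(height, block_height);
    }
    Ok(())
}
```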
Trait Implementations
---
source### impl<N: Debug + Network> Debug for ProverState<N>
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
Auto Trait Implementations
---
### impl<N> RefUnwindSafe for ProverState<N> where N: RefUnwindSafe, <N as Network>::Commitment: RefUnwindSafe, <N as Network>::ProgramAffineCurve: RefUnwindSafe, <N as Network>::ProgramID: RefUnwindSafe, <N as Network>::RecordCiphertext: RefUnwindSafe, <N as Network>::RecordViewKey: RefUnwindSafe,
### impl<N> Send for ProverState<N>
### impl<N> Sync for ProverState<N>
### impl<N> Unpin for ProverState<N> where N: Unpin, <N as Network>::Commitment: Unpin, <N as Network>::ProgramAffineCurve: Unpin, <N as Network>::ProgramID: Unpin, <N as Network>::RecordCiphertext: Unpin, <N as Network>::RecordViewKey: Unpin,
### impl<N> UnwindSafe for ProverState<N> where N: UnwindSafe, <N as Network>::Commitment: UnwindSafe, <N as Network>::ProgramAffineCurve: UnwindSafe, <N as Network>::ProgramID: UnwindSafe, <N as Network>::RecordCiphertext: UnwindSafe, <N as Network>::RecordViewKey: UnwindSafe,
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### pub fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### pub fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### pub fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### pub fn from(t: T) -> T
Performs the conversion.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### pub fn into(self) -> U
Performs the conversion.
### impl<T> Pointable for T
#### pub const ALIGN: usize
The alignment of pointer.
#### type Init = T
The type for initializers.
#### pub unsafe fn init(init: <T as Pointable>::Init) -> usize
Initializes a with the given initializer. Read more
#### pub unsafe fn deref<'a>(ptr: usize) -> &'aT
Dereferences the given pointer. Read more
#### pub unsafe fn deref_mut<'a>(ptr: usize) -> &'a mutT
Mutably dereferences the given pointer. Read more
#### pub unsafe fn drop(ptr: usize)
Drops the object pointed to by the given pointer. Read more
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
### impl<V, T> VZip<V> for T where V: MultiLane<T>,
#### pub fn vzip(self) -> V
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Constant snarkos_storage::MAXIMUM_BLOCK_LOCATORS
===
```
pub const MAXIMUM_BLOCK_LOCATORS: u32 = MAXIMUM_LINEAR_BLOCK_LOCATORS.saturating_add(MAXIMUM_QUADRATIC_BLOCK_LOCATORS); // 0x0000_0060u32
```
The total maximum number of block locators.
Constant snarkos_storage::MAXIMUM_LINEAR_BLOCK_LOCATORS
===
```
pub const MAXIMUM_LINEAR_BLOCK_LOCATORS: u32 = 64;
```
The maximum number of linear block locators.
Constant snarkos_storage::MAXIMUM_QUADRATIC_BLOCK_LOCATORS
===
```
pub const MAXIMUM_QUADRATIC_BLOCK_LOCATORS: u32 = 32;
```
The maximum number of quadratic block locators.
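The combined constant is simply the saturating sum of the two limits above (64 + 32 = 96 = 0x60). The following standalone snippet merely restates that arithmetic as a compile-time check and is not taken from the crate.
```
const MAXIMUM_LINEAR_BLOCK_LOCATORS: u32 = 64;
const MAXIMUM_QUADRATIC_BLOCK_LOCATORS: u32 = 32;
const MAXIMUM_BLOCK_LOCATORS: u32 =
    MAXIMUM_LINEAR_BLOCK_LOCATORS.saturating_add(MAXIMUM_QUADRATIC_BLOCK_LOCATORS);

// 64 + 32 = 96, i.e. 0x0000_0060.
const _: () = assert!(MAXIMUM_BLOCK_LOCATORS == 0x60);
```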
August 14, 2022
Type Package
Title Arbitrary Dimensional Clifford Algebras
Version 1.0-8
Maintainer <NAME> <<EMAIL>>
Description A suite of routines for Clifford algebras, using the
'Map' class of the Standard Template Library. Canonical
reference: Hestenes (1987, ISBN 90-277-1673-0, ``Clifford algebra
to geometric calculus''). Special cases including Lorentz transforms,
quaternion multiplication, and Grassman algebra, are discussed.
Conformal geometric algebra theory is implemented. Uses 'disordR'
discipline.
License GPL (>= 2)
Suggests knitr,rmarkdown,testthat,onion,lorentz
VignetteBuilder knitr
Imports Rcpp (>= 0.12.5),mathjaxr,disordR (>= 0.0-8), magrittr, methods, partitions (>= 1.10-4)
LinkingTo Rcpp,BH
SystemRequirements C++11
URL https://github.com/RobinHankin/clifford
BugReports https://github.com/RobinHankin/clifford/issues
RdMacros mathjaxr
R topics documented:
clifford-packag... 2
allclif... 4
antivecto... 5
as.vecto... 6
carta... 7
cliffor... 8
cons... 9
dro... 10
eve... 11
Extract.cliffor... 12
grad... 13
homo... 15
horne... 16
involutio... 17
lowleve... 18
magnitud... 19
minu... 20
numeric_to_cliffor... 21
Ops.cliffor... 22
prin... 25
quaternio... 27
rclif... 27
signatur... 29
summary.cliffor... 31
ter... 32
za... 33
zer... 34
clifford-package Arbitrary Dimensional Clifford Algebras
Description
A suite of routines for Clifford algebras, using the ’Map’ class of the Standard Template Library.
Canonical reference: Hestenes (1987, ISBN 90-277-1673-0, "Clifford algebra to geometric calcu-
lus"). Special cases including Lorentz transforms, quaternion multiplication, and Grassman algebra,
are discussed. Conformal geometric algebra theory is implemented. Uses ’disordR’ discipline.
Details
The DESCRIPTION file:
Package: clifford
Type: Package
Title: Arbitrary Dimensional Clifford Algebras
Version: 1.0-8
Authors@R: person(given=c("Robin", "<NAME>."), family="Hankin", role = c("aut","cre"), email="hankin.robin@
Maintainer: <NAME> <<EMAIL>>
Description: A suite of routines for Clifford algebras, using the ’Map’ class of the Standard Template Library.
License: GPL (>= 2)
Suggests: knitr,rmarkdown,testthat,onion,lorentz
VignetteBuilder: knitr
Imports: Rcpp (>= 0.12.5),mathjaxr,disordR (>= 0.0-8), magrittr, methods, partitions (>= 1.10-4)
LinkingTo: Rcpp,BH
SystemRequirements: C++11
URL: https://github.com/RobinHankin/clifford
BugReports: https://github.com/RobinHankin/clifford/issues
RdMacros: mathjaxr
Author: <NAME> [aut, cre] (<https://orcid.org/0000-0001-5982-0415>)
Index of help topics:
Ops.clifford Arithmetic Ops Group Methods for 'clifford'
objects
[.clifford Extract or Replace Parts of a clifford
allcliff Clifford object containing all possible terms
antivector Antivectors or pseudovectors
as.vector Coerce a clifford vector to a numeric vector
cartan Cartan map between clifford algebras
clifford Create, coerce, and test for 'clifford' objects
clifford-package Arbitrary Dimensional Clifford Algebras
const The constant term of a Clifford object
drop Drop redundant information
even Even and odd clifford objects
grade The grade of a clifford object
homog Homogenous Clifford objects
horner Horner's method
involution Clifford involutions
lowlevel Low-level helper functions for 'clifford'
objects
magnitude Magnitude of a clifford object
minus Take the negative of a vector
numeric_to_clifford Coercion from numeric to Clifford form
print.clifford Print clifford objects
quaternion Quaternions using Clifford algebras
rcliff Random clifford objects
signature The signature of the Clifford algebra
summary.clifford Summary methods for clifford objects
term Deal with terms
zap Zap small values in a clifford object
zero The zero Clifford object
Author(s)
NA
Maintainer: <NAME> <<EMAIL>>
References
• <NAME> (2012). A new approach to differential geometry using Clifford’s geometric Algebra,
Birkhauser. ISBN 978-0-8176-8282-8
• <NAME> (1987). Clifford algebra to geometric calculus, Kluwer. ISBN 90-277-1673-0
• <NAME> (2009). Geometric algebra with applications in engineering, Springer. ISBN
978-3-540-89068-3
• <NAME> (2013). Foundations of geometric algebra computing. Springer, ISBN 978-3-
642-31794-1
See Also
clifford
Examples
as.1vector(1:4)
as.1vector(1:4) * rcliff()
# Following from Ablamowicz and Fauser (see vignette):
x <- clifford(list(1:3,c(1,5,7,8,10)),c(4,-10)) + 2
y <- clifford(list(c(1,2,3,7),c(1,5,6,8),c(1,4,6,7)),c(4,1,-3)) - 1
x*y # signature irrelevant
allcliff Clifford object containing all possible terms
Description
The Clifford algebra on basis vectors e_1, e_2, . . . , e_n has 2^n independent multivectors. Function
allcliff() generates a clifford object with a nonzero coefficient for each multivector.
Usage
allcliff(n,grade)
Arguments
n Integer specifying dimension of underlying vector space
grade Grade of multivector to be returned. If missing, multivector contains every term
of every grade ≤ n
Author(s)
<NAME>
Examples
allcliff(6)
a <- allcliff(5)
a[] <- rcliff()*100
antivector Antivectors or pseudovectors
Description
Antivectors or pseudovectors
Usage
antivector(v, n = length(v))
as.antivector(v)
is.antivector(C, include.pseudoscalar=FALSE)
Arguments
v Numeric vector
n Integer specifying dimensionality of underlying vector space
C Clifford object
include.pseudoscalar
Boolean: should the pseudoscalar be considered an antivector?
Details
An antivector is an n-dimensional Clifford object, all of whose terms are of grade n − 1. An
antivector has n degrees of freedom. Function antivector(v,n) interprets v[i] as the coefficient
of e_1 e_2 · · · e_{i−1} e_{i+1} · · · e_n.
Function as.antivector() is a convenience wrapper, coercing its argument to an antivector of
minimal dimension (zero entries are interpreted consistently).
The pseudoscalar is a peculiar edge case. Consider:
A <- clifford(list(c(1,2,3)))
B <- A + clifford(list(c(1,2,4)))
> is.antivector(A)
[1] FALSE
> is.antivector(B)
[1] TRUE
> is.antivector(A,include.pseudoscalar=TRUE)
[1] TRUE
> is.antivector(B,include.pseudoscalar=TRUE)
[1] TRUE
One could argue that A should be an antivector as it is a term in B, which is definitely an antivector.
Use include.pseudoscalar=TRUE to ensure consistency in this case.
Compare as.1vector(), which returns a clifford object of grade 1.
Note
An antivector is always a blade.
Author(s)
<NAME>
References
Wikipedia contributors. (2018, July 20). “Antivector”. In Wikipedia, The Free Encyclopedia. Retrieved
19:06, January 27, 2020, from https://en.wikipedia.org/w/index.php?title=Antivector&oldid=851094060
See Also
as.1vector
Examples
antivector(1:5)
as.1vector(c(1,1,2)) %X% as.1vector(c(3,2,2))
c(1*2-2*2, 2*3-1*2, 1*2-1*3) # note sign of e_13
as.vector Coerce a clifford vector to a numeric vector
Description
Given a clifford object with all terms of grade 1, return the corresponding numeric vector
Usage
## S3 method for class 'clifford'
as.vector(x,mode = "any")
Arguments
x Object of class clifford
mode ignored
Note
The awkward R idiom of this function is because the terms may be stored in any order; see the
examples
Author(s)
<NAME>
See Also
numeric_to_clifford
Examples
x <- clifford(list(6,2,9),1:3)
as.vector(x)
as.1vector(as.vector(x)) == x # should be TRUE
cartan Cartan map between clifford algebras
Description
Cartan’s map isomorphisms from Cl(p, q) to Cl(p − 4, q + 4) and Cl(p + 4, q − 4)
Usage
cartan(C, n = 1)
cartan_inverse(C, n = 1)
Arguments
C Object of class clifford
n Strictly positive integer
Value
Returns an object of class clifford. The default value n=1 maps Cl(4, q) to Cl(0, q + 4) (function cartan())
and Cl(0, q) to Cl(4, q − 4) (function cartan_inverse()).
Author(s)
<NAME>
References
<NAME> and <NAME>angwine 2017. “Multivector and multivector matrix inverses in real Clifford
algebras”, Applied Mathematics and Computation. 311:3755-89
See Also
clifford
Examples
a <- rcliff(d=7) # Cl(4,3)
b <- rcliff(d=7) # Cl(4,3)
signature(4,3) # e1^2 = e2^2 = e3^2 = e4^2 = +1; e5^2 = e6^2=e7^2 = -1
ab <- a*b # multiplication in Cl(4,3)
signature(0,7) # e1^2 = ... = e7^2 = -1
cartan(a)*cartan(b) == cartan(ab) # multiplication in Cl(0,7); should be TRUE
signature(Inf) # restore default
clifford Create, coerce, and test for clifford objects
Description
An object of class clifford is a member of a Clifford algebra. These objects may be added and
multiplied, and have various applications in physics and mathematics.
Usage
clifford(terms, coeffs=1)
is_ok_clifford(terms, coeffs)
as.clifford(x)
is.clifford(x)
nbits(x)
nterms(x)
## S3 method for class 'clifford'
dim(x)
Arguments
terms A list of integer vectors with strictly increasing entries corresponding to the basis
vectors of the underlying vector space
coeffs Numeric vector of coefficients
x Object of class clifford
Details
• Function clifford() is the formal creation mechanism for clifford objects
• Function as.clifford() is much more user-friendly and attempts to coerce a range of input
arguments to clifford form
• Function nbits() returns the number of bits required in the low-level C routines to store the
terms (this is the largest entry in the list of terms). For a scalar, this is zero and for the zero
clifford object it (currently) returns zero as well although a case could be made for NULL
• Function nterms() returns the number of terms in the expression
• Function is_ok_clifford() is a helper function that checks for consistency of its arguments
• Function is.term() returns TRUE if all terms of its argument have the same grade
Author(s)
<NAME>
References
Snygg 2012. “A new approach to differential geometry using Clifford’s geometric algebra”. Birkhauser;
Springer Science+Business.
See Also
Ops.clifford
Examples
(x <- clifford(list(1,2,1:4),1:3)) # Formal creation method
(y <- as.1vector(4:2))
(z <- rcliff(include.fewer=TRUE))
terms(x+100)
coeffs(z)
## Clifford objects may be added and multiplied:
x + y
x*y
const The constant term of a Clifford object
Description
Get and set the constant term of a clifford object.
Usage
const(C,drop=TRUE)
is.real(C)
## S3 replacement method for class 'clifford'
const(x) <- value
Arguments
C,x Clifford object
value Replacement value
drop Boolean, with default TRUE meaning to return the constant coerced to numeric,
and FALSE meaning to return a (constant) Clifford object
Details
Extractor method for specific terms. Function const() returns the constant element of a Clifford
object. Note that const(C) returns the same as grade(C,0), but is faster.
The R idiom in const<-() is slightly awkward:
> body(`const<-.clifford`)
{
stopifnot(length(value) == 1)
x <- x - const(x)
return(x + value)
}
The reason that it is not simply return(x-const(x)+value) or return(x+value-const(x)) is
to ensure numerical accuracy; see examples.
Author(s)
<NAME>
See Also
grade, clifford, getcoeffs, is.zero
Examples
X <- clifford(list(1,1:2,1:3,3:5),6:9)
X
X <- X + 1e300
X
const(X) # should be 1e300
const(X) <- 0.6
const(X) # should be 0.6, no numerical error
# compare naive approach:
X <- clifford(list(1,1:2,1:3,3:5),6:9)+1e300
X+0.6-const(X) # constant gets lost in the numerics
X <- clifford(list(1,1:2,1:3,3:5),6:9)+1e-300
X-const(X)+0.6 # answer correct by virtue of left-associativity
x <- 2+rcliff(d=3,g=3)
jj <- x*cliffconj(x)
is.real(jj*rev(jj)) # should be TRUE
drop Drop redundant information
Description
Coerce constant Clifford objects to numeric
Usage
drop(x)
Arguments
x Clifford object
Details
If its argument is a constant clifford object, coerce to numeric.
Note
Many functions in the package take drop as an argument which, if TRUE, means that the function
returns a dropped value.
Author(s)
<NAME>
See Also
grade,getcoeffs
Examples
drop(as.clifford(5))
const(rcliff())
const(rcliff(),drop=FALSE)
even Even and odd clifford objects
Description
A clifford object is even if every term has even grade, and odd if every term has odd grade.
Functions is.even() and is.odd() test a clifford object for evenness or oddness.
Functions evenpart() and oddpart() extract the even or odd terms from a clifford object, and we
write A+ and A− respectively; we have A = A+ + A−
Usage
is.even(C)
is.odd(C)
evenpart(C)
oddpart(C)
Arguments
C Clifford object
Author(s)
<NAME>
See Also
grade
Examples
A <- rcliff()
A == evenpart(A) + oddpart(A) # should be true
Extract.clifford Extract or Replace Parts of a clifford
Description
Extract or replace subsets of cliffords.
Usage
## S3 method for class 'clifford'
C[index, ...]
## S3 replacement method for class 'clifford'
C[index, ...] <- value
coeffs(x)
coeffs(x) <- value
list_modifier(B)
getcoeffs(C, B)
Arguments
C,x A clifford object
index elements to extract or replace
value replacement value
B A list of integer vectors, terms
... Further arguments
Details
Extraction and replacement methods. The extraction method uses getcoeffs() and the replace-
ment method uses low-level helper function c_overwrite().
In the extraction function a[index], if index is a list, further arguments are ignored; if not, the dots
are used. If index is a list, its elements are interpreted as integer vectors indicating which terms to
be extracted (even if it is a disord object). If index is a disord object, standard consistency rules
are applied. The extraction methods are designed so that idiom such as a[coeffs(a)>3] works.
For replacement methods, the standard use-case is a[i] <- b in which argument i is a list of integer
vectors and b a length-one numeric vector. Otherwise, to manipulate parts of a clifford object, use
coeffs(a) <- value; this effectively leverages disord formalism. Idiom such as a[coeffs(a)<2]
<- 0 is not currently implemented (to do this, use coeffs(a)[coeffs(a)<2] <- 0). Replacement
using a list-valued index, as in A[i] <-value uses an ugly hack if value is zero. Replacement
methods are not yet finalised and not yet fully integrated with the disordR package.
Idiom such as a[] <- b follows the spray package. If b is a length-one scalar, then coeffs(a) <-
b has the same effect as a[] <- b.
Functions terms() [see term.Rd] and coeffs() extract the terms and coefficients from a clifford
object. These functions return disord objects but the ordering is consistent between them (an
extended discussion of this phenomenon is presented in the mvp package).
Function coeffs<-() (idiom coeffs(a) <- b) sets all coefficients of a to b. This has the same
effect as a[] <- b.
Extraction and replacement methods treat 0 specially, translating it (via list_modifier()) to
numeric(0).
Extracting or replacing a list with a repeated elements is usually a Bad Idea (tm). However, if option
warn_on_repeats is set to FALSE, no warning will be given (and the coefficient will be the sum of
the coefficients of the term; see the examples).
Function getcoeffs() is a lower-level helper function that lacks the succour offered by [.clifford().
It returns a numeric vector [not a disord object: the order of the elements is determined by the order
of argument B]. Compare standard extraction, eg a[index], which returns a clifford object.
See Also
Ops.clifford,clifford,term
Examples
A <- clifford(list(1,1:2,1:3),1:3)
B <- clifford(list(1:2,1:6),c(44,45))
A[1,c(1,3,4)]
A[2:3, 4] <- 99
A[] <- B
# clifford(list(1,1:2,1:2),1:3) # would give a warning
options("warn_on_repeats" = FALSE)
clifford(list(1,1:2,1:2),1:3) # works; 1e1 + 5e_12
options("warn_on_repeats" = TRUE) # return to default behaviour.
grade The grade of a clifford object
Description
The grade of a term is the number of basis vectors in it.
Usage
grade(C, n, drop=TRUE)
grade(C,n) <- value
grades(x)
gradesplus(x)
gradesminus(x)
gradeszero(x)
Arguments
C,x Clifford object
n Integer vector specifying grades to extract
value Replacement value, a numeric vector
drop Boolean, with default TRUE meaning to coerce a constant Clifford object to nu-
meric, and FALSE meaning not to
Details
A term is a single expression in a Clifford object. It has a coefficient and is described by the basis
vectors it comprises. Thus 4e_234 is a term but e_3 + e_5 is not.
The grade of a term is the number of basis vectors in it. Thus the grade of e_1 is 1, and the grade of
e_125 = e_1 e_2 e_5 is 3. The grade operator ⟨·⟩_r is used to extract terms of a particular grade, with
A = ⟨A⟩_0 + ⟨A⟩_1 + ⟨A⟩_2 + · · · = Σ_r ⟨A⟩_r
for any Clifford object A. Thus ⟨A⟩_r is said to be homogenous of grade r. Hestenes sometimes
writes subscripts that specify grades using an overbar, as in ⟨A⟩_r̄. It is conventional to denote the
zero-grade object ⟨A⟩_0 as simply ⟨A⟩.
We have
⟨A + B⟩_r = ⟨A⟩_r + ⟨B⟩_r,   ⟨λA⟩_r = λ⟨A⟩_r,   ⟨⟨A⟩_r⟩_s = ⟨A⟩_r δ_rs.
Function grades() returns an (unordered) vector specifying the grades of the constituent terms.
Function grades<-() allows idiom such as grade(x,1:2) <- 7 to operate as expected [here to set
all coefficients of terms with grades 1 or 2 to value 7].
Function gradesplus() returns the same but counting only basis vectors that square to +1, and
gradesminus() counts only basis vectors that square to −1. Function signature() controls which
basis vectors square to +1 and which to −1.
From Perwass, page 57, given a bilinear form
⟨x, x⟩ = x_1^2 + x_2^2 + · · · + x_p^2 − x_{p+1}^2 − · · · − x_{p+q}^2
and a basis blade e_A with A ⊆ {1, . . . , p + q}, then
gr(e_A) = |{a ∈ A: 1 ≤ a ≤ p + q}|
gr_+(e_A) = |{a ∈ A: 1 ≤ a ≤ p}|
gr_−(e_A) = |{a ∈ A: p < a ≤ p + q}|
Function gradeszero() counts only the basis vectors squaring to zero (I have not seen this any-
where else, but it is a logical suggestion).
If the signature is zero, then the Clifford algebra reduces to a Grassman algebra and products match
the wedge product of exterior calculus. In this case, functions gradesplus() and gradesminus()
return NA.
Function grade(C,n) returns a clifford object with just the elements of grade g, where g %in% n.
The zero grade term, grade(C,0), is given more naturally by const(C).
Function c_grade() is a helper function that is documented at Ops.clifford.Rd.
Note
In the C code, “term” has a slightly different meaning, referring to the vectors without the associated
coefficient.
Author(s)
<NAME>
References
<NAME> 2009. “Geometric algebra with applications in engineering”. Springer.
See Also
signature, const
Examples
a <- clifford(sapply(seq_len(7),seq_len),seq_len(7))
a
grades(a)
grade(a,5)
signature(2,2)
x <- rcliff()
drop(gradesplus(x) + gradesminus(x) + gradeszero(x) - grades(x))
a <- rcliff()
a == Reduce(`+`,sapply(unique(grades(a)),function(g){grade(a,g)}))
homog Homogenous Clifford objects
Description
A clifford object is homogenous if all its terms are the same grade. A scalar (including the zero
clifford object) is considered to be homogenous. This ensures that is.homog(grade(C,n)) always
returns TRUE.
Usage
is.homog(C)
Arguments
C Object of class clifford
Note
Nonzero homogenous clifford objects have a multiplicative inverse.
Author(s)
<NAME>
Examples
is.homog(rcliff())
is.homog(rcliff(include.fewer=FALSE))
horner Horner’s method
Description
Horner’s method for Clifford objects
Usage
horner(P,v)
Arguments
P Multivariate polynomial
v Numeric vector of coefficients
Details
Given a polynomial
p(x) = a_0 + a_1 x + a_2 x^2 + · · · + a_n x^n
it is possible to express p(x) in the algebraically equivalent form
p(x) = a_0 + x(a_1 + x(a_2 + · · · + x(a_{n−1} + x a_n) · · ·))
which is much more efficient for evaluation, as it requires only n multiplications and n additions,
and this is optimal. The output of horner() depends on the signature().
Note
Horner’s method is not as cool for Clifford objects as it is for (e.g.) multivariate polynomials or
freealg objects. This is because powers of Clifford objects don’t get more complicated as the
power increases.
Author(s)
<NAME>
Examples
horner(1+e(1:3)+e(2:3) , 1:6)
rcliff() |> horner(1:4)
involution Clifford involutions
Description
An involution is a function that is its own inverse, or equivalently f(f(x)) = x. There are several
important involutions on Clifford objects; these commute past the grade operator, f(⟨A⟩_r) = ⟨f(A)⟩_r,
and are linear: f(αA + βB) = αf(A) + βf(B).
The dual is documented here for convenience, even though it is not an involution (applying the dual
four times is the identity).
• The reverse A∼ is given by rev() (both Perwass and Dorst use a tilde, as in Ã or A∼; however,
both Hestenes and Chisholm use a dagger, as in A†. This page uses Perwass’s notation).
The reverse of a term written as a product of basis vectors is simply the product of the same
basis vectors but written in reverse order. This changes the sign of the term if the number of
basis vectors is 2 or 3 (modulo 4). Thus, for example, (e_1 e_2 e_3)∼ = e_3 e_2 e_1 = −e_1 e_2 e_3 and
(e_1 e_2 e_3 e_4)∼ = e_4 e_3 e_2 e_1 = +e_1 e_2 e_3 e_4. Formally, if X = e_{i_1} · · · e_{i_k}, then X̃ = e_{i_k} · · · e_{i_1}.
⟨A∼⟩_r = ⟨A⟩_r∼
Perwass shows that ⟨AB⟩∼_r = (−1)^{r(r−1)/2} ⟨B̃ Ã⟩_r.
• The Conjugate A† is given by Conj() (we use Perwass’s notation, def 2.9 p59). This depends
on the signature of the Clifford algebra; see grade.Rd for notation. Given a basis blade e_A
with A ⊆ {1, . . . , p + q}, we have e_A† = (−1)^m e_A∼, where m = gr_−(A). Alternatively,
we might say
(⟨A⟩_r)† = (−1)^m (−1)^{r(r−1)/2} ⟨A⟩_r
where m = gr_−(⟨A⟩_r) [NB I have changed Perwass’s notation].
• The main involution or grade involution Â is given by gradeinv(). This changes the
sign of any term with odd grade:
⟨Â⟩r = (−1)^r ⟨A⟩r
(I don’t see this in Perwass or Hestenes; notation follows Hitzer and Sangwine). It is a special
case of grade negation.
• The grade r-negation Ar is given by neg(). This changes the sign of the grade r component
of A. It is formally defined as A − 2⟨A⟩r, but function neg() uses a more efficient method. It is
possible to negate all terms with specified grades: for example, negating the terms of grades 1, 2,
and 5 gives A − 2(⟨A⟩1 + ⟨A⟩2 + ⟨A⟩5), and the R idiom would be neg(A, c(1,2,5)). Note that
Hestenes uses “Ar” to mean the same as ⟨A⟩r.
• The Clifford conjugate Ā is given by cliffconj(). It is distinct from conjugation A†, and is
defined in Hitzer and Sangwine as
⟨Ā⟩r = (−1)^(r(r+1)/2) ⟨A⟩r.
• The dual C∗ of a clifford object C is given by dual(C,n); argument n is the dimension of the
underlying vector space. Perwass gives C∗ = CI⁻¹,
where I = e1 e2 . . . en is the unit pseudoscalar [note that Hestenes uses I to mean something
different]. The dual is sensitive to the signature of the Clifford algebra and the dimension of
the underlying vector space.
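For example, the grade-negation identity above can be checked directly in package idiom:
A <- rcliff()
neg(A, c(1,2,5)) == A - 2*(grade(A,1) + grade(A,2) + grade(A,5))   # should be TRUE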
Usage
## S3 method for class 'clifford'
rev(x)
## S3 method for class 'clifford'
Conj(z)
cliffconj(z)
neg(C,n)
gradeinv(C)
Arguments
C,x,z Clifford object
n Integer vector specifying grades to be negated in neg()
Author(s)
<NAME>
See Also
grade
Examples
x <- rcliff()
x
rev(x)
A <- rblade(g=3)
B <- rblade(g=4)
rev(A %^% B) == rev(B) %^% rev(A) # should be TRUE
rev(A * B) == rev(B) * rev(A) # should be TRUE
a <- rcliff()
dual(dual(dual(dual(a,8),8),8),8) == a # should be TRUE
lowlevel Low-level helper functions for clifford objects
Description
Helper functions for clifford objects, written in C using the STL map class.
Usage
c_identity(L, p, m)
c_grade(L, c, m, n)
c_add(L1, c1, L2, c2, m)
c_multiply(L1, c1, L2, c2, m, sig)
c_power(L, c, m, p, sig)
c_equal(L1, c1, L2, c2, m)
c_overwrite(L1, c1, L2, c2, m)
c_cartan(L, c, m, n)
c_cartan_inverse(L, c, m, n)
Arguments
L,L1,L2 Lists of terms
c1,c2,c Numeric vectors of coefficients
m Maximum entry of terms
n Grade to extract
p Integer power
sig Two positive integers, p and q, representing the number of +1 and −1 terms on
the main diagonal of quadratic form
Details
The functions documented here are low-level helper functions that wrap the C code. They are
called by functions like clifford_plus_clifford(), which are themselves called by the binary
operators documented at Ops.clifford.Rd.
Function clifford_inverse() is problematic as nonnull blades always have an inverse; but func-
tion is.blade() is not yet implemented. Blades (including null blades) have a pseudoinverse, but
this is not implemented yet either.
Value
The high-level functions documented here return an object of class clifford. But don’t use the low-level
functions.
Author(s)
<NAME>
See Also
Ops.clifford
magnitude Magnitude of a clifford object
Description
Following Perwass, the magnitude of a multivector is defined as
||A|| = √(A ∗ A)
where A ∗ A denotes the Euclidean scalar product eucprod(). Recall that the Euclidean scalar
product is never negative (the function body is sqrt(abs(eucprod(z))); the abs() is needed to
avoid numerical roundoff errors in eucprod() giving a negative value).
Usage
## S3 method for class 'clifford'
Mod(z)
Arguments
z Clifford objects
Note
If you want the square, ||A||^2, and not ||A||, it is faster and more accurate to use eucprod(A),
because this avoids a needless square root.
There is a nice example of scalar product at rcliff.Rd.
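For example:
A <- rcliff()
Mod(A)^2 - eucprod(A)   # should be zero, up to floating-point rounding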
Author(s)
<NAME>
See Also
Ops.clifford, Conj, rcliff
Examples
Mod(rcliff())
# Perwass, p68, asserts that if A is a k-blade, then (in his notation)
# AA == A*A.
# In package idiom, A*A == A %star% A:
A <- rcliff()
Mod(A*A - A %star% A) # meh
A <- rblade()
Mod(A*A - A %star% A) # should be small
minus Take the negative of a vector
Description
Very simple function that takes the negative of a vector, included here so that idiom such as
coeffs(z)[gradesminus(z)%%2 != 0] %<>% minus
works as intended (this is taken from Conj.clifford()).
Usage
minus(x)
Arguments
x Any vector or disord object
Value
Returns a vector or disord
Author(s)
<NAME>
numeric_to_clifford Coercion from numeric to Clifford form
Description
Given a numeric value or vector, return a Clifford algebra element
Usage
numeric_to_clifford(x)
as.1vector(x)
is.1vector(x)
scalar(x=1)
as.scalar(x=1)
is.scalar(C)
basis(n,x=1)
e(n,x=1)
pseudoscalar(n,x=1)
as.pseudoscalar(n,x=1)
is.pseudoscalar(C)
Arguments
x Numeric vector
n Integer specifying dimensionality of underlying vector space
C Object possibly of class Clifford
Details
Function as.scalar() takes a length-one numeric vector and returns a Clifford scalar of that value
(to extract the scalar component of a multivector, use const()).
Function is.scalar() is a synonym for is.real() which is documented at const.Rd.
Function as.1vector() takes a numeric vector and returns the linear sum of length-one blades
with coefficients given by x; function is.1vector() returns TRUE if every term is of grade 1.
Function pseudoscalar(n) returns a pseudoscalar of dimensionality n and function is.pseudoscalar()
checks for a Clifford object being a pseudoscalar.
Function numeric_to_clifford() dispatches to either as.scalar() for length-one vectors or as.1vector()
if the length is greater than one.
Function basis() returns a wedge product of basis vectors; function e() is a synonym. There is
special dispensation for zero, so e(0) returns the Clifford scalar 1.
Function antivector() should arguably be described here but is actually documented at antivector.Rd.
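For example, as.1vector() builds the obvious linear combination of basis vectors:
as.1vector(c(2,1,3)) == 2*e(1) + e(2) + 3*e(3)   # should be TRUE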
Author(s)
<NAME>
See Also
getcoeffs,antivector,const
Examples
as.scalar(6)
as.1vector(1:8)
e(5:8)
Reduce(`+`,sapply(seq_len(7),function(n){e(seq_len(n))},simplify=FALSE))
pseudoscalar(6)
pseudoscalar(7,5) == 5*pseudoscalar(7) # should be true
Ops.clifford Arithmetic Ops Group Methods for clifford objects
Description
Allows arithmetic operators to be used for multivariate polynomials such as addition, multiplication,
integer powers, etc.
Usage
## S3 method for class 'clifford'
Ops(e1, e2)
clifford_negative(C)
geoprod(C1,C2)
clifford_times_scalar(C,x)
clifford_plus_clifford(C1,C2)
clifford_eq_clifford(C1,C2)
clifford_inverse(C)
cliffdotprod(C1,C2)
fatdot(C1,C2)
lefttick(C1,C2)
righttick(C1,C2)
wedge(C1,C2)
scalprod(C1,C2=rev(C1),drop=TRUE)
eucprod(C1,C2=C1,drop=TRUE)
maxyterm(C1,C2=as.clifford(0))
C1 %.% C2
C1 %dot% C2
C1 %^% C2
C1 %X% C2
C1 %star% C2
C1 % % C2
C1 %euc% C2
C1 %o% C2
C1 %_|% C2
C1 %|_% C2
Arguments
e1,e2,C,C1,C2 Objects of class clifford or coerced if needed
x Scalar, length one numeric vector
drop Boolean, with default TRUE meaning to return the constant coerced to numeric,
and FALSE meaning to return a (constant) Clifford object
Details
The function Ops.clifford() passes unary and binary arithmetic operators “+”, “-”, “*”, “/” and
“^” to the appropriate specialist function. Function maxyterm() returns the maximum index in the
terms of its arguments.
The package has several binary operators:
Geometric product     A*B = geoprod(A,B)             AB  = Σ_{r,s} ⟨A⟩r ⟨B⟩s
Inner product         A %.% B = cliffdotprod(A,B)    A·B = Σ_{r≠0,s≠0} ⟨⟨A⟩r ⟨B⟩s⟩_{|s−r|}
Outer product         A %^% B = wedge(A,B)           A∧B = Σ_{r,s} ⟨⟨A⟩r ⟨B⟩s⟩_{s+r}
Fat dot product       A %o% B = fatdot(A,B)          A•B = Σ_{r,s} ⟨⟨A⟩r ⟨B⟩s⟩_{|s−r|}
Left contraction      A %_|% B = lefttick(A,B)       A⌋B = Σ_{r,s} ⟨⟨A⟩r ⟨B⟩s⟩_{s−r}
Right contraction     A %|_% B = righttick(A,B)      A⌊B = Σ_{r,s} ⟨⟨A⟩r ⟨B⟩s⟩_{r−s}
Cross product         A %X% B = cross(A,B)           A×B = (AB − BA)/2
Scalar product        A %star% B = star(A,B)         A∗B = Σ_{r,s} ⟨⟨A⟩r ⟨B⟩s⟩_0
Euclidean product     A %euc% B = eucprod(A,B)       A⋆B = A ∗ B†
In R idiom, the geometric product geoprod(.,.) has to be indicated with a “*” (as in A*B) and so
the binary operator must be %*%: we need a different idiom for scalar product, which is why %star%
is used.
Because geometric product is often denoted by juxtaposition, package idiom includes a % % b for
geometric product.
Binary operator %dot% is a synonym for %.%, which causes problems for rmarkdown.
Function clifford_inverse() returns an inverse for nonnull Clifford objects Cl(p, q) for p + q ≤
5, and a few other special cases. The functionality is problematic as nonnull blades always have
an inverse; but function is.blade() is not yet implemented. Blades (including null blades) have a
pseudoinverse, but this is not implemented yet either.
The scalar product of two clifford objects is defined as the zero-grade component of their geometric
product:
A ∗ B = ⟨AB⟩0 NB: notation used by both Perwass and Hestenes
In package idiom the scalar product is given by A %star% B or scalprod(A,B). Hestenes and Per-
wass both use an asterisk for scalar product as in “A ∗ B”, but in package idiom, the asterisk is
reserved for geometric product.
Note: in the package, A*B is the geometric product.
The Euclidean product (or Euclidean scalar product) of two clifford objects is defined as
A ⋆ B = A ∗ B† = ⟨AB†⟩0      (Perwass)
where B † denotes Conjugate [as in Conj(a)]. In package idiom the Euclidean scalar product is
given by eucprod(A,B) or A %euc% B, both of which return A * Conj(B).
Note that the scalar product A ∗ A can be positive or negative [that is, A %star% A may be any sign],
but the Euclidean product is guaranteed to be non-negative [that is, A %euc% A is always positive or
zero].
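A small sketch of this sign behaviour, using a mixed signature (see signature.Rd):
signature(1,1)        # now e2*e2 = -1
A <- e(2)
drop(A %star% A)      # -1: the scalar product can be negative
drop(A %euc% A)       # +1: the Euclidean product is never negative
signature(Inf)        # restore default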
Dorst defines the left and right contraction (Chisholm calls these the left and right inner product) as
A⌋B and A⌊B. See the vignette for more details.
Division, as in idiom x/y, is defined as x*clifford_inverse(y). Function clifford_inverse()
uses the method set out by Hitzer and Sangwine but is limited to p + q ≤ 5.
Many of the functions documented here use low-level helper functions that wrap C code. For
example, fatdot() uses c_fatdotprod(). These are documented at lowlevel.Rd.
Value
The high-level functions documented here return a clifford object. The low-level functions are
not really intended for the end-user.
Note
In the clifford package the caret “^” is reserved for multiplicative powers, as in A^3=A*A*A. All
the different Clifford products have binary operators for convenience including the wedge product
%^%. Compare the stokes package, where multiplicative powers do not really make sense and A^B
is interpreted as a wedge product of differential forms A and B. In stokes, the wedge product is
the sine qua non for the whole package and needs a terse idiomatic representation (although there
A%^%B returns the wedge product too).
Author(s)
<NAME>
References
<NAME> and <NAME> 2017. “Multivector and multivector matrix inverses in real Clifford
algebras”. Applied Mathematics and Computation 311:375-389
Examples
u <- rcliff(5)
v <- rcliff(5)
w <- rcliff(5)
u
v
u*v
u+(v+w) == (u+v)+w # should be TRUE by associativity of "+"
u*(v*w) == (u*v)*w # should be TRUE by associativity of "*"
u*(v+w) == u*v + u*w # should be TRUE by distributivity
# Now if x,y are _vectors_ we have:
x <- as.1vector(sample(5))
y <- as.1vector(sample(5))
x*y == x%.%y + x%^%y
x %^% y == x %^% (y + 3*x)
x %^% y == (x*y-y*x)/2 # should be TRUE
# above are TRUE for x,y vectors (but not for multivectors, in general)
## Inner product "%.%" is not associative:
x <- rcliff(5,g=2)
y <- rcliff(5,g=2)
z <- rcliff(5,g=2)
x %.% (y %.% z) == (x %.% y) %.% z
## Other products should work as expected:
x %|_% y ## left contraction
x %_|% y ## right contraction
x %o% y ## fat dot product
print Print clifford objects
Description
Print methods for Clifford algebra
Usage
## S3 method for class 'clifford'
print(x,...)
## S3 method for class 'clifford'
as.character(x,...)
catterm(a)
Arguments
x Object of class clifford in the print method
... Further arguments, currently ignored
a Integer vector representing a term
Note
The print method does not change the internal representation of a clifford object, which is a two-
element list, the first of which is a list of integer vectors representing terms, and the second is a
numeric vector of coefficients.
The print method is sensitive to the value of option separate. If FALSE (the default), the method
prints in a compact form, as in e_134. The indices of the basis vectors are separated with option
basissep which is usually NULL but if n > 9, then setting options("basissep" = ",") might
look good as it will print e_10,11,12 instead of e_101112. If separate is TRUE, the method prints
the basis vectors separately, as in e1 e3 e4.
Function as.character.clifford() is also sensitive to these options. The print method has spe-
cial dispensation for length-zero clifford objects. Function catterm() is a low-level helper func-
tion.
Author(s)
<NAME>
See Also
clifford
Examples
a <- rcliff(d=15,g=9)
a # incomprehensible
options("separate" = TRUE)
a # marginally better
options("separate" = FALSE)
options(basissep=",")
a # clearer; YMMV
options(basissep = NULL) # restore default
quaternion Quaternions using Clifford algebras
Description
Converting quaternions to and from Clifford objects is not part of the package but functionality and
a short discussion is included in inst/quaternion_clifford.Rmd.
Details
Given a quaternion a + bi + cj + dk, one may identify i with −e12 , j with −e13 , and k with −e23
(the constant term is of course e0 ).
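With the default positive-definite signature, the quaternion relations i^2 = j^2 = k^2 = ijk = -1 can be checked directly under this identification:
i <- -e(c(1,2)); j <- -e(c(1,3)); k <- -e(c(2,3))
i*i == as.scalar(-1)     # should be TRUE
i*j == k                 # should be TRUE
i*j*k == as.scalar(-1)   # should be TRUE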
Note
A different mapping, from the quaternions to Cl(0, 2) is given at signature.Rd.
Author(s)
<NAME>
See Also
signature
rcliff Random clifford objects
Description
Random Clifford algebra elements, intended as quick “get you going” examples of clifford ob-
jects
Usage
rcliff(n=9, d=6, g=4, include.fewer=TRUE)
rblade(d=7, g=3)
Arguments
n Number of terms
d Dimensionality of underlying vector space
g Maximum grade of any term
include.fewer Boolean, with FALSE meaning to return a clifford object comprising only terms
of grade g, and default TRUE meaning to include terms with grades less than g
(including a term of grade zero, that is, a scalar)
Details
Function rcliff() gives a quick nontrivial Clifford object, typically with terms having a range of
grades (see ‘grade.Rd’); argument include.fewer=FALSE ensures that all terms are of the same
grade.
Function rblade() gives a Clifford object that is a blade (see ‘term.Rd’). It returns the wedge
product of a number of 1-vectors, for example (e1 + 2e2 ) ∧ (e1 + 3e5 ).
Perwass gives the following lemma:
Given blades A⟨r⟩ , B⟨s⟩ , C⟨t⟩ , then
⟨A⟨r⟩ B⟨s⟩ C⟨t⟩ ⟩0 = ⟨C⟨t⟩ A⟨r⟩ B⟨s⟩ ⟩0
In the proof he notes in an intermediate step that
⟨A⟨r⟩ B⟨s⟩ ⟩t ∗ C⟨t⟩ = C⟨t⟩ ∗ ⟨A⟨r⟩ B⟨s⟩ ⟩t = ⟨C⟨t⟩ A⟨r⟩ B⟨s⟩ ⟩0 .
Package idiom is shown in the examples.
Note
If the grade exceeds the dimensionality, g > d, then the result is arguably zero; rcliff() returns
an error.
Author(s)
<NAME>
See Also
term,grade
Examples
rcliff()
rcliff(d=3,g=2)
rcliff(3,10,7)
rcliff(3,10,7,include=TRUE)
x1 <- rcliff()
x2 <- rcliff()
x3 <- rcliff()
x1*(x2*x3) == (x1*x2)*x3 # should be TRUE
rblade()
# We can invert blades easily:
a <- rblade()
ainv <- rev(a)/scalprod(a)
zap(a*ainv) # 1 (to numerical precision)
zap(ainv*a) # 1 (to numerical precision)
# Perwass 2009, lemma 3.9:
A <- rblade(d=9,g=4)
B <- rblade(d=9,g=5)
C <- rblade(d=9,g=6)
grade(A*B*C,0)-grade(C*A*B,0) # zero to numerical precision
# Intermediate step
x1 <- grade(A*B,3) %star% C
x2 <- C %star% grade(A*B,3)
x3 <- grade(C*A*B,0)
max(x1,x2,x3) - min(x1,x2,x3) # zero to numerical precision
signature The signature of the Clifford algebra
Description
Getting and setting the signature of the Clifford algebra
Usage
signature(p,q=0)
is_ok_sig(s)
showsig(s)
## S3 method for class 'sigobj'
print(x,...)
Arguments
s,p,q Integers, specifying number of positive elements on the diagonal of the quadratic
form, with s=c(p,q)
x Object of class sigobj
... Further arguments, currently ignored
Details
The signature functionality is modelled on lorentz::sol() which gets and sets the speed of light.
Clifford algebras require a bilinear form ⟨·, ·⟩ on R^n, usually written
⟨x, x⟩ = x_1^2 + x_2^2 + · · · + x_p^2 − x_{p+1}^2 − · · · − x_{p+q}^2
where p + q = n. With this quadratic form the vector space is denoted R^{p,q} and we say that (p, q)
is the signature of the bilinear form ⟨·, ·⟩. This gives rise to the Clifford algebra C_{p,q}.
If the signature is (p, q), then we have
ei ei = +1 (if 1 ≤ i ≤ p), −1 (if p + 1 ≤ i ≤ p + q), 0 (if i > p + q).
Note that (p, 0) corresponds to a positive-semidefinite quadratic form in which ei ei = +1 for all
i ≤ p and ei ei = 0 for all i > p. Similarly, (0, q) corresponds to a negative-semidefinite quadratic
form in which ei ei = −1 for all i ≤ q and ei ei = 0 for all i > q.
Package idiom for a strictly positive-definite quadratic form would be to specify infinite p [in which
case q is irrelevant] and for a strictly negative-definite quadratic form we would need p = 0, q = ∞.
If we specify ei ei = 0 for all i, then the operation reduces to the wedge product of a Grassman
algebra. Package idiom for this is to set p = q = 0, but this is not recommended: use the stokes
package for Grassman algebras, which is much more efficient and uses nicer idiom.
Function signature(p,q) returns the signature silently; but setting option show_signature to
TRUE makes signature() have the side-effect of calling showsig(), which changes the default
prompt to display the signature, much like showSOL in the lorentz package. There is special
dispensation for “infinite” p or q.
Calling signature() [that is, with no arguments] returns an object of class sigobj with ele-
ments corresponding to p and q. The sigobj class ensures that a near-infinite integer such as
.Machine$integer.max will be printed as “Inf” rather than, for example, “2147483647”.
Function is_ok_sig() is a helper function that checks for a proper signature.
Author(s)
<NAME>
Examples
signature()
e(1)^2
e(2)^2
signature(1)
e(1)^2
e(2)^2 # note sign
signature(3,4)
sapply(1:10,function(i){drop(e(i)^2)})
signature(Inf) # restore default
# Nice mapping from Cl(0,2) to the quaternions (loading clifford and
# onion simultaneously is discouraged):
# library("onion")
# signature(0,2)
# Q1 <- rquat(1)
# Q2 <- rquat(1)
# f <- function(H){Re(H)+i(H)*e(1)+j(H)*e(2)+k(H)*e(1:2)}
# f(Q1)*f(Q2) - f(Q1*Q2) # zero to numerical precision
# signature(Inf)
summary.clifford Summary methods for clifford objects
Description
Summary method for clifford objects, and a print method for summaries.
Usage
## S3 method for class 'clifford'
summary(object, ...)
## S3 method for class 'summary.clifford'
print(x, ...)
first_n_last(x)
Arguments
object,x Object of class clifford
... Further arguments, currently ignored
Details
Summary of a clifford object. Note carefully that the “typical terms” are implementation specific.
Function first_n_last() is a helper function.
Author(s)
<NAME>
See Also
print
Examples
summary(rcliff())
term Deal with terms
Description
By basis vector, I mean one of the basis vectors of the underlying vector space Rn , that is, an
element of the set {e1 , . . . , en }. A term is a wedge product of basis vectors (or a geometric product
of linearly independent basis vectors), something like e12 or e12569 . Sometimes I use the word
“term” to mean a wedge product of basis vectors together with its associated coefficient: so 7e12
would be described as a term.
From Perwass: a blade is the outer product of a number of 1-vectors (or, equivalently, the wedge
product of linearly independent 1-vectors). Thus e12 = e1 ∧ e2 and e12 + e13 = e1 ∧ (e2 + e3 ) are
blades, but e12 + e34 is not.
Function rblade(), documented at ‘rcliff.Rd’, returns a random blade.
Function is.blade() is not currently implemented: there is no easy way to detect whether a Clif-
ford object is a product of 1-vectors.
Usage
terms(x)
is.blade(x)
is.basisblade(x)
Arguments
x Object of class clifford
Details
• Functions terms() and coeffs() are the extraction methods. These are unordered vectors
but the ordering is consistent between them (an extended discussion of this phenomenon is
presented in the mvp package).
• Function term() returns a clifford object that comprises a single term with unit coefficient.
• Function is.basisblade() returns TRUE if its argument has only a single term, or is a nonzero
scalar; the zero clifford object is not considered to be a basis blade.
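For example, the extraction methods line up element-by-element:
x <- rcliff()
length(terms(x)) == length(coeffs(x))   # TRUE: the orderings are consistent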
Author(s)
<NAME>
References
<NAME>. “Geometric algebra with applications in engineering”. Springer, 2009.
See Also
clifford,rblade
Examples
x <- rcliff()
terms(x)
is.basisblade(x)
a <- as.1vector(1:3)
b <- as.1vector(c(0,0,0,12,13))
a %^% b # a blade
zap Zap small values in a clifford object
Description
Generic version of zapsmall()
Usage
zap(x, drop=TRUE, digits = getOption("digits"))
Arguments
x Clifford object
drop Boolean with default TRUE meaning to coerce the output to numeric with drop()
digits number of digits to retain
Details
Given a clifford object, coefficients close to zero are ‘zapped’, i.e., replaced by ‘0’ in much the
same way as base::zapsmall().
The function should be called zapsmall(), and dispatch to the appropriate base function, but I
could not figure out how to do this with S3 (the docs were singularly unhelpful) and gave up.
Note that this function actually changes the numeric value; it is not just a print method.
Author(s)
<NAME>
Examples
a <- clifford(sapply(1:10,seq_len),90^-(1:10))
zap(a)
options(digits=3)
zap(a)
a-zap(a) # nonzero
B <- rblade(g=3)
mB <- B*rev(B)
zap(mB)
drop(mB)
zero The zero Clifford object
Description
Dealing with the zero Clifford object presents particular challenges. Some of the methods need
special dispensation for the zero object.
Usage
is.zero(C)
Arguments
C Clifford object
Details
To create the zero object ab initio, use
clifford(list(),numeric(0))
although note that scalar(0) will work too.
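For example:
is.zero(clifford(list(), numeric(0)))   # TRUE
is.zero(scalar(0))                      # TRUE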
Author(s)
<NAME>
See Also
scalar
Examples
is.zero(rcliff()) |
github.com/jmespath/go-jmespath | go | Go | README
[¶](#section-readme)
---
### go-jmespath - A JMESPath implementation in Go
[![Build Status](https://img.shields.io/travis/jmespath/go-jmespath.svg)](https://travis-ci.org/jmespath/go-jmespath)
go-jmespath is a Go implementation of JMESPath,
which is a query language for JSON. It will take a JSON document and transform it into another JSON document through a JMESPath expression.
Using go-jmespath is really easy. There's a single function you use, `jmespath.Search`:
```
> import "github.com/jmespath/go-jmespath"
>
> var jsondata = []byte(`{"foo": {"bar": {"baz": [0, 1, 2, 3, 4]}}}`) // your data
> var data interface{}
> err := json.Unmarshal(jsondata, &data)
> result, err := jmespath.Search("foo.bar.baz[2]", data)
result = 2
```
In the example we gave the `Search` function input data of
`{"foo": {"bar": {"baz": [0, 1, 2, 3, 4]}}}` as well as the JMESPath expression `foo.bar.baz[2]`, and the `Search` function evaluated the expression against the input data to produce the result `2`.
The JMESPath language can do a lot more than select an element from a list. Here are a few more examples:
```
> var jsondata = []byte(`{"foo": {"bar": {"baz": [0, 1, 2, 3, 4]}}}`) // your data
> var data interface{}
> err := json.Unmarshal(jsondata, &data)
> result, err := jmespath.Search("foo.bar", data)
result = { "baz": [ 0, 1, 2, 3, 4 ] }
> var jsondata = []byte(`{"foo": [{"first": "a", "last": "b"},
{"first": "c", "last": "d"}]}`) // your data
> var data interface{}
> err := json.Unmarshal(jsondata, &data)
> result, err := jmespath.Search("foo[*].first", data)
result [ 'a', 'c' ]
> var jsondata = []byte(`{"foo": [{"age": 20}, {"age": 25},
{"age": 30}, {"age": 35},
{"age": 40}]}`) // your data
> var data interface{}
> err := json.Unmarshal(jsondata, &data)
> result, err := jmespath.Search("foo[?age > `30`]", data)
result = [ { age: 35 }, { age: 40 } ]
```
You can also pre-compile your query. This is useful if you are going to run multiple searches with it:
```
> var jsondata = []byte(`{"foo": "bar"}`)
> var data interface{}
> err := json.Unmarshal(jsondata, &data)
> precompiled, err := jmespath.Compile("foo")
> if err != nil{
> // ... handle the error
> }
> result, err := precompiled.Search(data)
result = "bar"
```
#### More Resources
The examples above show only a small amount of what a JMESPath expression can do. If you want to take a tour of the language, the *best* place to go is the
[JMESPath Tutorial](http://jmespath.org/tutorial.html).
One of the best things about JMESPath is that it is implemented in many different programming languages including python, ruby, php, lua, etc. To see a complete list of libraries,
check out the [JMESPath libraries page](http://jmespath.org/libraries.html).
And finally, the full JMESPath specification can be found on the [JMESPath site](http://jmespath.org/specification.html).
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [func Search(expression string, data interface{}) (interface{}, error)](#Search)
* [type ASTNode](#ASTNode)
* + [func (node ASTNode) PrettyPrint(indent int) string](#ASTNode.PrettyPrint)
+ [func (node ASTNode) String() string](#ASTNode.String)
* [type JMESPath](#JMESPath)
* + [func Compile(expression string) (*JMESPath, error)](#Compile)
+ [func MustCompile(expression string) *JMESPath](#MustCompile)
* + [func (jp *JMESPath) Search(data interface{}) (interface{}, error)](#JMESPath.Search)
* [type Lexer](#Lexer)
* + [func NewLexer() *Lexer](#NewLexer)
* [type Parser](#Parser)
* + [func NewParser() *Parser](#NewParser)
* + [func (p *Parser) Parse(expression string) (ASTNode, error)](#Parser.Parse)
* [type SyntaxError](#SyntaxError)
* + [func (e SyntaxError) Error() string](#SyntaxError.Error)
+ [func (e SyntaxError) HighlightLocation() string](#SyntaxError.HighlightLocation)
### Constants [¶](#pkg-constants)
```
const (
ASTEmpty astNodeType = [iota](/builtin#iota)
ASTComparator
ASTCurrentNode
ASTExpRef
ASTFunctionExpression
ASTField
ASTFilterProjection
ASTFlatten
ASTIdentity
ASTIndex
ASTIndexExpression
ASTKeyValPair
ASTLiteral
ASTMultiSelectHash
ASTMultiSelectList
ASTOrExpression
ASTAndExpression
ASTNotExpression
ASTPipe
ASTProjection
ASTSubexpression
ASTSlice
ASTValueProjection
)
```
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [Search](https://github.com/jmespath/go-jmespath/blob/v0.4.0/api.go#L41) [¶](#Search)
```
func Search(expression [string](/builtin#string), data interface{}) (interface{}, [error](/builtin#error))
```
Search evaluates a JMESPath expression against input data and returns the result.
### Types [¶](#pkg-types)
####
type [ASTNode](https://github.com/jmespath/go-jmespath/blob/v0.4.0/parser.go#L40) [¶](#ASTNode)
```
type ASTNode struct {
// contains filtered or unexported fields
}
```
ASTNode represents the abstract syntax tree of a JMESPath expression.
####
func (ASTNode) [PrettyPrint](https://github.com/jmespath/go-jmespath/blob/v0.4.0/parser.go#L55) [¶](#ASTNode.PrettyPrint)
```
func (node [ASTNode](#ASTNode)) PrettyPrint(indent [int](/builtin#int)) [string](/builtin#string)
```
PrettyPrint will pretty print the parsed AST.
The AST is an implementation detail and this pretty print function is provided as a convenience method to help with debugging. You should not rely on its output as the internal structure of the AST may change at any time.
####
func (ASTNode) [String](https://github.com/jmespath/go-jmespath/blob/v0.4.0/parser.go#L46) [¶](#ASTNode.String)
```
func (node [ASTNode](#ASTNode)) String() [string](/builtin#string)
```
####
type [JMESPath](https://github.com/jmespath/go-jmespath/blob/v0.4.0/api.go#L7) [¶](#JMESPath)
```
type JMESPath struct {
// contains filtered or unexported fields
}
```
JMESPath is the representation of a compiled JMES path query. A JMESPath is safe for concurrent use by multiple goroutines.
####
func [Compile](https://github.com/jmespath/go-jmespath/blob/v0.4.0/api.go#L14) [¶](#Compile)
```
func Compile(expression [string](/builtin#string)) (*[JMESPath](#JMESPath), [error](/builtin#error))
```
Compile parses a JMESPath expression and returns, if successful, a JMESPath object that can be used to match against data.
####
func [MustCompile](https://github.com/jmespath/go-jmespath/blob/v0.4.0/api.go#L27) [¶](#MustCompile)
```
func MustCompile(expression [string](/builtin#string)) *[JMESPath](#JMESPath)
```
MustCompile is like Compile but panics if the expression cannot be parsed.
It simplifies safe initialization of global variables holding compiled JMESPaths.
####
func (*JMESPath) [Search](https://github.com/jmespath/go-jmespath/blob/v0.4.0/api.go#L36) [¶](#JMESPath.Search)
```
func (jp *[JMESPath](#JMESPath)) Search(data interface{}) (interface{}, [error](/builtin#error))
```
Search evaluates a JMESPath expression against input data and returns the result.
####
type [Lexer](https://github.com/jmespath/go-jmespath/blob/v0.4.0/lexer.go#L24) [¶](#Lexer)
```
type Lexer struct {
// contains filtered or unexported fields
}
```
Lexer contains information about the expression being tokenized.
####
func [NewLexer](https://github.com/jmespath/go-jmespath/blob/v0.4.0/lexer.go#L117) [¶](#NewLexer)
```
func NewLexer() *[Lexer](#Lexer)
```
NewLexer creates a new JMESPath lexer.
####
type [Parser](https://github.com/jmespath/go-jmespath/blob/v0.4.0/parser.go#L112) [¶](#Parser)
```
type Parser struct {
// contains filtered or unexported fields
}
```
Parser holds state about the current expression being parsed.
####
func [NewParser](https://github.com/jmespath/go-jmespath/blob/v0.4.0/parser.go#L119) [¶](#NewParser)
```
func NewParser() *[Parser](#Parser)
```
NewParser creates a new JMESPath parser.
####
func (*Parser) [Parse](https://github.com/jmespath/go-jmespath/blob/v0.4.0/parser.go#L125) [¶](#Parser.Parse)
```
func (p *[Parser](#Parser)) Parse(expression [string](/builtin#string)) ([ASTNode](#ASTNode), [error](/builtin#error))
```
Parse will compile a JMESPath expression.
####
type [SyntaxError](https://github.com/jmespath/go-jmespath/blob/v0.4.0/lexer.go#L32) [¶](#SyntaxError)
```
type SyntaxError struct {
Expression [string](/builtin#string) // Expression that generated a SyntaxError
Offset [int](/builtin#int) // The location in the string where the error occurred
// contains filtered or unexported fields
}
```
SyntaxError is the main error used whenever a lexing or parsing error occurs.
####
func (SyntaxError) [Error](https://github.com/jmespath/go-jmespath/blob/v0.4.0/lexer.go#L38) [¶](#SyntaxError.Error)
```
func (e [SyntaxError](#SyntaxError)) Error() [string](/builtin#string)
```
####
func (SyntaxError) [HighlightLocation](https://github.com/jmespath/go-jmespath/blob/v0.4.0/lexer.go#L47) [¶](#SyntaxError.HighlightLocation)
```
func (e [SyntaxError](#SyntaxError)) HighlightLocation() [string](/builtin#string)
```
HighlightLocation will show where the syntax error occurred.
It will place a "^" character on a line below the expression at the point where the syntax error occurred. |
oauth_azure_activedirectory | hex | Erlang | API Reference
===
[Modules](#modules)
---
[OauthAzureActivedirectory](OauthAzureActivedirectory.html)
Documentation for OauthAzureActivedirectory.
[OauthAzureActivedirectory.Client](OauthAzureActivedirectory.Client.html)
Documentation for OauthAzureActivedirectory.Client()
[OauthAzureActivedirectory.Error](OauthAzureActivedirectory.Error.html)
[OauthAzureActivedirectory.Http](OauthAzureActivedirectory.Http.html)
Documentation for OauthAzureActivedirectory.Http
[OauthAzureActivedirectory.Response](OauthAzureActivedirectory.Response.html)
Documentation for OauthAzureActivedirectory.Response
OauthAzureActivedirectory
===
Documentation for OauthAzureActivedirectory.
[Summary](#summary)
===
[Functions](#functions)
---
[base\_url()](#base_url/0)
[config()](#config/0)
Return configuration set.
[request\_url()](#request_url/0)
[Functions](#functions)
===
OauthAzureActivedirectory.Client
===
Documentation for OauthAzureActivedirectory.Client()
[Summary](#summary)
===
[Functions](#functions)
---
[authorize\_url(client, params)](#authorize_url/2)
[authorize\_url!(state \\ nil)](#authorize_url!/1)
Return authorize URL with optional custom state
[callback\_params(map)](#callback_params/1)
Validate token and return payload attributes in JWT
[client()](#client/0)
[logout\_url(logout\_hint \\ nil)](#logout_url/1)
Return logout URL with optional logout hint
[process\_callback!(params)](#process_callback!/1)
See [`OauthAzureActivedirectory.Client.callback_params/1`](#callback_params/1).
[Functions](#functions)
===
OauthAzureActivedirectory.Error exception
===
[Summary](#summary)
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[message(error)](#message/1)
Return the message for the given error.
[wrap(module, reason)](#wrap/2)
[Types](#types)
===
[Functions](#functions)
===
OauthAzureActivedirectory.Http
===
Documentation for OauthAzureActivedirectory.Http
[Summary](#summary)
===
[Functions](#functions)
---
[request(url)](#request/1)
Make an HTTP GET request and verify peer Azure TLS certificate
[Functions](#functions)
===
OauthAzureActivedirectory.Response
===
Documentation for OauthAzureActivedirectory.Response
[Summary](#summary)
===
[Functions](#functions)
---
[openid\_configuration(key)](#openid_configuration/1)
[verify\_client(claims)](#verify_client/1)
Validates client and session attributes
[verify\_code(chash, code)](#verify_code/2)
Validates code param with c\_hash in id\_token
[verify\_signature(message, signature, kid)](#verify_signature/3)
Verifies signature in JWT token
[Functions](#functions)
=== |
@aws-sdk/client-inspector2 | npm | JavaScript | [@aws-sdk/client-inspector2](#aws-sdkclient-inspector2)
===
[Description](#description)
---
AWS SDK for JavaScript Inspector2 Client for Node.js, Browser and React Native.
Amazon Inspector is a vulnerability discovery service that automates continuous scanning for security vulnerabilities within your Amazon EC2, Amazon ECR, and Amazon Web Services Lambda environments.
[Installing](#installing)
---
To install this package, simply add or install @aws-sdk/client-inspector2 using your favorite package manager:
* `npm install @aws-sdk/client-inspector2`
* `yarn add @aws-sdk/client-inspector2`
* `pnpm add @aws-sdk/client-inspector2`
[Getting Started](#getting-started)
---
### [Import](#import)
The AWS SDK is modulized by clients and commands.
To send a request, you only need to import the `Inspector2Client` and the commands you need, for example `ListFiltersCommand`:
```
// ES5 example const { Inspector2Client, ListFiltersCommand } = require("@aws-sdk/client-inspector2");
```
```
// ES6+ example import { Inspector2Client, ListFiltersCommand } from "@aws-sdk/client-inspector2";
```
### [Usage](#usage)
To send a request, you:
* Initiate client with configuration (e.g. credentials, region).
* Initiate command with input parameters.
* Call `send` operation on client with command object as input.
* If you are using a custom http handler, you may call `destroy()` to close open connections.
```
// a client can be shared by different commands.
const client = new Inspector2Client({ region: "REGION" });
const params = {
/** input parameters */
};
const command = new ListFiltersCommand(params);
```
#### [Async/await](#asyncawait)
We recommend using [await](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await)
operator to wait for the promise returned by send operation as follows:
```
// async/await.
try {
const data = await client.send(command);
// process data.
} catch (error) {
// error handling.
} finally {
// finally.
}
```
Async-await is clean, concise, intuitive, easy to debug and has better error handling as compared to using Promise chains or callbacks.
#### [Promises](#promises)
You can also use [Promise chaining](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises#chaining)
to execute send operation.
```
client.send(command).then(
(data) => {
// process data.
},
(error) => {
// error handling.
}
);
```
Promises can also be called using `.catch()` and `.finally()` as follows:
```
client
.send(command)
.then((data) => {
// process data.
})
.catch((error) => {
// error handling.
})
.finally(() => {
// finally.
});
```
#### [Callbacks](#callbacks)
We do not recommend using callbacks because of [callback hell](http://callbackhell.com/),
but they are supported by the send operation.
```
// callbacks.
client.send(command, (err, data) => {
// process err and data.
});
```
#### [v2 compatible style](#v2-compatible-style)
The client can also send requests using v2 compatible style.
However, it results in a bigger bundle size and may be dropped in next major version. More details in the blog post on [modular packages in AWS SDK for JavaScript](https://aws.amazon.com/blogs/developer/modular-packages-in-aws-sdk-for-javascript/)
```
import * as AWS from "@aws-sdk/client-inspector2";
const client = new AWS.Inspector2({ region: "REGION" });
// async/await.
try {
const data = await client.listFilters(params);
// process data.
} catch (error) {
// error handling.
}
// Promises.
client
.listFilters(params)
.then((data) => {
// process data.
})
.catch((error) => {
// error handling.
});
// callbacks.
client.listFilters(params, (err, data) => {
// process err and data.
});
```
### [Troubleshooting](#troubleshooting)
When the service returns an exception, the error will include the exception information,
as well as response metadata (e.g. request id).
```
try {
const data = await client.send(command);
// process data.
} catch (error) {
const { requestId, cfId, extendedRequestId } = error.$metadata;
console.log({ requestId, cfId, extendedRequestId });
/**
* The keys within exceptions are also parsed.
* You can access them by specifying exception names:
* if (error.name === 'SomeServiceException') {
* const value = error.specialKeyInException;
* }
*/
}
```
[Getting Help](#getting-help)
---
Please use these community resources for getting help.
We use GitHub issues for tracking bugs and feature requests, but have limited bandwidth to address them.
* Visit [Developer Guide](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/welcome.html)
or [API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/index.html).
* Check out the blog posts tagged with [`aws-sdk-js`](https://aws.amazon.com/blogs/developer/tag/aws-sdk-js/)
on AWS Developer Blog.
* Ask a question on [StackOverflow](https://stackoverflow.com/questions/tagged/aws-sdk-js) and tag it with `aws-sdk-js`.
* Join the AWS JavaScript community on [gitter](https://gitter.im/aws/aws-sdk-js-v3).
* If it turns out that you may have found a bug, please [open an issue](https://github.com/aws/aws-sdk-js-v3/issues/new/choose).
To test your universal JavaScript code in Node.js, browser and react-native environments,
visit our [code samples repo](https://github.com/aws-samples/aws-sdk-js-tests).
[Contributing](#contributing)
---
This client code is generated automatically. Any modifications will be overwritten the next time the `@aws-sdk/client-inspector2` package is updated.
To contribute to client you can check our [generate clients scripts](https://github.com/aws/aws-sdk-js-v3/tree/main/scripts/generate-clients).
[License](#license)
---
This SDK is distributed under the
[Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0),
see LICENSE for more information.
[Client Commands (Operations List)](#client-commands-operations-list)
---
AssociateMember
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/associatemembercommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/associatemembercommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/associatemembercommandoutput.html)
BatchGetAccountStatus
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/batchgetaccountstatuscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/batchgetaccountstatuscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/batchgetaccountstatuscommandoutput.html)
BatchGetCodeSnippet
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/batchgetcodesnippetcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/batchgetcodesnippetcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/batchgetcodesnippetcommandoutput.html)
BatchGetFindingDetails
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/batchgetfindingdetailscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/batchgetfindingdetailscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/batchgetfindingdetailscommandoutput.html)
BatchGetFreeTrialInfo
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/batchgetfreetrialinfocommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/batchgetfreetrialinfocommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/batchgetfreetrialinfocommandoutput.html)
BatchGetMemberEc2DeepInspectionStatus
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/batchgetmemberec2deepinspectionstatuscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/batchgetmemberec2deepinspectionstatuscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/batchgetmemberec2deepinspectionstatuscommandoutput.html)
BatchUpdateMemberEc2DeepInspectionStatus
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/batchupdatememberec2deepinspectionstatuscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/batchupdatememberec2deepinspectionstatuscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/batchupdatememberec2deepinspectionstatuscommandoutput.html)
CancelFindingsReport
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/cancelfindingsreportcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/cancelfindingsreportcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/cancelfindingsreportcommandoutput.html)
CancelSbomExport
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/cancelsbomexportcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/cancelsbomexportcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/cancelsbomexportcommandoutput.html)
CreateFilter
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/createfiltercommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/createfiltercommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/createfiltercommandoutput.html)
CreateFindingsReport
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/createfindingsreportcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/createfindingsreportcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/createfindingsreportcommandoutput.html)
CreateSbomExport
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/createsbomexportcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/createsbomexportcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/createsbomexportcommandoutput.html)
DeleteFilter
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/deletefiltercommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/deletefiltercommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/deletefiltercommandoutput.html)
DescribeOrganizationConfiguration
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/describeorganizationconfigurationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/describeorganizationconfigurationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/describeorganizationconfigurationcommandoutput.html)
Disable
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/disablecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/disablecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/disablecommandoutput.html)
DisableDelegatedAdminAccount
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/disabledelegatedadminaccountcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/disabledelegatedadminaccountcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/disabledelegatedadminaccountcommandoutput.html)
DisassociateMember
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/disassociatemembercommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/disassociatemembercommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/disassociatemembercommandoutput.html)
Enable
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/enablecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/enablecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/enablecommandoutput.html)
EnableDelegatedAdminAccount
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/enabledelegatedadminaccountcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/enabledelegatedadminaccountcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/enabledelegatedadminaccountcommandoutput.html)
GetConfiguration
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/getconfigurationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getconfigurationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getconfigurationcommandoutput.html)
GetDelegatedAdminAccount
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/getdelegatedadminaccountcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getdelegatedadminaccountcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getdelegatedadminaccountcommandoutput.html)
GetEc2DeepInspectionConfiguration
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/getec2deepinspectionconfigurationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getec2deepinspectionconfigurationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getec2deepinspectionconfigurationcommandoutput.html)
GetEncryptionKey
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/getencryptionkeycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getencryptionkeycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getencryptionkeycommandoutput.html)
GetFindingsReportStatus
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/getfindingsreportstatuscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getfindingsreportstatuscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getfindingsreportstatuscommandoutput.html)
GetMember
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/getmembercommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getmembercommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getmembercommandoutput.html)
GetSbomExport
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/getsbomexportcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getsbomexportcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/getsbomexportcommandoutput.html)
ListAccountPermissions
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/listaccountpermissionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listaccountpermissionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listaccountpermissionscommandoutput.html)
ListCoverage
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/listcoveragecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listcoveragecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listcoveragecommandoutput.html)
ListCoverageStatistics
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/listcoveragestatisticscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listcoveragestatisticscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listcoveragestatisticscommandoutput.html)
ListDelegatedAdminAccounts
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/listdelegatedadminaccountscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listdelegatedadminaccountscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listdelegatedadminaccountscommandoutput.html)
ListFilters
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/listfilterscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listfilterscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listfilterscommandoutput.html)
ListFindingAggregations
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/listfindingaggregationscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listfindingaggregationscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listfindingaggregationscommandoutput.html)
ListFindings
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/listfindingscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listfindingscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listfindingscommandoutput.html)
ListMembers
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/listmemberscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listmemberscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listmemberscommandoutput.html)
ListTagsForResource
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/listtagsforresourcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listtagsforresourcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listtagsforresourcecommandoutput.html)
ListUsageTotals
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/listusagetotalscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listusagetotalscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/listusagetotalscommandoutput.html)
ResetEncryptionKey
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/resetencryptionkeycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/resetencryptionkeycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/resetencryptionkeycommandoutput.html)
SearchVulnerabilities
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/searchvulnerabilitiescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/searchvulnerabilitiescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/searchvulnerabilitiescommandoutput.html)
TagResource
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/tagresourcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/tagresourcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/tagresourcecommandoutput.html)
UntagResource
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/untagresourcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/untagresourcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/untagresourcecommandoutput.html)
UpdateConfiguration
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/updateconfigurationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/updateconfigurationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/updateconfigurationcommandoutput.html)
UpdateEc2DeepInspectionConfiguration
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/updateec2deepinspectionconfigurationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/updateec2deepinspectionconfigurationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/updateec2deepinspectionconfigurationcommandoutput.html)
UpdateEncryptionKey
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/updateencryptionkeycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/updateencryptionkeycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/updateencryptionkeycommandoutput.html)
UpdateFilter
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/updatefiltercommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/updatefiltercommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/updatefiltercommandoutput.html)
UpdateOrganizationConfiguration
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/updateorganizationconfigurationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/updateorganizationconfigurationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/updateorganizationconfigurationcommandoutput.html)
UpdateOrgEc2DeepInspectionConfiguration
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/classes/updateorgec2deepinspectionconfigurationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/updateorgec2deepinspectionconfigurationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-inspector2/interfaces/updateorgec2deepinspectionconfigurationcommandoutput.html)
Readme
---
### Keywords
none |
sboost | cran | R | Package ‘sboost’
October 14, 2022
Type Package
Title Machine Learning with AdaBoost on Decision Stumps
Version 0.1.2
Description Creates classifier for binary outcomes using Adaptive Boosting
(AdaBoost) algorithm on decision stumps with a fast C++ implementation.
For a description of AdaBoost, see Freund and Schapire (1997)
<doi:10.1006/jcss.1997.1504>. This type of classifier is nonlinear, but
easy to interpret and visualize. Feature vectors may be a combination of
continuous (numeric) and categorical (string, factor) elements. Methods
for classifier assessment, predictions, and cross-validation also included.
License MIT + file LICENSE
URL https://github.com/jadonwagstaff/sboost
BugReports https://github.com/jadonwagstaff/sboost/issues
Encoding UTF-8
LazyData true
Depends R (>= 3.4.0)
LinkingTo Rcpp (>= 0.12.17)
Imports dplyr (>= 0.7.6), rlang (>= 0.2.1), Rcpp (>= 0.12.17), stats
(>= 3.4)
RoxygenNote 7.1.2
Suggests testthat
NeedsCompilation yes
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-05-26 13:10:02 UTC
R topics documented:
asses... 2
malwar... 3
mushroom... 4
predict.sboost_classifie... 5
sboos... 6
validat... 7
assess sboost Assessment Function
Description
Assesses how well an sboost classifier classifies the data.
Usage
assess(object, features, outcomes, include_scores = FALSE)
Arguments
object sboost_classifier S3 object output from sboost.
features feature set data.frame.
outcomes outcomes corresponding to the features.
include_scores if true feature_scores are included in output.
Value
An sboost_assessment S3 object containing:
performance Last row of cumulative statistics (i.e. when all stumps are included in assessment).
cumulative_statistics stump - the index of the last decision stump added to the assessment.
true_positive - number of true positive predictions.
false_negative - number of false negative predictions.
true_negative - number of true negative predictions.
false_positive - number of false positive predictions.
prevalence - true positive / total.
accuracy - correct predictions / total.
sensitivity - correct predicted positive / true positive.
specificity - correct predicted negative / true negative.
ppv - correct predicted positive / predicted positive.
npv - correct predicted negative / predicted negative.
f1 - harmonic mean of sensitivity and ppv.
feature_scores If include_scores is TRUE, for each feature in the classifier lists scores for each row
in the feature set.
classifier sboost sboost_classifier object used for assessment.
outcomes Shows which outcome was considered as positive and which negative.
call Shows the parameters that were used for assessment.
See Also
sboost documentation.
Examples
# malware
malware_classifier <- sboost(malware[-1], malware[1], iterations = 5, positive = 1)
assess(malware_classifier, malware[-1], malware[1])
# mushrooms
mushroom_classifier <- sboost(mushrooms[-1], mushrooms[1], iterations = 5, positive = "p")
assess(mushroom_classifier, mushrooms[-1], mushrooms[1])
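As a supplementary illustration (not taken from the package documentation), the cumulative_statistics described above can show how performance develops as stumps are added. The sketch below assumes that cumulative_statistics is returned as a data frame containing the stump and accuracy columns listed in the Value section:
# Hypothetical sketch: plot cumulative accuracy against the number of stumps
malware_classifier <- sboost(malware[-1], malware[1], iterations = 5, positive = 1)
malware_assessment <- assess(malware_classifier, malware[-1], malware[1])
cs <- malware_assessment$cumulative_statistics
plot(cs$stump, cs$accuracy, type = "b",
     xlab = "decision stumps included", ylab = "accuracy")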
malware Malware System Calls
Description
System call data for apps identified as malware and not malware.
Usage
malware
Format
A data frame with 7597 rows and 361 variables: outcomes 1 if malware, 0 if not. X1... X360 system
calls.
Details
Experimental data generated in this research paper:
<NAME>, <NAME>, <NAME>, and <NAME>, "Evaluation of Android Malware Detection
Based on System Calls," in Proceedings of the International Workshop on Security and Privacy
Analytics (IWSPA), 2016.
Data used for kaggle competition: https://www.kaggle.com/c/ml-fall2016-android-malware
Source
https://zenodo.org/record/154737#.WtoA1IjwaUl
mushrooms Mushroom Classification
Description
A classic machine learning data set describing hypothetical samples from the Agaricus and Lepiota
family.
Usage
mushrooms
Format
A data frame with 8124 rows and 23 variables:
outcomes p=poisonous, e=edible
cap_shape bell=b, conical=c, convex=x, flat=f, knobbed=k, sunken=s
cap_surface fibrous=f, grooves=g, scaly=y, smooth=s
cap_color brown=n, buff=b, cinnamon=c, gray=g, green=r, pink=p, purple=u, red=e, white=w,
yellow=y
bruises bruises=t, no=f
odor almond=a, anise=l, creosote=c, fishy=y, foul=f, musty=m, none=n, pungent=p, spicy=s
gill_attachment attached=a, descending=d, free=f, notched=n
gill_spacing close=c, crowded=w, distant=d
gill_size broad=b, narrow=n
gill_color black=k, brown=n, buff=b, chocolate=h, gray=g, green=r, orange=o, pink=p, purple=u,
red=e, white=w, yellow=y
stalk_shape enlarging=e, tapering=t
stalk_root bulbous=b, club=c, cup=u, equal=e, rhizomorphs=z, rooted=r, missing=?
stalk_surface_above_ring fibrous=f, scaly=y, silky=k, smooth=s
stalk_surface_below_ring fibrous=f, scaly=y, silky=k, smooth=s
stalk_color_above_ring brown=n, buff=b, cinnamon=c, gray=g, orange=o, pink=p, red=e, white=w,
yellow=y
stalk_color_below_ring brown=n, buff=b, cinnamon=c, gray=g, orange=o, pink=p, red=e, white=w,
yellow=y
veil_type partial=p, universal=u
veil_color brown=n, orange=o, white=w, yellow=y
ring_number none=n, one=o, two=t
ring_type cobwebby=c, evanescent=e, flaring=f, large=l, none=n, pendant=p, sheathing=s, zone=z
spore_print_color black=k, brown=n, buff=b, chocolate=h, green=r, orange=o, purple=u, white=w,
yellow=y
population abundant=a, clustered=c, numerous=n, scattered=s, several=v, solitary=y
habitat grasses=g, leaves=l, meadows=m, paths=p, urban=u, waste=w, woods=d
Details
Data gathered from:
Mushroom records drawn from The Audubon Society Field Guide to North American Mushrooms
(1981). <NAME> (Pres.), New York: <NAME>
Source
https://archive.ics.uci.edu/ml/datasets/mushroom
predict.sboost_classifier
Make predictions for a feature set based on an sboost classifier.
Description
Make predictions for a feature set based on an sboost classifier.
Usage
## S3 method for class 'sboost_classifier'
predict(object, features, scores = FALSE, ...)
Arguments
object sboost_classifier S3 object output from sboost.
features feature set data.frame.
scores if true, raw scores generated; if false, predictions are generated.
... further arguments passed to or from other methods.
Value
Predictions in the form of a vector, or scores in the form of a vector. The index of the vector aligns
the predictions or scores with the rows of the features. Scores represent the sum of all votes for the
positive outcome minus the sum of all votes for the negative outcome.
See Also
sboost documentation.
Examples
# malware
malware_classifier <- sboost(malware[-1], malware[1], iterations = 5, positive = 1)
predict(malware_classifier, malware[-1], scores = TRUE)
predict(malware_classifier, malware[-1])
# mushrooms
mushroom_classifier <- sboost(mushrooms[-1], mushrooms[1], iterations = 5, positive = "p")
predict(mushroom_classifier, mushrooms[-1], scores = TRUE)
predict(mushroom_classifier, mushrooms[-1])
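The Value section above states that scores are the sum of votes for the positive outcome minus the sum of votes for the negative outcome. As an illustration only (not from the package documentation), and assuming the usual AdaBoost sign rule, predictions should correspond to thresholding the scores at zero:
# Hypothetical sketch: compare thresholded scores with the predicted labels
malware_classifier <- sboost(malware[-1], malware[1], iterations = 5, positive = 1)
malware_scores <- predict(malware_classifier, malware[-1], scores = TRUE)
malware_labels <- predict(malware_classifier, malware[-1])
# a positive score means the positive outcome (here 1) received more votes
table(malware_scores > 0, malware_labels)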
sboost sboost Learning Algorithm
Description
A machine learning algorithm using AdaBoost on decision stumps.
Usage
sboost(features, outcomes, iterations = 1, positive = NULL, verbose = FALSE)
Arguments
features feature set data.frame.
outcomes outcomes corresponding to the features.
iterations number of boosts.
positive the positive outcome to test for; if NULL, the first outcome in alphabetical (or
numerical) order will be chosen.
verbose If true, progress bar will be displayed in console.
Details
Factors and characters are treated as categorical features. Missing values are supported.
See https://jadonwagstaff.github.io/projects/sboost.html for a description of the algorithm.
For original paper describing AdaBoost see:
<NAME>., <NAME>.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119-139 (1997)
Value
An sboost_classifier S3 object containing:
classifier stump - the index of the decision stump
feature - name of the column that this stump splits on.
vote - the weight that this stump has on the final classifier.
orientation - shows how outcomes are split. If feature is numeric shows split orientation, if
feature value is less than split then vote is cast in favor of left side outcome, otherwise the vote
is cast for the right side outcome. If feature is categorical, vote is cast for the left side outcome
if feature value is found in left_categories, otherwise vote is cast for right side outcome.
split - if feature is numeric, the value where the decision stump splits the outcomes; otherwise,
NA.
left_categories - if feature is categorical, shows the feature values that sway the vote to the left
side outcome on the orientation split; otherwise, NA.
outcomes Shows which outcome was considered as positive and which negative.
training stumps - how many decision stumps were trained.
features - how many features the training set contained.
instances - how many instances or rows the training set contained.
positive_prevalence - what fraction of the training instances were positive.
call Shows the parameters that were used to build the classifier.
See Also
predict.sboost_classifier - to get predictions from the classifier.
assess - to evaluate the performance of the classifier.
validate - to perform cross validation for the classifier training.
Examples
# malware
malware_classifier <- sboost(malware[-1], malware[1], iterations = 5, positive = 1)
malware_classifier
malware_classifier$classifier
# mushrooms
mushroom_classifier <- sboost(mushrooms[-1], mushrooms[1], iterations = 5, positive = "p")
mushroom_classifier
mushroom_classifier$classifier
validate sboost Validation Function
Description
A k-fold cross validation algorithm for sboost.
Usage
validate(
features,
outcomes,
iterations = 1,
k_fold = 6,
positive = NULL,
verbose = FALSE
)
Arguments
features feature set data.frame.
outcomes outcomes corresponding to the features.
iterations number of boosts.
k_fold number of cross-validation subsets.
positive is the positive outcome to test for; if NULL, the first in alphabetical order will
be chosen
verbose If true, progress bars will be displayed in console.
Value
An sboost_validation S3 object containing:
performance Final performance statistics for all stumps.
training_summary_statistics Mean and standard deviations for test statistics generated by assess
cumulative statistics for each of the training sets.
testing_summary_statistics Mean and standard deviations for test statistics generated by assess
cumulative statistics for each of the testing sets.
training_statistics sboost sboost_assessment cumulative statistics objects used to generate training_statistics.
testing_statistics sboost sboost_assessment cumulative statistics objects used to generate testing_statistics.
classifier_list sboost sboost_classifier objects created from training sets.
outcomes Shows which outcome was considered as positive and which negative.
k_fold number of testing and training sets used in the validation.
call Shows the parameters that were used for validation.
See Also
sboost documentation.
Examples
# malware
validate(malware[-1], malware[1], iterations = 5, k_fold = 3, positive = 1)
# mushrooms
validate(mushrooms[-1], mushrooms[1], iterations = 5, k_fold = 3, positive = "p") |
adafruit-soundboard | readthedoc | Unknown | Adafruit Soundboard Library
Documentation
Release 0.1
<NAME>
Jul 31, 2017
Contents
1 Installation
2 Quick Start
3 Documentation
Python Module Index
The Adafruit Soundboards are an easy way to add sound to your maker project, but the library provided by Adafruit only supports Arduino.
If you’ve wanted to use one of these boards with a MicroPython microcontroller (MCU), this is the library you’ve been looking for.
CHAPTER 1
Installation At this time, you have to install the library by copying the soundboard.py script to your MicroPython board along with your main.py file. At some point in the future it may be possible to pip install it.
Make sure to get the latest version of the code from GitHub.
CHAPTER 2
Quick Start First, you’ll need to decide which UART bus you want to use. To do this, you’ll need to consult the documentation for your particular MCU. In these examples, I’m using the original pyboard (see documentation here) and I’m using UART bus 1 or XB, which uses pin X9 for transmitting and ping X10 for receiving.
Then, create an instance of the Soundboard class, like this:
sound = Soundboard('XB')
I highly recommend you also attach the RST pin on the soundboard to one of the other GPIO pins on the MCU (pin X11 in the example). In my own testing, my alternative method of getting the list of files from the board is more stable than the method built into the soundboard. I also like getting the debug output, and I turn the volume down to 50% while I'm coding. Doing all this looks like the following:
SB_RST = 'X11'
sound = Soundboard('XB', rst_pin=SB_RST, vol=0.5, debug=True, alt_get_files=True)
Once you’ve set up all of this, you’re ready to play some tracks:
# Play track 0 sound.play(0)
# Stop playback sound.stop()
# Play the test file that comes with the soundboard sound.play('T00 OGG')
# Play track 1 immediately, stopping any currently playing tracks sound.play_now(1)
# Pause and resume sound.pause()
sound.unpause()
You can also control the volume in several different ways:
# Raise volume by 2 points (0 min volume, 204 max volume)
sound.vol_up()
# Turn down volume until lower than 125 sound.vol_down(125)
# Get the current volume sound.vol
# Set volume to 56 (out of 204 maximum)
sound.vol = 56
# Set volume to 75% of maximum volume sound.vol = 0.75
CHAPTER 3
Documentation This is a MicroPython library for the Adafruit Sound Boards in UART mode!
This library has been adapted from the library written by Adafruit for Arduino, available at https://github.com/adafruit/
Adafruit_Soundboard_library. I have no affiliation with Adafruit, and they have not sponsored or approved this library in any way. As such, please do not contact them for support regarding this library.
Commands the sound board understands (at least the ones I could discern from the Arduino library) are as follows:
• L: List files on the board
• #: Play a file by number
• P: Play a file by name
• +: Volume up. Range is 0-204, increments of 2.
• -: Volume down
• =: Pause playback
• >: Un-pause playback
• q: Stop playback
• t: Give current position of playback and total time of track
• s: Current track size and total size
soundboard.SB_BAUD
The baud rate for the soundboards. This shouldn’t ever change, since all of the soundboard models use the same
value.
See also:
Adafruit Audio FX Sound Board Tutorial Adafruit’s tutorial on the soundboards.
soundboard.MIN_VOL
soundboard.MAX_VOL
Minimum volume is 0, maximum is 204.
soundboard.MAX_FILES
In the Arduino version of this library, it defines the max number of files to be 25.
soundboard.DEBUG
A flag for turning on/off debug messages.
See also:
Soundboard.toggle_debug(), printif()
class soundboard.Soundboard(uart_id, rst_pin=None, vol=None, alt_get_files=False, debug=None, **uart_kwargs)
Control an Adafruit Sound Board via UART.
The Soundboard class handles all communication with the sound board via UART, making it easy to get
information about the sound files on the sound board and control playback.
If you need to reset the sound board from your MicroPython code, be sure to provide the rst_pin parameter.
The soundboard sometimes gets out of UART mode and reverts to the factory default of GPIO trigger mode.
When this happens, it will appear as if the soundboard has stopped working for no apparent reason. This library
is designed to automatically attempt resetting the board if a command fails, since that is a common cause. So, it
is a good idea to provide this parameter.
Parameters
• uart_id – ID for the UART bus to use. Acceptable values vary by board. Check the
documentation for your board for more info.
• rst_pin – Identifier for the pin (on the MicroPython board) connected to the RST pin of
the sound board. Valid identifiers vary by board.
• vol (int or float) – Initial volume level to set. See vol for more info.
• alt_get_files (bool) – Uses an alternate method to get the list of track file names.
See use_alt_get_files() method for more info.
• debug (bool) – When not None, will set the debug output flag to the boolean value of this
argument using the toggle_debug() method.
• uart_kwargs (dict) – Additional values passed to the UART.init() method of the UART
bus object. Acceptable values here also vary by board. It is not necessary to include the baud
rate among these keyword values, because it will be set to SB_BAUD before the UART.
init function is called.
files
Return a list of the files on the sound board.
Return type list
sizes
Return a list of the files’ sizes on the sound board.
See also:
use_alt_get_files()
Return type list
lengths
Return a list of the track lengths in seconds.
Note: In my own testing of this method, the board always returns a value of zero seconds for the length
for every track, no matter if it’s a WAV or OGG file, short or long track.
Return type list
file_name(n)
Return the name of track n.
Parameters n (int) – Index of a file on the sound board.
Returns Filename of track n, or False if the track number doesn't exist.
Return type str or bool
track_num(file_name)
Return the track number of the given file name.
Parameters file_name (str) – File name of the track. Should be one of the values from the
files property.
Returns The track number of the file name or False if not found.
Return type int or bool
play(track=None)
Play a track on the board.
Parameters track (int or str) – The index (int) or filename (str) of the track to play.
Returns If the command was successful.
Return type bool
play_now(track)
Play a track on the board now, stopping current track if necessary.
Parameters track (int or str) – The index (int) or filename (str) of the track to play.
Returns If the command was successful.
Return type bool
vol
Current volume.
This is implemented as a class property, so you can get and set its value directly. When setting a new
volume, you can use an int or a float (assuming your board supports floats). When setting to an int,
it should be in the range of 0-204. When set to a float, the value will be interpreted as a percentage of
MAX_VOL.
Return type int
vol_up(vol=None)
Turn volume up by 2 points, return current volume level [0-204].
Parameters vol (int) – Target volume. When not None, volume will be turned up to be
greater than or equal to this value.
Return type int
vol_down(vol=None)
Turn volume down by 2 points, return current volume level [0-204].
Parameters vol (int) – Target volume. When not None, volume will be turned down to be
less than or equal to this value.
Return type int
pause()
Pause playback, return if the command was successful.
Return type bool
unpause()
Continue playback, return if the command was successful.
Return type bool
stop()
Stop playback, return if the command was successful.
Return type bool
track_time()
Return the current position of playback and total time of track.
Return type tuple
track_size()
Return the remaining size and total size.
It seems the remaining track size refers to the number of bytes left for the soundboard to process before
the playing of the track will be over.
Returns Remaining track size and total size
Return type tuple
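For illustration, here is one way track_size() might be used to follow playback progress. This is only a sketch: it assumes the pyboard wiring from the Quick Start (UART bus XB, reset on pin X11), MicroPython's utime module, and that the play command succeeds:
import utime
from soundboard import Soundboard
sound = Soundboard('XB', rst_pin='X11', vol=0.5)
if sound.play(0): # start track 0
remaining, total = sound.track_size()
while remaining > 0: # bytes the board still has to process
print('about', remaining, 'of', total, 'bytes left')
utime.sleep(1)
remaining, total = sound.track_size()
print('playback finished')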
reset()
Reset the sound board.
Soft reset the board by bringing the RST pin low momentarily (10 ms). This only has effect if the reset pin
has been initialized in the constructor.
Doing a soft reset on the board before doing any other actions can help ensure that it has been started in
UART control mode, rather than GPIO trigger mode.
See also:
Soundboard Pinout Documentation on the soundboards’ pinouts.
Returns Whether the reset was successful. If the reset pin was not initialized in the constructor,
this will always return False.
Return type bool
use_alt_get_files(now=False)
Get list of track files using an alternate method.
If the list of files is missing tracks you know are on the soundboard, try calling this method. It doesn’t
depend on the soundboard’s internal command for returning a list of files. Instead, it plays each of the
tracks using their track numbers and gets the filename and size from the output of the play command.
Parameters now (bool) – When set to True, the alternate method of getting the files list will
be called immediately. Otherwise, the list of files will be populated the next time the files
property is accessed (lazy loading).
Return type None
static toggle_debug(debug=None)
Turn on/off DEBUG flag.
Parameters debug – If None, the DEBUG flag will be toggled to have the value opposite of its
current value. Otherwise, DEBUG will be set to the boolean value of debug.
Return type None
soundboard.printif(*values, **kwargs)
Print a message if DEBUG is set to True.
Python Module Index
soundboard |
github.com/b4b4r07/gomi | go | Go | README
[¶](#section-readme)
---
![gomi](https://github.com/b4b4r07/gomi/raw/v1.1.6/docs/screenshot.png)
[![License](https://img.shields.io/github/license/b4b4r07/gomi)](https://b4b4r07.mit-license.org)
[![GitHub Releases](https://img.shields.io/github/v/release/b4b4r07/gomi)](https://github.com/b4b4r07/gomi/releases)
[![Website](https://img.shields.io/website?down_color=lightgrey&down_message=donw&up_color=green&up_message=up&url=https%3A%2F%2Fb4b4r07.github.io%2Fgomi)](https://b4b4r07.github.io/gomi/)
[![GitHub Releases](https://github.com/b4b4r07/gomi/actions/workflows/release.yaml/badge.svg)](https://github.com/b4b4r07/gomi/actions/workflows/release.yaml)
[![Go version](https://img.shields.io/github/go-mod/go-version/b4b4r07/gomi)](https://github.com/b4b4r07/gomi/blob/master/go.mod)
### 🗑️ Replacement for UNIX rm command!
`gomi` (ごみ/go-mi means "trash" in Japanese) is a simple trash tool for the CLI, written in Go
The concept of a trash can does not exist in the command-line interface ([CLI](http://en.wikipedia.org/wiki/Command-line_interface)). If you delete an important file by mistake with the `rm` command, it is difficult to restore. That is where `gomi` comes in. Unlike the `rm` command, `gomi` makes it easy to restore deleted files because it provides a trash can for the CLI.
#### Features
* Works like the `rm` command, but does not actually unlink (delete) files (it just moves them to another place)
* Easy to restore, super intuitive
* Compatible with `rm` command, e.g. `-r`, `-f` options
* Nice UI, awesome CLI UX
* Easy to see what gomi does by setting `GOMI_LOG=[trace|debug|info|warn|error]`
#### Usage
```
$ alias rm=gomi
```
```
$ rm -rf important-dir
```
```
$ rm --restore Search: █
Which to restore?
▸ important-dir
main_test.go
main.go
test-dir
↓ validate_test.rego
Name: important-dir Path: /Users/b4b4r07/src/github.com/b4b4r07/important-dir DeletedAt: 5 days ago Content: (directory)
-rw-r--r-- important-file-1
-rw-r--r-- important-file-2
drwxr-xr-x important-subdir-1
drwxr-xr-x important-subdir-2
...
```
#### Installation
Download the binary from [GitHub Releases](https://github.com/b4b4r07/gomi/releases/latest) and drop it in your `$PATH`.
* [Darwin / Mac](https://github.com/b4b4r07/gomi/releases/latest)
* [Linux](https://github.com/b4b4r07/gomi/releases/latest)
**For macOS / [Homebrew](https://brew.sh/) user**:
```
brew install b4b4r07/tap/gomi
```
**Using [afx](https://github.com/b4b4r07/afx), package manager for CLI**:
```
github:
- name: b4b4r07/gomi
description: Trash can in CLI
owner: b4b4r07
repo: gomi
release:
name: gomi
tag: v1.1.5 ## NEED UPDATE!
command:
link:
- from: gomi
to: gomi
alias:
rm: gomi ## --> alias rm=gomi
```
**AUR users**:
<https://aur.archlinux.org/packages/gomi/>
#### Versus
* [andreafrancia/trash-cli](https://github.com/andreafrancia/trash-cli)
* [sindresorhus/trash](https://github.com/sindresorhus/trash)
#### License
[MIT](https://b4b4r07.mit-license.org)
Documentation
[¶](#section-documentation)
---
![The Go Gopher](/static/shared/gopher/airplane-1200x945.svg)
There is no documentation for this package. |
sbom | hex | Erlang |
SBoM
===
Generates a Software Bill-of-Materials (SBoM) for Mix projects, in [CycloneDX](https://cyclonedx.org)
format.
Full documentation can be found at <https://hexdocs.pm/sbom>.
For a quick demo of how this might be used, check out [this blog post](https://blog.voltone.net/post/24).
Installation
---
To install the Mix task globally on your system, run `mix archive.install hex sbom`.
Alternatively, the package can be added to a project's dependencies to make the Mix task available for that project only:
```
def deps do
[
{:sbom, "~> 0.6", only: :dev, runtime: false}
]
end
```
Usage
---
To produce a CycloneDX SBoM, run [`mix sbom.cyclonedx`](Mix.Tasks.Sbom.Cyclonedx.html) from the project directory. The result is written to a file named `bom.xml`, unless a different name is specified using the `-o` option.
By default only the dependencies used in production are included. To include all dependencies, including those for the 'dev' and 'test' environments, pass the
`-d` command line option: `mix sbom.cyclonedx -d`.
*Note that MIX_ENV does not affect which dependencies are included in the output; the task should normally be run in the default (dev) environment*
For more information on the command line arguments accepted by the Mix task run [`mix help sbom.cyclonedx`](Mix.Tasks.Sbom.Cyclonedx.html).
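The same information can also be produced programmatically from the functions documented below. This is only a minimal sketch (not taken from the official docs), assuming it runs inside a Mix project that has `:sbom` available; the file name `bom.xml` simply mirrors the Mix task's default output:

```
# Collect components for the production environment and write a CycloneDX file.
# components_for_project/1 and CycloneDX.bom/2 are the documented entry points.
components = SBoM.components_for_project(:prod)
xml = SBoM.CycloneDX.bom(components)
File.write!("bom.xml", xml)
```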
NPM packages and other dependencies
---
This tool only considers Hex, GitHub and BitBucket dependencies managed through Mix. To build a comprehensive SBoM of a deployment, including NPM and/or operating system packages, it may be necessary to merge multiple CycloneDX files into one.
The [@cyclonedx/bom](https://www.npmjs.com/package/@cyclonedx/bom) tool on NPM can not only generate an SBoM for your JavaScript assets, but it can also merge in the output of the 'sbom.cyclonedx' Mix task and other scanners, through the
'-a' option, producing a single CycloneDX XML file.
SBoM
===
Collect dependency information for use in a Software Bill-of-Materials (SBOM).
Summary
===
[Functions](#functions)
---
[components_for_project(environment \\ :prod)](#components_for_project/1)
Builds a SBoM for the current Mix project. The result can be exported to CycloneDX XML format using the [`SBoM.CycloneDX`](SBoM.CycloneDX.html) module. Pass an environment of `nil` to include dependencies across all environments.
Functions
===
SBoM.CycloneDX
===
Generate a CycloneDX SBoM in XML format.
Summary
===
[Functions](#functions)
---
[bom(components, options \\ [])](#bom/2)
Generate a CycloneDX SBoM in XML format from the specified list of components. Returns an `iolist`, which may be written to a file or IO device,
or converted to a String using [`IO.iodata_to_binary/1`](https://hexdocs.pm/elixir/IO.html#iodata_to_binary/1)
[Link to this section](#functions)
Functions
===
mix sbom.cyclonedx
===
Generates a Software Bill-of-Materials (SBoM) in CycloneDX format.
Options
---
* `--output` (`-o`): the full path to the SBoM output file (default:
bom.xml)
* `--force` (`-f`): overwrite existing files without prompting for confirmation
* `--dev` (`-d`): include dependencies for non-production environments
(including `dev`, `test` or `docs`); by default only dependencies for MIX_ENV=prod are returned
* `--recurse` (`-r`): in an umbrella project, generate individual output files for each application, rather than a single file for the entire project
* `--schema` (`-s`): schema version to be used, defaults to "1.2". |
openbook_rheinwerk-verlag_de_shell_programmierung | free_programming_book | Unknown | #
Preface by the Author
I am delighted to present my next book to you. Writing it was particularly enjoyable: after books on C and on Linux/UNIX programming, working on this one felt almost like relaxation. The advantage of shell programming is that you do not have to write so much around the actual task and can usually get straight to the point. If you have already gathered experience with other programming languages and work through this book, you will understand what I mean. If you are still a complete beginner at programming, that is no problem at all, because shell programming can be learned quite quickly. The book's learning curve sits at a medium level, so the beginner is not overwhelmed and the reader with programming experience is not bored.
I assume that you already have some initial experience with Linux/UNIX. While reading this book you will come to understand many things much better that may always have been somewhat unclear to you. What I mean is that shell programming lets you build an even closer relationship with the operating system, precisely because the shell is still a far more powerful instrument than graphical interfaces ever were or will be. One could also say that working with the shell (and shell programming) is the ABC of every future Linux/UNIX guru. And if you also want to learn (or already know) C and Linux/UNIX programming, the path to Olympus is not far. You may also notice that I choose the books I write according to certain criteria, so that together they would also sell well as a boxed set ;-)
After these (hopefully) encouraging words you will probably boot up the PC in good spirits, pick up the book and start with the first chapter. With a book in this price range expectations are naturally high, and I hope I can meet your requirements. If that should not be the case, or if you have something to criticize or something is not quite correct, let me know so that I can improve this book regularly. Especially with regard to the many different systems and distributions, inconsistencies can occur here and there. The examples in the book were tested on the common systems (SuSE, Fedora, Debian, Ubuntu and FreeBSD), but given the enormous number of distributions available by now (see http://www.distrowatch.com/), one simply cannot guarantee them all. The best way to reach me is via my website at www.pronix.de, where you will also find a forum and all my other books available to read online.
Overview
The first ten chapters cover everything you need to know about shell programming (and a little more). Chapters 11, 12 and 13 deal with the indispensable tools grep, sed and awk, which, in combination with shell script programming (or on their own), can become valuable helpers. Chapter 14 covers many fundamental Linux/UNIX commands. Knowledge of these commands is indispensable if you want or need to engage seriously with shell programming. The 15th and final chapter provides a number of practical examples. It addresses many everyday use cases, which are meant as suggestions and can be extended at any time. In fact, you will find that the book contains practical examples throughout.
The individual chapters of the book were written independently of one another; no examples are used that are built up from chapter to chapter. This makes it possible to use this book as a reference work.
Preface by the Reviewer: The Shell, Curse or Blessing?
You have surely heard phrases like "cryptic command-line commands" or similar, used in a not exactly flattering context.
And you know what? The prejudices are true. A command interpreter that permits constructs like the following can certainly cause the odd grey hair. By the way, you had better not try out this sequence of characters: your machine would most likely crash:
> (){ :|:& } ;:
And I am supposed to learn that now?
No, not necessarily. The command is more of a small demonstration of power. It tries to start an infinite number of processes and thereby paralyses your machine. The syntax of the command line is normally quite understandable, and constructs like this one are the off-putting exception.
The example is meant to demonstrate how little effort it takes, via the command line, to send powerful instructions to the system. Normally the command line is used productively and has enormous advantages over a graphical interface. Just as you can start an infinite number of processes from the command line, you can also create thumbnails of countless images on your hard disk with a single command.
The command line therefore becomes interesting precisely when graphically controlled programs are no longer able to complete a task in an acceptable time.
In such cases it can certainly happen that you puzzle over a single command for an hour or more and are still faster than if you tried to solve the task with a mouse.
But I will put that into perspective as well. For most tasks there are excellent command-line tools that are easy to understand and to use. Puzzling over a command for hours is therefore the exception. Anyone who is willing to solve tasks effectively with the command line will rarely run into problems here. On the contrary, the command line opens up a completely different perspective and thus broadens the horizon of what a computer can be used for, because completely different limits apply here than when working with the mouse:
* You want to rename all the files on a hard disk that have the extension txt?
* You want to check periodically whether a remote machine is reachable?
* You want to replace a particular word in all text files within a directory tree?
* You want to receive a graphical evaluation of your system load by mail every day?
No problem :-)
Read this book and you will be able to solve these tasks.
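As a small foretaste, here is a rough sketch of how two of these tasks might be approached; it is only one possible solution, and the host name and the target extension are just placeholders:
# Rename every *.txt file below the current directory to *.bak (sketch only)
find . -type f -name '*.txt' | while read -r f; do
    mv "$f" "${f%.txt}.bak"
done
# Check once a minute whether a remote machine is reachable (Linux ping syntax)
while true; do
    if ping -c 1 example.com > /dev/null 2>&1; then
        echo "example.com is reachable"
    else
        echo "example.com is NOT reachable"
    fi
    sleep 60
done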
Mart<NAME> is a freelance programmer, network technician and administrator.
Chapter 1 Introduction
As the author of a technical book, you always face the question of where to begin. If you start from zero, a lot of room for more interesting topics is given away. If, on the other hand, you demand too much of the reader right at the start, you run the risk that this book quickly gathers dust on the shelf or ends up being auctioned off.
1.1 Prerequisites for the Reader
Since you have decided to take up shell script programming, I can assume that you are already somewhat familiar with Linux or a UNIX-like system and have already spent some time with it. Perhaps you have also gathered experience with other programming languages, which gives you a certain advantage here. However, prior programming experience is not a prerequisite for this book, which has been designed so that even a beginner can reach the goal fairly easily and quickly. This is because shell script programming, in contrast to other programming languages such as C/C++ or Java, is considerably easier to learn (even if your first impression when leafing through the book suggests otherwise).
But what does "already somewhat familiar with Linux or UNIX" mean? Here are a few points that I simply have to expect from you; otherwise this book might as well be titled "Linux/UNIX: An Introduction".
1.1.1 Target Audience
With this book I want to address a rather broad audience, not a specific one. Everyone can profit from shell script programming, from the ordinary Linux/UNIX user to the absolutely overpaid (or, often enough, badly paid) system administrator.
Simply everyone who works with Linux/UNIX, or intends to do more with it. Shell script programming is also particularly well suited as an entry into the world of programming. You will find many constructs here (loops, conditions, branches, etc.) that are used in similar form in most other programming languages (even if the syntax often differs a little).
Because you work a great deal with the operating system's own tools (commands), you also develop a certain feel for Linux/UNIX. The shell script programmer still has something of a guru image, simply because he or she has to dig deeper into the subject than many GUI-spoiled users. Nevertheless, shell script programming is easier to learn than any other high-level language such as C, C++, Java or C#.
The primary target audience, however, clearly consists of system administrators and/or webmasters (with SSH access). For a system administrator of Linux/UNIX systems it is often a basic requirement to deal with shell script programming. Ultimately, every single Linux/UNIX (home) user with a PC is a system administrator too, and profits enormously from the newly gained knowledge.
1.1.2 Notation
The notation used here is quite simple and largely unambiguous. When keyboard input (or a key combination) is described, it is indicated with a corresponding key symbol. If, for example, you read (Ctrl)+(C), it means that the keys "Control" (Ctrl) and "C" were pressed at the same time; if you see (ESC), pressing the Escape key is meant.
You will work a great deal with the shell. The shell prompt of a normal user on the command line is shown as you@host >. You therefore do not have to type this prompt as part of your input (obvious, but it should be mentioned). If the input behind this shell prompt is printed in bold, it is an entry on the command line that was made by the user (usually you) and confirmed by pressing (ENTER).
you@host > Eine_Eingabe_in_der_Kommandozeile
If this line is followed by another line without the you@host > prompt and not in bold type, it is usually the output produced by the previous input (which in this book usually corresponds to the actions of your shell script).
you@host > Eine_Eingabe_in_der_Kommandozeile
Die erzeugte Ausgabe
If you instead see the character # as the shell prompt, it is the prompt of the superuser (root). Of course, this presupposes that you also have the corresponding rights. This is not always possible and is therefore used as rarely as possible in this book.
you@host > whoami
normaler_User
you@host > su
Passwort: ********
# whoami
root
# exit
you@host > whoami
normaler_User
Ihre Meinung
Wie hat Ihnen das Openbook gefallen? Wir freuen uns immer über Ihre Rückmeldung. Schreiben Sie uns gerne Ihr Feedback als E-Mail an <EMAIL>.
Kapitel 1 Einführung
Als Autor eines Fachbuchs steht man am Anfang immer vor der Frage, wo man anfangen soll. Fängt man bei Null an, so wird eine Menge Raum für interessantere Aufgabenschwerpunkte verschenkt. Fordert man dem Leser hingegen am Anfang gleich zu viel ab, läuft man Gefahr, dass dieses Buch schnell im Regel verstaubt oder zur Versteigerung angeboten wird.
1.1 Voraussetzungen an den LeserÂ
Da Sie sich entschieden haben, mit der Shellscript-Programmierung anzufangen, kann ich davon ausgehen, dass Sie bereits ein wenig mit Linux bzw. einem UNIX-artigen System vertraut sind und damit schon ein wenig Zeit verbracht haben. Vielleicht haben Sie auch schon Erfahrungen mit anderen Programmiersprachen gemacht, was Ihnen hier auch einen gewissen Vorteil einbringt. Vorhandene Programmiererfahrungen sind allerdings keine Voraussetzung für dieses Buch, welches so konzipiert wurde, dass selbst ein Anfänger recht einfach und schnell ans Ziel kommt. Dies deshalb, weil die Shellscript-Programmierung im Gegensatz zu anderen Programmiersprachen wie bspw. C/C++ oder Java erheblich einfacher zu erlernen ist (auch wenn Sie beim ersten Durchblättern des Buchs einen anderen Eindruck haben).
Aber was heißt »bereits mit Linux bzw. UNIX ein wenig vertraut«? Hierzu einige Punkte, die ich einfach von Ihnen erwarten muss â ansonsten könnte der Buchtitel gleich »Linux/UNIX â Eine Einführung« heißen.
1.1.1 ZielgruppeÂ
Mit diesem Buch will ich eine recht umfangreiche und keine spezifische Zielgruppe ansprechen. Profitieren von der Shellscript-Programmierung kann jeder, vom einfachen Linux-UNIX-Anwender bis hin zum absolut überbezahlten (oder häufig auch schlecht bezahlten) Systemadministrator.
Einfach jeder, der mit Linux/UNIX zu tun hat bzw. vor hat, damit etwas mehr anzufangen. Ganz besonders gut eignet sich die Shellscript-Programmierung auch für den Einstieg in die Programmierer-Welt. Sie finden hier viele Konstrukte (Schleifen, Bedingungen, Verzweigungen etc.), welche auch in den meisten anderen Programmiersprachen in ähnlicher Form verwendet werden (wenn auch häufig die Syntax ein wenig anders ist).
Da Sie sehr viel mit den »Bordmitteln« (Kommandos/Befehle) des Betriebssystems arbeiten, bekommen Sie auch ein gewisses Gefühl für Linux-UNIX. So haftet dem Shellscript-Programmierer häufig noch ein Guru-Image an, einfach weil dieser tiefer als viele GUI-verwöhnte Anwender in die Materie einsteigen muss. Trotzdem ist es leichter, die Shellscript-Programmierung zu erlernen, als irgendeine andere Hochsprache wie bspw. C, C++, Java oder C#.
Die primäre Zielgruppe aber bildet hierbei ganz klar der Systemadministrator und/oder der Webmaster (mit SSH-Zugang). Als Systemadministrator von Linux-UNIX-Systemen ist es häufig Grundvoraussetzung, sich mit der Shellscript-Programmierung auseinander zu setzen. Letztendlich ist ja auch jeder einzelne Linux-UNIX-(Heim-)Benutzer mit einem PC ein Systemadministrator und profitiert enorm von den neu hinzugewonnenen Kenntnissen.
1.1.2 Notation

The notation used here is quite simple and for the most part unambiguous. Keyboard input (or a key combination) is indicated by a corresponding key symbol. If you read (Ctrl)+(C), for example, it means that the keys "Control" (Ctrl) and "C" are pressed at the same time; if you see (ESC), pressing the Escape key is meant.

You will work with the shell a great deal. The shell prompt of a normal user on the command line is written as you@host >. You do not have to type this prompt as part of your input (obvious, but it should be mentioned). If the text after this shell prompt is printed in bold, it is command-line input that was entered by the user (usually by you) and confirmed with the (ENTER) key.

you@host > Some_input_on_the_command_line

If this line is followed by another line without the shell prompt you@host > and not in bold type, it is usually the output produced by the preceding input (which in this book mostly corresponds to the actions of your shell script).

you@host > Some_input_on_the_command_line
The generated output
If you see the character # as the shell prompt instead, it is the prompt of the superuser (root). Of course, this assumes that you actually have the corresponding privileges. This is not always possible and is therefore used as rarely as possible in this book.

you@host > whoami
normal_user
you@host > su
Password: ********
# whoami
root
# exit
you@host > whoami
normal_user
Chapter 2 Variables

No programming language gets by without variables. Variables are normally used to store data (numbers or strings) so that it can be accessed again later. In contrast to many other programming languages, however, shell script programming initially offers no special data types, for example for strings, floating-point numbers, or integers.

2.1 Basics

A variable consists of two parts: the name of the variable and the value that this variable represents or stores. The syntax:

variable=value

A variable name may consist of upper- and lowercase letters, digits, and underscores, but it must never begin with a digit. The maximum length of a variable name is 256 characters, although on many systems it may be somewhat longer (whether such a long name is sensible is another matter). It is also important to know the lifetime of a variable, which lasts only as long as the script (more precisely, the executing shell) is running. When the script ends, the variable becomes invalid as well (unless it was exported; more on that in Section 2.6). The value of a variable is initially always stored as a string.
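To make these naming rules a little more concrete, here is a small sketch (not taken from the book; the names and the exact error messages are only examples and vary slightly from shell to shell):

counter=1                  # valid: letters, digits, underscores
_backup_dir=/tmp/backup    # valid: may start with an underscore
MY_NAME=juergen            # valid: case matters, MY_NAME and my_name are different
# 2fast=yes                # invalid: a name must not start with a digit;
                           # the shell would look for a command named "2fast=yes"
# counter = 1              # invalid: spaces around '=' make the shell
                           # try to run a command named "counter"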
2.1.1 Accessing the Value of a Variable

To access the value of a variable, the $ sign is used. You have surely seen something like the following in a shell before:

you@host > echo $HOME
/home/you

Here the environment variable HOME was printed on the screen (more on environment variables later, see Section 2.7). Just like the environment variable HOME, you can also define your own user variable and access it.

you@host > ich=juergen
you@host > echo $ich
juergen

A user variable named "ich" with the value "juergen" was just defined. With the help of the $ sign you can now access the value of this variable at any time, but of course only during the lifetime of the script or the shell. As soon as you end a script or a shell session, the variable "ich" is discarded again.
2.1.2 Variable Interpolation

You will rarely define a variable solely to print it out again. In practice, variables are usually used in combination with commands.

# A simple backup script
# Name : abackup
# datum has the form YYYY_MM_DD
datum=$(date +%Y_%m_%d)
# Create a directory of the form backup_YYYY_MM_DD
mkdir backup_$datum
# Save all text files from the home directory
cp $HOME/*.txt backup_$datum

This is a simple backup script that copies all text files from your home directory into a new folder. You first create the folder name with

datum=$(date +%Y_%m_%d)

This puts the current date in the form "YYYY_MM_DD" into the variable "datum". To make sure that "datum" does not simply end up containing the character string "date", the command itself must be preceded by a $ sign. Only then does the shell know that it should assign the variable "datum" the value of the expression, in this case the command, inside the parentheses. That is also why the entire command has to be placed in parentheses. This process is called command substitution; in this form (with the parentheses) it works only in the Bash and the Korn shell, but not in the Bourne shell. In the Bourne shell you have to use backticks (`) instead of the parentheses (more on command substitution in Section 2.4). Simply try the date command in the same form in the shell to understand what is happening here.

you@host > date +%Y_%m_%d
2005_02_03
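As a brief side-by-side of the two notations for command substitution just mentioned (the printed date is of course only an example):

# Bash and Korn shell notation
datum=$(date +%Y_%m_%d)
# Bourne shell compatible notation with backticks
datum=`date +%Y_%m_%d`
echo $datum    # e.g. 2005_02_03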
In the next line you create a directory named backup_YYYY_MM_DD. Here the directory name is built using variable interpolation.

mkdir backup_$datum

Finally, you copy all text files (with the extension ".txt") from the home directory, specified here with the environment variable HOME, into the newly created directory backup_YYYY_MM_DD.

cp $HOME/*.txt backup_$datum

You will use user variables very often in this way or in a similar way. The script in action:

you@host > ./abackup
you@host > ls
abackup  datei1.txt  datei2.txt  datei3.txt  datei4.txt
backup_2004_12_03  backup_2005_01_03  backup_2005_02_03
you@host > ls backup_2005_02_03
datei1.txt  datei2.txt  datei3.txt  datei4.txt

Note for beginners: If you do not yet fully understand one thing or another, that is not a problem, since this is only about accessing variables. For now it is enough if you have understood when the $ sign is needed when using a variable and when it is not. Even so, it has proven useful again and again to use the option set -x, which prints the whole script in plain text as it is executed. This helps enormously when learning shell script programming (it cannot be repeated often enough).
Undefined variables

If you use an undefined variable in your script, for example in an output statement, an empty string is printed. Hardly anyone, however, will actually need an undefined variable in a shell script. To make the shell complain about this, you can use the -u option. You then get a hint telling you which variables are not set. Here is such an example.

# An undefined variable is used
# Name : aerror
# var1 is assigned the character string 100
var1=100
# var2 gets the same value as var1
var2=$var1
# var3 was not defined but is used anyway
echo $var1 $var2 $var3

Here the echo output tries to print the contents of a variable named var3, which was never defined in the script.

you@host > ./aerror
100 100

The script can still be interpreted without problems. Now the same thing with the -u option:

you@host > sh -u ./aerror
./aerror: line 10: var3: unbound variable
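You can also switch this check on inside the script itself instead of passing -u on the command line. A minimal sketch, assuming a hypothetical script name check_u:

# Name: check_u (made-up example)
set -u           # complain about every unset variable
var1=100
echo $var1
echo $var3       # aborts here with "var3: unbound variable"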
Delimiting variable names

If the variable name is to be used inside a string, you have to delimit it with curly braces. An example of what is meant:

# Embedding and delimiting variable names
# Name : embeed
file=back_
cp datei.txt $filedatei.txt

The intention of this script was to copy a file named datei.txt; the new file name was supposed to be back_datei.txt. If you run the script, however, there is no trace of such a file. Let us get to the bottom of the problem and look at the whole thing in plain text:

you@host > sh -x ./embeed
+ file=back_
+ cp datei.txt .txt

Variable interpolation seems to have failed here. Instead of the file name back_datei.txt, only the file name .txt is used. The reason is simple: the shell cannot know that $filedatei.txt is meant to use the value of the variable "file"; instead it looks for a variable named "filedatei", which, as you already know, is an empty string if it has not been defined. You can avoid this problem by delimiting the variable name with curly braces.

# Embedding and delimiting variable names
# Name : embeed2
file=back_
cp datei.txt ${file}datei.txt

Now back_datei.txt works as intended. You can of course use the notation with curly braces in general (which is quite common), even when no further string follows. There are certainly cases in which the braces are not needed, for example $var/keinvar, $var$var1, keinevar_$var, and so on. But it is cleaner style to use them anyway.
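A short sketch (not from the book) of the cases just listed, with made-up variable names:

dir=backup
name=log
echo $dir/file      # '/' cannot be part of a variable name, so no braces are needed
echo $dir$name      # two interpolations in a row also work without braces
echo ${dir}_$name   # braces required: $dir_ would be a different (undefined) variable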
Deleting variables

If you want to delete a variable, you can do so with unset. If, on the other hand, you only want to clear the value of the variable but keep the variable name defined, a simple var= without specifying a value is enough. Here is an example at the shell prompt:

you@host > set -u
you@host > ich=juergen
you@host > echo $ich
juergen
you@host > unset ich
you@host > echo $ich
bash: ich: unbound variable
you@host > ich=juergen
you@host > echo $ich
juergen
you@host > ich=
you@host > echo $ich

you@host > set +u
Defining a value as a constant

If you want to define a constant value that can no longer be changed while the script (or, more precisely, the shell) is running, you can put readonly in front of the variable name. This gives the variable write protection. However, such a variable can then no longer be deleted with unset during the runtime of the script (or, more precisely, of the (sub)shell) either.

you@host > ich=juergen
you@host > readonly ich
you@host > echo $ich
juergen
you@host > ich=john
bash: ich: readonly variable
you@host > unset ich
bash: unset: ich: cannot unset: readonly variable

Note: readonly behaves like an assignment! The write protection is therefore also set without the $ sign.
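In a script, readonly is typically used to protect configuration values right at the top. A hedged sketch (the paths and names are invented); note that definition and write protection can also be combined in a single step:

BACKUP_DIR=/var/backups
readonly BACKUP_DIR
# definition and write protection in one step
readonly LOGFILE=/tmp/backup.log
BACKUP_DIR=/tmp      # fails with "readonly variable", the old value is kept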
Chapter 3 Parameters and Arguments

Now that you have become familiar with running shell scripts and with variables, this chapter covers passing arguments to a shell script. This makes your shell script considerably more flexible and versatile.

3.1 Introduction

The principle of passing arguments is really nothing new to you. You normally use this principle when working with other commands, for example:

you@host > ls -l /home/you/Shellbuch

Here the ls command is used. The -l option describes the form in which the contents of /home/you/Shellbuch are to be displayed. The arguments are the -l option and the directory specification /home/you/Shellbuch. With them you determine at call time what ls should work with. If ls did not accept any arguments from the command line, the command could only be applied to the current directory and would therefore be rather inflexible.
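Your own scripts receive their arguments in the positional parameters $1, $2, and so on, with $# holding the number of arguments. As a small preview, a minimal sketch (script name and arguments are made up):

# Name: aargs (made-up example)
echo "Script name    : $0"
echo "First argument : $1"
echo "Second argument: $2"
echo "Argument count : $#"

The script in action:

you@host > ./aargs hello world
Script name    : ./aargs
First argument : hello
Second argument: world
Argument count : 2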
Chapter 4 Control Structures

To turn the "shell" into a "real" programming language, so-called control structures are required. These are decision branches and loops. A programming language without branches would be a disaster. With a branch you can make the further execution of the script depend on a certain condition and branch accordingly. The same goes for loops: instead of executing the same statements line after line, you can bundle them in a loop.

4.1 Conditional Statements with if

Whenever you want to check whether input from the command line or the keyboard (whether a number or a string) was correct, whether reading a file produced a particular result, whether you need to test a file or a specific attribute, or whether a command executed successfully, you can use the conditional statement (also called a branch) with if. The correct syntax of the if branch looks like this:

if command_successful
then
   # Yes, the command was successful
   # ... place the commands for a successful command here
fi

The keyword if must be followed by a command or a sequence of commands. If the command (or command sequence) was executed successfully, the return value 0 is returned (as is usual for commands). In the case of a successful command execution, the statements in the following then block (up to fi) are therefore executed. If the command execution fails, execution continues after the fi, provided further commands follow. Put more simply: if the command returned an error, the statement block of the branch is skipped. fi (if spelled backwards) closes an if statement or its statement block (see Figure 4.1).
If you have already looked at a few scripts, you will have noticed that the following syntax is often used for an if branch:

if [ condition ]
then
   # Yes, the condition was met
   # ... place the commands for a met condition here
fi

You may of course squeeze the if branch into somewhat less readable code, but then you have to use semicolons as separators (this notation is usually used to enter a branch on the command line):

if [ condition ]; then command(s) ; fi

When square brackets are used, you are dealing with the test command, whose symbolic form is precisely [ ... ]. This will be covered in detail later. For now we will continue with the if branch without the test command.

A curiosity: when I claim here that the square brackets are a symbolic form of test, that is not strictly true. Anyone who looks for a command called "[" ( which [ ) will be surprised to find that there really is a binary with this name. Strictly speaking, it is therefore not an alternative spelling of test but a separate program that expects the closing square bracket as its last parameter.
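A brief, hedged illustration of this equivalence (the path of the [ binary differs from system to system):

you@host > which [
/usr/bin/[

The same check, written once with test and once with the bracket notation:

if test -f /etc/passwd
then
   echo "/etc/passwd exists"
fi
if [ -f /etc/passwd ]
then
   echo "/etc/passwd exists"
fi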
4.1.1 Testing Commands with if

Here is a simple shell script that checks whether the grep command executed successfully. The file /etc/passwd is simply searched for a user that you specify as the first argument (positional parameter $1) on the command line (see Figure 4.2). Depending on the outcome, you get a corresponding message.

# Demonstrates a branch with if
# Name: aif1
# Search for the user in /etc/passwd ...
if grep "^$1" /etc/passwd
then
   # Yes, grep was successful
   echo "User $1 is known on the system"
   exit 0;   # exit successfully ...
fi
# The specified user does not seem to exist here ...
echo "User $1 does not exist here"

The script in action:

you@host > ./aif1 you
you:x:1001:100::/home/you:/bin/bash
User you is known on the system
you@host > ./aif1 tot
tot:x:1000:100:J.Wolf:/home/tot:/bin/bash
User tot is known on the system
you@host > ./aif1 root
root:x:0:0:root:/root:/bin/bash
User root is known on the system
you@host > ./aif1 rot
User rot does not exist here

If grep finds the user you passed as the first argument, it returns the exit status 0, and the then part of the if branch is executed. If the search is unsuccessful, grep returns a value other than 0, so the if branch is not entered and execution continues after the fi of the if branch.
What is a bit annoying in this example is the standard output that keeps chiming in uninvited. As on the command line, however, a simple redirection into the data sink /dev/null takes care of this. You only have to change the line

if grep "^$1" /etc/passwd

to

if grep "^$1" /etc/passwd > /dev/null

Of course, the error output (stderr) can be suppressed as well:

if grep "^$1" /etc/passwd > /dev/null 2>&1

The execution of the script would now look like this:

you@host > ./aif1 you
User you is known on the system
you@host > ./aif1 tot
User tot is known on the system
you@host > ./aif1 root
User root is known on the system
you@host > ./aif1 rot
User rot does not exist here
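As a side note that goes slightly beyond the text: most grep implementations also provide the -q (quiet) option, which suppresses all normal output and only sets the exit status, so it can often replace the redirection to /dev/null. A hedged sketch:

if grep -q "^$1" /etc/passwd
then
   echo "User $1 is known on the system"
fi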
From Section 1.8.8 you already know the exit status and how to query it with the variable $?. So instead of testing the command call for success directly with if, as done in the example, you could test the return value of a command. For this, however, you again need the test command or its "alternative" notation in square brackets. The test command has not been covered yet, but for reference purposes I do not want to withhold the example from you here:

# Demonstrates a test branch with if
# Name: aif2
# Search for the user in /etc/passwd ...
grep "^$1" /etc/passwd > /dev/null
# If the exit status in $? is not equal to 0 ...
if [ $? -ne 0 ]
then
   echo "The execution of grep failed"
   echo "User $1 probably does not exist here"
   exit 1   # exit unsuccessfully
fi
# grep successful
echo "User $1 is known on the system"

The script in action:

you@host > ./aif2 you
User you is known on the system
you@host > ./aif2 rot
The execution of grep failed
User rot probably does not exist here
4.1.2 Chaining Commands with Pipes and if

If several commands connected by pipes are used in an if test, the return value is always that of the last command. That seems logical enough, because there is only one variable for the exit status ($?). In a command chain the variable $? is therefore repeatedly overwritten with a new exit status, up to the last command in the chain.

Nevertheless, such a command chain has its pitfalls. What good is it if the last command was successful but one of the preceding commands produced an error? Here is an example that shows what I am getting at:

# Demonstrates a command chain with if
# Name: aif3
# Search /usr/include for the first argument given
if ls -l /usr/include | grep $1 | wc -l
then
   echo "Search successful"
   exit 0
fi
echo "Search unsuccessful"

The script in action:

you@host > ./aif3 l.h
15
Search successful
you@host > ./aif3 std
5
Search successful
you@host > ./aif3 asdfjklö
0
Search successful
you@host > ./aif3
Usage: grep [OPTION]... PATTERN [FILE]...
'grep --help' gives you more information.
0
Search successful

As you can see, the last two searches no longer produce any real hits. Nevertheless, "Search successful" is reported, which is no surprise, because the execution of the last command, wc -l, produced no error and returned 0. You could now also check the standard error output, but that becomes more and more confusing with every additional command in the pipe.
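As an aside not covered by the book at this point: in the Bash you can additionally set the shell option pipefail, which makes a pipe return the exit status of the last command in it that failed (or 0 if all commands succeed). A hedged variant of aif3:

# pipefail variant, Bash only (sketch)
set -o pipefail
if ls -l /usr/include | grep $1 | wc -l
then
   echo "Search successful"
   exit 0
fi
echo "At least one command in the pipe returned a non-zero status"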
Evaluating PIPESTATUS (Bash only)

In the Bash, the shell variable PIPESTATUS can help here. PIPESTATUS is an array containing the exit statuses of the most recently executed commands. The return value of the first command is in ${PIPESTATUS[0]}, the second in ${PIPESTATUS[1]}, the third in ${PIPESTATUS[2]}, and so on. The number of commands executed in a pipe is contained in ${#PIPESTATUS[*]}, and you get all return values (separated by spaces) with ${PIPESTATUS[*]}. Here is the use of PIPESTATUS in practice, applied to the aif3 example:

# Demonstrates the use of PIPESTATUS in the Bash
# Name: apipestatus
# Search /usr/include
ls -l /usr/include | grep $1 | wc -l
# put the complete PIPESTATUS into the variable STATUS
STATUS=${PIPESTATUS[*]}
# split the variable STATUS into the individual
# positional parameters
set $STATUS
# report each status individually
echo "ls : $1"
echo "grep : $2"
echo "wc : $3"
# error checks of the individual values ...
if [ $1 -ne 0 ]; then
   echo "Error in ls" >&2
fi
if [ $2 -ne 0 ]; then
   echo "Error in grep" >&2
fi
if [ $3 -ne 0 ]; then
   echo "Error in wc" >&2
fi

The shell script in action:

you@host > ./apipestatus std
5
ls : 0
grep : 0
wc : 0
you@host > ./apipestatus
Usage: grep [OPTION]... PATTERN [FILE]...
'grep --help' gives you more information.
0
ls : 141
grep : 2
wc : 0
Error in ls
Error in grep
you@host > ./apipestatus asdf
0
ls : 0
grep : 1
wc : 0
Error in grep

It is important that you read PIPESTATUS immediately after the command chain. If, for example, you perform an echo output in between, the PIPESTATUS array only contains the exit status of the echo command.
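If you prefer not to overwrite the positional parameters with set, you can also copy the array directly (again Bash only); a small sketch:

ls -l /usr/include | grep $1 | wc -l
status=("${PIPESTATUS[@]}")    # copy immediately, before any other command runs
echo "ls : ${status[0]}"
echo "grep : ${status[1]}"
echo "wc : ${status[2]}"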
Evaluating the pipe status (for all shells)

With a few contortions it is also possible to evaluate the exit codes of all commands in a pipe without PIPESTATUS. First, new output channels are created, and the execution of the pipe has to be wrapped in an eval block. If the appropriate channels are then routed correctly, you again obtain all exit codes of a pipe. The eval function is discussed in more detail in Chapter 9, Useful Functions, but in this case too I want to offer you an example in advance (the test checks have been omitted, though).

# Demonstrates how to determine the exit codes of all commands
# of a pipe without the shell variable PIPESTATUS
# Name: apipestatus2
exec 3>&1   # open a third output channel (for stdout)
exec 4>&1   # open a fourth output channel (for the exit status)
eval `
  {
    { ls -l /usr/include
      echo lsexit=$? >&4; } |
    { grep $1
      echo grepexit=$? >&4; } |
    wc -l
  } 4>&1 >&3   # redirection
`
echo "ls : $lsexit"
echo "grep : $grepexit"
echo "wc : $?"

The shell script in action:

you@host > ./apipestatus2
Usage: grep [OPTION]... PATTERN [FILE]...
"grep --help" gives you more information.
0
ls : 141
grep : 2
wc : 0
you@host > ./apipestatus2 std
5
ls : 0
grep : 0
wc : 0
you@host > ./apipestatus2 asdfasf
0
ls : 0
grep : 1
wc : 0
Chapter 5 Terminal Input and Output

So far, output to the screen has been used without ever going into detail about it. Besides screen output, perfect interaction also requires user input. In this chapter you will learn everything you need to know about input and output. In addition, the term "terminal" will be explained a little more precisely.

5.1 From Terminals to Pseudo-Terminals

Although real terminals are hardly used in practice anymore, people still talk about them. Terminals themselves usually looked like ordinary desktop computers, mostly with a black-and-white (or black-and-green) screen, although a terminal does not necessarily require a monitor. Such terminals were connected directly to a UNIX machine via a line, so terminals are (were) never part of the operating system itself. An operating system kept running even without a terminal (much like your system also runs without an Internet connection). When such a terminal was switched on, a process called "getty" (get terminal) was already listening for it and opened a new session. It has already been mentioned that a session is nothing more than the time from the moment a user logs in with a login shell until he or she says goodbye to the system again.

Today hardly any real terminals (in the original sense) are used; instead, terminal emulations are preferred. Terminal emulations, in turn, are programs that pretend to be a terminal.

On most Linux/UNIX systems several "virtual" terminals are available, which can be reached with the key combinations (Ctrl)+(Alt)+(F1) up to, usually, (Ctrl)+(Alt)+(F7). When your system boots, you normally get to see the first terminal, (Ctrl)+(Alt)+(F1). If you work without a graphical interface, this is usually also your login shell. With a graphical interface, a different terminal is typically used (under Linux, for example, (Ctrl)+(Alt)+(F7)). Nevertheless, you can switch to a "real" login shell at any time via (Ctrl)+(Alt)+(Fn).
On each of these text consoles ((Ctrl)+(Alt)+(F1) to (Ctrl)+(Alt)+(Fn)) the getty processes listen until a user logs in, for example:

you@host > ps -e | grep getty
3092 tty1 00:00:00 getty
3093 tty2 00:00:00 getty
3095 tty4 00:00:00 getty
3096 tty5 00:00:00 getty
3097 tty6 00:00:00 getty

Under Linux you will probably find the name "mingetty" instead of "getty" here. It is also noticeable in the example that the text consoles "tty0" and "tty3" are missing. This can only mean that someone has logged in there:

you@host > who | grep tty3
tot tty3 Mar 1 23:28

As soon as the user "tot" ends his session, a new getty process is started, which again waits, listening, for someone to log in on the text console "tty3".

The text windows of graphical interfaces are called pseudo-terminals (abbreviated "pts" or also "ttyp", depending on the operating system). In contrast to a terminal emulation, the device assignment for pseudo-terminals is dynamic; in other words, a pseudo-terminal exists only while a connection exists. You can find out which pseudo-terminal you are currently in, provided you have opened a console under a graphical interface, with the tty command:

[ --- Linux --- ]
you@host > tty
/dev/pts/40
[ --- or under FreeBSD --- ]
you@host > tty
/dev/ttyp1
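In a script it can be useful to know whether it is talking to a terminal at all, for example when its output has been redirected into a file. As a hedged aside, the test command offers the -t operator for exactly this check:

if [ -t 0 ]
then
   echo "Standard input is connected to the terminal $(tty)"
fi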
The name of such a pseudo-terminal is a number in the directory /dev/pts. Numbering usually starts at 0 and is incremented automatically. You can get an overview of which pseudo-terminals are currently provided for which users in the corresponding directory:

you@host > ls -l /dev/pts
total 0
crw--w---- 1 tot tty 136, 37 2005-03-01 22:46 37
crw------- 1 tot tty 136, 38 2005-03-01 22:46 38
crw------- 1 tot tty 136, 39 2005-03-01 22:46 39
crw------- 1 you tty 136, 40 2005-03-02 00:35 40

Here you find a total of four pseudo-terminal entries: three for the user "tot" and one for "you".

You find the same thing under BSD UNIX with

you@host > ls -l /dev/ttyp*
crw-rw-rw- 1 root   wheel 5,  0 22 Mar 21:54 /dev/ttyp0
crw--w---- 1 martin tty   5,  1 13 May 08:58 /dev/ttyp1
crw--w---- 1 martin tty   5,  2 13 May 07:43 /dev/ttyp2
crw--w---- 1 martin tty   5,  3 13 May 08:48 /dev/ttyp3
crw-rw-rw- 1 root   wheel 5,  4 13 May 08:48 /dev/ttyp4
crw-rw-rw- 1 root   wheel 5,  5 12 May 16:31 /dev/ttyp5
crw-rw-rw- 1 root   wheel 5,  6 12 May 23:01 /dev/ttyp6
crw-rw-rw- 1 root   wheel 5,  7 12 May 14:37 /dev/ttyp7
crw-rw-rw- 1 root   wheel 5,  8 12 May 14:22 /dev/ttyp8
crw-rw-rw- 1 root   wheel 5,  9 12 May 14:26 /dev/ttyp9
crw-rw-rw- 1 root   wheel 5, 10 12 May 17:20 /dev/ttypa
crw-rw-rw- 1 root   wheel 5, 11 23 Apr 11:23 /dev/ttypb

...

the only difference being that after 9 ("ttyp9") the name is not incremented by 1 but continues with the first letters of the alphabet.

In addition to normal terminal connections, pseudo-terminals can also be used across the network (TCP/IP). This is possible because the X server for graphical applications also has a network protocol and is connected through it.
Bisher wurde die Ausgabe auf dem Bildschirm immer verwendet, ohne jemals näher darauf eingegangen zu sein. Zur perfekten Interaktion gehört neben der Bildschirmausgabe die Benutzereingabe. In diesem Kapitel werden Sie alles Nötige zur Ein- und Ausgabe erfahren. Außerdem soll der Begriff »Terminal« ein wenig genauer erläutert werden.
## 5.1 Von Terminals zu Pseudo-TerminalsÂ
Obwohl in der Praxis heute eigentlich keine echten Terminals mehr verwendet werden, ist von ihnen immer noch die Rede. Terminals selbst sahen in der Regel aus wie gewöhnliche Desktop-Computer, meist mit einem schwarz-weißen (bzw. schwarz-grünen) Bildschirm, obwohl für ein Terminal nicht zwangsläufig ein Monitor genutzt werden muss. Solche Terminals waren über eine Leitung direkt mit einem UNIX-Rechner verbunden â also sind (waren) Terminals niemals Bestandteil des Betriebssystems selbst. Ein Betriebssystem lief auch ohne Terminal weiter (ähnlich, wie Ihr System auch ohne eine Internetverbindung läuft). Wenn ein solches Terminal eingeschaltet wurde, wartete schon ein Prozess namens »getty« (Get Terminal) »horchend« darauf und öffnete eine neue Session (Sitzung). Es wurde bereits erwähnt, dass eine Session nichts anderes ist, als die Zeit, ab der sich ein Benutzer mit einer Login-Shell eingeloggt hat, und die endet, wenn dieser sich wieder vom System verabschiedet.
Heute werden kaum noch echte Terminals (im eigentlichen Sinne) eingesetzt, sondern vorzugsweise Terminal-Emulationen. Terminal-Emulationen wiederum sind Programme, die vorgeben, ein Terminal zu sein.
Unter den meisten Linux-UNIX-Systemen stehen einem mehrere »virtuelle« Terminals zur Verfügung, die mit der Tastenkombination (Strg)+(Alt)+(F1) bis meistens (Strg)+(Alt)+(F7) erreicht werden können. Wenn Ihr System hochgefahren wird, bekommen Sie in der Regel als erstes Terminal (Strg)+ (Alt)+(F1) zu Gesicht. Arbeiten Sie ohne grafische Oberfläche, so ist dies gewöhnlich auch Ihre Login-Shell. Bei einer grafischen Oberfläche wird zumeist ein anderes Terminal (unter Linux bspw. (Strg)+(Alt)+(F7)) benutzt. Trotzdem können Sie jederzeit über (Strg)+(Alt)+(Fn) eine »echte« Login-Shell verwenden.
Auf jeder dieser Textkonsolen ((Strg)+(Alt)+(F1) bis (Strg)+(Alt)+(Fn)) »horchen« die Getty-Prozesse, bis sich ein Benutzer einloggt, so zum Beispiel:
> you@host > ps -e | grep getty 3092 tty1 00:00:00 getty 3093 tty2 00:00:00 getty 3095 tty4 00:00:00 getty 3096 tty5 00:00:00 getty 3097 tty6 00:00:00 getty
Unter Linux werden Sie hierbei statt »getty« vermutlich den Namen »mingetty« vorfinden. Im Beispiel fällt außerdem auf, dass die Textkonsolen »tty0« und »tty3« fehlen. Dies kann nur bedeuten, dass sich hier jemand eingeloggt hat:
> you@host > who | grep tty3 tot tty3 Mar 1 23:28
Sobald der User »tot« seine Session wieder beendet, wird ein neuer Getty-Prozess gestartet, der horchend darauf wartet, dass sich wieder jemand in der Textkonsole »tty3« einloggt.
Die Textfenster grafischer Oberflächen werden als Pseudo-Terminal bezeichnet (Abk. »pts« oder auch »ttyp« â betriebssystemspezifisch). Im Gegensatz zu einer Terminal-Emulation verläuft die Geräteeinstellung zu den Pseudo-Terminals dynamisch â sprich, ein Pseudo-Terminal existiert nur dann, wenn eine Verbindung besteht. In welchem Pseudo-Terminal Sie sich gerade befinden, sofern Sie unter einer grafischen Oberfläche eine Konsole geöffnet haben, können Sie mit dem Kommando tty ermitteln:
> [ --- Linux --- ] you@host > tty /dev/pts/40 [ --- oder unter FreeBSD --- ] you@host > tty /dev/ttyp1
Der Name eines solchen Pseudo-Terminals ist eine Zahl im Verzeichnis /dev/pts. Die Namensvergabe beginnt gewöhnlich bei 0 und wird automatisch erhöht. Einen Überblick, welche Pseudo-Terminals gerade für welche User bereitgestellt werden, finden Sie im entsprechenden Verzeichnis:
> you@host > ls -l /dev/pts
> insgesamt 0
> crw--w---- 1 tot tty 136, 37 2005-03-01 22:46 37
> crw------- 1 tot tty 136, 38 2005-03-01 22:46 38
> crw------- 1 tot tty 136, 39 2005-03-01 22:46 39
> crw------- 1 you tty 136, 40 2005-03-02 00:35 40
Here you can see a total of four pseudo-terminal entries: three for the user »tot« and one for »you«.

You will find the same thing under BSD UNIX with
> you@host > ls -l /dev/ttyp* crw-rw-rw- 1 root wheel 5, 0 22 Mär 21:54 /dev/ttyp0 crw--w---- 1 martin tty 5, 1 13 Mai 08:58 /dev/ttyp1 crw--w---- 1 martin tty 5, 2 13 Mai 07:43 /dev/ttyp2 crw--w---- 1 martin tty 5, 3 13 Mai 08:48 /dev/ttyp3 crw-rw-rw- 1 root wheel 5, 4 13 Mai 08:48 /dev/ttyp4 crw-rw-rw- 1 root wheel 5, 5 12 Mai 16:31 /dev/ttyp5 crw-rw-rw- 1 root wheel 5, 6 12 Mai 23:01 /dev/ttyp6 crw-rw-rw- 1 root wheel 5, 7 12 Mai 14:37 /dev/ttyp7 crw-rw-rw- 1 root wheel 5, 8 12 Mai 14:22 /dev/ttyp8 crw-rw-rw- 1 root wheel 5, 9 12 Mai 14:26 /dev/ttyp9 crw-rw-rw- 1 root wheel 5, 10 12 Mai 17:20 /dev/ttypa crw-rw-rw- 1 root wheel 5, 11 23 Apr 11:23 /dev/ttypb
> ...
the only difference being that after 9 (»ttyp9«) the name is no longer incremented by 1 but continues with the first letters of the alphabet.

Pseudo-terminals can be used not only for normal terminal connections but also across the network (TCP/IP). This is possible because the X server used for graphical applications also communicates via a network protocol.
Chapter 6 Functions
The longer the shell scripts in the previous chapters became, the more confusing they got. If the same routines were then used several times in a script, they grew even further. In such a case (and actually almost always), functions are the answer. With functions you group several commands into a block between curly braces and call them, whenever needed, by a function name you define yourself. Once defined, a function can be called and reused as often as you like in your script. Functions are therefore also scripts that are always present in the running environment of the shell process, and consequently they behave as if they were internal shell commands.
## 6.1 Definition

When you write a shell function, it is not executed by the shell process at first, but initially stored in the running environment (more precisely, this is what defines the function). You can define a function with the following syntax:
> funktions_name() {
>    kommando1
>    kommando2
>    ...
>    kommando_n
> }
After a function name of your choice you write a pair of parentheses. The commands this function is supposed to execute are then placed between curly braces. However, you must observe the following points when defining a function:

* funktions_name and the parentheses () (or, with the keyword function shown shortly, function and funktions_name) must be on one line.
* There must be at least one space or a newline character before the opening curly brace »{«.
* There must be either a semicolon or a newline character before a closing curly brace »}«.

A function can therefore also be defined as a one-liner, like this:
> funktions_name() { kommando1 ; kommando2 ; ... ; kommando_n ; }
### 6.1.1 Definition (Bash and Korn shell only)

In the Bash and the Korn shell, the keyword function gives you the following additional syntax for defining a function:
> function funktions_name {
>    kommando1
>    kommando2
>    ...
>    kommando_n
> }
In contrast to the conventional syntax, which works in all shells, the parentheses after the function name are omitted when the keyword function is used.

### 6.1.2 Calling a Function

Functions must, logically, always be defined before they are called for the first time, because a script is processed from top to bottom, line by line. The definition of a function is therefore always placed at the beginning of the script, or at least before the main program.

When the function is called in the main program, which is done simply by writing its name, the individual commands in the function block are executed.

> # Demonstrates a simple function call
> # Name: afunc1
> # The function hallo
> hallo() {
>    echo "In der Funktion hallo()"
> }
> # The main program starts here
> echo "Vor der Ausführung der Funktion ..."
> # Now the function is called
> hallo
> # After the function call
> echo "Weiter gehts im Hauptprogramm"

Running the script:
> you@host > ./afunc1 Vor der Ausführung der Funktion ... In der Funktion hallo() Weiter gehts im Hauptprogramm
The shell function is executed by the running shell and is therefore part of the current process (no subshell is used). In contrast to many other programming languages, the variables of a shell script can also be accessed inside functions, even though they were not defined there. This also means: if a variable is modified inside the function, the change applies to the main program as well. The following script demonstrates this:

> # Demonstrates a simple function call
> # Name: afunc2
> # The function print_var
> print_var() {
>    echo $var             # test
>    var=ein_neuer_Test    # var gets a new value
> }
> var=test
> echo $var    # test
> print_var    # function call
> echo $var    # ein_neuer_Test

Running the script:
> you@host > ./afunc2 test test ein_neuer_Test
Because a shell function belongs to the shell's own process, you can remove it again with unset, just like a variable.

> # Demonstrates a simple function call
> # Name: afunc3
> # The function print_var
> print_var() {
>    echo $var             # test
>    var=ein_neuer_Test    # var gets a new value
> }
> var=test
> echo $var         # test
> print_var         # function call
> echo $var         # ein_neuer_Test
> unset print_var   # delete the function
> print_var         # error!!!

Running the script:
> you@host > ./afunc3 test test ein_neuer_Test ./afunc1: line 16: print_var: command not found
Note: In the Korn shell and the Bash, the option -f is required when deleting functions with unset, because these shells allow functions and variables to share the same name. If a variable with the same name exists there, the variable is deleted and not, as perhaps intended, the function. The Bourne shell, on the other hand, does not allow functions and variables to have identical names.

### 6.1.3 Exporting Functions

If functions can be deleted with unset just like simple variables, you will probably wonder whether functions can also be exported to subprocesses. In general there is no way to export shell functions. The exception, once again, is the Bash: here you can release a function for export with export -f funktions_name.
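Here is a minimal sketch of this (Bash only; the function name greet is just an invented example):

> # Bash only: define a function and mark it for export
> greet() { echo "greet() called in process $$"; }
> export -f greet
> # a child bash started afterwards also knows the function:
> bash -c 'greet'

Without the export -f line, the child shell started with bash -c would know nothing about greet.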
The Bourne shell and the Korn shell, on the other hand, cannot export functions. The only workaround available there is the dot operator:
> . functions_name
Here the function must be placed in a separate file and read in with . functions_name (or source functions_name). An example:

> # Name: afunc_ex
> echo "Rufe eine externe Funktion auf ..."
> . funktionen
> echo "... ich bin fertig"

In this example the file »funktionen« is read into the script (not into a subshell) and executed. The file and its function look like this:

> # Name: funktionen
> # Function localtest
> afunction() {
>    echo "Ich bin eine Funktion"
> }

Running the script:
> you@host > ./afunc_ex Rufe eine externe Funktion auf ... ... ich bin fertig
Note: Even though no real export is possible with the Bourne and Korn shells, subshells, which are created as copies of the parent process, do have access to the parent process's functions, because those are copied along. Such a subshell is created with `...`, (...) and a direct script call. Of course, a direct script call only counts if no shell is put in front of it (e.g. ksh scriptname) and the first line does not refer to another shell either (e.g. #!/bin/ksh). In the Korn shell you additionally have to specify typeset -fx functions_name so that subshells there also get the parent process's current functions. Unlike in the Bourne shell, this does not happen automatically in the Korn shell.

# Function Libraries

The procedure just demonstrated with the dot operator is also what you use when you want to build a function library that collects small, frequently recurring routines. Your script can then read in the desired library file and thus access all the functions it contains. The call from your script then looks like this (assuming the library file is called stringroutinen.bib):
> . stringroutinen.bib
This of course assumes that the library file is located in the same directory as the running script. After this call the functions from stringroutinen.bib are available in your script.
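As a rough sketch of what such a library and its use might look like (the file contents and the function to_upper are invented purely for illustration):

> # stringroutinen.bib -- possible contents of the library file
> to_upper() { echo "$1" | tr 'a-z' 'A-Z'; }
> # ... further string routines ...
>
> # in the calling script:
> . stringroutinen.bib
> to_upper "hello"     # prints HELLO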
Note: If you now feel motivated to write your own libraries, please keep the size of such a file in mind. After all, the complete library file has to be read in, usually only to execute a couple of the functions it contains. It can therefore sometimes be wiser to copy and paste the required functions into the current script, which in turn has the disadvantage that changes to a function then have to be made in several places.

### 6.1.4 Call Order

If you ever write, by accident or on purpose, a function for which an internal shell command or even an external command of the same name exists, the shell follows this call order: if a self-written shell function with the called name exists, it is executed. If there is no shell function, the internal shell command (builtin) is preferred. If there is neither a shell function nor an internal shell command, an external command is searched for in the search paths (PATH). So, once again, the order is:
1. Shell function
2. Internal shell command (builtin)
3. External command (searched for via PATH)
### 6.1.5 Who Is Who

If you are not sure what kind of command you are dealing with, you can find out with type, for example:
> you@host > hallo() { > echo "Hallo" > } you@host > type hallo hallo is a function hallo () { echo "Hallo" } you@host > type echo echo is a shell builtin you@host > type ls ls is aliased to `/bin/ls $LS_OPTIONS` you@host > type ps ps is hashed (/bin/ps) you@host > type type type is a shell builtin
### 6.1.6 Choosing What Gets Called

Suppose you have a name for which a shell function, an internal shell command and an external command all exist on your machine; even then you can decide yourself which one is executed. The simplest example is echo, which exists both as a shell builtin and as an external command. If you also write your own echo function, you have three different variants available on your system.

According to the call order, the self-written shell function is preferred, so you can still run it with a plain call. If, however, you now want to use the internal shell command (builtin), you have to tell the shell so with the keyword builtin, so that the shell function is not executed instead:

> you@host > builtin echo "Hallo Welt" Hallo Welt

If instead you want to call the external command echo, you have to use the absolute path to echo:

> you@host > /bin/echo "Hallo Welt" Hallo Welt

Why should you use the external command for echo when both offer the same functionality? Try asking the internal command for help with --help, then try the same with the external one.
### 6.1.7 Listing Functions

If you want to list which functions are currently defined in the running shell, you can get an overview with set or typeset (see Table 6.1).

Command | Shells | Meaning |
| --- | --- | --- |
set | sh, bash | Prints all variables and functions with their complete definitions |
typeset -f | ksh, bash | Lists all functions with their complete definitions |
typeset -F | ksh, bash | Lists all functions without their definitions |
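For example, in the Bash this might look as follows (the function name is made up, and the exact output format can differ between shells):

> you@host > myfunc() { echo "test"; }
> you@host > typeset -F | grep myfunc
> declare -f myfunc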
Signals are used to control processes. How a process reacts to a particular signal is something you can either leave to the system or define yourself.

## 7.1 Signal Basics

Signals are asynchronous events that can cause an interruption (more precisely, an interrupt request) at the process level. They are a simple means of communication between two processes (inter-process communication) by which one process can send another a message. The messages themselves are simple integers, which can, however, also be referred to by a symbolic (more descriptive) name (also called a macro).

Signals are mainly used for synchronization, for executing predefined actions, and for terminating or suspending processes. In general, signals can be divided into three categories (the meaning of the individual signals is explained at the end of this section):

* User-definable signals (QUIT, ABRT, USR1, USR2, TERM)
When a process receives a particular signal, the signal is recorded in its so-called process table entry. As soon as this process gets CPU time to work through its instructions, the kernel steps in and checks how the signal is to be handled (all of this explained in a somewhat simplified way). This is where you can intervene and react to the arrival of a signal. If you do nothing at all, the kernel performs the default action for the signal in question. What the default action is depends on the signal; in most cases it is the termination of the process. Some signals also produce a core dump (a memory image of the process), which can be used for debugging.

If you do not want the default action to be carried out when a signal occurs (and that is what this chapter is about), you can catch the signal and react to it accordingly, or even ignore it completely. If you ignore a signal, it never reaches the process at all.

You have three options for reacting to signals, two of which you can influence yourself (see Figure 7.1):

* let the kernel carry out the default action for the signal,
* catch the signal and run an action of your own, or
* ignore the signal completely.

Signals mostly occur as a result of program errors (illegal memory access through a pointer, invalid instructions, division by zero ...) or are triggered by the user. For example, with the key combination (Ctrl)+(C) (= SIGINT) or (Ctrl)+(Z) (= SIGTSTP) you can terminate or suspend a process in the current shell.
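Catching such a signal in a script is usually done with the shell builtin trap; as a small foretaste, here is a sketch (the message text is made up):

> # Sketch: react to (Ctrl)+(C) (SIGINT) instead of simply dying
> trap 'echo "SIGINT caught -- cleaning up"; exit 1' INT
> while true
> do
>    sleep 1
> done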
You can get an overview of the signals on your system on the command line with kill -l (lower-case L). Table 7.1 to Table 7.7 (grouped by signal topic) list the signals (together with their numbers and default actions) and their meaning. Note, however, that the overview shown here only applies to Intel and PPC processor systems. Sparc, Alpha or MIPS based systems use, at least in part, different signal numbers. The numbers also differ under FreeBSD; every system seems to cook its own soup here. One more reason to use the symbolic name instead of the number.
Name | No. | Action | Available | Meaning |
| --- | --- | --- | --- | --- |
SIGILL | 4 | Core & terminate | POSIX | An illegal instruction was executed. |
SIGTRAP | 5 | Core & terminate | | Interrupt (single-step execution) |
SIGABRT | 6 | Core & terminate | POSIX | Abnormal termination |
SIGBUS | 7 | Core & terminate | | Error on the system bus |
SIGFPE | 8 | Core & terminate | POSIX | Problem in a floating-point operation (e.g. division by zero) |
SIGSEGV | 11 | Core & terminate | POSIX | Memory access to an illegal memory segment |
SIGSYS | 31 | Core & terminate | | Invalid argument in a system call |
SIGEMT | | Terminate | | Emulation trap |
SIGIOT | | Core & terminate | | Same as SIGABRT |

Name | No. | Action | Available | Meaning |
| --- | --- | --- | --- | --- |
SIGHUP | 1 | Terminate | POSIX | Hang-up of a terminal line; for daemons: reload the configuration |
SIGINT | 2 | Terminate | POSIX | Interrupt from the terminal ((Ctrl)+(C)) |
SIGQUIT | 3 | Core & terminate | POSIX | The quit signal from a terminal |
SIGKILL | 9 | Terminate | POSIX | The kill signal |
SIGTERM | 15 | Terminate | POSIX | Programs that catch SIGTERM usually offer a »soft shutdown«. |

Name | No. | Action | Available | Meaning |
| --- | --- | --- | --- | --- |
SIGALRM | 14 | Terminate | POSIX | Timer has expired (alarm()). |
SIGVTALRM | 26 | Terminate | BSD, SVR4 | The virtual timer has expired. |
SIGPROF | 27 | Terminate | | Profiling timer has expired. |

Name | No. | Action | Available | Meaning |
| --- | --- | --- | --- | --- |
SIGURG | 23 | Ignored | BSD, SVR4 | Urgent socket condition has occurred. |
SIGIO | 29 | Ignored | BSD, SVR4 | Socket I/O is possible. |
SIGPOLL | | Terminate | SVR4 | A pending event on a stream is signalled. |

Name | No. | Action | Available | Meaning |
| --- | --- | --- | --- | --- |
SIGCHLD | 17 | Ignored | POSIX | The child process has terminated or stopped. |
SIGCONT | 18 | Ignored | POSIX | A stopped process is to continue running. |
SIGSTOP | 19 | Stop | POSIX | The process was stopped. |
SIGTSTP | 20 | Stop | POSIX | The process was stopped »by hand« with STOP. |
SIGTTIN | 21 | Stop | POSIX | A background process tried to read from the controlling terminal. |
SIGTTOU | 22 | Stop | POSIX | A background process tried to write to the controlling terminal. |
SIGCLD | | Ignored | | Same as SIGCHLD |

Name | No. | Action | Available | Meaning |
| --- | --- | --- | --- | --- |
SIGPIPE | 13 | Terminate | POSIX | A write was made to a pipe that nobody reads from, or an attempt was made to write to a pipe with O_NONBLOCK that nobody reads from. |
SIGLOST | | Terminate | | A file lock was lost. |
SIGXCPU | 24 | Core & terminate | BSD, SVR4 | The maximum CPU time was exceeded. |
SIGXFSZ | 25 | Core & terminate | BSD, SVR4 | The maximum file size was exceeded. |

Name | No. | Action | Available | Meaning |
| --- | --- | --- | --- | --- |
SIGUSR1, SIGUSR2 | 10, 12 | Terminate | POSIX | Free for your own use |
SIGWINCH | 28 | Ignored | BSD | The window size has changed. |
It is simply not enough to know how to write a shell script; you also have to know how a shell script can be executed. Anyone can run a script in the usual sense, but there are several different ways in which one or more scripts can be executed. Many of the topics mentioned here have already come up once or twice, but without going into detail.

## 8.1 Process Priorities

On a modern multitasking operating system, a so-called scheduling algorithm (usually a priority-driven algorithm) makes sure that every process gets a certain amount of CPU time to do its work; after all, a CPU can ultimately only handle one process at a time (even if it looks as though countless processes were being handled at once). Of course, not every process has the same priority. If, for example, a process calls a system function, it always has a higher priority. Those are system processes, however, over which you have no influence.

Besides the system processes, whose priority you cannot influence, there are the timesharing processes. Here the system tries to distribute the CPU time as evenly as possible across all other processes, taking their priority into account. To keep things fair, the priority of the processes is recalculated after a certain time. You can influence the scheduling priority with the commands nice and renice. If you list your processes with ps -l, for example, you will find a value NI (nice) there. You can assign this value a priority: -20 means the highest and +19 the lowest priority. A priority of -20 means that the process occupies the CPU considerably longer than a process with priority +19. Commands started by the shell inherit the same priority as the shell.
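If you want to start a command with a lower priority right away, instead of adjusting it afterwards with renice, nice does the job; a small sketch (the tar command line is only an example):

> # start a long-running job with nice value 10 (lower priority)
> nice -n 10 tar czf backup.tar.gz /home/you &
> # the NI column of ps -l now shows 10 for this job
> ps -l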
If, for example, you want to give the process »ein_prozess«, which has the PID 1234, a lower priority, you can proceed as follows:
> you@host > renice +10 1234 1234: Alte Priorität: 0, neue Priorität: 10
To give the process a higher priority again (no matter whether you lowered it yourself beforehand), you need superuser rights. You can demote yourself, but not promote yourself again; the Linux kernel lacks a field in the process table that would allow a process to climb back up to its original priority. If you want to raise process 1234 again, you could proceed like this (superuser rights required):
> you@host > su -c 'renice -5 1234'
> Password:********
> 1234: Alte Priorität: 10, neue Priorität: -5
Chapter 9 Useful Functions
This chapter introduces some very helpful »functions« that could not be covered so far. Of course, this is still far from the complete range of shell functionality, which is why, in addition to the Linux-UNIX command reference (Chapter 14) in this book, the appendix contains an overview of all »builtins« for each shell.

## 9.1 The eval Command

A command that looks somewhat unusual at first glance is eval. The syntax:

> eval command_line

With eval you can execute commands as if you had typed them on the command line. If you put eval in front of a command on the command line, the command or command sequence is evaluated by the shell twice. For a simple command execution or command sequence without eval, a double evaluation makes no sense:
> you@host > eval ls -l | sort -r ...
Things look different, however, if you want to store a command sequence in a variable in order to execute it later:
> you@host > LSSORT="ls -l | sort -r" you@host > $LSSORT ls: |: Datei oder Verzeichnis nicht gefunden ls: sort: Datei oder Verzeichnis nicht gefunden
You have probably tried something like this before. If you now want to execute the command sequence stored in the variable LSSORT, the eval command is required; the error message above says it all. The best thing to do here is to use set -x and look at what the shell makes of the variable $LSSORT:
> you@host > set -x you@host > $LSSORT + ls -l '|' sort -r ls: |: Datei oder Verzeichnis nicht gefunden ls: sort: Datei oder Verzeichnis nicht gefunden
The problem here is the pipe character. It would have to be evaluated by the shell a second time, but instead the command sequence is executed right away, so the pipe character is treated as an argument of ls. You can twist and turn the variable LSSORT as much as you like, the shells always evaluate an expression only once. This is where eval steps in with its double evaluation:
> you@host > eval $LSSORT ...
If you switch the -x option on again, you can see that with the help of eval the variable LSSORT is evaluated twice:
> you@host > eval $LSSORT + eval ls -l '|' sort -r ++ /bin/ls -l ++ sort -r ...
In shell scripts, eval is therefore well suited for executing commands that are not contained in the script itself but are only determined at the script's run time. This is useful, for example, if your script uses a command that does not exist on a particular system. You can then give the somewhat more experienced user a way to assemble the command himself:

> # Name: aeval
> while true
> do
>    printf "Kommando(s) : "
>    read
>    eval $REPLY
> done

Running the script:
> you@host > ./aeval Kommando(s) : var=hallo Kommando(s) : echo $var hallo Kommando(s) : who you :0 Mar 30 14:45 (console) Kommando(s) : echo `expr 5 + 11` 16 Kommando(s) : asdf ./script1: line 1: asdf: command not found Kommando(s) : ps PID TTY TIME CMD 3242 pts/41 00:00:00 bash 4719 pts/41 00:00:00 bash 4720 pts/41 00:00:00 ps Kommando(s) : echo $$ 4719 Kommando(s) : exit you@host > echo $$ 3242
You could almost think you were sitting in front of a shell of your own.

There is one more particularly convincing use case for eval. With eval you can access a variable indirectly. Old C fanatics will get misty-eyed, because the whole thing sounds like pointers in C. The principle is actually simple, but not obvious at first sight. So here is an example:

> # Name: aeval_cool
> Mo=backup1
> Di=backup2
> Mi=backup3
> Do=backup4
> Fr=backup5
> Sa=backup6
> So=backup7
> tag=`date +"%a"`
> eval backup=\$$tag
> echo "Heute wird das Backup-Script $backup ausgeführt"
> ./$backup

Running the script (on a Wednesday):
> you@host > ./aeval_cool Heute wird das Backup-Script backup3 ausgeführt ./backup3
The command substitution puts the abbreviated name of the weekday (two characters on a German locale) into the variable »tag«. In the example it was Wednesday (»Mi« for Mittwoch), so tag=Mi. After that it gets a little strange, but it is perfectly logical:
> eval backup=\$$tag
If you want to know what happens here in the two passes, simply use the -x option again:

> set -x
> eval backup=\$$tag
> set +x

Another run of the shell script:
> you@host > ./aeval_cool ++ eval 'backup=$Mi' +++ backup=backup3 ...
In the first pass, \$$tag becomes the string $Mi. The variable »Mi« has the value »backup3«, which is then correctly assigned to the variable »backup« in the second pass. So in the first pass the expression backup=\$$tag turns into backup=$Mi; the string »Mi« is what you obtained from the command substitution before. eval then runs through the expression backup=$Mi once more, so that it becomes backup=backup3. The first dollar sign was masked with a backslash in the first pass and thus protected from the shell. That is why the dollar sign survived into the second pass, so that one variable really was assigned to another. Just try to achieve the same thing as briefly and neatly without the eval statement.
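For the reading half of this trick the Bash (but not the Bourne shell) also offers indirect parameter expansion with ${!variable}; a short sketch based on the example above:

> # Bash only: read the variable whose *name* is stored in tag
> Mi=backup3
> tag=Mi
> backup=${!tag}    # backup now contains backup3
> echo "$backup"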
Chapter 10 Troubleshooting and Debugging
In the course of the many chapters you have surely made just as many mistakes while typing and writing scripts as I have, and asked yourself: what is wrong this time? Here, too, mistakes are nothing bad while you are learning. But as soon as you run your scripts on several machines or on other people's machines, this should no longer happen. This chapter therefore gives you some hints on how to avoid errors and, if they have already occurred, how to track them down.

## 10.1 Strategies for Avoiding Errors

Admittedly, you will probably never be able to avoid errors completely, but there are a few basic things you can take to heart in order to at least keep an overview.

### 10.1.1 Plan Your Script

Granted, you will hardly plan a mini-script of a few lines. Personally, though, I have found that a planned script is finished considerably faster than one written straight out of my head. Such planning naturally depends on the »head« in question ;-) and on the use case. But the principle is simple: first write down (or scribble down) what the script is supposed to do.

An example: the script is to be used on several machines and create a backup of the database. The backup is to be stored in a separate directory, and once there are more than ten backup files, the oldest one is always to be deleted.

Even if, at this early stage, you cannot yet imagine how the script will work or look, you can still start planning.

> #---------------------------------------------------------------#
> Plan for the backup script:
> 1.) Check whether the directory for the backup exists (test) and,
>     if necessary, create it (mkdir).
> 2.) Since several (ten) file names are used in one directory,
>     build a suitable name at run time with 'date' and store it in
>     a variable (date).
> 3.) Create a dump of the database with mysqldump and the
>     appropriate options (mysqldump).
> 4.) Put the dump straight into the target directory and archive it
>     (tar or cpio). !!! Mind the file names !!!
> 5.) Clean up any leftover data.
> 6.) Check the number of files in the directory (ls -l | wc -l) and
>     delete the oldest file if necessary.
> Notes: when finished, activate crontab; create a backup every day
> at 01:00.
> #---------------------------------------------------------------#

You have not written a single line of code yet, but you will find that this is already half the battle. And if you notice that you cannot implement the script with your current knowledge, at least you know where your gaps are and can fill them accordingly.
### 10.1.2 Set Up a Test System

Before you hand your finished script over to real use, you should definitely test it locally or on a test system. Operations on a database or write accesses to files in particular should be treated with caution. If something goes wrong, it is not so bad on your test system, but on one or even several other machines such an error could cost you a lot of trouble, time and, above all, nerves. It would also make sense to have several different test systems to experiment on, because what runs in your environment does not necessarily run in another environment. Especially if you switch frequently between Linux and UNIX, there are differences here and there.
### 10.1.3 Tidiness Is Half the Battle

To start with, have a look at the following script:

> # Create a file
> # Name: creatfile
> CreatFile=atestfile.txt
> if [ ! -e $CreatFile ]
> then
> touch $CreatFile
> if [ ! -e $CreatFile ] ; then echo "Could not create $CreatFile" ; exit 1
> fi
> fi
> echo "$CreatFile created/present!"

Admittedly, the script is short and basically understandable, but if you keep up this programming style in longer scripts, debugging alone will be no fun at all. What stands out negatively at first glance: several statements are squeezed onto one line, the nested blocks are not indented, and there are hardly any comments explaining what the script does.
Here is the script again, in a somewhat more readable style:

> # Create a file
> # Name: creatfile2
> # The file that is to be created
> creatfile=atestfile.txt
> # Does this file already exist ...
> if [ ! -e $creatfile ]
> then
>    # No ...
>    touch $creatfile   # create the file ...
>    # Now check again ...
>    if [ ! -e $creatfile ]
>    then
>       echo "Could not create $creatfile"
>       exit 1   # end the script unsuccessfully
>    else
>       echo "$creatfile created successfully"
>    fi
> fi
> echo "$creatfile present!"
That already looks much better. Of course, nobody can dictate the style in which you write your scripts, but I would like to give you a few hints here that will make for a more peaceful shell life (especially if you are still a beginner).

# Variables and Constants

Use a meaningful name for a variable; nobody can make much of »x«, »y« or »z«. If, for example, you use variables in a function called removing(), you could use names such as »rem_source«, »rem_dest« or »rem_temp«. There are many possibilities; you should just always try to give a variable a name from which its purpose can be recognized. A variable needs a clearly recognizable characteristic.

You should also keep constants and variables apart. By constants I do not mean variables marked as »readonly« with typeset, but simply variables that no longer change during the script's run time. Good style would be, for example, to write variables that never change in the script in upper case (who said anything about C here ;-)) and normal variables in lower case. This way you can quickly see in the script which values never change anyway and which ones are variable. That can narrow down the search for errors, because constant variables are not modified and you will not suspect wrong values in them (unless you give these variables a wrong value right at the start).
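A tiny sketch of this convention (all names are invented for illustration):

> # values that never change during the run: upper case
> BACKUPDIR=/var/backups
> MAXFILES=10
> # values that change at run time: lower case
> count=0
> tmpfile=`mktemp`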
# Style and comments
There is only one thing to say here: pressing (ENTER) once more, and occasionally the (often still brand-new) tab key, has never hurt anyone. I recommend using one statement or one keyword per line. This helps you jump straight to the corresponding line when an error message appears and locate the problem there. For the sake of readability you should also indent statement and function blocks.
The problem of barely commented code is well known. I used to work a lot with Perl, for example. After a year of abstinence (and many other programming languages) I only needed a few lines from a script I had once written. Unfortunately I had not commented the code, so »deciphering« it took rather long and I needed the advice of other people again. With comments I would have saved myself quite some time. The same applies to shell script programming. After all, you will have other things to do than write shell scripts all day.
And: there is also the backslash character, which you should use whenever a line gets too long or when you chain commands with pipes. This, too, improves readability enormously.
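For example, a longer pipeline might be wrapped like this (numbers.txt is just a placeholder file):
> # one statement per line, long pipelines broken up with a backslash
> sort -n numbers.txt | \
>     uniq -c | \
>     head -n 10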
# Chapter 11 Regular expressions and grep
Using regular expressions and grep is a basic skill of every Linux/UNIX user. For a system administrator it is indispensable anyway, because there is no sensible system in which they do not occur. A short introduction to regular expressions as well as to the tool grep (and its descendants such as egrep and fgrep) therefore seems necessary.
## 11.1 Regular expressions – the theory
Regular expressions are a powerful formal language for describing a certain (sub)set of character strings. It must be said right away that regular expressions are not a tool or a collection of functions that depend on an operating system; rather, they are a genuine language with a formal grammar in which every expression has a precise meaning.
Regular expressions are used by a great many text editors and programs. Most of the time they are used to search for certain patterns and then replace them with something else. In the Linux/UNIX world, regular expressions are mainly used in programs such as grep, sed and awk or in the text editors vi and Emacs. But many programming languages, among them Perl, Java, Python, Tcl, PHP and Ruby, also offer regular expressions.
The history of regular expressions is quickly told. Their origin goes back to a mathematician and logician, <NAME>. He is, by the way, also considered a co-founder of theoretical computer science, especially of the formal languages and automata theory touched on here. <NAME> used a notation that he himself called regular sets. Later, <NAME> (the co-inventor of the C programming language) used this notation for a predecessor of the UNIX editor ed and for the tool grep. After grep was finished, regular expressions were implemented in a great many programs. Many of them use the now very well-known regex library by <NAME>.
### 11.1.1 Elements of regular expressions (POSIX RE)
Regular expressions are mainly used to search for and find certain strings within a set of characters. The following description is a very widely used convention that is applied in this form by almost all programs working with regular expressions. Usually a regular expression is built from the characters of the alphabet in combination with the metacharacters introduced below.
# Character literals
Character literals are the characters that must match literally. They are written directly (as a word) in the regular expression. Depending on the system, it is also possible to specify them in hexadecimal or octal form.
# Any single character
For a single, arbitrary character you use a dot. This dot then stands for almost any character.
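A quick illustration with grep and some made-up input:
> you@host > printf 'hat\nhot\nheat\n' | grep 'h.t'
> hat
> hot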
# Character selection
You already know character selection from the shell, with the square brackets [selection] (see Section 1.10.6). Whatever you write inside the square brackets applies to exactly one character from this selection. For example, [axz] stands for one of the characters »a«, »x« or »z«. This can of course also be split into ranges. With [2-7], for instance, the range consists of the digits 2 to 7. With the character ^ inside the selection you can also exclude characters. With [^a-f], for example, you exclude the characters »a«, »b«, »c«, »d«, »e« and »f«.
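Sketched with grep and invented sample data:
> you@host > printf 'abc1\nabc5\nabc9\n' | grep 'abc[2-7]'
> abc5
> you@host > printf 'and\nend\nund\n' | grep '[^a-f]nd'
> und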
# Predefined character classes
Some implementations of regular expressions also offer predefined character classes. If no such predefined classes are available, the same thing can be expressed with a character selection in square brackets. The predefined character classes are ultimately just a short form of the character selections. Table 11.1 lists some well-known predefined classes:
Predefined | Meaning | Self-defined |
| --- | --- | --- |
\d | a digit | [0-9] |
\D | no digit | [^0-9] |
\w | a letter, a digit or the underscore | [a-zA-Z_0-9] |
\W | no letter, no digit and no underscore | [^a-zA-Z_0-9] |
\s | whitespace characters | [ \f\n\r\t\v] |
\S | all characters except whitespace | [^ \f\n\r\t\v] |
# Quantifiers
Quantifier | Meaning |
| --- | --- |
? | The preceding expression is optional; it may occur once but does not have to. The expression therefore occurs either zero times or once. |
+ | The preceding expression must occur at least once but may also occur several times. |
* | The preceding expression may occur any number of times or not at all. |
{min,} | The preceding expression must occur at least min times. |
{min,max} | The preceding expression must occur at least min times but no more than max times. |
{n} | The preceding expression must occur exactly n times. |
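A couple of quantifiers in action with grep -E (extended regular expressions); the sample data is invented:
> you@host > printf 'ho\nhoo\nhooo\n' | grep -E 'ho{2,3}$'
> hoo
> hooo
> you@host > printf 'color\ncolour\n' | grep -E 'colou?r'
> color
> colour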
# Grouping
Expressions can also be grouped between parentheses. Some tools store such a group and thus allow it to be reused in the regular expression or in the replacement text via \1. Up to nine patterns can be stored this way (\1, \2 ... \9). For example, with
> s/\(string1\) \(string2\) \(string3\)/\3 \2 \1/g
you achieve that in a text file every occurrence of
> string1 string2 string3
is changed to
> string3 string2 string1
So \1 always refers to the first pair of parentheses, \2 to the second, and so on.
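With grep and basic regular expressions, such a back-reference might look like this (input invented):
> you@host > printf 'abcabc\nabcdef\n' | grep '\(abc\)\1'
> abcabc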
# Alternatives
Alternatives can of course be defined as well. The character | is used for this. For example:
> (asdf|ASDF)
means that »asdf« or »ASDF« is searched for, but not »AsDf« or »asdF«.
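Tried out directly with grep -E (input invented):
> you@host > printf 'asdf\nASDF\nAsDf\n' | grep -E '(asdf|ASDF)'
> asdf
> ASDF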
# Special characters
Since many tools work directly on text files, you will usually also find the following special characters defined (see Table 11.3):
Special character | Meaning |
| --- | --- |
^ | Stands for the beginning of the line. |
$ | Stands for the end of the line. |
\b | Stands for the empty string at the beginning or end of a word. |
\B | Stands for the empty string that does not form the beginning or end of a word. |
\< | Stands for the empty string at the beginning of a word. |
\> | Stands for the empty string at the end of a word. |
\d | digit |
\D | no digit |
\s | whitespace |
\S | no whitespace |
. | any character |
+ | The preceding expression at least once. |
* | The preceding expression any number of times. |
? | The preceding expression zero times or once. |
Each of these metacharacters can also be escaped (masked) with the backslash (\).
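For instance, to search for a literal dot or asterisk, you escape the metacharacter (a small sketch):
> you@host > printf '3.14\n3514\n' | grep '3\.14'
> 3.14
> you@host > printf 'a*b\naXb\n' | grep 'a\*b'
> a*b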
# Summary
Theory is all very well, and yet much more could be written about regular expressions. So that what has been described here does not remain a closed book to you, the next section will come back to it using grep. The knowledge of regular expressions will also help you in the sections on sed and awk. You can find more about regular expressions on the Internet at http://www.lrz-muenchen.de/services/schulung/unterlagen/regul/. If you even want to know how to program regular expressions yourself, you will find a short section on this in »C von A bis Z«, which you can also read online on my website at www.pronix.de.
# Chapter 12 The stream editor sed
sed is a relatively old tool, but it is still widely used, for example to manipulate the lines of a file or of a data stream. sed is particularly often and gladly used for searching and replacing character sequences.
## 12.1 How sed works and how it is used
Before you start working with the sed editor, I would first like to describe the basic way the sed command works. sed was developed back in the 1970s by <NAME> at Bell Labs. The name sed stands for Stream EDitor. Of course, sed is not an editor in the sense you may know from »vim« or »Emacs«.
By now every Linux and UNIX distribution has its own sed version. Linux usually uses GNU sed, which is an extended variant of the original sed. For example, GNU sed offers »in-place editing«, which allows the input file to be changed directly. Solaris, in turn, ships three versions of sed: one derived from AT&T, one from BSD UNIX and another that conforms to the XPG4 standard (of course, GNU sed can also be used on Solaris and BSD). In general the individual versions differ in their range of functions, which should not be a problem here, since we mainly cover the basic functions of sed that every sed should know.
sed receives its commands either from a file or from the command line. Usually, though, the data comes from a file. sed reads this file (or the input for the command given on the command line) line by line from standard input, copies each line into a buffer created for this purpose (the so-called pattern space), processes this buffer according to a rule you specify and then writes the modified buffer back to standard output. The buffer is a copy of each line, which means that from then on every sed command operates on this buffer. The source file always remains unchanged. It should have become clear that sed works line by line — a line is read in, processed and written out again. Of course, this whole procedure takes place in a loop (see Figure 12.1).
Note   The fact that the source file remains unchanged causes confusion time and again. But this is actually only logical, because it is simply not possible to read from a file and write to it at the same time. So you have to save the output in a temporary file, copy it back and delete the temporary file again.
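A sketch of this round trip via a temporary file; with GNU sed the same effect can be achieved directly with the -i option (the file name data.txt is only a placeholder):
> # classic way: write to a temporary file, then copy it back
> sed 's/old/new/g' data.txt > data.tmp && mv data.tmp data.txt
> # GNU sed only: edit "in place"
> sed -i 's/old/new/g' data.txt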
Besides the pattern space buffer there is a second buffer, the hold space. You can use it with special commands to exchange data between the pattern space and the hold space. The buffer size (that is, the number of characters that can be held per line) is unlimited in modern sed versions. Older versions often still have a limit of 8192 characters.
The syntax of sed is:
> sed [-option] command [file] ... [file_n] sed -f commandfile [file] ... [file_n]
So that the commands do not collide with the industrious shell and its efforts to interpret metacharacters first, you should put the sed commands between single quotes.
> sed [-option] 'command' [file] ... [file_n]
Of course you can also use several commands at once. To do so, you either separate the individual commands with a semicolon or you write one command per line.
> sed 'command1 ; command2 ; command3' file sed 'command1 > command2 > command3' file
As an example, here are three different sed invocations that all lead to the same result:
> you@host > sed -n '/Scott/p' mrolympia.dat Larry Scott USA 1965 1966 you@host > sed -n '/Scott/p' < mrolympia.dat Larry Scott USA 1965 1966 you@host > cat mrolympia.dat | sed -n '/Scott/p' Larry Scott USA 1965 1966
sed always reads from standard input and by default writes to standard output. With the last parameter you can specify one or more file names from which sed is to read its input stream. As you could see in the example, you can also use the redirection characters (<, >, |) here. The exact meaning of the sed commands used in this example will be explained in the following sections.
### 12.1.1 Where to send the output?
When you use the sed editor, you will rarely want the output to go to the screen. There are three common ways of directing sed's output to the right place. Here we first cover the most common methods of using sed without going into the individual options and features of sed.
Note   sed writes its output to the standard output channel; error messages go to the standard error channel. In case of an error, the return value of sed is non-zero (usually greater than 0).
# Editing several files
If you want or need to edit several files at once with sed, it can make sense to use a command file instead of a command on the command line — you may also happily call it a sed script. The advantage: you can reuse this command file at any time. So instead of editing a file like this
> sed -n 'command ; command ; command' file > targetfile
you can use a sed script:
> sed -f ased_script.sed file > targetfile
This way you can in fact edit several files conveniently with one sed script. In theory you could even use ready-made sed scripts without having any knowledge of sed at all (which is not really advisable, though). How to write your own sed scripts is explained in Section 12.5. Usually you will apply the sed script to several files at once inside a shell script. You can do that, for example, like this:
> # all files in the current directory for file in * do sed -f ased_script.sed $file > temp # attention, this overwrites the original!!! mv temp $file done
# Processing the output of a command directly
In the example above, several existing files were modified. If you work with a little more foresight, you can sometimes skip this step by making sure from the outset that the file does not have to be edited at all (of course this is not always possible). This can again be done with a pipe:
> command | sed -f ased_script.sed > file command | sed -n 'command' > file shellscript | sed -f ased_script.sed > file shellscript | sed -n 'command' > file
Here a process (a command or a shell script) writes its data to standard output and through the pipe. On the other side of the pipe, sed waits on standard input for this data and processes it line by line with the sed script. The output is written to a file using a redirection.
# Processing a file directly
Of course you can also use sed to extract text from a file, take it apart and save it to a new target file without changing the original file.
> sed -n '/a_word/p' file > targetfile
This makes sed extract all lines containing the word »a_word« from file and write these lines to the target file.
With shell programming you hold a trump card that lets you solve almost any problem. Nevertheless, you sometimes reach a point where the solution is no longer simple and becomes rather cumbersome. In such cases it is often advisable to switch to other tools or — if you have the skills — to write such a tool yourself (in C, for example). However, you often have neither the time nor the inclination to reinvent the wheel. So when you hit the limits of the shell, switching to other »tools« such as awk or Perl makes sense.
## 13.1 Introduction to and basics of awk
I will not deny it: with Perl you would have the ultimate weapon for (almost) all problems, but the familiarization and learning time is again relatively long if you have to start from scratch (even though the learning curve with Perl is quite steep). With awk, on the other hand, you get a »smaller« but quickly learnable interpreter. To put it briefly: 50 pages are enough for an introduction to awk, whereas for Perl that amount would be a joke.
At the beginning of its development, the main tasks of awk resembled those of sed. awk was developed as a program meant to serve as a report generator. Within a short time, however, awk evolved into a real programming language. The syntax of awk resembles that of C; you could almost think you were looking at a C interpreter. Compared to C, however, awk has the advantage that you do not have to deal with all of C's uncomfortable features.
So in its early days awk was used as a program that reads files line by line and splits them into individual fields (or words). From this a user-defined output was produced — basically just like sed. However, awk was enormously extended within a short time. Today awk offers all the constructs used by (almost) every modern programming language: from simple variables, arrays and the various control structures up to functions (including user-defined ones).
### 13.1.1 History and versions of awk
The name awk comes from the initials of its developers (from 1977): Aho, Weinberger and Kernighan. By now, however, three main branches of awk versions exist: the original awk (also known as oawk, for old awk) from the aforementioned developers, the extended nawk (new awk) from 1985 and the GNU version gawk (GNU awk), which in turn is an extension of nawk. Linux, FreeBSD and NetBSD use gawk by default, and most UNIX variants (Solaris, HP-UX etc.) usually use nawk, although GNU awk can be installed there as well. The original awk is practically no longer used and there is no reason to use it any more, since the differences from nawk and gawk are considerable. For a start, the original awk lacks some keywords (delete, do, function, return), predefined functions, special variables and a few other things. In case you ever have to maintain an ancient machine, here is a list of what the original awk does not know or cannot do:
- The keywords delete, do, function and return.
- The operators ?, :, , and ^ in expressions.
- Multiple -f switches are not allowed.
But you really do not have to worry about this as long as you use nawk or gawk. It does seem sensible, though, to make sure the command is invoked the same way everywhere. Under Linux this is usually the case out of the box: thanks to a symbolic link, calling awk leads to gawk, for example.
> you@host > ls -l /usr/bin/awk lrwxrwxrwx 1 root root 8 2005-02-05 /usr/bin/awk -> /bin/awk you@host > ls -l /bin/awk lrwxrwxrwx 1 root root 4 2005-02-05 /bin/awk -> gawk
On various UNIX variants you are not redirected this conveniently. But there, too, you can set a symbolic link yourself:
> you@host > ln -s /bin/nawk /bin/awk
Now every call to awk leads to nawk. If you work on several systems, you at least do not have to think about how to invoke awk: you always type awk and automatically use the appropriate awk variant.
### 13.1.2 How awk works
awk waits for data from the input stream, which can come from standard input or from files. Usually the files given on the command line are opened one after the other. awk then works through them line by line until the end of the file or — if the data comes from standard input — until (Ctrl)+(D) (EOF) is pressed. While processing, each line is split into individual fields (or words). The separator is usually whatever is in the variable IFS (in awk this variable is called FS): a space and/or a tab character.
Then, as a rule, each line is compared with a pattern or regular expression, and when a match is found a certain action is performed — similar to sed, except that the actions here may be considerably more complex.
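A first small sketch of this pattern/action principle (mrolympia.dat is the sample file from the sed examples; /etc/passwd is assumed to exist):
> # print the first two fields (the name) of every line containing "USA"
> awk '/USA/ { print $1, $2 }' mrolympia.dat
> # change the field separator with -F and count the lines with the built-in NR
> awk -F: 'END { print NR " users" }' /etc/passwd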
# Chapter 14 Linux/UNIX command reference
How you use this chapter is up to you. You can either work through all the commands and, ideally, experiment with them a little, or you can come back here when the time comes (when you actually use a command). All commands are sorted by topic. One thing is certain: without knowledge of these commands you will not get very far in shell programming, or the path will often be more cumbersome than necessary.
The description of the individual commands is limited to the essentials and the most common usage. Where it seems useful, you will find a few examples of how the command is frequently used in practice. Commands that have already been used intensively in this book are only covered briefly (with a corresponding cross-reference). The descriptions of the individual tools are in no way a substitute for the countless man and info pages found on every Linux/UNIX machine. For in-depth research, the man and info pages are indispensable. In particular, the many options and switches of the individual commands can hardly be covered here for reasons of space.
Furthermore, it should be noted that many commands are partly operating-system- or distribution-specific. Other commands differ slightly here and there in functionality and in the use of their options. Only under Linux can it be said that all the commands mentioned here are definitely available and work as described.
Here is an overview of all the commands briefly described in this chapter (in alphabetical order).
Command | Meaning |
| --- | --- |
a2ps | Convert a text file to PostScript |
accept | Set a printer queue to accept jobs |
afio | A cpio with additional compression |
alias | Define short names (aliases) for commands |
apropos | Search the man pages for keywords |
arp | Output MAC addresses |
at | Run a command at a given point in time |
badblocks | Checks whether a storage medium contains defective sectors (OS-specific) |
basename | Returns the file-name part of a path name |
batch | Run a command at some later time |
bc | Calculator |
bg | Continue a stopped process in the background |
bzcat | Output bzip2-compressed files |
bzip2 bunzip2 | (De)compress files |
cal | Displays a calendar |
cancel | Cancel print jobs |
cat | Output file(s) one after the other |
cdrecord | Burn data onto a CD |
cd | Change directory |
cfdisk | Partition hard disks (OS-specific) |
chgrp | Change the group of files or directories |
cksum sum | Compute a checksum for a file |
chmod | Change the access permissions of files or directories |
chown | Change the owner of files or directories |
clear | Clear the screen |
cmp | Compare files with each other |
comm | Compare two sorted text files with each other |
compress uncompress | (De)compress files |
cp | Copy files |
cpio | Archive files and directories |
cron crontab | Run programs at fixed time intervals |
crypt | Encrypt files |
csplit | Split files (context-dependent) |
cut | Cut characters or fields out of files |
date | Date and time |
dd | Copy (and convert) data blocks between devices (low level); of course not only between devices |
dd_rescue | Copy (and convert) data blocks fault-tolerantly (low level) |
df | Find out how much space the file systems use |
diff | Compares two files |
diff3 | Compares three files |
dig | Query DNS servers |
dircmp | Compare directories recursively |
dirname | Return the directory part of a path name |
disable | Disable a printer |
dos2unix | Convert files from DOS to UNIX format |
du | Determine the size of a directory tree |
dumpe2fs | Displays information about an ext2/ext3 file system (OS-specific) |
dump | Full backup of a file system |
dvips | Convert DVI files to PostScript |
e2fsck | Repairs an ext2/ext3 file system |
enable | Enable a printer |
enscript | Convert a text file to PostScript |
exit | End a session |
expand | Convert tabs into spaces |
fdformat | Formats a floppy disk |
fdisk | Partition hard disks (used differently on different OSes) |
fg | Continue a stopped process in the foreground |
file | Determine the file type |
find | Search for files and directories |
finger | Query information about other users |
fold | Simple formatting of files |
free | Display available memory (RAM and swap) (OS-specific) |
fsck | Repair and check file systems (used differently on different OSes) |
ftp | Transfer files from and to another host |
groupadd | Create a new group (distribution-dependent) |
groupdel | Delete a group (distribution-dependent) |
groupmod | Change the group ID and/or name (distribution-dependent) |
groups | Output group membership |
growisofs | Front end for mkisofs for burning DVDs |
gs | Convert PostScript and PDF files |
gzip gunzip | (De)compress files |
halt | Terminate all running processes (shortcut for shutdown; the arguments differ between OSes) |
hd | Output a file in hexadecimal or octal form |
head | Output the beginning of a file |
hostname | Output the name of the host |
html2ps | Convert HTML files to PostScript |
id | Determine your own user and group ID |
ifconfig | Configure network access |
info | GNU online manual |
jobs | Display stopped processes and processes running in the background |
killall | Send signals to processes by process name |
kill | Send signals to processes by process number |
last | Determine the login and logout times of a user |
less | Output file(s) page by page |
line | Read one line from standard input |
ln | Create links to a file |
logname | Display the name of the current user |
logout | End a session |
lpadmin | Administration program for the CUPS print spooler system |
lp | Output to the printer via the print spooler (depends on the printing system) |
lpc | Control printers (depends on the printing system) |
lphelp | Output the options of a printer (depends on the printing system) |
lpmove | Move a print job to another printer (depends on the printing system) |
lpq | Display the printer queue (depends on the printing system) |
lpr | Output files on the printer (depends on the printing system) |
lprm | Cancel print jobs in the queue (depends on the printing system) |
lpstat | Display the status of print jobs (depends on the printing system) |
ls | List the contents of a directory |
mail mailx | Write and receive (and evaluate) e-mails |
man | The traditional online help for Linux |
md5sum | Compute a checksum for a file (on FreeBSD the command is called md5) |
mesg | Allow or disallow messages to the terminal |
mkdir | Create a directory |
mkfs | Create a file system (OS-dependent; on FreeBSD it is called newfs, for example) |
mkisofs | Creates an ISO9660/JOLIET/HFS file system |
mkreiserfs | Create a ReiserFS file system (OS-dependent) |
mkswap | Set up a swap partition (OS-dependent) |
more | Output file(s) page by page |
mount | Mount a file system |
mt | Control a tape streamer |
mv | Move or rename file(s) and directories |
netstat | Status information about the network |
newgrp | Switch group membership temporarily |
nice | Run processes with a different priority |
nl | Output a file with line numbers |
nohup | Let processes continue running after a session ends |
notify | Notification of new incoming e-mail |
nslookup | Query DNS servers (use dig instead in the future) |
od | Output file(s) in hexadecimal or octal form |
pack unpack | (De)compress files |
parted | Create, move, grow or shrink partitions |
passwd | Change or set a password |
paste | Join files column by column |
patch | Apply patches (diffs) to files |
pcat | Output pack-compressed files |
pdf2ps | Convert PDF to PostScript |
ping | Test the connection to another host |
printenv | Display environment variables |
ps2ascii | Convert PostScript to ASCII |
ps2pdf | Convert PostScript to PDF |
psgrep | Find processes by name (used differently on different OSes) |
ps | Display process information (used differently on different OSes) |
pstree | Output the process hierarchy as a tree |
psutils | Package for editing PostScript files |
pwd | Output the current working directory |
rcp | Copy files across the network |
rdev | Modify the kernel image file (OS-specific) |
reboot | Terminate all running processes and restart the system (shortcut for shutdown -r) |
reiserfsck | Repair and check file systems (OS-dependent) |
reject | Block a queue for further jobs |
renice | Change the priority of running processes |
reset | Restore the character set of a terminal |
restore | Restore individual files or entire file systems |
rlogin | Log in on another host on the network |
rmail | Restricted form of mail |
rm | Delete files and directories |
rmdir | Delete an empty directory |
rsh | Run programs on a remote host |
rsync | Replicate files and directories |
ruptime | List all systems on the network |
rwho | List active users on the network |
init | Change the runlevel (System V systems) |
setterm | Change terminal settings |
shutdown | Shut down the system |
sleep | Suspend processes (put them to sleep) |
sort | Sort files |
split | Split files into several parts |
ssh | Start a secure shell on another host |
stty | Query or set terminal settings |
su | Change the user identity (without logging in again) |
sudo | Run a program as another user |
swapoff | Deactivate a swap file or partition (OS-dependent) |
swapon | Activate a swap file or partition (OS-dependent) |
swap | Display swap space (OS-dependent) |
sync | Flush all buffered write operations |
tac | Output files backwards (last line first) |
tail | Output the end of a file |
tar | Archive files and directories |
tee | Duplicate output |
time | Measure the execution time of processes |
top | Display processes by CPU usage (used differently on different OSes) |
touch | Create files or change their timestamps |
tput | Terminal and cursor control |
traceroute | Trace the route to a host |
tr | Replace characters or transform files |
tsort | Sort files topologically |
tty | Query the terminal name |
type | Classify commands or files |
ufsdump ufsrestore | Back up and restore entire file systems (OS-specific) |
umask | Change or output the file creation mask |
umount | Unmount a file system |
unalias | Delete an alias (short name) |
uname | Output the host name, architecture and OS |
uniq | Output duplicate lines only once |
unix2dos | Convert files from UNIX to DOS format |
uptime | Uptime of the machine |
useradd adduser | Create a new user (distribution-specific) |
userdel | Delete a user (distribution-specific) |
usermod | Change the properties of a user (distribution-specific) |
uuencode | Convert a binary file into a text file |
uudecode | Convert a text file back into a binary file |
wall | Send messages to all users |
wc | Count the characters, words and lines of a file |
whatis | Short description of a command |
whereis | Search for files within PATH |
whoami | Display the name of the current user |
who | Display logged-in users |
write | Send messages to other users |
zcat | Output gzip-compressed files |
zip unzip | (De)compress files |
zless | Output gzip-compressed files page by page |
zmore | Output gzip-compressed files page by page |
On the following pages, these commands are grouped by topic area.
# Chapter 15 Practice
In this chapter you will find many approaches to solving common tasks that come up relatively often in practice. Of course you should not expect complete projects here, but rather scripts that can be extended further or serve as inspiration for larger projects. It is simply not possible to pull the ultimate solution for every problem out of a hat — the individual requirements differ too much for that.
The individual recipes for practical use are divided into the following parts:
- Everyday scripts
- User and process management (monitoring)
- System monitoring
- Backups
- Creating startup scripts (init scripts)
- E-mail
- Log file analysis
- CGI scripts
## 15.1 Everyday solutions
### 15.1.1 Checking for alphabetic and numeric characters
A problem with many scripts that require user input is that a typo can quickly mess up things like a database key or a file name. Checking that keyboard input is valid is quite easy with sed. The following example checks whether the user has entered only letters and digits. All other characters such as whitespace, punctuation, special characters etc. are not allowed.
> #!/bin/sh # checkInput() : checks that the input consists only of # alphabetic and numeric characters # Returns 0 (=OK) if all characters are upper- or # lower-case letters or digits, # otherwise 1 is returned. # checkInput() { if [ -z "$1" ] then echo "Nothing was entered" exit 1 fi # remove all unwanted characters ... eingabeTmp="`echo $1 | sed -e 's/[^[:alnum:]]//g'`" # ... and then compare if [ "$eingabeTmp" != "$1" ] then return 1 else return 0 fi } # An example to test the checkInput function echo -n "Enter input: " read eingabe if ! checkInput "$eingabe" then echo "The input must consist of letters and digits only!" exit 1 else echo "The input is OK." fi
The script in action:
> you@host > ./checkInput Enter input: 1234asdf The input is OK. you@host > ./checkInput Enter input: asfd 1234 The input must consist of letters and digits only! you@host > ./checkInput Enter input: !"§$ The input must consist of letters and digits only!
The decisive part of this script is the sed line:
> eingabeTmp="`echo $1 | sed -e 's/[^[:alnum:]]//g'`"
After the command substitution, the variable eingabeTmp contains a string without any characters other than letters and digits (see the character class [:alnum:]). All other characters are simply removed from the original string, and in the next step the original string is compared with the newly created one. If both are still identical, the conditions are met and the input was fine. Otherwise, if the two strings differ, an »error« in the input has been detected.
### 15.1.2 Auf Integer überprüfenÂ
Leider kann man hierbei jetzt nicht mit [:digits:] in der sed-Zeile auf die Eingabe eines echten Integer prüfen, da zum einen ein negativer Wert eingegeben werden kann und es zum anderen auch minimale und maximale Grenzen des Größenbereichs gibt. Daher müssen Sie das Script bzw. die Funktion ein wenig anpassen, um auch diese Aspekte zu berücksichtigen. Hier das Script, welches die Eingabe einer Integerzahl überprüft:
> #!/bin/sh # checkInt() : Überprüft, ob ein echter # Integerwert eingegeben wurde # Gibt 0 (== OK) zurück, wenn es sich um einen gültigen # Integerwert handelt, ansonsten 1 # checkInt() { number="$1" # Mindestwert für einen Integer (ggf. anpassen) min=-2147483648 # maximaler Wert für einen Integer (ggf. anpassen) max=2147483647 if [ -z $number ] then echo "Es wurde nichts eingegeben" exit 1 fi # Es könnte ein negativer Wert sein ... if [ "${number%${number#?}}" = "-" ] then # es ist ein negativer Wert â erstes Zeichen ein "-" # das erste Zeichen nicht übergeben testinteger="${number#?}" else testinteger="$number" fi # Alle unerwünschten Zeichen außer Zahlen entfernen ... extract_nodigits="`echo $testinteger | \ sed 's/[[:digit:]]//g'`" # Ist jetzt noch was vorhanden if [ ! -z $extract_nodigits ] then echo "Kein numerisches Format!" return 1 fi # Mindestgrenze eingehalten ... if [ "$number" -lt "$min" ] then echo "Der Wert ist unter dem erlaubten Mindestwert : $min" return 1 fi # max. Grenze eingehalten if [ "$number" -gt "$max" ] then echo "Der Wert ist über dem erlaubten Maximalwert : $max" return 1 fi return 0 # Ok, es ist ein Integer } # Ein Beispiel zum Testen der Funktion checkInput # echo -n "Eingabe machen: " read eingabe if ! checkInt "$eingabe" then echo "Falsche Eingabe â Kein Integer" exit 1 else echo "Die Eingabe ist Ok." fi
The script in action:

> you@host > ./checkInt
> Enter input: 1234
> The input is OK.
> you@host > ./checkInt
> Enter input: -1234
> The input is OK.
> you@host > ./checkInt
> Enter input: -123412341234
> The value is below the allowed minimum value : -2147483648
> Invalid input - not an integer
> you@host > ./checkInt
> Enter input: 123412341234
> The value is above the allowed maximum value : 2147483647
> Invalid input - not an integer
### 15.1.3 echo with or without -n

If you have been annoyed up to now by the question of whether your shell understands the -n option for suppressing the line break after an echo output, or the escape character \c, and you do not want to rely on print or printf being available on the system, a simple function can help:
> #!/bin/sh
> # myecho() : portable echo without a line break
> myecho() {
>    # remove the newline, if there is one
>    echo "$*" | tr -d '\n'
> }
> # For testing ...
> #
> myecho "Enter input : "
> read eingabe
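Used this way, the prompt and the cursor stay on the same line no matter how echo behaves on the system. Assuming the script has been saved as myecho.sh and made executable (file name and sample input are arbitrary here), a session might look like this:

> you@host > ./myecho.sh
> Enter input : hello
> you@host >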
# Appendix A

## A.1 Shell builtin commands

Most of these commands have already been covered in this book, but it is often hard to keep track of which of them are builtin commands of a shell and which are external commands. So here is a short overview of the builtin commands of the individual shells.
| Command | Shell | Meaning |
| --- | --- | --- |
| : | sh, ksh, bash | Null command; if the syntax requires a command but nothing should be executed, use : |
| # | sh, ksh, bash | Introduces a comment |
| . | sh, ksh, bash | Dot command; executes a script in the current shell |
| alias | ksh, bash | Aliases; define a shortcut for a command |
| autoload | ksh | The autoload mechanism |
| bg | sh, ksh, bash | Put a stopped process into the background |
| bind | bash | Key bindings; assign keys to extra command-line functions |
| break | sh, ksh, bash | Leave a loop |
| builtin command | bash | Execute a builtin command even if an alias of the same name exists |
| cd | sh, ksh, bash | Change the working directory |
| command command | bash | Execute a command without the shell looking for a function of the same name |
| continue | sh, ksh, bash | Continue with the next loop iteration |
| declare | bash | Declare variables; like typeset |
| dirs | bash | Print the directory stack of previous working directories |
| echo | sh, ksh, bash | Print text to stdout |
| enable [-n] builtin | bash | Disables/enables a builtin command (-n disables) |
| eval args | sh, ksh, bash | Execute the arguments as a command |
| exec command | sh, ksh, bash | The command is executed by overlaying the shell (no new process is created) |
| exit | sh, ksh, bash | Terminates the current shell or the script |
| export | sh, ksh, bash | Exports shell variables, making them available to subshells |
| fc [-l] [-e] n | ksh | Shows the command history (-l) or executes the command with number n from the history (-e) |
| fg | sh, ksh, bash | Put a stopped process into the foreground |
| getopts | sh, ksh, bash | Parse the command-line options of a shell script |
| hash [-r] command | sh, ksh, bash | Internal hash table |
| help command | bash | Prints an info page for the given builtin command; without a command, all builtins are listed |
| history | bash | Shows a history of the most recently used commands |
| jobs | sh, ksh, bash | Information about all stopped or background processes |
| kill | sh, ksh, bash | Send signals to processes and list all signals (-l) |
| local var | bash | Define a variable as local to a function |
| logout | bash | Leave an interactive shell |
| newgrp group | sh, ksh, bash | Makes group the current group |
| popd | bash | Remove one or more directories from the directory stack |
| print | ksh | Print text to standard output |
| pushd dir | bash | Changes to the previous working directory or (if dir is given) to dir; dir is then stored on the directory stack |
| pwd | sh, ksh, bash | Prints the current working directory |
| read var | sh, ksh, bash | Reads a line from standard input and stores the value in var |
| readonly var | sh, ksh, bash | Marks a variable as read-only; a constant |
| return | sh, ksh, bash | Terminate a function |
| set | sh, ksh, bash | Set or display shell options |
| shift n | sh, ksh, bash | Shift the list of positional parameters/array n positions to the left |
| shopt | bash | A supplement to set |
| source | bash | Like the dot command; executes a script in the current shell |
| suspend | sh, ksh, bash | Suspend the current shell |
| test | sh, ksh, bash | Evaluate expressions |
| times | sh, ksh, bash | Show consumed CPU time (user and system mode) |
| trap command sig | sh, ksh, bash | Install a signal handler; catches signals and executes command |
| type command | sh, ksh, bash | Reports whether command is a function, an alias, a keyword or an external command |
| typeset | ksh, bash | Set attributes for variables and functions |
| ulimit | sh, ksh, bash | Sets or shows limits on the consumption of system resources |
| umask | sh, ksh, bash | Sets the file-creation mask that newly created files or directories receive |
| unalias | ksh, bash | Deletes an alias |
| unset var | sh, ksh, bash | Deletes variables or functions |
| wait pid | sh, ksh, bash | Waits for subshells to finish |
| whence | ksh | Classify a command (similar to type) |
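If you are unsure whether a particular name is a builtin of your shell or an external command, the type builtin listed above will tell you. A minimal sketch in the bash; the exact wording of the output varies between shells and systems, and ls may also be reported as an alias:

> you@host > type cd
> cd is a shell builtin
> you@host > type ls
> ls is /bin/ls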
## A.2 External commands

These are external commands, which are usually located in /bin or /usr/bin.
| Command | Meaning |
| --- | --- |
| sh | Bourne shell |
| ksh | Korn shell |
| csh | C shell |
| bash | Bourne-Again shell |
| tcsh | TC shell |
| rsh | A restricted shell |
| bc | Calculator |
| dc | Calculator |
| dialog | Colored dialog boxes |
| env | Print the shell environment |
| expr | Evaluate expressions |
| false | An exit status other than 0 (an alias in the Korn shell) |
| nice | Change the priority; as a normal user you can only lower it |
| nohup | Keep a job running after its shell is terminated (an alias in the Korn shell) |
| true | The exit status 0 (an alias in the Korn shell) |
| xargs | Construct a suitable command line |
# Acknowledgments
## Acknowledgments

Of course, no book without honoring the names of those who (knowingly or unknowingly) contributed to its success. First of all, special thanks go to the people closest to me, my wife and my son, who not "only" tolerate this work but also actively support it. A book requires a lot of space and free time; you made both possible for me.

Then there is Martin (Conrad), who, in addition to doing the technical editing of the book, also helps me with the website (www.pronix.de), or to put it bluntly, manages the website completely. "Manages" is putting it mildly: Martin has written a complete content management system (CMS) in C, which I am fortunately allowed to use without restriction. There are few people like Martin who do their work with such dedication and care without asking for anything in return. It only remains to hope that Martin will finally decide to put his incredible know-how down on paper.

My thanks also go to <NAME>-Lemoine of Galileo Press for the (as always) excellent support. This is now the third book under her direction, and it is not meant to be the last. I am already looking forward to further projects.

My thanks also go to all members of the www.pronix.de forum for their active participation and for the time they have given to help other people with their problems. I would particularly like to mention "broesel" alias <NAME>, "Hirogen2" alias <NAME>, "Patrick" alias "himself" and many other participants not mentioned here (which of course includes Martin again ;-)).

I would also like to thank <NAME> and <NAME> for the great web hosting and the service at www.speedpartner.de.

And last, but not least (I do not live from writing books alone), I must also thank my colleague Uwe Kapfer for his discretion and for keeping disruptions away.
<NAME>, Mering 2005
# 1.2 What is a shell?

## 1.2 What is a shell?

For some, a shell is nothing more than a better "command.com" from the MS-DOS world. Others refer to the shell merely as the command line, and for some the shell is the user interface par excellence. These opinions result from the fact that some use the shell only to run installation or configuration scripts, others use the command line more frequently for their work, and the last group uses the shell the way other users use graphical interfaces.

For today's user, a user interface consists of one or more windows that can be controlled with the mouse or keyboard. So how can one (or the author) claim that the shell is a user interface? Looked at more closely, a user interface is nothing more than an interface that communicates between the system's hardware (which the normal user does not like to deal with directly, and should not) and the operating system. That is putting it rather simply, of course, because in reality such communication usually takes place across several abstract layers. The user enters an instruction, which is interpreted and converted into an operating system call (also known as a system call or sys call). Then the system's response is awaited and a corresponding command is carried out, or, in the event of an error, an error message is printed.

Of course, the shell cannot simply be described as an ordinary process; it is far too powerful and flexible a program for that. Even the term "command line" is a little too weak. The term "command language" would be more fitting. And since the shell is just a program, it should be mentioned that it can be exchanged for another shell at any time. So there is not just the one shell, but several; more on this shortly.
Even if at first glance it seems that the graphical interface is the somewhat faster and more convenient way, this is often not the case. The more experienced the user, the more likely he or she is to reach the goal faster on the command line than the GUI-accustomed user.

Hooray, all that work was for nothing, we no longer need KDE and GNOME and they can be wiped off the disk! No, don't worry, I certainly do not want to talk you out of the graphical interface, quite the opposite. It is precisely a mix of graphical interface and command line that makes truly effective work possible. How else could you, for example, watch several shells at the same time for monitoring purposes, or simply use several shells at once?

To cut a long story short: a shell is a way of operating the system via the command line.

Historically interested users will probably wonder why the name "shell" was chosen, which means something like seashell or husk. The name is used under Linux/UNIX (actually UNIX came first, but we will not go into that history here ...) because a shell wraps itself around the operating system kernel like a shell. Put simply, the shell is, when it runs, a program independent of the operating system kernel. In other words, a plain, simple process. A look at the ps command (a command with which you can obtain information about running processes) confirms this:
> you@host > ps
>   PID TTY          TIME CMD
>  5890 pts/40   00:00:00 bash
>  5897 pts/40   00:00:00 ps
Here, for example, the bash is running as the shell; you will learn more about the various shells and about the output of ps a little later.
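If you want to check right away which shell you are currently working in, two quick checks are shown below; a minimal sketch, and the concrete output naturally depends on your system:

> you@host > echo $SHELL
> /bin/bash
> you@host > ps -p $$
>   PID TTY          TIME CMD
>  5890 pts/40   00:00:00 bash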
# 1.3 Main field of application

## 1.3 Main field of application

You are probably already aware that, as a command language or scripting language, a shell plays an important role under Linux/UNIX. Important things such as system control and the various configurations are handled via the shell (and the shell scripts, the procedures, created with it).

Above all, shell scripts are also used to simplify daily recurring tasks and to automate processes. This includes the main areas of application such as (system) monitoring of servers, evaluating certain files (log files, protocols, analyses, statistics, filter programs, etc.) and, in particular, system administration. All things that almost every user has to deal with (whether consciously or not).

There can be quite plausible reasons to create a shell script. Even with the simplest command-line entries that you perform all the time, you can make your life easier by combining these commands and simply writing a "new" command or, to put it more precisely, a shell script.

### 1.3.1 What is a shell script?

You have surely entered a command or a chain of commands in the shell at one time or another, for example:
> you@host > ls /usr/include | sort | less
> af_vfs.h
> aio.h
> aliases.h
> alloca.h
> ansidecl.h
> a.out.h
> argp.h
> argz.h
> ...
This command chain in fact already constitutes a program or script. Here, for example, you direct the output of ls (with which you can list the contents of a directory), i.e. all files contained in the directory /usr/include, through a pipe to the sort command. sort sorts the directory listing of /usr/include, whose data comes from the ls command, lexicographically. The output of sort is in turn passed through another pipe to less, the pager with which you can conveniently scroll the output up and down. To sum up, you could now save this command chain in a file (a script) and start it again at any time by calling the file name (this will be dealt with in more detail later; a minimal sketch follows the list below). Purely theoretically, you would already have created a shell script. If that were all there was to it, you would not have to read a whole book and could simply fall back on the alias command or some other more convenient means. For such simple tasks you do not have to write a shell script. A shell script becomes necessary, however, when you, among other things,

  * want to execute command sequences several times, or
  * need input from the user.
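As announced above, here is a minimal sketch of how the command chain could be stored as a script; the file name lssort.sh is an arbitrary choice:

> #!/bin/sh
> # lssort.sh : list /usr/include sorted, page by page
> ls /usr/include | sort | less

After making the file executable with chmod +x lssort.sh, you can start it at any time with ./lssort.sh; both steps are explained in detail later in the book.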
For these and other tasks, a shell also provides additional constructs similar to those found in other programming languages.

Of course, you must not make the mistake of equating shell script programming with other higher programming languages. Elaborate and fast graphics, time-critical applications and extensive data management (keyword: database) will continue to be written in programming languages such as C or C++.

A shell script is therefore nothing more than a (text) file put together from shell commands, in which you decide when, why and how which command or command sequence is executed.

Nevertheless, the longer a shell script becomes and the more complex its tasks, the more cryptic and complicated it can look at first glance. This is only mentioned in case you plan to work through the book from back to front.

### 1.3.2 Comparison with other languages

Comparing programming languages has always been a rather mad endeavor. Every programming language has its advantages and disadvantages and is ultimately always a means of accomplishing a particular task. So the choice of programming language depends on the task to be accomplished.

At least I can say without hesitation that with a shell script you can handle complex tasks for which other programming languages are hardly suited, if at all. In no other language do you have the possibility of using the most diverse Linux/UNIX commands and combining them just as you need. Simplest example: output the contents of a directory in lexicographic order. With a shell script this is done with a simple ls | sort; if you wrote the same thing in C or C++, it would be like using a sledgehammer to crack a nut.

On the other hand, shell script programs also have their limits. This applies in particular to applications in which an enormous amount of data has to be processed at once. If this work is then also carried out inside loops, things can become quite slow. On a single-user machine you will rarely run into such bottlenecks, but as soon as you write your scripts in a network, for example on a server with many other users (say, at a web hoster with SSH), you should consider whether another programming language might be better suited. Perl is frequently mentioned as the next best alternative, and one that is at least as flexible.

The advantage of Perl is that often no commands or external tools have to be started, because Perl contains pretty much everything you need. And anyone who does not find what they are looking for in Perl's basic functions can still take a look at the now almost unmanageable archive of CPAN modules (CPAN = Comprehensive Perl Archive Network).

However, scripting languages are always somewhat slower. The reason is that they have to be interpreted. If you write a shell script, for example, it is analyzed line by line as it is executed. Among other things, this includes a check of the syntax and numerous substitutions.

When performance matters or time-critical data has to be processed, one usually falls back on a "real" programming language such as C or C++. Strictly speaking, the term "real" programming language (also called a high-level language) does not even exist. It usually refers to languages that are compiled and no longer interpreted. Compiled languages have the advantage that the processing of data takes place in and with a single (binary) program.
# 1.4 Command, program or shell script?
## 1.4 Command, program or shell script?

As long as you have nothing to do with programming, you simply use everything that is somehow executable (more or less). At some point you catch fire and take a peek at one language or another, and then the question arises: what is it now, a command, a program, a script or what? The differences are not at all obvious to the layman. Since a shell works as a command interpreter, it knows several kinds of commands, which will be explained to you in a little more detail here.

### 1.4.1 The shell's own commands (builtin commands)

These are commands that are implemented directly in the shell and are also executed by the shell itself, without another program having to be started for them. The kind and number of these builtins depend on the command interpreter used (which is usually the shell itself). Thus one shell may know a command that another does not. Some of these commands even have to be part of a shell because they use global variables (such as cd with the shell variables PWD and OLDPWD). Other commands, in turn, are executed in the shell for performance reasons.

### 1.4.2 Aliases in the shell

An alias is another name for a command or a chain of commands. You can describe such an alias with a self-defined abbreviation, which the shell replaces with the full text during substitution. Such aliases are useful for somewhat longer command lines. When an alias is called, the corresponding command or chain of commands is executed with all the specified options and parameters.
> you@host> alias gohome="cd ~"
> you@host> cd /usr
> you@host> pwd
> /usr
> you@host> gohome
> you@host> pwd
> /home/you
In the example, the alias gohome was defined. The task of the alias was specified between the quotation marks, in this case changing to the home directory. Since an alias can in turn contain another alias, the shells repeat the substitutions until the last alias has been resolved. For example:
> you@host> alias meinheim="gohome"
> you@host> cd /usr
> you@host> pwd
> /usr
> you@host> meinheim
> you@host> pwd
> /home/you
### 1.4.3 Functions in the shell

These are not functions permanently built into the shell (like the builtins), but rather functions that have previously been defined for this shell. Such functions usually group several commands together as a separate routine.
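A minimal sketch of such a function; the name and the commands it groups are chosen arbitrarily here. Once it has been defined in the current shell, it can be called just like a command:

> you@host> mygohome() { cd ~ ; pwd ; }
> you@host> cd /usr
> you@host> mygohome
> /home/you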
### 1.4.4 Shell scripts (shell procedures)

This is your future main field of work. A shell script usually has the execute permission set. To run a shell script, a further (sub)shell is started, which interprets the script and executes the commands it contains.
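That a separate subshell is used can be seen, for example, when a script changes the directory: the change only affects the subshell, not the shell you are working in. A minimal sketch (the script name and its contents are arbitrary):

> you@host> cat whereami.sh
> #!/bin/sh
> cd /tmp
> pwd
> you@host> chmod +x whereami.sh
> you@host> ./whereami.sh
> /tmp
> you@host> pwd
> /home/you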
### 1.4.5 Programs (binary)

Real executable programs (created in C, for example) are stored on the hard disk in binary format. To start such a program, the shell creates a new process. All other commands are executed within the shell's own process. By default, the shell waits for the process to finish.
# 1.5 The variety of shells

## 1.5 The variety of shells

It has already been mentioned that a shell, like any other application, can be replaced or exchanged for another shell. Whichever shell the user works with, every shell has the same origin: the mother (or father) of all shells is the Bourne shell (sh), which was developed by <NAME> in 1978 and immediately established itself as the standard shell in the UNIX world. Some time later, another shell called the C shell (csh) was developed for the Berkeley UNIX system; it offered considerably more convenience than the Bourne shell. Unfortunately, the syntax of the C shell was completely incompatible with the Bourne shell and more closely resembled the syntax of the C programming language.

All further shells were modeled on these two variants, the Bourne shell and the C shell. From the Bourne shell branch, the somewhat better-known representatives that emerged are the shell variants ksh (Korn shell), bash (Bourne-Again shell), zsh (Z shell), ash (A shell) and rbash or rzsh (restricted shells). From the C shell camp, on the other hand, only tcsh (TC shell) appeared, as an extension of csh.

Note   The Bourne shell (sh) does not really exist on a Linux system and is usually completely replaced there by the bash (Bourne-Again shell). So if you start the Bourne shell under Linux, the call leads directly (as a symbolic link) to the bash.
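You can check this yourself: on many Linux distributions a look at /bin/sh reveals a symbolic link. A sketch only; the exact output and the link target differ between distributions (on some systems /bin/sh points to dash or another shell rather than to bash):

> you@host > ls -l /bin/sh
> lrwxrwxrwx 1 root root ... /bin/sh -> bash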
### 1.5.1 ksh (Korn shell)

The Korn shell is an extension of the Bourne shell. Compared to the Bourne shell, interactive features in particular, and also some functionality of the C shell, were added. The Korn shell has been part of a standard UNIX system since UNIX System V.4 and is usually also the standard shell in the UNIX world. What was said about the Bourne shell with regard to Linux also applies to the Korn shell: under Linux, a call to the Korn shell is likewise answered by the bash. Just in case, there is also a version of the Korn shell for Linux, a public domain Korn shell (pdksh), which is included with most distributions.

### 1.5.2 bash (Bourne-Again shell)

The bash is practically the standard shell in the Linux world and by now generally the most widely used shell. It is essentially a free re-implementation of the Bourne and Korn shells.

### 1.5.3 zsh (Z shell)

The Z shell is an extension of the Bourne shell, the Korn shell and the bash. A shell with that much functionality really ought to establish itself as a kind of standard shell!? That this is not the case, and probably never will be, is most likely due to the lack of clarity of its functionality. One has to ask what the point of such a shell is.

### 1.5.4 ash (A shell)

The A shell is a shell similar to the Bourne shell, but with some extensions. The A shell is one of the more exotic species and is not considered particularly comfortable either. The reason I mention it anyway is the small amount of storage space (60 KB) it requires (even if that is no longer an issue nowadays). Thanks to this small size, the ash can be used wherever storage space has to be saved. This can be, for example, when creating an emergency diskette with which you want to boot your Linux/UNIX system, or when you are looking for a shell to load into the initial RAM disk in the initrd procedure. Even in a heavily locked-down environment (e.g. with chroot), the ash gets you quite far in local access mode.

### 1.5.5 rbash, rzsh (restricted shells)

These are "extended" shell variants of the bash and the zsh. However, the extension here does not refer to the range of functions, but rather to certain restrictions on the user's rights and possibilities. These restrictions can be set specifically for each user by the system administrator. This is particularly useful for access by outsiders.

### 1.5.6 tcsh (TC shell)

This is merely an improved and slightly extended version of the C shell (csh), fully compatible with it. The only real extension is the automatic input completion as found in the "TENEX" operating system (hence TC shell = TENEX C shell). Like the dinosaurs, however, TENEX is already extinct.

### 1.5.7 Which shell variant is used in this book?

You now know that there are two lines of shell programming: the Bourne shell family and the C shell family. In today's shell world, the Bourne shell family is the preferred one; more precisely, under UNIX the Korn shell is used as the standard shell and under Linux the bash. With FreeBSD, opinions often differ: there sh is usually the default for the user and csh for the superuser, but this can be changed without any problem. The C shell family, however, is losing more and more ground to the Bourne family because of some weaknesses (for example, some important programming constructs are missing).

This book therefore describes the shell variants Bourne shell, Korn shell and bash. The Bourne shell because most boot scripts (init scripts) are still written with it. The Korn shell because it is effectively the standard shell under UNIX and countless scripts are still being written for it. And the bash because, on the one hand, it is the standard shell under Linux and, on the other hand, it concerns the majority of readers (since probably very few readers have access to a commercial UNIX system). Since the differences between these three shells are not very great, all three variants are treated in parallel; wherever there is a notable difference between one shell and another, it will be pointed out accordingly. As a beginner, you do not need to worry about this for now. Just learn shell programming!

Note   Please forgive me for constantly abbreviating the Bourne-Again shell as bash while frequently not doing the same for the Korn or Bourne shell, but bash is simply the more common term in this case.
### 1.5.8 rsh and ssh

# 1.6 Operating systems
## 1.6 Operating systems

Shell script programming is available to you under Linux and many UNIX systems (HP-UX, Solaris, AIX, IRIX, etc.). Even under MS Windows you can use most shells in the Cygwin environment or even with the SFU (Services for UNIX) software package provided by Microsoft itself. In general, you do not have to worry too much about this, since there are rarely problems with the shell's own commands. However, there are some differences here and there when system-specific commands are used that do not belong to the shell. Anyone who is used to installing software with rpm and also uses it in a shell script is likely to run into trouble if that script is executed on a system that has no rpm (and where, for example, installp is used instead). It is similar with some of the options that can be used with the commands. Here, too, it can happen that one option or another does exist on other systems but has to be specified differently.

Although these small differences play only a minor role in this book, they have to be mentioned at this point. And it is no secret that you will rarely find a "perfect" solution. If such a case occurs, you have to adapt the command in your script to your circumstances (your system). Wherever such a case arises, I will point it out to you.
# 1.7 Crash course: basic use of the command line

## 1.7 Crash course: basic use of the command line

Some readers will probably already be practiced in using the shell's command line. Nevertheless, I want to give a small "crash course" on using the command line of a shell here, naturally limited to the essentials. Many of these commands are used again and again for demonstration purposes in the course of the chapters. More commands, and possibly more about the (most important) options, can be found in the "Linux-UNIX command reference" at the end of the book. In this crash course I restrict myself to the basic commands and to working with files and directories.

### 1.7.1 Basic commands

To execute a command at all, the (ENTER) key must be pressed after typing it. Only then does the Linux/UNIX system make the command do whatever it is supposed to do.

Who is logged in and who am I: who and whoami

If you want to know who is logged in on the system, you can use the who command.
you@host > who
rot      tty2     Jan 30 14:14
tot      :0       Jan 30 15:02 (console)
you      tty1     Jan 30 11:14
Here the users "rot", "tot" and "you" are logged in. In addition to the user name, the numbers of the "tty" are also shown. "tty1" and "tty2" are both real terminals without a graphical interface. The user "tot", on the other hand, appears to be connected via a graphical interface. You will also find the date and time of the login here.

If you want to know which of them you are, you can query this with whoami (who am I), perhaps because you have somehow acquired root privileges.
you@host > whoami
you
you@host > su
Password: ********
# whoami
root
Outputting characters: echo

You will use the echo command quite often. With echo you can output anything that follows the command on the input line. That sounds rather trivial at first, but in practice (for example for the output of shell scripts, the output of shell variables, for debugging purposes, etc.) this command is indispensable.
you@host > echo Hello
Hello
you@host > echo Output several words
Output several words
you@host > echo
you@host > echo Multiple     spaces     are     ignored
Multiple spaces are ignored
you@host > echo $SHELL
/bin/bash
You have surely also noticed that echo ignores multiple spaces between the individual words. This is because under Linux/UNIX spaces merely serve to separate words from one another, because only the words matter. If more spaces or tabs (see IFS) are used as part of a command or a command chain, they are always ignored. If you nevertheless want to output several spaces with echo, you can place the text between double quotation marks:
you@host > echo "now it also works   with   multiple   spaces"
now it also works   with   multiple   spaces
### 1.7.2 Working with files

Thanks to the simple mechanisms under Linux/UNIX, handling files is child's play. Here are the most important commands that are used in the course of the chapter.

Listing files: ls

The ls command (short for list) is probably the most frequently used command in a shell. With it you can list the files located in a directory.
you@host > ls
bin      Documents  GNUstep  new_dir            public_html
Desktop  Dokumente  Mail     OpenOffice.org1.1  Shellbuch
You will surely notice that, in addition to ordinary files, ls also lists directories and special files (device files). I will go into this in more detail a few pages further on.

Outputting the contents of a file: cat

With the cat command (short for catenate, i.e. to chain or string together) you can output the contents of a file. As an argument, you pass the command the name of the file whose contents you want to read.
you@host > cat beispiel.txt
This text is in the text file "beispiel.txt"
This input can be ended with Ctrl+D
you@host > cat >> beispiel.txt
This text is appended to the end of "beispiel.txt".
you@host > cat beispiel.txt
This text is in the text file "beispiel.txt"
This input can be ended with Ctrl+D
This text is appended to the end of "beispiel.txt".
In the example you could see that cat can be used for more than just outputting files. Here, for example, the output redirection character >> was used to append the text you then typed in to the file beispiel.txt. You end the input via standard input with EOF (End Of File); the corresponding key combination is (Ctrl)+(D). You will learn more about input/output redirection in Section 1.10.

Counting lines, words and characters: wc

With the wc command (short for word count) you can count the number of lines, words and characters of a file.
you@host > wc beispiel.txt
  3  22 150 beispiel.txt
you@host > wc -l beispiel.txt
3 beispiel.txt
you@host > wc -w beispiel.txt
22 beispiel.txt
you@host > wc -c beispiel.txt
150 beispiel.txt
If, as in the first example, you call wc with only the file name as an argument, three numbers are printed. Their meaning is, in order: the number of lines, the number of words and the number of characters. Of course, by giving the corresponding option you can also determine the number of lines (-l = lines), words (-w = words) or characters (-c = characters) of a file individually.

Note   Almost every Linux/UNIX command offers such special options. The format is generally always the same: the desired option follows a minus sign. If you want to know what other options a given command offers, calling the command with the option --help often helps. More details on the individual options of a command can be found in the Linux-UNIX command reference (see Chapter 14) and, of course, in the man pages. The use of the man pages is also explained in the command reference.

Copying files: cp

You can copy files with the cp command (short for copy). As the first argument you pass this command the name of the file to be copied (the source file). The second argument is then the name of the copy you want to create (the target file).
you@host > cp beispiel.txt kopie_beispiel.txt
you@host > ls *.txt
beispiel.txt  kopie_beispiel.txt
Note, however, that if you use the name of an already existing file as the second argument (kopie_beispiel.txt in the example), that file is mercilessly overwritten.

Renaming or moving files: mv

You can rename a file with the mv command (short for move) or, if another directory is given as the destination, move it. The arguments follow the same principle as with the cp command. The first argument is the name of the file to be renamed or moved, and the second argument is either the new name of the file or, if desired, the directory to which mv should move the file.
you@host > ls
beispiel.txt  kopie_beispiel.txt
you@host > mv kopie_beispiel.txt beispiel.bak
you@host > ls
beispiel.bak  beispiel.txt
you@host > mv beispiel.bak /home/tot/ein_ordner
you@host > ls
beispiel.txt
you@host > ls /home/tot/ein_ordner
beispiel.bak
As with the cp command, mv overwrites the name or destination given as the second argument if something with the same name already exists.

Deleting files: rm

If you want to remove a file from your system, the rm command (short for remove) helps. As an argument to rm you give the file you want to delete. Of course, several files may also be given at once.
you@host > rm datei.txt
you@host > rm datei1.txt datei2.txt datei3.txt datei4.txt
you@host > rm beispiel.txt /home/tot/ein_ordner/beispiel.bak
Creating an empty file: touch

With this command you can create one or more files, given as arguments, with a size of 0. If a file of that name already exists, touch only changes the date of the last modification and leaves the contents of the file alone. I like to use the command for demonstration purposes in this book when I quickly want to create several files at once and thus set up one scenario or another without much effort.
you@host > touch datei1.txt
you@host > ls *.txt
datei1.txt
you@host > touch datei2.txt datei3.txt datei4.txt
you@host > ls *.txt
datei1.txt  datei2.txt  datei3.txt  datei4.txt
### 1.7.3 Working with directories

Besides handling files, handling and understanding directories is also part of the fundamentals. So here are the most important commands and terms.

Directory hierarchies and home

On every Linux/UNIX system, after logging in (or when starting an "xterm") you automatically find yourself in your home directory. Assuming, for example, that your home directory is called "you" and that this directory is a subdirectory of the /home directory (which is usually the case), your home directory is /home/you. We speak of a directory tree, at the top of which there is always a /: the root directory (not to be confused with the root user). There are no drive letters here, as is typical of Windows. You always specify the topmost main directory with a /, and you are welcome to list its contents with ls.
you@host > ls / bin dev home media opt root srv tmp var boot etc lib mnt proc sbin sys usr windows
So veranlassen Sie, dass ls den Inhalt vom Wurzelverzeichnis in alphabetischer Reihenfolge (hier von oben nach unten; dann von links nach rechts) ausgibt. Will man an dieser Stelle wissen, wo man sich gerade befindet (das aktuelle Arbeitsverzeichnis, also das Verzeichnis, welches ls ohne sonstige Angaben ausgeben würde), kann man den Befehl pwd (Abkürzung für print working director = gib das Arbeitsverzeichnis aus) verwenden.
you@host > pwd /home/you
Geben Sie also einen Pfadnamen beginnend mit einem Schrägstrich ( / ) an, wird versucht, eine Datei oder ein Verzeichnis vom Wurzelverzeichnis aus zu erreichen. Man spricht dabei von einer vollständigen oder absoluten Pfadangabe. Die weiteren Verzeichnisse, die Sie zur Datei oder zum Verzeichnis benötigen, werden mit einem / ohne Zwischenraum getrennt.
you@host > cat /home/you/Dokumente/brief.txt
Hier wird die Datei brief.txt, welche sich im Verzeichnis /home/tot/Dokumente befindet, ausgegeben. Befindet sich diese Datei allerdings in Ihrem Heimverzeichnis (in der Annahme, es lautet hier /home/you), dann müssen Sie keine vollständige Pfadangabe vom Wurzelverzeichnis aus vornehmen. Hier würde auch eine relative Pfadangabe genügen. Als »relativ« wird hier der Pfad vom aktuellen Arbeitsverzeichnis aus bezeichnet.
you@host > pwd /home/you you@host > cat Dokumente/brief.txt
Was hier angegeben wurde, wird auch als vollständige Dateiangabe bezeichnet, weil hier die Datei brief.txt direkt mitsamt Pfad (relativ zum aktuellen Arbeitsverzeichnis) angesprochen wird. Gibt man hingegen nur /home/you/Dokumente an, dann spricht man von einem (vollständigen) Pfadnamen.
Früher oder später werden Sie in Ihrem Inhaltsverzeichnis auch mal auf die Punkte . und .. stoßen. Die (doppelten) Punkte .. repräsentieren immer das Inhaltsverzeichnis, das eine Ebene höher liegt. Befinden Sie sich also nach dem Einloggen im Verzeichnis /home/you, bezieht sich .. auf das Inhaltsverzeichnis von you. So können Sie bspw. beim Wechseln des Verzeichnisses jederzeit mit Angabe von .. in das nächsthöhere Verzeichnis der Hierarchie wechseln. Selbst das höchste Verzeichnis, die Wurzel /, besitzt solche Einträge, nur dass es sich beim Wurzelverzeichnis um einen Eintrag auf sich selbst handelt â höher geht es eben nicht mehr.
you@host > pwd /home/you you@host > ls .. you you@host > ls /.. bin dev home media opt root srv tmp var boot etc lib mnt proc sbin sys usr windows
Der einfache Punkt hingegen verweist immer auf das aktuelle Inhaltsverzeichnis. So sind bspw. folgende Angaben absolut gleichwertig:
you@host > ls you@host > ls ./
Table 1.1 summarizes the terms that are important in connection with directories.

Table 1.1   Important terms in connection with directories

| Term | Meaning |
| --- | --- |
| Root directory / (engl. root) | Topmost directory in the directory tree |
| Current working directory | The directory you are in at the moment |
| Absolute (complete) path | A path name beginning with / (root); all further directories are likewise separated with a / |
| Relative path | The path is relative to the current working directory; no leading / is used |
| Full file name | Path including the file itself (e.g. /home/you/Dokumente/brief.txt) |
| Full path name | Path to the directory in which the file in question is located (e.g. /home/you/Dokumente) |
| .. (two dots) | Refers to the directory one level higher |
| . (one dot) | Refers to the current directory |
Changing directory – cd
You change directories with the command cd (short for change directory). As the argument you pass the directory you want to change into.
you@host > pwd
/home/you
you@host > cd Dokumente
you@host > pwd
/home/you/Dokumente
you@host > ls
Beispiele  FAQ.tmd  Liesmich.tmd
you@host > cd ..
you@host > pwd
/home/you
you@host > cd ../..
you@host > pwd
/
you@host > cd /home/you
you@host > pwd
/home/you
you@host > cd /
you@host > pwd
/
you@host > cd
you@host > pwd
/home/you
After the previous explanations this example should no longer cause you any problems. Only one special case is worth mentioning: cd on its own. If you use cd without any argument, you always return to your home directory, no matter where you currently are.
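Another convenience worth knowing (a small sketch; cd - is available in Bash, the Korn shell and other POSIX-compatible shells): cd - takes you back to the directory you were in before and prints it.
you@host > cd /tmp
you@host > cd -
/home/you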
Creating a directory – mkdir
You create a new directory with the command mkdir (short for make directory). As the argument this command expects the name of the directory to be created. Without a path name as the argument, the new directory is created in the current working directory.
you@host > mkdir Ordner1
you@host > pwd
/home/you
you@host > cd Ordner1
you@host > pwd
/home/you/Ordner1
you@host > cd ..
you@host > pwd
/home/you
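If you need a whole hierarchy of directories in one go, most mkdir implementations offer the option -p, which also creates any missing parent directories. A quick sketch (the directory names are only examples):
you@host > mkdir -p Projekte/2005/shell
you@host > ls Projekte/2005
shell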
Deleting a directory – rmdir
You can delete a directory again with the command rmdir (short for remove directory). To be allowed to actually delete a directory, however, it must be empty, i.e. apart from the entries . and .. it must not contain anything. rmdir is used just like mkdir, except that the argument is the directory to be deleted.
you@host > cd Ordner1
you@host > touch testfile.dat
you@host > cd ..
you@host > rmdir Ordner1
rmdir: Ordner1: Das Verzeichnis ist nicht leer
To delete all the files in a directory you have already met the command rm. With a single file that is no big deal, but if a directory contains several files, rm offers the option -r for recursive deletion.
you@host > rm -r Ordner1
Thanks to this option, all requested directories and the files they contain are deleted, including all subdirectories within them.
1.7.4 File and directory names
Linux/UNIX is very flexible when it comes to file and directory names. Almost anything is allowed, but there are a few characters that are better left out of a file name.
1.7.5 Device names
So far you have met the file types of regular files and directories. There is, however, a third kind of file: device files (also called special files). Via device files, programs can access the hardware components of the system with the help of the kernel. Of course these are not files in the usual sense, but from the programs' point of view they are treated like ordinary files. You can read from them, write to them or use them for special purposes – everything you can do with a file (and more). Device files give you a simple way to access system resources such as the screen, the printer, the floppy drive or the sound card without having to know how the device actually works.
On Linux/UNIX systems you will almost always find the device files in the directory /dev. There is a corresponding entry for every device. /dev/ttyS0, for example, stands for the first serial port, /dev/hda1 is the first partition of the first IDE hard disk. Besides real devices there are also so-called pseudo device files in /dev, for example the best-known representative /dev/null, which is often used as a data sink (more on that later).
When using device file names in your shell scripts you should be careful, however, and if possible leave this input to the user (for example via a configuration file). The reason is that many Linux/UNIX systems use different device names.
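A minimal sketch of what this can look like in practice (the variable name and the device path are only examples and will differ from system to system):
#!/bin/sh
# sketch: keep the device name in one place at the top of the script
# so that users can adapt it to their own system
DEVICE=/dev/fd0        # example value; adjust it or read it from a config file
echo "Using device: $DEVICE"
This way only a single line (or a small configuration file) has to be changed when the script is moved to a system with different device names.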
Note   Of course much more could be written about this; if needed, my book »Linux-Unix-Programmierung« is recommended, which you can also find online at www.pronix.de. There the device files are covered in somewhat more detail in a chapter of their own.
1.7.6 File attributes
Now that you know that Linux/UNIX distinguishes between several kinds of files – six in total; not mentioned so far were sockets, pipes (named pipes and FIFOs) and links (soft links and hard links) – you should definitely also look at the file attributes. Especially as a budding or already practising administrator you frequently have to deal with unauthorized access to certain files, directories or device files. In your shell scripts, too, you will now and then have to evaluate the file attributes and react accordingly.
The easiest way to get this information is the command ls with the option -l (for long). It gives you a rather generous amount of information about the individual files and directories.
you@host > ls -l
-rw-r--r-- 1 you users 9689 2003-12-04 15:53 datei.dat
Note   Keep in mind that the output may differ between operating systems.
Now to the individual parts of the output of ls -l, listed from left to right.
File type
At the very left, in the first character (here a minus sign -), the file type is given. The following values can appear here (see Table 1.2).

Table 1.2   Possible values for the file type

| Character | Meaning (file type) |
| --- | --- |
| - | Regular file |
| d | Directory (d = directory) |
| p | Named pipe; a kind of buffer file, a pipe file |
| c | (c = character oriented) a character-oriented device file |
| b | (b = block oriented) a block-oriented device file |
| s | (s = socket) a socket (more precisely, a UNIX domain socket) |
| l | Symbolic link |
Access rights
The next nine characters describe the access rights for the owner, the group and all other users; the possible entries are listed in Table 1.3.

Table 1.3   Access rights of the owner, the group, and all others

| Shown by ls | Meaning |
| --- | --- |
| [r--------] | read (user; read permission for the owner) |
| [-w-------] | write (user; write permission for the owner) |
| [--x------] | execute (user; execute permission for the owner) |
| [rwx------] | read, write, execute (user; read, write and execute permission for the owner) |
| [---r-----] | read (group; read permission for the group) |
| [----w----] | write (group; write permission for the group) |
| [-----x---] | execute (group; execute permission for the group) |
| [---rwx---] | read, write, execute (group; read, write and execute permission for the group) |
| [------r--] | read (other; read permission for all other users) |
| [-------w-] | write (other; write permission for all other users) |
| [--------x] | execute (other; execute permission for all other users) |
| [------rwx] | read, write, execute (other; read, write and execute permission for all other users) |
If, for example, a file has the rights rw-r--r--, this means that the owner (rw-) of the file may both read (r) and write (w) it, but not execute it (the x is missing). The members of the group (r--) and everyone else (r--), on the other hand, may only read this file.
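You change these access rights with the command chmod, which you will encounter again later when making scripts executable. A small sketch, assuming the file datei.dat from the ls -l output above:
you@host > chmod g+w datei.dat
you@host > ls -l datei.dat
-rw-rw-r-- 1 you users 9689 2003-12-04 15:53 datei.dat
Here g+w adds the write permission for the group; g-w would take it away again.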
The rest
Next you find the link count of the file, i.e. the number of links (references) that exist for this file. In this case there are no further references to the file, since a »1« appears here. Then comes the user name of the file's owner (here »you«), followed by the name of the group (here »users«), which is set by the owner of the file. The next values are the length of the file in bytes (here 9689 bytes) and the date of the last modification of the file together with the time (2003-12-04 15:53). At the end you find the name of the file again (here datei.dat).
Note   Despite this small introduction, many basic aspects of Linux/UNIX could not be covered. If this book really is your first contact with Linux/UNIX, consulting additional literature would be a good idea.
1.8 Writing and running shell scripts
Here you will find a guide to how you usually proceed to write and run your own shell scripts.
1.8.1 The editor
At the beginning there is always the choice of tool. For shell programming the simplest editor is enough. Which editor you use often depends on the use case. Even though I am writing a book on shell script programming here, I admit that I am not a big fan of the editors »vi« and »emacs«. Whenever possible I use an editor with a graphical interface such as »Kate« under KDE, »Gedit« on the GNOME desktop or »xemacs« on an X11 desktop.
Nevertheless, I did not shy away from learning the somewhat idiosyncratic handling of the editors that are available practically everywhere, such as »vi«, »emacs«, »joe« or »pico«. The reason is simple: as a system administrator or webmaster (with SSH access) you do not always have a graphical interface at your disposal. Often you then sit in front of a machine with nothing but a bare shell. If you do not master at least the basics of (at least) »vi«, you will look pretty old. Usually there is still the option of remote copying with the command scp, but if you make a typo there, for instance, this procedure is rather inefficient in the long run. In the end, the decision is yours.
1.8.2 The name of the shell script
You can choose the name under which you save your shell script with the editor of your choice almost arbitrarily. There are only a few points to keep in mind.
Tip   Even though you can name your command (almost) anything you want, you should definitely try to choose a name that makes sense.
Note   Of course you do not have to type in everything yourself. You will also find all the shell scripts on the book CD. For practice, however, it is recommended to write most of the shell scripts yourself to get a feel for them.
Note   If you are more familiar with the octal notation, you can of course also use chmod that way to set the execute permission: chmod 0744 userinfo
1.8.3 Running the script
Now you can call the script by its name. Usually, however, you will have to use the absolute path name, unless you add the directory containing the script to the environment variable PATH (more on that later). So one possible way to call the script looks like this:
you@host > ./userinfo
Fr Jan 21 02:35:06 CET 2005
Ich bin ... you
Alle User, die eingeloggt sind ...
you      :0     Jan 20 23:06 (console)
ingo     :0     Jan 20 12:02 (console)
john     pts/0  Jan  1 16:15 (pd9e9bdc0.dip.t-dialin.net)
john     pts/2  Jan  1 16:22 (pd9e9bdc0.dip.t-dialin.net)
Name des Host-Systems ...
goliath.speedpartner.de
Here ./ refers to the current directory. If you start the script from a different directory, you have to give the absolute path name:
you@host:~/irgend/wo > /home/tot/beispielscripte/Kap1/userinfo
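If you do not want to type the full path every time, you can append the script directory to the PATH variable. A small sketch in Bourne shell syntax (the change only lasts for the current session; to make it permanent, put the lines into your shell's profile):
you@host > PATH=$PATH:/home/tot/beispielscripte/Kap1
you@host > export PATH
you@host > userinfo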
For you to understand how a shell script is executed, I unfortunately have to impose one more shell script on you before you can start with the actual basics of the shell. It is of enormous importance that you understand that it is not the current shell that runs the script. For a better understanding:
you@host > ps -f
UID        PID  PPID  C STIME TTY      TIME     CMD
you       3672  3658  0 Jan20 pts/38   00:00:00 /bin/bash
you       7742  3672  0 03:03 pts/38   00:00:00 ps -f
With ps -f you print the current process list (here under Linux with Bash) of the shell to the screen. Here the process number (PID) of the current shell is 3672. The number of the parent process (PPID) that started this shell is 3658. In this example that was the daemon »kdeinit«, which is of no importance here. You started the process ps -f yourself just now, and its PPID also tells you which shell started it (here 3672, i.e. the current Bash).
To demonstrate how a shell script is executed, please run the following script:
# Script-Name: finduser
# prints all files of the user you to the screen
find / -user you -print 2>/dev/null
I have deliberately used a command here that should keep the system busy a little longer. With the command find and the arguments given, all files of the user »you« below the root directory / are printed to the screen. Error output (channel 2) such as »keine Berechtigung« (permission denied) is redirected to /dev/null and is thus not shown on the screen. The topic of redirection will be covered in detail later. So that the standard output (channel 1) does not disturb us in this example either, we also redirect it to /dev/null when starting the script (1>/dev/null). And so that the shell remains available for further input, we put the execution of the script into the background (&). This way you can examine the process list at your leisure while the script is running.
you@host > chmod u+x finduser
you@host > ./finduser 1>/dev/null &
[1] 8138
you@host > ps -f
UID        PID  PPID  C STIME TTY      TIME     CMD
you       3672  3658  0 Jan20 pts/38   00:00:00 /bin/bash
you       8138  3672  0 03:26 pts/38   00:00:00 /bin/bash
you       8139  8138 10 03:26 pts/38   00:00:00 find / -user tot -print
you       8140  3672  0 03:26 pts/38   00:00:00 ps -f
you@host > kill $!
[1]+  Beendet    ./finduser >/dev/null
So that the script does not needlessly consume resources in the background, it was terminated with kill $!. The string »$!« is a shell variable containing the process number of the most recently started background process (this, too, will be covered later). What matters for this section is only the output of ps -f:
UID        PID  PPID  C STIME TTY      TIME     CMD
tot       3672  3658  0 Jan20 pts/38   00:00:00 /bin/bash
tot       8138  3672  0 03:26 pts/38   00:00:00 /bin/bash
tot       8139  8138 10 03:26 pts/38   00:00:00 find / -user tot -print
tot       8140  3672  0 03:26 pts/38   00:00:00 ps -f
From the PPID you can see that the current shell (PID 3672) started another shell (8138). This new shell is called a subshell. It is this subshell that then runs the script, as you can easily tell from the PPID of find. The command ps -f, on the other hand, was again executed by the current shell.
The same process also takes place with the Bourne shell (sh) and the Korn shell (ksh), of course, and not only under Bash as shown here. To execute a script, every shell always starts a subshell – Bash a bash, and sh an sh. Only with the Korn shell it may happen that an sh is started instead of another ksh! You have just learned how to find that out.
1.8.4 Starting a background process
As you saw when running the example above, you can run a process in the background by appending an ampersand (&). This is used in particular with commands that take a little longer. If, in the finduser example, you ran the process in the foreground (i.e. normally), you would have to wait for the process to finish before the prompt of the currently running shell becomes available to you again. Because the process is put into the background, you can start further processes almost in parallel.
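In Bash and the Korn shell you can also keep track of and manage these background jobs. A brief sketch; the job output is abbreviated here and its exact wording depends on your shell and locale:
you@host > ./finduser 1>/dev/null &
[1] 8138
you@host > jobs
[1]+  Running    ./finduser 1>/dev/null &
you@host > fg %1
jobs lists the current background jobs, and fg %1 brings job number 1 back into the foreground.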
1.8.5 Specifying the executing shell
In the example you could see that I ran the script in a Bash as the (sub)shell. But what if I do not want to run the script in Bash at all? If you write a shell script in the syntax of a particular shell, you will probably also want your script to be executed by that (sub)shell. You can either call the (sub)shell directly with the shell script as its argument, or specify the (sub)shell to be used inside the script itself.
Calling the script as an argument of the shell
Calling a shell script as an argument of the shell is easy to do:
you@host > sh finduser
Here the Bourne shell (sh) is used as the subshell to run the shell script. If you want to run the script with a Korn shell as the subshell instead, call the script like this:
you@host > ksh finduser
You can likewise use other shells such as ash or zsh. Of course, you can also try a C shell (e.g. tcsh) – only to find that the first incompatibilities already show up there.
Specifying the executing shell in the script (the she-bang line)
In the end, the usual way is to specify the executing shell in the script itself. To do this, the first line of the shell script must contain the character sequence #! followed by the absolute path of the shell in question (this is also called the she-bang line). You can easily determine the absolute path with which. Usually all shell variants are located in /bin or in /usr/bin (and /usr/local/bin is also a candidate).
If, for example, you want the shell script finduser to be executed by a Korn (sub)shell, the first line must contain the following:
#!/bin/ksh
or
#!/usr/bin/ksh
The complete script for the Korn shell thus looks like this (assuming the Korn shell is located in /bin):
#!/bin/ksh
# Shellscript: finduser
# prints all files of the user tot to the screen
find / -user tot -print 2>/dev/null
For Bash you use:
#!/bin/bash
or
#!/usr/bin/bash
And for the Bourne shell:
#!/bin/sh
or
#!/usr/bin/sh
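Which of these paths applies on your system is easy to check with which, as mentioned above. The paths shown here are only examples; the output differs from system to system:
you@host > which ksh
/bin/ksh
you@host > which bash
/bin/bash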
This first line is used in much the same way by Perl and other scripting languages, by the way. That this works is nothing magical, but a typical Linux/UNIX mechanism. When starting a new program (or rather, process), all shell variants use the Linux/UNIX system call exec. When you start another procedure from the login shell, a further subshell is started first (as already mentioned). Through the system call exec, the new process is overlaid with the specified program – regardless of whether this program exists as an executable file (such as those created in C) or as a shell script. In the case of a shell script, exec by default starts a shell of the same type as your calling shell and instructs this shell to execute all the commands in the given file. If, however, the first line of the shell script contains something like
#! interpreter [argument(s)]
then exec uses the interpreter specified by the script to execute the shell script. Any shell variant you know (provided it is installed on the system) can be used as the interpreter (and, as already mentioned, not only a shell!).
From the character sequence #! in the first line, exec recognizes the interpreter line and uses the characters that follow as the name of the program to be used as the interpreter. The name of your shell script is automatically passed when the interpreter is called. This exec mechanism is used by every shell to call a program or command. This way you can be sure that your script runs in the right environment, even if you are used to working in a different shell.
Note   It should be clear that no further comment may follow the interpreter name, because it would otherwise be used as an argument.
Note   You may have heard that you should not blindly trust the contents of environment variables, because anyone can change them. In this case it is even worse: to execute a shell script with a leading dot, the user does not even need execute permission. The shell is already satisfied with read permission to happily go along with it.
An example of how you can run a shell script without a subshell:
you@host > . ./script_zum_ausfuehren
or with an absolute path name:
you@host > . /home/tot/beispielscripte/Kap1/script_zum_ausfuehren
Note   As an alternative to the dot, Bash knows the built-in command source.
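The practical difference is easy to see with a tiny throw-away script that does nothing but set a variable (a sketch; the file name setvar and the variable var are just examples):
you@host > echo 'var=hello' > setvar
you@host > sh setvar
you@host > echo $var

you@host > . ./setvar
you@host > echo $var
hello
Run in a subshell (sh setvar), the variable disappears together with the subshell; sourced with the dot, it is set in the current shell.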
1.8.6 Comments
Comments have already been used generously in the previous scripts, and you should do the same regularly in your own. A comment is recognized by a hash sign (#) placed in front of the text. Here is an example of how you can comment your scripts:
#!/bin/sh
# Script-Name: HalloWelt
# Version : 0.1
# Author  : J.Wolf
# Date    : 20.05.2005
# License : ...
# ... prints "Hallo Welt"
echo "Hallo Welt"
echo "Ihre Shell:"
echo $SHELL   # ... prints the current shell
Admittedly, for such a small example the comments are a bit over the top. You can see here how comments can also be placed after a command. Everything that follows a hash sign is ignored by the shell – with the exception, of course, of the first line when it contains the character sequence »#!«.
You should at least comment the beginning of the script with some information about its name and possibly its purpose, if that is not immediately obvious. If you like, you can of course also add your name, the date and the version number. You should also point out any particulars regarding a license here. Complex passages should definitely be commented sufficiently. This helps you to understand the code again more quickly after a longer absence. And if you want to make your scripts available to the public, people will thank you for having commented your »hack« adequately.
1.8.7 Style
Every programmer develops their own programming style over time, but there are a few things you might want to keep in mind. Although it is possible to write several commands on one line separated by semicolons (just as in the shell itself), you should avoid this where possible for the sake of readability.
And if a line does get a little too long, you can always use the backslash (\). A shell script is processed line by line. The usual line separator is the newline character (a character invisible to the editor, with ASCII code 10). Putting a backslash in front of it disables the newline character.
#!/bin/sh
# Script-Name: backslash
# ... with overly long commands you can use a backslash
echo "Hier ein solches Beispiel, wie Sie\
ueberlange Zeilen bzw. Ketten von Befehlen\
auf mehreren Zeilen ausführen können"
With somewhat long chains of commands, a backslash also helps a little to improve readability:
ls -l /home/tot/docs/listings/beispiele/kapitel1/final/*.sh | \
   sort -r | less
When you come to loops and branches a little later, you should use one level of indentation more rather than less. Pressing the (Tab) key one more time has never hurt anyone.
1.8.8 Ending a shell script
A shell script ends either by itself after the last line has been processed, or you use the command exit, which terminates the execution of the script immediately. The syntax of the command looks like this:
exit [n]
The parameter n is optional. You can use an integer from 0 to 255 for this value. If you use exit without any value, the exit status of the command executed before exit is used. The values 0 to 255 indicate whether a command or script was executed properly. The value 0 means that everything went as intended. Every other value signals an error number.
You can also use this to test whether a command you executed in the script ran successfully and, depending on the situation, continue the script at a different point. For that, however, you still lack knowledge of branching. The exit status can be queried at any time with the shell variable $?. Here is a short example:
#!/bin/sh
# Script-Name: ende
echo "Tick ..."
# ... the shell script is terminated prematurely
exit 5
# "Tack ..." is no longer executed
echo "Tack ..."
The example in action:
you@host > chmod u+x ende
you@host > ./ende
Tick ...
you@host > echo $?
5
you@host > ls -l /root
/bin/ls: /root: Keine Berechtigung
you@host > echo $?
1
you@host > ls -l *.txt
/bin/ls: *.txt: Datei oder Verzeichnis nicht gefunden
you@host > echo $?
1
you@host > ls -l *.c
-rw-r--r-- 1 tot users 49 2005-01-22 01:59 hallo.c
you@host > echo $?
0
Other commands were executed in this example as well. Depending on their success, the value of the variable $? was 1 (in case of an error) or 0 (if everything went fine) – apart from our example, where the value 5 was deliberately returned.
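The exit status is also exactly what the operators && and || evaluate. A small sketch reusing the commands from above (the echo texts are just examples): the command after && only runs if the previous one returned 0, the command after || only runs if it did not.
you@host > ls -l *.c && echo "found"
-rw-r--r-- 1 tot users 49 2005-01-22 01:59 hallo.c
found
you@host > ls -l *.txt || echo "nothing found"
/bin/ls: *.txt: Datei oder Verzeichnis nicht gefunden
nothing found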
Note   Please keep in mind that if you run the command exit directly in the terminal, this also ends the current login shell.
1.8.9 Testing and debugging shell scripts
Debugging and troubleshooting are of great importance in shell script programming too, so a separate section is devoted to this topic. Nevertheless, you will inevitably build one or another (typing) error into your first scripts (unless you use all the listings from the book CD).
In practice the option -x is used most often. With it, every line is printed to the screen before it is executed. Usually this line is shown with a leading plus sign (depending on what the variable PS4 contains). The following shell script serves as an example:
#!/bin/sh
# Script-Name: prozdat
# lists process information
echo "Anzahl laufender Prozesse:"
# ... wc -l counts all lines that ps -ef would print
ps -ef | wc -l
echo "Prozessinformationen unserer Shell:"
# ... the shell variable $$ contains the shell's own process number
ps $$
Run normally, this script produces the following output:
you@host > ./prozdat
Anzahl laufender Prozesse:
76
Prozessinformationen unserer Shell:
  PID TTY      STAT   TIME COMMAND
10235 pts/40   S+     0:00 /bin/sh ./prozdat
Here, too, we do not want to concern ourselves too much with the script itself for now. Now the debugging aid is switched on with the option -x:
you@host > sh -x ./prozdat
+ echo 'Anzahl laufender Prozesse:'
Anzahl laufender Prozesse:
+ ps -ef
+ wc -l
76
+ echo 'Prozessinformationen unserer Shell:'
Prozessinformationen unserer Shell:
+ ps 10405
  PID TTY      STAT   TIME COMMAND
10405 pts/40   S+     0:00 sh -x ./prozdat
You can see how every line is printed, preceded by a plus sign, before it is executed. You can also see that the variable $$ has been replaced by its actual value. The same would apply, by the way, when using wildcards such as *. With the -x switch you get to see everything in plain text.
With longer scripts, the plus sign at the beginning is personally not clear enough for me. If you feel the same, you can exchange the variable PS4 for any other string. I very much like to use the variable LINENO for this, which is always replaced by the corresponding line number during execution. With longer scripts, the line number then helps me to find the relevant line right away. Here is an example of how you can debug your script more effectively with PS4:
you@host > export PS4='[--- Zeile: $LINENO ---] '
you@host > sh -x ./prozdat
[--- Zeile: 6 ---] echo 'Anzahl laufender Prozesse:'
Anzahl laufender Prozesse:
[--- Zeile: 8 ---] ps -ef
[--- Zeile: 8 ---] wc -l
76
[--- Zeile: 10 ---] echo 'Prozessinformationen unserer Shell:'
Prozessinformationen unserer Shell:
[--- Zeile: 12 ---] ps 10793
  PID TTY      STAT   TIME COMMAND
10793 pts/40   S+     0:00 sh -x ./prozdat
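If you only want to watch part of a script, you do not have to trace the whole run: with set -x you can switch tracing on inside the script, and with set +x you switch it off again. A minimal sketch based on prozdat:
#!/bin/sh
# sketch: trace only the interesting part of the script
echo "Anzahl laufender Prozesse:"
set -x                  # tracing on from here ...
ps -ef | wc -l
set +x                  # ... and off again
echo "Prozessinformationen unserer Shell:"
ps $$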
1.8.10 A shell script that creates and runs a shell script
Hand on heart: how often have you already cursed while reading this far because, for example, you forgot to set the execute permission, or were annoyed at repeating the same recurring steps? That is what you are learning shell script programming for, after all, isn't it? So nothing is more natural than giving you a little helper here: a shell script that creates a new script or loads an existing one into the editor of your choice. Of course, the execute permission for the user is set as well, and if desired the script is run right away. Here is the simple script:
# A script for creating scripts ...
# Name : scripter
# Please adjust accordingly
#
# directory in which the script is located
dir=$HOME
# editor to be used
editor=vi
# the first argument must be the script name ...
[ -z "$1" ] && exit 1
# start the editor and load (or create) the script
$editor $dir/$1
# set execute permission for the user ...
chmod u+x $dir/$1
# run the script right away? No? Then comment this out ...
$dir/$1
You only have to adjust the variables »dir« and »editor« in this script. The variable »dir« specifies the directory from which the script is to be loaded or where it should be saved. In the example, the shell variable HOME, i.e. the home directory of the logged-in user, was simply used. As the editor I use vi; you can enter an editor of your choice here instead. With [ -z "$1" ] (z stands for zero, i.e. empty) the script checks whether the first command line argument ($1) is present. If nothing was given, the script exits again right away with exit 1. The rest of the execution speaks for itself for now, I think. The script is called like this:
you@host > ./scripter example
Of course, it is not my intention to give beginners an introduction to shell script programming with this script; the point here is simply to give you a little aid so that you can work with the book more effectively.
Now that all the formalities have been settled, you can finally start learning shell script programming proper. Admittedly, there were a lot of formalities, but in my view they were absolutely necessary.
## 1.8 Shellscripts schreiben und ausführenÂ
Sie finden hier eine Anleitung, wie Sie gewöhnlich vorgehen können, um eigene Shellscripts zu schreiben und auszuführen.
### 1.8.1 Der EditorÂ
Zu Beginn steht man immer vor der Auswahl seines Werkzeugs. In der Shell-Programmierung reicht hierfür der einfachste Editor aus. Welchen Editor Sie verwenden, hängt häufig vom Anwendungsfall ab. Obgleich ich hier ein Buch zur Shellscript-Programmierung schreibe, gestehe ich, kein großer Fan der Editoren »vi« und »emacs« zu sein. Ich verwende, wenn irgend möglich, einen Editor einer grafischen Oberfläche wie »Kate« unter KDE, »Gedit« unter dem GNOME-Desktop oder »xemacs« auf einer X11-Oberfläche.
Trotzdem habe ich die Mühe nicht gescheut, den gewöhnungsbedürftigen Umgang mit den quasi überall vorhandenen Editoren wie »vi«, »emacs«, »joe« oder »pico« zu erlernen. Der Grund ist ganz einfach: Als Systemadministrator oder Webmaster (mit Zugang über SSH) haben Sie nicht überall die Möglichkeit, auf eine grafische Oberfläche zurückzugreifen. Häufig sitzen Sie dann vor einem Rechner mit einer nackten Shell. Wenn Sie dann nicht wenigstens den grundlegenden Umgang mit (mindestens) »vi« beherrschen, sehen Sie ziemlich alt aus. Zwar besteht meistens noch die Möglichkeit eines Fernzugriffs zum Kopieren mit dem Kommando scp, aber macht man hier bspw. einen Tippfehler, ist dieser Vorgang auf Dauer eher ineffizient. Letztendlich ist das Ihre Entscheidung.
### 1.8.2 Der Name des ShellscriptsÂ
Den Namen des Shellscripts, unter dem Sie dieses mit dem Editor Ihrer Wahl abspeichern, können Sie fast willkürlich wählen. Sie müssen lediglich folgende Punkte beachten:
### 1.8.3 AusführenÂ
Jetzt können Sie das Script beim Namen aufrufen. Allerdings werden Sie gewöhnlich den absoluten Pfadnamen verwenden müssen, es sei denn, Sie fügen das entsprechende Verzeichnis (wo sich das Script befindet) zur Umgebungsvariablen PATH hinzu (aber dazu später mehr). Somit lautet ein möglicher Aufruf des Scripts wie folgt:
> you@host > ./userinfo Fr Jan 21 02:35:06 CET 2005 Ich bin ... you Alle User, die eingeloggt sind ... you :0 Jan 20 23:06 (console) ingo :0 Jan 20 12:02 (console) john pts/0 Jan 1 16:15 (pd9e9bdc0.dip.t-dialin.net) john pts/2 Jan 1 16:22 (pd9e9bdc0.dip.t-dialin.net) Name des Host-Systems ... goliath.speedpartner.de
Here ./ refers to the current directory. If you start the script from a different directory, you have to give the absolute pathname:
> you@host:~/irgend/wo > /home/tot/beispielscripte/Kap1/userinfo
So that you understand how a shell script is executed, I unfortunately have to press one more shell script on you before you can start with the actual shell basics. It is of great importance that you understand that it is not the current shell that executes the script. To clarify:
> you@host > ps -f UID PID PPID C STIME TTY TIME CMD you 3672 3658 0 Jan20 pts/38 00:00:00 /bin/bash you 7742 3672 0 03:03 pts/38 00:00:00 ps -f
With ps -f you print the current process list of the shell (here under Linux, with the Bash). The process number (PID) of the current shell is 3672. The parent process ID (PPID) of the process that started this shell is 3658; in this example that was kdeinit, which is not relevant here. You started the ps -f process yourself a moment ago, and its PPID again tells you which shell started it (here 3672, i.e. the current Bash).

To demonstrate how a shell script is executed, please run the following script:
> # Script name: finduser
> # prints all files owned by the user you
> find / -user you -print 2>/dev/null
I deliberately chose a command that keeps the system busy for a while. The find command with these arguments prints all files owned by the user you below the root directory / to the screen. Error messages (channel 2) such as "permission denied" are redirected to /dev/null and therefore do not appear on the screen; redirection is covered in detail later. So that the standard output (channel 1) does not get in our way either, we also redirect it to /dev/null when starting the script (1>/dev/null). And so that the shell remains available for further input, we put the script into the background (&). This way you can inspect the process list at your leisure while the script is running.
> you@host > chmod u+x finduser you@host > ./finduser 1>/dev/null & [1] 8138 you@host > ps -f UID PID PPID C STIME TTY TIME CMD you 3672 3658 0 Jan20 pts/38 00:00:00 /bin/bash you 8138 3672 0 03:26 pts/38 00:00:00 /bin/bash you 8139 8138 10 03:26 pts/38 00:00:00 find / -user tot -print you 8140 3672 0 03:26 pts/38 00:00:00 ps -f you@host > kill $! [1]+ Beendet ./finduser >/dev/null
So that the script does not needlessly consume resources in the background, it was terminated with kill $!. The string $! is a shell variable holding the process number of the most recently started background process (this, too, will be covered later). What matters in this section is only the output of ps -f:
> UID PID PPID C STIME TTY TIME CMD tot 3672 3658 0 Jan20 pts/38 00:00:00 /bin/bash tot 8138 3672 0 03:26 pts/38 00:00:00 /bin/bash tot 8139 8138 10 03:26 pts/38 00:00:00 find / -user tot -print tot 8140 3672 0 03:26 pts/38 00:00:00 ps -f
From the PPID you can see that the current shell (PID 3672) started a further shell (PID 8138). This new shell is called a subshell. Only this subshell then executes the script, as the PPID of the find process shows. The ps -f command, on the other hand, was again executed by the current shell.

The same happens with the Bourne shell (sh) and the Korn shell (ksh), not only with the bash shown here. Every shell starts a subshell to execute a script: the bash starts a bash, the sh an sh. Only with the Korn shell it may happen that an sh is started instead of another ksh. You have just seen how to find this out.
### 1.8.4 Starting a Background Process
As you saw when running the example above, appending an ampersand (&) runs the process in the background. This is mainly used with commands that take a while to finish. If you ran finduser in the foreground (i.e. normally), you would have to wait for the process to finish before the prompt of the current shell became available again. Because the process is put into the background, you can start further processes quasi in parallel.
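The following terminal session is a minimal sketch (not taken from the original example) of how background jobs behave; the job number and PID shown are arbitrary:

> you@host > sleep 30 &
> [1] 4711
> you@host > jobs
> [1]+  Running                 sleep 30 &
> you@host > wait

With jobs you list the background jobs of the current shell, and wait blocks until all of them have finished.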
### 1.8.5 Specifying the Executing Shell
In the example you saw that I ran the script in a Bash as the (sub)shell. But what if I do not want the script to run in the Bash at all? If you write a shell script in the syntax of a particular shell, you will want it to be executed by exactly that (sub)shell. You can either call the (sub)shell directly with the shell script as an argument, or specify the executing (sub)shell inside the script itself.
# Calling the script as an argument of the shell
Calling a shell script as an argument of the shell is easy:
> you@host > sh finduser
Here the Bourne shell (sh) is used as the subshell to execute the shell script. If you want to run the script with a Korn shell as the subshell instead, call it like this:
> you@host > ksh finduser
You can likewise use other shells such as ash or zsh. You can even try a C shell (tcsh, for example), only to discover the first incompatibilities.
# Specifying the executing shell inside the script (the she-bang line)
The usual way, however, is to specify the executing shell in the script itself. For this, the first line of the shell script must contain the characters #! followed by the absolute path of the desired shell (this is also called the she-bang line). You can determine the absolute path simply with which. All shell variants usually live in /bin or /usr/bin (/usr/local/bin is another candidate).
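Such a check could look like this; the path shown is only a typical value and may differ on your system:

> you@host > which ksh
> /bin/ksh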
If you want the shell script finduser to be executed by a Korn (sub)shell, for example, the first line must contain:
> #!/bin/ksh
or
> #!/usr/bin/ksh
The complete script for the Korn shell then looks like this (assuming the Korn shell is located in /bin):
> #!/bin/ksh
> # Shell script: finduser
> # prints all files owned by the user tot
> find / -user tot -print 2>/dev/null
For the Bash you use:
> #!/bin/bash
or
> #!/usr/bin/bash
And for the Bourne shell:
> #!/bin/sh
or
> #!/usr/bin/sh
This first line is used in much the same way by Perl and other scripting languages. That it works is nothing magical but a typical Linux/UNIX mechanism. When starting a new program (more precisely, a process), all shell variants use the Linux/UNIX system call exec. When you start another procedure from your login shell, a subshell is started first (as mentioned before). The exec system call then overlays this new process with the specified program, regardless of whether that program is a binary executable (such as one compiled from C) or a shell script. In the case of a shell script, exec by default starts a shell of the same type as the calling shell and instructs it to execute all the commands in the given file. If, however, the first line of the shell script contains something like
> #! interpreter [argument(s)]
then exec uses the interpreter named in the script to execute the shell script. Any shell variant you know (as long as it is installed on the system) can serve as the interpreter; as already mentioned, it does not even have to be a shell.

exec recognizes the interpreter line by the characters #! at the start of the first line and uses the text that follows as the name of the program to run as the interpreter. The name of your shell script is passed to the interpreter automatically. This exec mechanism is used by every shell to start a program or command. You can therefore be sure that your script runs in the right environment, even if you normally work in a different shell.
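As a side note that goes beyond the text above: if you do not want to hard-code the interpreter path, a widely used (though not universally guaranteed) alternative is to let env look it up via PATH:

> #!/usr/bin/env bash
> # env searches the PATH for bash, so the script also runs on
> # systems where bash is not installed in /bin
> echo "running with bash version $BASH_VERSION"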
# Running a shell script without a subshell
An example of how to run a shell script without a subshell:
> you@host > . ./script_zum_ausfuehren
or with an absolute pathname:
> you@host > . /home/tot/beispielscripte/Kap1/script_zum_ausfuehren
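The practical difference is that a sourced script is executed by the current shell itself, so its variable assignments (and directory changes) survive afterwards. A minimal sketch, using a hypothetical script called setvar:

> you@host > cat setvar
> greeting="hello from setvar"
> you@host > chmod u+x setvar
> you@host > ./setvar
> you@host > echo $greeting
>
> you@host > . ./setvar
> you@host > echo $greeting
> hello from setvar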
### 1.8.6 Comments
Comments have already been used generously in the scripts above, and you should do the same in your own scripts. A comment is introduced by a hash character (#) in front of the text. Here is an example of how you can comment your scripts:
> #!/bin/sh
> # Script name: HalloWelt
> # Version    : 0.1
> # Author     : J.Wolf
> # Date       : 20.05.2005
> # License    : ...
> # ... prints "Hallo Welt"
> echo "Hallo Welt"
> echo "Your shell:"
> echo $SHELL   # ... prints the current shell
Admittedly, for such a small example these comments are a bit over the top. You can also see here how to put a comment after a command: everything following a hash character is ignored by the shell, with the exception of the first line when it contains the characters #!.

You should at least comment the beginning of the script with its name and, if it is not obvious, its purpose. If you like, you can of course also add your name, the date, and a version number, and you should point out any licensing particulars. Complex passages should definitely be commented sufficiently; this helps you understand the code again quickly after a longer absence. And if you make your scripts available to the public, people will thank you for documenting your "hack" properly.
### 1.8.7 Style
Every programmer develops a personal style over time, but there are a few things worth keeping in mind. Although you can write several commands on one line separated by semicolons (just as in the shell itself), you should avoid this where possible for the sake of readability.

And if a line does get a little too long, you can always use the backslash (\). A shell script is processed line by line; the usual line separator is the newline character (invisible in the editor, ASCII code 10). A backslash placed in front of it cancels the newline.
> #!/bin/sh
> # Script name: backslash
> # ... for overly long commands you can use a backslash
> echo "Here is an example of how you can \
> continue overly long lines or chains of commands \
> on several lines"
With somewhat long chains of commands, a backslash also helps to keep things readable:
> ls -l /home/tot/docs/listings/beispiele/kapitel1/final/*.sh | \
> sort -r | less
When you get to loops and branches a little later, you should be generous with indentation there. Pressing the Tab key once more than strictly necessary has never hurt anyone.
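As a small foretaste (loops and branches are only introduced later in the book), an indented block might look like this; the file pattern and the condition are arbitrary examples:

> #!/bin/sh
> for file in *.txt
> do
>    # one extra level of indentation per nested block
>    if [ -r "$file" ]
>    then
>       echo "readable: $file"
>    fi
> done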
### 1.8.8 Terminating a Shell Script
A shell script terminates either by itself after its last line has been processed, or you use the exit command, which ends the script immediately. Its syntax is:
> exit [n]
The parameter n is optional; it can be an integer from 0 to 255. If you call exit without a value, the exit status of the command executed before exit is used. The values 0 to 255 indicate whether a command or script ran correctly: 0 means everything went fine, any other value signals an error number.

You can use this to test whether a command executed in your script succeeded and, depending on the situation, continue the script somewhere else; for that, however, you still lack the knowledge of branching. The exit status can be queried at any time via the shell variable $?. A short example:
> #!/bin/sh
> # Script name: ende
> echo "Tick ..."
> # ... the shell script terminates prematurely
> exit 5
> # "Tack ..." is never executed
> echo "Tack ..."
Running the example:
> you@host > chmod u+x ende you@host > ./ende Tick ... you@host > echo $? 5 you@host > ls -l /root /bin/ls: /root: Keine Berechtigung you@host > echo $? 1 you@host > ls -l *.txt /bin/ls: *.txt: Datei oder Verzeichnis nicht gefunden you@host > echo $? 1 you@host > ls -l *.c -rw-r--r-- 1 tot users 49 2005-01-22 01:59 hallo.c you@host > echo $? 0
Other commands were executed in the example as well. Depending on their success, $? held the value 1 (on error) or 0 (if everything went well), apart from our own script, which deliberately returned 5.
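Even without branches you can already react to the exit status by chaining commands with && (continue only on success) and || (continue only on failure); this sketch reuses the commands from the transcript above:

> you@host > ls -l /root && echo "worked" || echo "failed"
> /bin/ls: /root: Keine Berechtigung
> failed
> you@host > ls -l *.c && echo "worked" || echo "failed"
> -rw-r--r-- 1 tot users 49 2005-01-22 01:59 hallo.c
> worked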
### 1.8.9 Testing and Debugging Shell Scripts
Debugging and troubleshooting are important in shell script programming too, so a separate section is devoted to the topic. Nevertheless, even your first scripts will inevitably contain the odd (typing) error, unless you use the listings from the book CD throughout.

In practice, the option -x is used most often. It causes every line to be printed on the screen before it is executed, usually prefixed with a plus sign (depending on what the variable PS4 contains). The following shell script serves as an example:
> #!/bin/sh
> # Script name: prozdat
> # lists process information
> echo "Anzahl laufender Prozesse:"
> # ... wc -l counts all lines that ps -ef would print
> ps -ef | wc -l
> echo "Prozessinformationen unserer Shell:"
> # ... the shell variable $$ contains our own process number
> ps $$
Run normally, the script produces this output:
> you@host > ./prozdat Anzahl laufender Prozesse: 76 Prozessinformationen unserer Shell: PID TTY STAT TIME COMMAND 10235 pts/40 S+ 0:00 /bin/sh ./prozdat
We will not concern ourselves too much with the script itself here. Now let us switch on the debugging aid with the option -x:
> you@host > sh -x ./prozdat + echo 'Anzahl laufender Prozesse:' Anzahl laufender Prozesse: + ps -ef + wc -l 76 + echo 'Prozessinformationen unserer Shell:' Prozessinformationen unserer Shell: + ps 10405 PID TTY STAT TIME COMMAND 10405 pts/40 S+ 0:00 sh -x ./prozdat
You can see how each line is printed before its execution, prefixed with a plus sign. You can also see that the variable $$ was replaced by its actual value; the same would happen with wildcards such as *. With the -x switch you get to see everything in plain text.

With longer scripts, I personally find the plus sign at the beginning not distinctive enough. If you feel the same, you can replace the variable PS4 with any other string. I like to use the variable LINENO, which is replaced by the current line number at execution time; with longer scripts the line number helps me find the relevant line immediately. Here is an example of how to debug your script more effectively with PS4:
> you@host > export PS4='[--- Zeile: $LINENO ---] ' you@host > sh -x ./prozdat [--- Zeile: 6 ---] echo 'Anzahl laufender Prozesse:' Anzahl laufender Prozesse: [--- Zeile: 8 ---] ps -ef [--- Zeile: 8 ---] wc -l 76 [--- Zeile: 10 ---] echo 'Prozessinformationen unserer Shell:' Prozessinformationen unserer Shell: [--- Zeile: 12 ---] ps 10793 PID TTY STAT TIME COMMAND 10793 pts/40 S+ 0:00 sh -x ./prozdat
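Instead of starting the whole script with sh -x you can also switch tracing on and off for just a part of it with the set builtin; a small sketch with placeholder commands:

> #!/bin/sh
> echo "not traced"
> set -x     # tracing on from here
> ps -ef | wc -l
> set +x     # ... and off again
> echo "not traced either"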
### 1.8.10 A Shell Script That Creates and Runs a Shell Script
Hand on heart: how often have you cursed so far because you forgot to set the execute permission, or been annoyed at repeating the same chores over and over? That, after all, is why you are learning shell script programming. So what could be more fitting than a little helper: a shell script that creates a new script, or loads an existing one into the editor of your choice, sets the execute permission for the user, and runs the script if desired. Here is the simple script:
> # A script for creating scripts ...
> # Name: scripter
> # Please adjust as required
> #
> # directory in which the script resides
> dir=$HOME
> # editor to be used
> editor=vi
> # the first argument must be the script name ...
> [ -z "$1" ] && exit 1
> # start the editor and load (or create) the script
> $editor $dir/$1
> # set execute permission for the user ...
> chmod u+x $dir/$1
> # run the script right away? No? Then comment this out ...
> $dir/$1
All you have to adjust in this script are the variables dir and editor. The variable dir specifies the directory from which the script is loaded or where it is saved; in the example, the shell variable HOME, i.e. the home directory of the logged-in user, is simply used. As the editor I use vi, but you can enter any editor you like. With [ -z "$1" ] (z stands for zero, i.e. empty) the script checks whether the first command-line argument ($1) is present; if not, it terminates immediately with exit 1. The rest of the script should, I think, speak for itself. It is called like this:
> you@host > ./scripter example
Of course my intention is not to give beginners an introduction to shell script programming with this script; it is merely meant as a small aid so that you can work through the book more efficiently.

Now that all formalities have been dealt with, you can finally start learning shell script programming proper. Admittedly, there were a lot of formalities, but in my view they were absolutely necessary.
## 1.9 From Shell Script to Process
A process is nothing more than a program in execution. If you type the command pwd in a shell, for example, the current working directory is displayed. We usually speak of a command here, but in the end this, too, is just a process being executed.

At execution time this process (let us stay with pwd) has the complete management apparatus of the operating system at its disposal, as if it were the only program currently running. Put simply, at the moment of execution the full computing power of the CPU belongs to pwd. Of course this exclusive availability is limited in time, otherwise this would not be a multitasking operating system. Normally many programs run on a system at once; even if you have not started anything yourself, numerous services, and depending on your setup also programs, run in the background (daemon processes).

In practice it looks as if all processes run at the same time. That is not actually possible, because at the moment of execution a process needs a CPU, and during that time no CPU is available to the other processes. Things are different, of course, when several CPUs are present in the machine; then several processes could in principle run truly in parallel, one process per CPU.
Each of these processes has a unique, system-wide, consecutively assigned process number (PID, short for Process Identification Number). The operating system guarantees that no number is assigned twice. If no more numbers are available when assigning PIDs, counting starts again at 0 (numbers still in use are skipped). Naturally the number of processes is not unlimited; it depends on the kernel configuration (which guarantees at least 32768 processes). This value can be raised, but that also requires more main memory, because the process table cannot be swapped out.
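On Linux you can inspect (and, as root, change) this limit through the proc file system; a short sketch assuming the typical default value:

> you@host > cat /proc/sys/kernel/pid_max
> 32768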
The administrator's favourite tools for monitoring processes are the commands ps and top (there are graphical alternatives to both, of course). ps gives you information about the status of active processes: the PID, the UID, the controlling terminal, the memory consumption of a process, the CPU time, the current process state, and much more. Because ps only shows a snapshot of the processes, however, you can never say exactly where a process is currently running amok. For continuous monitoring of processes the command top was developed; depending on its settings, top refreshes its display every few seconds (by default usually every 3 seconds) and uses comparatively few system resources while doing so.

A fundamental principle under Linux/UNIX is that a process can start further processes. You have already seen this principle when we looked at how a shell script is executed: the login shell starts another shell (another process), and only this new shell works through the shell script line by line. This is a parent-child relationship. Every child process (except init with PID 1) has a parent process that started it, and you can identify the parent via the PPID (Parent Process ID). This lets you walk the whole family tree upwards with ps -f:
> you@host > ps -f UID PID PPID C STIME TTY TIME CMD tot 3675 3661 0 10:52 pts/38 00:00:00 /bin/bash tot 5576 3675 0 12:32 pts/38 00:00:00 ps -f
Here, for example, the ps -f process just executed is listed with PID 5576. Its parent process has PID 3675 (see PPID), which is the PID of the current bash (one line above). You can also trace the whole ancestry more conveniently with the command pgrep (see Chapter 14, the Linux/UNIX command reference).
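For a quick look, pgrep can list matching processes by name; a sketch with arbitrary PIDs:

> you@host > pgrep -l bash
> 3675 bash
> 8138 bash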
### 1.9.1 Is the Shell Script Itself a Process?
This question can only be answered with "yes and no". After all, the login shell already starts one additional process, the subshell, and the subshell in turn starts one or more processes for your shell script, i.e. further child processes. But keep in mind that a shell script is not an executable application in the true sense; it is a text file. The subshell simply starts whatever processes you specify in this text file (your shell script). The easiest way to illustrate this is with the ps command:
> # Script name: myscript
> ps -f
Run the shell script myscript. The output looks like this:
> you@host > ./myscript UID PID PPID C STIME TTY TIME CMD tot 3679 3661 0 10:52 pts/40 00:00:00 /bin/bash tot 5980 3679 0 12:50 pts/40 00:00:00 /bin/bash tot 5981 5980 0 12:50 pts/40 00:00:00 ps -f
There is no trace of a process called myscript, only of the ps -f command that you wrote in the shell script. You can see that our subshell (PID 5980) started the ps -f process. The subshell is, after all, the interpreter that works through the script myscript line by line, which here consists of nothing but ps -f.

Note, however, that unlike your login shell this subshell does not work interactively! It only processes the shell script and then terminates immediately. There is no prompt after an interpreted command; the commands are simply executed one after the other until the shell script ends or is terminated with exit.
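One consequence worth knowing: whatever the subshell changes about its own environment, for example the working directory, is lost again when the script ends. A small sketch with a hypothetical script called changedir:

> you@host > cat changedir
> cd /tmp
> pwd
> you@host > chmod u+x changedir
> you@host > ./changedir
> /tmp
> you@host > pwd
> /home/you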
### 1.9.2 A Real Login Shell?
Throughout this chapter I keep talking about a login shell. Since I work quite intensively with Linux under a window manager (KDE, for example), the odd slightly greying UNIX guru may object: he is not even using a real login shell. So what is a login shell?

A login shell is an ordinary shell that is started as the first command (process) when you log in and that starts many other commands while bringing up the complete environment (every shell has an environment). The lifetime of the login shell is usually called a session (the time you are logged in at the machine).

You can recognize a real login shell in the output of ps by a '-' in front of the shell's name (for example -bash, -sh or -ksh). This may be a little confusing, since there is no command or shell called -sh or -ksh. It is a peculiarity of the C function execl(), in which the argument used as the command name does not have to be identical to the file name. If a hyphen is placed in front of the shell name, the shell behaves as a login shell. If you want to change your login shell, have a look at the command chsh (short for change shell).
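A quick way to check is to ask ps how the current shell was invoked; this is only a sketch, and the exact output format depends on your ps version:

> you@host > ps -p $$ -o args=
> -bash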
Note   It is important to know that many commands, such as logout, only work in a real login shell. This is also because such login shells have the process with PID 1 as their parent.

From the output of the scripts in this book you could see that I often did not use a real login shell. If you want a real login shell, you have to log in in text mode (under Linux you reach a real text console with the key combinations Ctrl+Alt+F1 to F6). When you log in in text mode you end up in a real login shell, i.e. a shell that opens immediately after logging in.
## 1.10 The Data Stream
Every command and every shell script communicates through three standard channels: standard input (channel 0, normally the keyboard), standard output (channel 1, normally the screen) and standard error (channel 2, also the screen). Figure 1.1 illustrates this default data stream of a shell.

Figure 1.1  The default data stream of a shell

You rarely want all of that output to end up on the screen, though. Especially when it comes to hundreds of lines of log data, you will usually prefer to store it in a file. The easiest way to do this is to redirect the output.
### 1.10.1 Redirecting Output
The standard output of a command or shell script is redirected into a file with the character > after the command or script. Alternatively you can address channel 1 explicitly with the syntax 1>, but this is not necessary, because > without a channel number refers to standard output anyway. For example:
you@host > du -b /home > home_size.dat
you@host > ./ascript > output_of_ascript.dat
du -b /home tells you how many bytes the individual directories under /home occupy. Since this output can get rather long, it is simply redirected into a file called home_size.dat, which you can then view with any text editor. If the file does not exist yet, it is created. The same happens with the second command: here we assume a script called ascript exists, and its output is written to a file called output_of_ascript.dat (see Figure 1.2).
Figure 1.2  Redirecting standard output (cmd > file)
As already mentioned, you could in theory also address the standard output channel by its number:
you@host > du -b /home 1> home_size.dat
This notation has no advantage over > on its own, however.

Unfortunately, redirecting output into a file this way has the drawback that a renewed call of
you@host > du -b /home > home_size.dat
mercilessly overwrites the existing data. The remedy is output redirection with >>. The output is still redirected into the file, but now the data is always appended to the end of the file. If the file does not exist yet, it is created all the same.
you@host > du -b /home >> home_size.dat
From now on, additional data is always appended to the end of home_size.dat.
### 1.10.2 Redirecting Standard Error
Just as you can redirect standard output, you can redirect standard error. You already saw something similar a few pages earlier with
you@host > find / -user you -print 2>/dev/null
This printed all files owned by the user you to the screen. Error messages such as "permission denied" normally go to standard error; in the example this output was redirected to /dev/null, the system's data grave.

Note   /dev/null is ideal when you want to copy data just for a test or are simply not interested in the output. This device file does not take up any disk space on the system.
Of course you can also redirect the complete standard error output (recognizable by the syntax 2>, i.e. channel 2) of a command or script into a file:
you@host > find / -user you -print 2> error_find_you.dat
This writes all error messages to the file error_find_you.dat. If the file does not exist yet, it is created; if it already exists, its old contents are completely overwritten. If you do not want that, you can, just as with standard output and >>, append standard error to an existing file using channel 2 (see Figure 1.3):
you@host > find / -user you -print 2>> error_find_you.dat
You can also redirect standard output and standard error into two different files. This is done quite often, because you frequently lack the time or the overview to monitor all the output (see Figure 1.4).
Figure 1.3  Redirecting standard error (cmd 2> file)
you@host > find / -user you -print >> find_you.dat 2>> error_find_you.dat
Figure 1.4  Redirecting both output channels into separate files
If, on the other hand, you want both standard output and standard error written to the same file, you can couple the two channels with the syntax 2>&1. This merges channel 2 (standard error) into channel 1 (standard output). Applied to our example it looks like this:
you@host > find / -user you -print > find_you_report.dat 2>&1
This also solves a problem with the pager of your choice (less, more ...). Just try the following:
you@host > find / -user you -print | more
Normally you use a pager (here more) to scroll comfortably up and down through the standard output, so that longer output does not just fly past. Here, however, this does not work well, because standard error keeps interfering with error messages. You get this under control simply by merging the two outputs.
you@host > find / -user you -print 2>&1 | more
Alternatively, you can send standard error to the data grave (/dev/null).
you@host > find / -user you -print 2>/dev/null | more
### 1.10.3 Redirecting Input
Standard input (channel 0), which normally comes from the keyboard, can be redirected as well. For this, the character < is used after a command or script. Take the earlier example in which all files owned by the user you were written to the file find_tot.dat. It probably lists an endless number of files, and you are looking for one whose name you no longer remember exactly. Was it something with "audio"? Just ask grep:
you@host > grep audio < find_tot.dat /dev/audio /dev/audio0 /dev/audio1 /dev/audio2 /dev/audio3 /home/tot/cd2audio.txt /home/you/cd2audio.txt /home/you/Documents/Claudio.dat /var/audio /var/audio/audio.txt
Here we simply feed grep the file find_tot.dat via standard input, and we find what we were looking for (see Figure 1.5).
Figure 1.5  Redirecting standard input (cmd < file)
If you want to save grep's output in another file, you can add a redirection of standard output as well.
you@host > grep audio < find_you.dat > audio.dat you@host > cat audio.dat /dev/audio /dev/audio0 /dev/audio1 /dev/audio2 /dev/audio3 /home/tot/cd2audio.txt /home/tot/Documents/Claudio.dat /var/audio /var/audio/audio.txt
The standard output of grep, which received its input from the redirected standard input (here the file find_you.dat), was thus passed on into the file audio.dat, as the subsequent cat confirms (see Figure 1.6).
Figure 1.6  Redirecting standard input and standard output
Table 1.4 shows all redirections at a glance:

Table 1.4  The redirections of a shell

| Channel | Syntax | Description |
| --- | --- | --- |
| 1 (standard output) | cmd > file | Redirect standard output into a file |
| 1 (standard output) | cmd >> file | Append standard output to the end of a file |
| 2 (standard error) | cmd 2> file | Redirect standard error into a file |
| 2 (standard error) | cmd 2>> file | Append standard error to the end of a file |
| 1 (standard output), 2 (standard error) | cmd > file 2>&1 | Redirect standard error and standard output into the same file |
| 1 (standard output), 2 (standard error) | cmd > file 2> file2 | Redirect standard error and standard output into separate files |
| 0 (standard input) | cmd < file | Redirect a file into the standard input of a command |
### 1.10.4 Pipes
Let us stick with the scenario of the previous example. The following approach was used to find all files of the user you whose names contain the string "audio":
you@host > find / -user you -print > find_you.dat 2>/dev/null you@host > grep audio < find_you.dat /dev/audio /dev/audio0 /dev/audio1 /dev/audio2 /dev/audio3 /home/tot/cd2audio.txt /home/you/cd2audio.txt /home/you/Documents/Claudio.dat /var/audio /var/audio/audio.txt
With a pipe you can shorten the whole thing and do not even need a file to be created. Here is the example with a pipe, which achieves exactly the same effect:
you@host > find / -user you -print 2>/dev/null | grep audio /dev/audio /dev/audio0 /dev/audio1 /dev/audio2 /dev/audio3 /home/tot/cd2audio.txt /home/you/cd2audio.txt /home/you/Documents/Claudio.dat /var/audio /var/audio/audio.txt
Pipes are created by connecting several commands with the character |. The standard output of the first command is connected to the standard input of the second command. Note that standard error is not affected by this (see Figure 1.7).
Figure 1.7  Connecting several commands via a pipe
The number of pipes you can chain is unlimited. If you do not want to know which files containing the string "audio" exist and where, but only how many there are on your system, just pipe the previous command into wc with the option -l (for lines):
you@host > find / -user you -print 2>/dev/null | grep audio | wc -l
9
### 1.10.5 A T-Piece with tee
If you want the standard output of a command or shell script to go to the screen and into one file, or even several files, at the same time, the command tee is the tool of choice.
you@host > du -bc | sort -n | tee februar05.log 8 ./bin 48 ./.dia/objects 48 ./.dia/shapes 48 ./.dia/sheets 48 ./.gconf 48 ./.gnome ... 1105091 ./OpenOffice.org1.1/user 1797366 ./OpenOffice.org1.1 1843697 ./.thumbnails/normal 1944148 ./.thumbnails 32270848 . 32270848 insgesamt
In the example the disk usage is printed sorted by bytes and, thanks to tee, also written to the file februar05.log. You can of course write to more than one file:
you@host > du -bc | sort -n | tee februar05.log year05.log 8 ./bin 48 ./.dia/objects 48 ./.dia/shapes 48 ./.dia/sheets 48 ./.gconf 48 ./.gnome ... 1105091 ./OpenOffice.org1.1/user 1797366 ./OpenOffice.org1.1 1843697 ./.thumbnails/normal 1944148 ./.thumbnails 32270848 . 32270848 insgesamt
Especially if you want to build up a cumulative yearly report from log files, you should use tee with the option -a (append). The output of the command or shell script is then appended to the file(s) instead of overwriting them as before (see Figure 1.8).
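Applied to the example above, the call would look like this (same file names as before):

> you@host > du -bc | sort -n | tee -a februar05.log year05.log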
Figure 1.8  The tee command in action
### 1.10.6 Wildcards
Wildcards are characters (or sequences of characters) in a word that the shell replaces with other strings. The shell uses the characters * ? [ ] for this. If the shell encounters one of these characters while analysing the command line, the interpreter assumes it is a placeholder for other characters. The shell then searches, for example, the directory for file names that match this pattern (according to the rules defined by the shell) and replaces the wildcards on the command line with the list of file names found. In technical jargon this is called filename expansion or globbing.

Note   This name expansion takes place before the command is executed. The command itself knows nothing about it; its arguments are already the expanded file names.
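You can observe this, and also prevent it, with quoting: a quoted pattern reaches the command unexpanded. A short sketch with arbitrary file names:

> you@host > echo *.txt
> text1.txt text2.txt text3.txt
> you@host > echo "*.txt"
> *.txt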
# Any sequence of characters: *
The character * stands for any sequence of characters in the file name. The asterisk was already in use back in DOS days and is the most frequently used wildcard character. A simple example:
you@host > grep Backup /home/tot/Mails/*.txt
This searches all files ending in .txt in the directory /home/tot/Mails for the word "Backup". You can place the * almost anywhere in the search word. Some examples:
you@host > ls *ript myscript you@host > ls *ript* myscript scripter you@host > ls ript* /bin/ls: ript*: Datei oder Verzeichnis nicht gefunden you@host > ls text*.txt text1.txt text2.txt text3.txt texta.txt textb.txt textc.txt textdatei_test.txt
If none of the files matches the desired pattern, either an error message appropriate to the command is printed or (with echo, for example) the pattern string is returned unchanged.
# A single arbitrary character: ?
In contrast to *, the metacharacter ? is a placeholder for exactly one arbitrary character. Wherever ? is used, there must be one (and really only one) arbitrary character at that position for the pattern to match. As a demonstration, here is a comparison that first uses the metacharacter * and then, with the same arguments, ?:
you@host > ls datei*.dat datei1.dat datei1b.dat datei1c.dat datei2.dat datei2b.dat datei_backup.dat datei1.dat~ you@host > ls datei?.dat datei1.dat datei2.dat you@host > ls datei1.dat? datei1.dat~
# Specifying character ranges
Most of the time the metacharacters * and ? will satisfy your needs. Still, the specification of a file name can be refined further: you can also define which characters are acceptable for a match. For this the square brackets ([ ]) are used; everything inside them is accepted as a valid character of the pattern.
you@host > ls datei*.txt datei1a.txt datei1b.txt datei1.txt datei2a.txt datei2.txt datei3.txt you@host > ls datei[12].txt datei1.txt datei2.txt you@host > ls datei[12]*.txt datei1a.txt datei1b.txt datei1.txt datei2a.txt datei2.txt you@host > ls datei[123].txt datei1.txt datei2.txt datei3.txt you@host > ls datei[1-3].txt datei1.txt datei2.txt datei3.txt you@host > ls datei[12][a].txt datei1a.txt datei2a.txt you@host > ls datei[12][b].txt datei1b.txt you@host > ls datei[!12].txt datei3.txt
As the example shows, instead of listing every possible character inside the square brackets you can also specify a range with a minus sign, provided the characters form a contiguous run in the ASCII table. For instance, datei[0-9].txt matches the file names datei0.txt through datei9.txt. If the number has two digits, you can use either datei[0-9][0-9].txt or datei[0-9]*.txt; the two versions expand differently, since the first allows exactly two decimal digits and the second allows any number of further characters.

If you only want file names that contain a letter after the decimal digit, you can use datei[0-9][a-z].txt. Of course you can put several ranges (each contiguous in the ASCII table) into one pair of brackets: datei[a-cg-j1-3].txt, for example, matches names that have one of the characters "a" to "c", "g" to "j" or 1 to 3 at that position.

If you want to exclude individual characters or a range of characters inside the square brackets, you can negate the expansion with !. For example, datei[!12].txt matches everything whose character at that position is not 1 or 2 (as seen in the example).

You can also combine these bracket expressions with the metacharacters * and ? (though not inside the square brackets).
If you want to watch filename expansion in action, you can set the debugging option -x with set. You will then see the lines after the expansion has been performed (the same works with the metacharacters * and ?, of course), for example:
you@host > set -x you@host > ls datei[12].txt [DEBUG] /bin/ls datei1.txt datei2.txt datei1.txt datei2.txt you@host > ls datei[!12].txt [DEBUG] /bin/ls datei3.txt datei3.txt you@host > ls datei[1-3]*.txt [DEBUG] /bin/ls datei10.txt datei1a.txt datei1b.txt datei1.txt datei2a.txt datei2.txt datei3.txt datei10.txt datei1b.txt datei2a.txt datei3.txt datei1a.txt datei1.txt datei2.txt
The Bash and the Korn shell additionally offer predefined character classes for such patterns. If you only want to allow upper- and lowercase letters, you can write [:alpha:] instead of [a-zA-Z]; for decimal digits you can use [:digit:] instead of [0-9]. In practice it looks like this:
you@host > ls datei[[:digit:]].txt datei1.txt datei2.txt datei3.txt you@host > ls datei[[:digit:]][[:alpha:]].txt datei1a.txt datei1b.txt datei2a.txt
Of course you can negate the value in the square brackets with ! here as well. Table 1.5 lists the character classes you can use in the square brackets as alternatives to the usual notation.
Table 1.5  Alternatives to the most common character sets

| Character class | Meaning |
| --- | --- |
| [:alnum:] | Letters and decimal digits |
| [:alpha:] | Upper- and lowercase letters |
| [:digit:] | Decimal digits |
| [:lower:] | Lowercase letters |
| [:upper:] | Uppercase letters |
| [:print:] | Printable characters only |
| [:space:] | Whitespace such as space and tab |
Tip   In the Bash you can have the hidden dot files included without naming the dot explicitly. Simply enable the dotglob shell option with shopt -s dotglob; shopt -u dotglob switches it off again.
### 1.10.7 Brace Expansion (Bash and Korn shell only)
Brace expansion gives you yet another way to formulate patterns. This feature is not available in the Bourne shell. The principle is simple: you write alternative expressions, separated by commas, between curly braces. The syntax is:
prefix{pattern1,pattern2}suffix
The shell evaluates this pattern as follows: the prefix is placed in front of each string inside the curly braces and the suffix is appended after it. It is easier than it sounds: from the syntax above the shell generates the names prefixpattern1suffix and prefixpattern2suffix. Of course you can put further alternatives between the braces.
If, for example, you want to create the files prozess.dat, process.dat and progress.dat in one go, you can use brace expansion like this:
you@host > touch pro{z,c,gr}ess.dat you@host > ls *.dat process.dat progress.dat prozess.dat
Brace expansions can also be nested. To create the file names dateiorginal.txt, dateiorginal.bak, dateikopie.txt and dateikopie.bak with one brace expansion, you would write:
you@host > touch datei{orginal{.bak,.txt},kopie{.bak,.txt}} you@host > ls datei* dateikopie.bak dateikopie.txt dateiorginal.bak dateiorginal.txt
You can, of course, also throw in any of the wildcards you have met so far and stage a veritable wildcard orgy:
you@host > ls datei{*{.bak,.txt}} dateikopie.bak dateikopie.txt dateiorginal.bak dateiorginal.txt
### 1.10.8 Pattern Alternatives (Bash and Korn shell only)
There is yet another way to describe alternative patterns for names. It is used more for pattern matching on strings than for filename expansion, but it can be used for the latter as well. To make these pattern alternatives available in the Bash, you have to enable the extglob option with shopt (short for shell option): shopt -s extglob.
you@host > ls *.dat process.dat progress.dat prozess.dat you@host > shopt -s extglob you@host > ls @(prozess|promess|process|propan).dat process.dat prozess.dat you@host > ls !(prozess|promess|process|propan).dat progress.dat
Table 1.6 summarizes the available pattern alternatives:

Table 1.6  Pattern alternatives in the Bash and Korn shell

| Pattern alternative | Meaning |
| --- | --- |
| @(pattern1 \| pattern2 \| ... \| patternN) | Exactly one of the patterns |
| !(pattern1 \| pattern2 \| ... \| patternN) | None of the patterns |
| +(pattern1 \| pattern2 \| ... \| patternN) | At least one of the patterns |
| ?(pattern1 \| pattern2 \| ... \| patternN) | None or one of the patterns |
| *(pattern1 \| pattern2 \| ... \| patternN) | None, one or several of the patterns |
### 1.10.9 Tilde Expansion (Bash and Korn shell only)
The tilde character ~ is interpreted by the Bash and the Korn shell as the home directory of the current user. Here are some typical uses of the tilde in these two shells:
you@host > pwd /home/you you@host > cd /usr you@host > echo ~ /home/you you@host > cd ~ you@host > pwd /home/you you@host > echo ~- /usr you@host > echo ~+ /home/you you@host > cd ~- you@host > echo ~- /home/you you@host > cd ~- you@host > pwd /home/you you@host > echo ~tot /home/tot you@host > mkdir ~/new_dir
Table 1.7 shows the individual tilde expansions and their meaning.
Table 1.7  Tilde expansions

| Notation | Meaning |
| --- | --- |
| ~ | Home directory of the logged-in user |
| ~USERNAME | Home directory of the given user |
| ~- | Previously visited directory |
| ~+ | Current working directory (like pwd) |
## 1.10 Data streams
Figure 1.1  The standard data streams of a shell
Relatively rarely, however, do you actually want to produce output on the screen. Especially when hundreds of lines of log data are involved, you usually prefer to store them in a file. The easiest way to do this is to redirect the output.
### 1.10.1 Redirecting output
The standard output of a command or a shell script is redirected into a file by placing the character > after the command or script. Alternatively, you can address channel 1 explicitly with the syntax 1>, but this is not necessary, because the redirection character > without any further specification automatically refers to standard output. For example:
> you@host > du -b /home > home_size.dat you@host > ./ascript > output_of_ascript.dat
With du -b /home you find out how many bytes the individual directories below /home occupy. Since the output can become quite long, it is simply redirected into a file called home_size.dat, which you can then view with any text editor. If this file does not exist yet, it is created. The same happens with the second command: here it is assumed that a script called ascript exists, and the output of this script is written to a file called output_of_ascript.dat (see Figure 1.2).
Figure 1.2  Redirecting standard output (cmd > file)
As already mentioned, you could in theory also address the standard output channel by its number:
> you@host > du -b /home 1> home_size.dat
This notation, however, offers no advantage over a plain >.
Unfortunately, redirecting output into a file this way has the drawback that on a renewed call of
> you@host > du -b /home > home_size.dat
the existing data is mercilessly overwritten. The remedy is output redirection with >>. The output is still redirected into the corresponding file, with the difference that the data is now always appended to the end of the file. If the file does not exist yet, it is created anyway.
> you@host > du -b /home >> home_size.dat
Now any further data is always appended to the end of home_size.dat.
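The difference between the two operators is easy to demonstrate; a small sketch (the file name size_demo.log is made up):

you@host > echo "first run" > size_demo.log
you@host > echo "second run" > size_demo.log
you@host > cat size_demo.log
second run
you@host > echo "third run" >> size_demo.log
you@host > cat size_demo.log
second run
third run

The second > silently discarded the first line, while >> preserved the existing content.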
### 1.10.2 Redirecting standard error
Just as you can redirect standard output, you can also redirect standard error. You already saw something similar a few pages earlier with
> you@host > find / -user you -print 2>/dev/null
This printed all files owned by the user »you« to the screen. Error messages such as »Permission denied« are normally written to standard error; in the example, this output was redirected to /dev/null, the data grave of the system.
Of course, you can also redirect the complete standard error output (recognizable by the syntax 2>, i.e. channel 2) of a command or a script into a file:
> you@host > find / -user you -print 2> error_find_you.dat
This writes all error messages to the file error_find_you.dat. If the file does not exist yet, it is created. If it does exist, however, its old content is completely overwritten. If you do not want that, you can, just as with standard output, append standard error to an existing file with >>, only now on channel 2 (see Figure 1.3).
> you@host > find / -user you -print 2>> error_find_you.dat
You can also redirect standard output and standard error into two different files. This is done quite frequently, because you often do not have the time or the overview to monitor all the output (see Figure 1.4).
Figure 1.3  Redirecting standard error (cmd 2> file)
> you@host > find / -user you -print >> find_you.dat 2>> \ error_find_you.dat
Figure 1.4  Redirecting both output channels into separate files
If, on the other hand, you want both standard output and standard error written to one file, you can couple the two channels. This is done with the syntax 2>&1, which joins channel 2 (standard error) with channel 1 (standard output). Applied to our example, it looks like this:
> you@host > find / -user you -print > find_you_report.dat 2>&1
Incidentally, this is also how you solve a problem with the pager of your choice (less, more ...). Simply try the following:
> you@host > find / -user you -print | more
Normally you use a pager (more in this example) to scroll comfortably up and down through the standard output, so that longer output does not just fly past you. Here, however, this does not quite work, because the standard error output keeps interfering with error messages. You get this under control simply by joining the two outputs.
> you@host > find / -user you -print 2>&1 | more
Alternatively, you can send the standard error output to the data grave (/dev/null).
> you@host > find / -user you -print 2>/dev/null | more
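Note that the order of the redirections matters, because 2>&1 duplicates whatever standard output points to at that moment. A sketch, assuming ascript is a script that writes to both channels:

./ascript > all.log 2>&1    # stdout goes to all.log first, then stderr joins it there
./ascript 2>&1 > all.log    # stderr joins the terminal first, then only stdout is redirected

In the second variant the error messages still appear on the screen.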
### 1.10.3 Redirecting input
Redirecting standard input (channel 0), which normally comes from the keyboard, can be achieved as well. Here the character < is used after a command or a script. Let us take the example in which all files owned by »you« were to be found and written to the file find_tot.dat. It probably contains an endless number of files, and you are looking for a particular one whose name you no longer remember exactly. Was it something with »audio«? Just ask grep:
> you@host > grep audio < find_tot.dat /dev/audio /dev/audio0 /dev/audio1 /dev/audio2 /dev/audio3 /home/tot/cd2audio.txt /home/you/cd2audio.txt /home/you/Documents/Claudio.dat /var/audio /var/audio/audio.txt
Here we simply feed the file find_tot.dat to grep via standard input and find what we are looking for (see Figure 1.5).
Figure 1.5  Redirecting standard input (cmd < file)
If you want to save the output of grep in another file, you can additionally add a redirection of the standard output.
> you@host > grep audio < find_you.dat > audio.dat you@host > cat audio.dat /dev/audio /dev/audio0 /dev/audio1 /dev/audio2 /dev/audio3 /home/tot/cd2audio.txt /home/tot/Documents/Claudio.dat /var/audio /var/audio/audio.txt
This way you have forwarded the standard output of grep, which received your query from the redirected standard input (here the file find_you.dat), into the file audio.dat, which the subsequent cat output confirms (see Figure 1.6).
Figure 1.6  Redirecting standard input and standard output
Table 1.4 shows all redirections at a glance:
| Channel | Syntax | Description |
| --- | --- | --- |
| 1 (standard output) | cmd > file | Redirect standard output into a file |
| 1 (standard output) | cmd >> file | Append standard output to the end of a file |
| 2 (standard error) | cmd 2> file | Redirect standard error into a file |
| 2 (standard error) | cmd 2>> file | Append standard error to the end of a file |
| 1 (standard output), 2 (standard error) | cmd > file 2>&1 | Redirect standard error and standard output into the same file |
| 1 (standard output), 2 (standard error) | cmd > file 2> file2 | Redirect standard output and standard error into separate files |
| 0 (standard input) | cmd < file | Redirect a file into the standard input of a command |
### 1.10.4 Pipes
Let us stay with the scenario from the previous example. The following approach was chosen to find all files of the user »you« whose names contain the string »audio«:
> you@host > find / -user you -print > find_you.dat 2>/dev/null you@host > grep audio < find_you.dat /dev/audio /dev/audio0 /dev/audio1 /dev/audio2 /dev/audio3 /home/tot/cd2audio.txt /home/you/cd2audio.txt /home/you/Documents/Claudio.dat /var/audio /var/audio/audio.txt
With a pipe you can shorten the whole thing even further and do not even need a file to be created for it. Here is the example with a pipe, achieving the same effect as the example above:
> you@host > find / -user you -print 2>/dev/null | grep audio /dev/audio /dev/audio0 /dev/audio1 /dev/audio2 /dev/audio3 /home/tot/cd2audio.txt /home/you/cd2audio.txt /home/you/Documents/Claudio.dat /var/audio /var/audio/audio.txt
Pipes are created when several commands are connected with the | character. The standard output of the first command is always connected to the standard input of the second command. Note, however, that standard error is not involved here (see Figure 1.7).
Figure 1.7  Connecting several commands via a pipe
The number of pipes you can string together is unlimited. If, for example, you do not want to know which files with the string »audio« exist and where, but only how many of them are on your system, you just have to pipe the previous command into the wc command with the -l option (for lines).
> you@host > find / -user you -print 2>/dev/null | \ grep audio | wc -l 9
### 1.10.5 A T-piece with tee
If you want the standard output of a command or shell script to appear on the screen and be written to a file, or even to several files, at the same time, the tee command is the tool of choice.
> you@host > du -bc | sort -n | tee februar05.log 8 ./bin 48 ./.dia/objects 48 ./.dia/shapes 48 ./.dia/sheets 48 ./.gconf 48 ./.gnome ... 1105091 ./OpenOffice.org1.1/user 1797366 ./OpenOffice.org1.1 1843697 ./.thumbnails/normal 1944148 ./.thumbnails 32270848 . 32270848 insgesamt
In the example, the disk usage is printed sorted by bytes and, thanks to tee, also written to the file februar05.log. Of course, you can also write to more than one file here.
> you@host > du -bc | sort -n | tee februar05.log year05.log 8 ./bin 48 ./.dia/objects 48 ./.dia/shapes 48 ./.dia/sheets 48 ./.gconf 48 ./.gnome ... 1105091 ./OpenOffice.org1.1/user 1797366 ./OpenOffice.org1.1 1843697 ./.thumbnails/normal 1944148 ./.thumbnails 32270848 . 32270848 insgesamt
Especially when you want to build up a regular yearly report from log files, you should use tee with the -a option (append). The output of a command or shell script is then appended to the file(s) instead of being overwritten as before (see Figure 1.8).
Figure 1.8  The tee command in action
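If you only want the files and not the screen copy, you can combine tee -a with the redirections from the previous sections; a sketch (the log file names are again only examples):

# append this month's report to both log files instead of overwriting them,
# and discard the copy that tee would normally print to the screen
du -bc | sort -n | tee -a februar05.log year05.log > /dev/null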
### 1.10.6 Wildcards
Wildcards are certain characters or character sequences within a word that the shell replaces by another string. The shell uses the characters * ? [ ] for this. That is, whenever the shell encounters one of these characters while parsing the command line, the interpreter assumes that it is a placeholder for other characters. The shell then searches, for example, the directory for file names that match this pattern (according to the rules defined by the shell) and replaces the wildcards on the command line with the list of file names found. In technical jargon this is called file name expansion or globbing.
# An arbitrary character sequence: *
The character * stands for an arbitrary sequence of characters in the file name. The asterisk was already in use back in the DOS days and is the most frequently used wildcard character. A simple example:
> you@host > grep Backup /home/tot/Mails/*.txt
Here all files with the extension ».txt« in the directory /home/tot/Mails are searched for the word »Backup«. The position of the * character within the pattern can be chosen almost freely. Here are some examples:
> you@host > ls *ript myscript you@host > ls *ript* myscript scripter you@host > ls ript* /bin/ls: ript*: Datei oder Verzeichnis nicht gefunden you@host > ls text*.txt text1.txt text2.txt text3.txt texta.txt textb.txt textc.txt textdatei_test.txt
If none of the files matches the desired pattern, either an error message appropriate to the command is returned or (for example with echo) the pattern string itself is returned unchanged.
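You can observe this behaviour directly with echo; here it is assumed that no file matching *.xyz exists in the current directory:

you@host > echo *.xyz
*.xyz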
# A single arbitrary character: ?
In contrast to the character *, the metacharacter ? is used as a placeholder for exactly one arbitrary character. That is, where ? is used, exactly one (and really only one) arbitrary character must be present at that position for the pattern to match. For demonstration, here is a comparison that first uses the metacharacter * and immediately afterwards, with the same arguments, the character ?:
> you@host > ls datei*.dat datei1.dat datei1b.dat datei1c.dat datei2.dat datei2b.dat datei_backup.dat datei1.dat~ you@host > ls datei?.dat datei1.dat datei2.dat you@host > ls datei1.dat? datei1.dat~
# Specifying character ranges
Most of the time you will probably be satisfied with the metacharacters * and ? and get where you want to go. Nevertheless, the specification of a file name can be refined further: you can additionally define which characters qualify for the substitution. The square brackets ([ ]) are used for this. Everything inside them is accepted as a valid character of the pattern at that position.
> you@host > ls datei*.txt datei1a.txt datei1b.txt datei1.txt datei2a.txt datei2.txt datei3.txt you@host > ls datei[12].txt datei1.txt datei2.txt you@host > ls datei[12]*.txt datei1a.txt datei1b.txt datei1.txt datei2a.txt datei2.txt you@host > ls datei[123].txt datei1.txt datei2.txt datei3.txt you@host > ls datei[1-3].txt datei1.txt datei2.txt datei3.txt you@host > ls datei[12][a].txt datei1a.txt datei2a.txt you@host > ls datei[12][b].txt datei1b.txt you@host > ls datei[!12].txt datei3.txt
As you can see in the example, instead of listing every possible character inside the square brackets, you can also specify a range with a minus sign, provided the characters form a contiguous range in the ASCII table. With datei[0-9].txt, for example, all file names from 0 to 9 match the pattern (datei1.txt, datei2.txt ... datei9.txt). If the value has two digits, you can use either datei[0-9][0-9].txt or datei[0-9]*.txt. Note that the two versions again produce different expansions, because the first allows exactly two decimal digits while the second allows any number of further characters.
If, on the other hand, you only want file names that contain a letter after the decimal digit, you can use datei[0-9][a-z].txt. Of course, you can combine several ranges inside the square brackets (each of them must be contiguous according to the ASCII table). datei[a-cg-j1-3].txt, for example, returns all names in which that position contains one of the characters »a« to »c«, »g« to »j« or 1 to 3.
If, in contrast, you want to exclude individual characters or a sequence of characters inside the square brackets, you can negate the expansion with !. datei[!12].txt, for example, returns everything that does not contain 1 or 2 at that position (as seen in the example).
Furthermore, you can extend such expansions with the metacharacters * and ? (though not inside the square brackets).
If you want to watch file name expansion in action, you can switch on the debugging option -x with set. You will then see each line after the file name expansion has been performed (the same works, of course, with the metacharacters * and ?), for example:
> you@host > set -x you@host > ls datei[12].txt [DEBUG] /bin/ls datei1.txt datei2.txt datei1.txt datei2.txt you@host > ls datei[!12].txt [DEBUG] /bin/ls datei3.txt datei3.txt you@host > ls datei[1-3]*.txt [DEBUG] /bin/ls datei10.txt datei1a.txt datei1b.txt datei1.txt datei2a.txt datei2.txt datei3.txt datei10.txt datei1b.txt datei2a.txt datei3.txt datei1a.txt datei1.txt datei2.txt
The Bash and the Korn shell additionally offer predefined character classes as patterns. If you want to allow only upper- and lowercase letters, you can use [:alpha:] instead of [a-zA-Z]. If only decimal digits are allowed, [:digit:] can be used instead of [0-9]. In practice this looks as follows:
> you@host > ls datei[[:digit:]].txt datei1.txt datei2.txt datei3.txt you@host > ls datei[[:digit:]][[:alpha:]].txt datei1a.txt datei1b.txt datei2a.txt
Of course, here too you can negate the value inside the square brackets with !. Table 1.5 lists the character classes that you can use inside the square brackets as an alternative to the usual notation.
| Character class | Meaning |
| --- | --- |
| [:alnum:] | Letters and decimal digits |
| [:alpha:] | Upper- and lowercase letters |
| [:digit:] | Decimal digits |
| [:lower:] | Lowercase letters |
| [:upper:] | Uppercase letters |
| [:print:] | Printable characters only |
| [:space:] | Whitespace (blank, tab ...) |
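These character classes are not limited to file name expansion; they also work in the pattern of a case statement (a construct covered later in the book). A minimal sketch:

#!/bin/sh
# Hypothetical sketch: classify the first character of the first argument
case $1 in
    [[:digit:]]*) echo "starts with a digit" ;;
    [[:upper:]]*) echo "starts with an uppercase letter" ;;
    *)            echo "starts with something else" ;;
esac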
# Hidden files
Files whose names begin with a dot (hidden files) are not matched by the wildcards *, ? and [ ]. If you want to include them, the leading dot has to be written explicitly in the pattern.
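A short demonstration (the directory is assumed to contain nothing else):

you@host > touch .hidden visible
you@host > echo *
visible
you@host > echo .*
. .. .hidden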
### 1.10.7 Brace extension (Bash and Korn shell only)
Brace extension gives you yet another way of formulating patterns. This feature is not available in the Bourne shell. The principle is quite simple: in a valid brace extension you write alternative expressions between curly braces, separated by commas. The syntax is:
> prefix{ pattern1, pattern2 }suffix
The shell evaluates this pattern as follows: the prefix is placed in front of each string inside the curly braces and the suffix is appended after it. It is simpler than it sounds: from the syntax description above, the shell generates, for example, the names prefixpattern1suffix and prefixpattern2suffix. Of course, further alternative expressions can be placed inside the curly braces.
If, for example, you want to create several files such as prozess.dat, process.dat and progress.dat in one go, you could use brace extension like this:
> you@host > touch pro{z,c,gr}ess.dat you@host > ls *.dat process.dat progress.dat prozess.dat
Brace extensions can also be nested. If, for example, you want to create the file names dateiorginal.txt, dateiorginal.bak, dateikopie.txt and dateikopie.bak with a single brace extension, it is done like this:
> you@host > touch datei{orginal{.bak,.txt},kopie{.bak,.txt}} you@host > ls datei* dateikopie.bak dateikopie.txt dateiorginal.bak dateiorginal.txt
Of course, you can also stage a wild wildcard orgy here and combine this with all the wildcards you have learned about so far.
> you@host > ls datei{*{.bak,.txt}} dateikopie.bak dateikopie.txt dateiorginal.bak dateiorginal.txt
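Brace extension is also handy for creating whole directory trees in one go; a sketch (the project layout is made up):

you@host > mkdir -p project/{src,doc,test}
you@host > ls project
doc  src  test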
### 1.10.8 Pattern alternatives (Bash and Korn shell only)
There is yet another way to describe pattern alternatives for names. It is used more for pattern matching on strings than for file name expansion, but it can be used for the latter as well. To make these pattern alternatives available in the Bash, you have to enable the extglob option with shopt (short for shell option): shopt -s extglob.
> you@host > ls *.dat process.dat progress.dat prozess.dat you@host > shopt -s extglob you@host > ls @(prozess|promess|process|propan).dat process.dat prozess.dat you@host > ls !(prozess|promess|process|propan).dat progress.dat
Table 1.6 summarizes the individual forms of such pattern alternatives:
| Pattern alternative | Meaning |
| --- | --- |
| @(pattern1 | pattern2 | ... | patternN) | Exactly one of the patterns |
| !(pattern1 | pattern2 | ... | patternN) | None of the patterns |
| +(pattern1 | pattern2 | ... | patternN) | At least one of the patterns |
| ?(pattern1 | pattern2 | ... | patternN) | None or one of the patterns |
| *(pattern1 | pattern2 | ... | patternN) | None, one or several of the patterns |
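Combined with the usual wildcards, these alternatives become quite powerful; a sketch based on the .dat files from above:

you@host > shopt -s extglob
you@host > ls pro+(z|c)ess.dat      # at least one of z or c between "pro" and "ess"
process.dat  prozess.dat
you@host > ls !(*gress).dat         # every .dat file that does not end in "gress.dat"
process.dat  prozess.dat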
### 1.10.9 Tilde expansion (Bash and Korn shell only)
The tilde character ~ is expanded by the Bash and the Korn shell to the home directory of the current user. Here are some typical actions performed with the tilde character in the Bash and Korn shell:
> you@host > pwd /home/you you@host > cd /usr you@host > echo ~ /home/you you@host > cd ~ you@host > pwd /home/you you@host > echo ~- /usr you@host > echo ~+ /home/you you@host > cd ~- you@host > echo ~- /home/you you@host > cd ~- you@host > pwd /home/you you@host > echo ~tot /home/tot you@host > mkdir ~/new_dir
Table 1.7 shows how the individual tilde expansions are used and what they mean.
| Notation | Meaning |
| --- | --- |
| ~ | Home directory of the logged-in user |
| ~USERNAME | Home directory of the specified user |
| ~- | The directory visited previously |
| ~+ | The current working directory (like pwd) |
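A typical use is copying something from another user's home directory and then jumping back with ~-; a sketch (user name and paths are only examples):

you@host > cp ~tot/cd2audio.txt ~/Documents/
you@host > cd /var/log
you@host > cd ~-    # back to the directory you came from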
## 2.2 Numbers
If mathematics was never your strong suit, you have something in common with the shell (and with the author ;-)). The Bourne shell cannot perform even the simplest arithmetic calculations. In its defence, it has to be said that the inventor of the shell never intended to develop a complete programming language. The Bash and the Korn shell, on the other hand, do have simple arithmetic operations built in, although these too are limited to simple integer calculations. If floating-point numbers are needed, you have to switch to an external tool, for example the calculator bc.
### 2.2.1 Integer arithmetic (Bourne shell, Bash and Korn shell)
The Bash and the Korn shell do offer you considerably more convenient alternatives than the Bourne shell, but if you do not know exactly in which shell your script will be used, you are well advised to stick to the Bourne shell's integer arithmetic. Your scripts will then run in every shell, that is, in the Bourne and Korn shell as well as in the Bash.
For calculations in the Bourne shell, the old UNIX command expr is used. When calling this command, regular expressions, string expressions, arithmetic expressions or comparison expressions can be passed as parameters. You then receive the result of the expression on standard output.
> you@host > expr 8 / 2 4 you@host > expr 8 + 2 10
Of course, all this can also be used in a shell script:
> # Demonstrates the use of expr # for evaluating arithmetic expressions # Name : aexpr var1=100 var2=50 expr $var1 + $var2 expr $var1 - $var2
The script in action:
> you@host > ./aexpr 150 50
There is actually not much more to say about this. It is important to mention, however, that with expr the spaces between the individual operators and operands of an arithmetic expression are absolutely required!
If you have already tried further expr expressions, you will have run into problems with multiplication. Similar difficulties are likely with parentheses.
> you@host > expr 8 * 5 expr: Syntaxfehler you@host > expr ( 5 + 5 ) * 5 bash: syntax error near unexpected token `5'
The problem here is that the shell treats a * as file name expansion and would happily open another subshell for the parentheses. To disable both, you only have to put a backslash in front of the critical characters.
> you@host > expr 8 \* 5 40 you@host > expr \( 5 + 5 \) \* 5 50
If you want to store the result of a calculation in a variable, you need command substitution. What exactly that is will be explained in Section 2.4, but here is already how you can use it:
> # Demonstrates how to store the result of an arithmetic expr expression # in a variable # Name : aexpr2 var1=100 var2=50 # This is called command substitution var3=`expr $var1 + $var2` echo $var3
By enclosing the expression to the right of the equals sign (here with two backquotes), what the expr statement would normally print on the screen is inserted at the corresponding position of the assignment and assigned to the variable on the left.
> var3=`expr $var1 + $var2`
expr is unlikely to leave anyone really satisfied, because its use remains restricted to integers and the arithmetic operators +, -, *, / and %. As an alternative, and as a certain relief, it is worth working with the UNIX command bc, which is covered below in Section 2.2.3.
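A classic use of expr in portable scripts is incrementing a loop counter via command substitution; a minimal sketch (the while loop is covered later in the book):

#!/bin/sh
# Hypothetical sketch: counting from 1 to 5 the Bourne shell way
i=1
while [ "$i" -le 5 ]
do
    echo "pass $i"
    i=`expr $i + 1`    # the output of expr becomes the new value of i
done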
### 2.2.2 Integer arithmetic (Bash and Korn shell only)
Of course the Bash and the Korn shell can do this better; after all, each of these shells is an extension of the Bourne shell, and the Bash can do just about everything anyway. In any case, several options are available.
> you@host > ((z=5+5)) you@host > echo $z 10
Double parentheses were used here to tell the shell that this is a mathematical expression. In the example you defined a variable »z«, which receives the value of 5 + 5. You can now access the content of »z« at any time. Besides this form, you can also use another notation, which has exactly the same effect.
> you@host > y=$((8+8)) you@host > echo $y 16 you@host > echo $((z+y)) 26
And last but not least, the Bash has a special form of integer arithmetic with square brackets built in.
> you@host > w=$[z+y] you@host > echo $w 26 you@host > w=$[(8+2)*4] you@host > echo $w 40
Personally I like this version with the square brackets best, but that is of little use if your script also has to run in the Korn shell. And it is obvious that the double parentheses are not exactly easy to read. That is why the Korn shell and the Bash also provide the builtin command let.
# let
The let command evaluates arithmetic expressions. Its arguments are expressions joined by operators.
> you@host > let z=w+20 you@host > echo $z 60
Apart from the way it is written, however, there is no difference between let and the double parentheses. The use of let in the example above corresponds exactly to the following notation:
> you@host > z=$((w+20)) you@host > echo $z 60
let is merely a matter of convenience here. For illustration, here is let in a shell script:
> # Example demonstrating the let command # Name: alet varA=100 varB=50 let varC=$varA+$varB echo $varC
What stands out in this shell script is that the let command is only used for the actual arithmetic expression. You could also put let in front of the individual variable assignments, but that is basically not necessary.
If you want a single let call to evaluate several expressions at once, or if you want to sprinkle in spaces for better readability, each expression has to be enclosed in double quotes, as in
> you@host > let "a=5+5" "b=4+4" you@host > echo $a 10 you@host > echo $b 8
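let also understands the compound assignment operators such as += and *=, which is convenient for counters; a small sketch:

you@host > count=0
you@host > let "count += 1" "count *= 10"
you@host > echo $count
10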
# typeset
The builtin command typeset or declare can make your shell life a bit more comfortable here (and in many other cases, too). With both commands you can create a shell variable and/or set the attributes of a variable. If you use, for example, the -i option on variables, they are treated and marked as integer variables by the shell.
> you@host > typeset -i a b c you@host > b=10 you@host > c=10 you@host > a=b+c you@host > echo $a 20
Another thing that immediately catches the eye: thanks to the integer declaration, the $ sign can also be omitted when calculating with the integer variables.
> # Example with typeset # Name : atypeset typeset -i c a=5 b=2 c=a+b echo $c c=a*b echo $c c=\(a+b\)*2 echo $c
As with the let command, it is not necessary to declare all variables as integers, as you can see nicely in the shell script »atypeset«: it is sufficient to mark the result as an integer, and the shell takes care of the rest for you.
Note: With every form of integer arithmetic except expr, the asterisk is also recognized as an arithmetic operator and does not have to be neutralized with a backslash first. The parentheses, however, still have to be protected with a backslash.
Table 2.1  Arithmetic operators (Bash and Korn shell)
| Operator | Meaning |
| --- | --- |
| +, -, *, / | Addition, subtraction, multiplication, division |
| % | Remainder of an integer division (modulo operator) |
| var+=n var-=n var*=n var/=n var%=n | Short forms of +, -, *, /, %; the addition short form var+=n, for example, is equivalent to var=var+n |
| var<<n | Bitwise left shift by n positions |
| var>>n | Bitwise right shift by n positions |
| <<= >>= | Short forms of the left and right shift << and >> |
| var1&var2 | Bitwise AND |
| var1^var2 | Bitwise exclusive OR (XOR) |
| var1|var2 | Bitwise OR |
| ~ | Bitwise negation |
| &= ^= |= | Short forms of bitwise AND, exclusive OR and OR (&, ^ and |) |
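The shift and bit operators can be tried out directly with $(( ... )); a short sketch:

you@host > echo $(( 8 << 2 ))    # left shift: 8 * 4
32
you@host > echo $(( 8 >> 1 ))    # right shift: 8 / 2
4
you@host > echo $(( 12 & 10 )) $(( 12 | 10 )) $(( 12 ^ 10 ))
8 14 6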
### 2.2.3 bc - calculating with floating-point numbers and mathematical functions
Probably none of the solutions presented above struck you as particularly elegant. As an alternative, and definitely as a relief, the UNIX command bc suggests itself.
bc is a kind of desk calculator that expects arithmetic expressions line by line from standard input (the keyboard) and prints the result, again line by line, on standard output (the screen). In contrast to the related program dc, bc works with the familiar infix notation and does not require spaces between operators and operands. Moreover, the arithmetic operator * does not have to be protected from the shell, and the same goes for parentheses.
Strictly speaking, bc is not a command but rather a kind of shell with a precision calculation language of its own. Nevertheless, bc can be used quite well as a command in any shell: you slip it its arguments through a pipe. And compared with the calculations otherwise possible in the shell, bc has the advantage that it works with floating-point numbers.
Admittedly, using bc takes a little getting used to at first, but with regular use no major problems should arise. Here is the syntax:
> Variable=`echo "[scale=n ;] calculation" | bc [-l]`
The parts in square brackets are optional. With scale=n you can specify the precision (n) of the calculation. The calculation consists of expressions joined by operators; all the familiar arithmetic operators can be used (including exponentiation with ^ and taking a square root with sqrt()).
If bc is called with the -l option (for library), you can also access the functions of a mathematical library (math library). Table 2.2 lists some of these functions (x is replaced by a corresponding value or a variable).
Table 2.2  Additional mathematical functions of bc with the -l option
| Function | Meaning |
| --- | --- |
| s(x) | Sine of x |
| c(x) | Cosine of x |
| a(x) | Arctangent of x |
| l(x) | Natural logarithm of x |
| e(x) | Exponential function e to the power of x |
Explaining bc down to the last detail would be too much of a good thing, but at least it gives you a tool for all the important mathematical problems, which you can of course also use for integer arithmetic.
Here is a shell script with a few examples of how to use bc (including the math library).
> # Demonstrates the use of bc # Name : abc echo Rechnen mit bc varA=1.23 varB=2.34 varC=3.45 # Addition, precision of 2 decimal places gesamt=`echo "scale=2 ; $varA+$varB+$varC" | bc` echo $gesamt # Square root varSQRT=`echo "scale=5 ; sqrt($varA)" | bc` echo $varSQRT # Simple integer arithmetic varINT=`echo "(8+5)*2" | bc` echo $varINT # Trigonometric calculation with the math library -l varRAD=`echo "scale=10 ; a(1)/45" | bc -l` echo -e "1° = $varRAD rad" # Sine varSINUS45=`echo "scale=10 ; s(45*$varRAD)" | bc -l` echo "Der Sinus von 45° ist $varSINUS45"
The shell script in action:
> you@host > ./abc Rechnen mit bc 7.02 1.10905 26 1° = .0174532925 rad Der Sinus von 45° ist .7071067805
# Converting numbers with bc: bin/hex/dec
For converting numbers you can also rely on the bc command. bc provides two variables for this, ibase and obase: the i stands for input (the base of the input), the o for output (the desired base of the output), and base for the number base itself. Both variables accept a value from 2 to 16. The default value of both variables corresponds to our decimal system, i.e. 10. If you use ibase and obase together, you have to pay attention to the order when converting, say, from binary to decimal, because otherwise the result will be wrong (see the following example).
> # Demonstrates the conversion between different number systems # Name : aconvert varDEZ=123 echo $varDEZ # Dec2Hex var=`echo "obase=16 ; $varDEZ" | bc` echo $var # Dec2Oct var=`echo "obase=8 ; $varDEZ" | bc` echo $var # Dec2Bin var=`echo "obase=2 ; $varDEZ" | bc` echo $var # Bin2Dec - note the order of obase and ibase dez=`echo "obase=10 ; ibase=2 ; $var" | bc` echo $dez
The script in action:
> you@host > ./aconvert 123 7B 173 1111011 123
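Converting in the opposite direction works the same way; note that bc expects hexadecimal digits in uppercase. A sketch:

you@host > echo "ibase=16 ; 7B" | bc       # hex -> dec
123
you@host > echo "ibase=2 ; 1111011" | bc   # bin -> dec
123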
## 2.3 Strings
For working with strings, the old UNIX tools (provided they are available on the system) such as tr, cut, paste, sed and of course awk are still the first choice. If your script is supposed to run everywhere, you are always on the safe side with these tools. In contrast to the Bourne shell, the Bash and the Korn shell additionally offer some built-in functions for this purpose.
### 2.3.1 String processing
First, I would like to recommend the basic UNIX commands for string processing, which you can use in every shell, including the Bourne shell.
# Cutting with cut
If you have to extract (cut out) certain data fields from a file or from the output of a command, the cut command serves you very well. The syntax of cut looks like this:
cut -option file
If, for example, you used the command like this,
you@host > cut -c5 gedicht.txt
you would extract the fifth character from every line of the text file gedicht.txt. The -c option stands for character. If, on the other hand, you want to extract everything from the fifth character to the end of each line of a file, cut is used as follows:
you@host > cut -c5- gedicht.txt
If you want to cut everything from the fifth to the tenth character out of every line of the file, the command is used like this:
you@host > cut -c5-10 gedicht.txt
Of course, several individual characters and character ranges can also be extracted with cut. For this you have to put a comma between the individual characters or character ranges:
you@host > cut -c1,3,5,6,7-12,14 gedicht.txt
This would extract the first, third, fifth, sixth, seventh to twelfth and fourteenth character from every line.
The cut command is often used together with a pipe, i.e. cut receives its standard input from another command rather than from a file. The simplest example is the who command:
you@host > who sn pts/0 Feb 5 13:52 (p83.129.9.xxx.tisdip.tiscali.de) mm pts/2 Feb 5 15:59 (p83.129.4.xxx.tisdip.tiscali.de) kd10129 pts/3 Feb 5 16:13 (pd9e9bxxx.dip.t-dialin.net) you tty01 Feb 5 16:13 (console) you@host > who | cut -c1-8 sn mm kd10129 you
Here, for example, the users on the system (the first 8 characters) were extracted from the who command. If you want to extract the user name and the login time instead, that is no problem for cut either:
you@host > who | cut -c1-8,30-35 sn 13:52 mm 15:59 kd10129 16:13
cut does not let you down either when the data is not lined up as neatly as was just the case with who. Suppose the following is given:
you@host > cat datei.csv tot;15:21;17:32; you;18:22;23:12; kd10129;17:11;20:23;
This is meant to represent a login file showing which user was logged in from when until when. To extract specific data here, the options -d (short for delimiter) and -f (short for field) come in handy. In the example the delimiter is a semicolon (;). If, for example, you only want to extract the users (field 1) from this file, you proceed with cut like this:
you@host > cut -d\; -f1 datei.csv tot you kd10129
A backslash was used here to switch off the special meaning of the semicolon. If you want to extract the user and the logout time instead, you achieve this as follows:
you@host > cut -d\; -f1,3 datei.csv tot;17:32 you;23:12 kd10129;20:23
Note: If you use cut without the -d option, the tab character is used as the default delimiter.
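A classic real-world candidate for -d is /etc/passwd, which is delimited by colons; a sketch:

# print the login name (field 1) and the login shell (field 7) of every account
cut -d: -f1,7 /etc/passwd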
# Joining with paste
The counterpart of cut is the paste command (to paste = to glue), with which you can insert or join things line by line. The simplest example:
you@host > cat namen.txt john bert erni flip you@host > cat nummer.txt (123)12345 (089)234564 (432)4534 (019)311334 you@host > paste namen.txt nummer.txt > zusammen.txt you@host > cat zusammen.txt john (123)12345 bert (089)234564 erni (432)4534 flip (019)311334
Here the corresponding lines of the files namen.txt and nummer.txt were joined and the output redirected into the file zusammen.txt. Of course, you can also join more than two files line by line. If you want to use a different delimiter, a suitable character can be specified with the -d option.
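With -d you could, for example, glue the two files from above together with a semicolon instead of the default tab; a sketch:

you@host > paste -d\; namen.txt nummer.txt
john;(123)12345
bert;(089)234564
erni;(432)4534
flip;(019)311334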
# The tr filter
Another frequently used filter is tr, with which you can translate individual characters coming from standard input. The syntax:
tr from_chars to_chars
For the arguments »from_chars« and »to_chars« you can use a single character or several characters. Every character of »from_chars« that arrives on standard input is translated into the corresponding character of »to_chars«. As usual, the translation is written to standard output. The following poem will serve as an example:
you@host > cat gedicht.txt Schreiben, wollte ich ein paar Zeilen. Buchstabe für Buchstabe reihe ich aneinand. Ja, schreiben, meine Gedanken verweilen. Wort für Wort, es sind mir viele bekannt. Satz für Satz, geht es fließend voran. Aber um welchen transzendenten Preis? Was kostet mich das an Nerven, sodann? An Kraft, an Seele, an Geist und Fleiß? (von Brigitte Obermaier alias Miss Zauberblume)
If you now want to (rather pointlessly) replace every »a« in it with a »b«, it works like this:
you@host > tr a b < gedicht.txt Schreiben, wollte ich ein pbbr Zeilen. Buchstbbe für Buchstbbe reihe ich bneinbnd. Jb, schreiben, meine Gedbnken verweilen. Wort für Wort, es sind mir viele bekbnnt. Sbtz für Sbtz, geht es fließend vorbn. Aber um welchen trbnszendenten Preis? Wbs kostet mich dbs bn Nerven, sodbnn? An Krbft, bn Seele, bn Geist und Fleiß? (von Brigitte Obermbier blibs Miss Zbuberblume)
Now for a somewhat more sensible example. Remember the following cut command:
you@host > cut -d\; -f1,3 datei.csv tot;17:32 you;23:12 kd10129;20:23
The output here is not exactly pleasant to read. This is an excellent job for the tr filter with a pipe: we simply push the standard output of cut into the standard input of tr and replace the semicolon with a tab character (\t):
you@host > cut -d\; -f1,3 datei.csv | tr \; '\t' tot 17:32 you 23:12 kd10129 20:23
This already looks quite a bit better. Of course, the semicolon has to be escaped with a backslash here as well. Likewise, you can again use whole character ranges here, just as you know them from the shell (keyword: wildcards).
you@host > tr '[a-z]' '[A-Z]' < gedicht.txt SCHREIBEN, WOLLTE ICH EIN PAAR ZEILEN. BUCHSTABE FüR BUCHSTABE REIHE ICH ANEINAND. JA, SCHREIBEN, MEINE GEDANKEN VERWEILEN. WORT FüR WORT, ES SIND MIR VIELE BEKANNT. SATZ FüR SATZ, GEHT ES FLIEßEND VORAN. ABER UM WELCHEN TRANSZENDENTEN PREIS? WAS KOSTET MICH DAS AN NERVEN, SODANN? AN KRAFT, AN SEELE, AN GEIST UND FLEIß? (VON BRIGITTE OBERMAIER ALIAS MISS ZAUBERBLUME)
However, to make sure that the tr filter actually receives these character ranges and our shell does not turn them into a file name expansion, you have to put the character ranges in single quotes for tr.
If you want to delete certain characters from the input stream, you can use the -d option:
you@host > tr -d ' ','\n' < gedicht.txt SchreibenwollteicheinpaarZeilen.BuchstabefürBuchstabereiheichaneinand. JaschreibenmeineGedankenverweilen. WortfürWortessindmirvielebekannt. SatzfürSatzgehtesfließendvoran.AberumwelchentranszendentenPreis? WaskostetmichdasanNervensodann?AnKraftanSeeleanGeistundFleiß? (vonBrigitteObermaieraliasMissZauberblume)
Here all blanks and newline characters (line breaks) were deleted. As you can see, several characters are separated from each other by a comma.
Another popular option of tr is -s. With it you can replace identical characters that occur several times in a row by another character. This is used, for example, when a file contains unnecessarily many blanks or line breaks. If unnecessarily many consecutive blanks occur in a text file, you can replace them with a single blank as follows:
you@host > tr -s ' ' ' ' < datei.txt
The same case with unnecessarily many line breaks:
you@host > tr -s '\n' '\n' < datei.txt
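Squeezing repeated characters is particularly useful before cutting columns out of command output, because cut cannot cope with a varying number of blanks; a sketch (the field number assumes the usual ls -l layout):

# squeeze runs of blanks to a single blank, then cut out field 9 (the file name)
ls -l | tr -s ' ' ' ' | cut -d' ' -f9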
# The killer tools - sed, awk and regular expressions
The absolute killer tools for text processing are at your disposal with awk and sed. When using sed, awk, grep and many other UNIX tools, you will not get around regular expressions in the long run. So if you need an introduction to these UNIX tools, you should have a look at Chapters 11, 12 and 13.
Nevertheless, a quick demonstration of using awk and sed in the shell is given here. If, for example, you want to determine the length of a string with awk, that is no problem with awk's built-in (string) functions.
you@host > zeichen="juergen wolf" you@host > echo $zeichen | awk '{print length($zeichen)}' 12
Here you push the standard output of echo through a pipe into the standard input of awk. In awk's statement block the print command is then used for the output: print prints the length (awk function length) of the string held in zeichen. In a shell script, however, you rarely want to print the length or the value of a command; usually you want to keep working with it. To achieve that, command substitution is used again.
# Demonstrates the awk command in a shell script # Name : aawk zeichen="juergen wolf" laenge=`echo $zeichen | awk '{print length($zeichen)}'` echo "$zeichen enthaelt $laenge Zeichen"
The example in action:
you@host > ./aawk juergen wolf enthaelt 12 Zeichen
Besides the length() function, a number of other typical string functions are available in awk. For reference they are listed in Table 2.3. These functions are used in essentially the same way as length() above. If awk does not come easily to you yet, I recommend reading Chapter 13, awk programming, first.
Table 2.3  awk's built-in string functions
| Function | Meaning |
| --- | --- |
| tolower(str) | Converts the whole string to lowercase |
| toupper(str) | Converts the whole string to uppercase |
| index(str, substr) | Returns the position at which substr starts in str |
| match(str, regexpr) | Checks whether the regular expression regexpr is contained in str |
| substr(str, start, len) | Returns the substring of str that starts at position start and has the length len |
| split(str, array, sep) | Splits a string into individual fields and stores them in an array; sep serves as the field separator |
| gsub(old, new, str) | Replaces every occurrence of old in str with new |
| sub(old, new, str) | Replaces the first occurrence of old in str with new |
| sprintf("fmt", expr) | Formats expr according to the printf format description fmt |
Besides awk, I would also like to briefly show you how sed is used on strings. sed is mainly used to filter out, modify or delete certain parts of whole text files. Here, too, a pipe does the job. If you want to push an entire text file through sed, you proceed as follows:
you@host > cat gedicht.txt | sed 's/Satz/Wort/g'
Here the output of cat is passed through a pipe to the input of sed. sed now replaces every word »Satz« with the word »Wort« and writes the result to standard output. s stands for substitute and the suffix g for global, which means that the replacement is performed not just once but throughout the text, i.e. for multiple occurrences as well. Of course, command substitution is again the preferred way to use this in a shell script.
# Demonstriert sed im Shellscript # Name : ased zeichenkette="... und sie dreht sich doch" neu=`echo $zeichenkette | sed 's/sie/die Erde/g'` echo $neu
Das Shellscript bei der Ausführung:
you@host > ./ased ... und die Erde dreht sich doch
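As a small additional sketch (the sample string is made up), here is the difference between a substitution with and without the g suffix when the pattern occurs more than once on a line:
you@host > echo "eins zwei eins drei eins" | sed 's/eins/NULL/'
NULL zwei eins drei eins
you@host > echo "eins zwei eins drei eins" | sed 's/eins/NULL/g'
NULL zwei NULL drei NULL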
This should suffice as a quick introduction to the power tools sed and awk. You will find more on them in Chapter 12, The stream editor sed, and Chapter 13, awk programming.
### 2.3.2 Extended functions for Bash and Korn shell
In contrast to the Bourne shell, the Bash and the Korn shell were given a few extra functions for string processing; these do not in any way replace the UNIX tools just presented, however. They are at most a few shortcuts.
# The length of a string
If you do not want to fall back on awk in the Bash or Korn shell, you can determine the length of a string with the following syntax:
${#zeichenkette}
In practice this looks as follows:
you@host > zeichenkette="... keep alive"
you@host > echo "Laenge von $zeichenkette ist ${#zeichenkette}"
Laenge von ... keep alive ist 14
# Concatenating strings
Concatenating strings is quite simple: you just write the individual variables one after the other and delimit them properly.
you@host > user=you
you@host > login=15:22
you@host > logout=18:21
you@host > daten=${user}:${login}_bis_${logout}
you@host > echo $daten
you:15:22_bis_18:21
# Removing a (partial) string
To remove a particular (partial) string, or rather a pattern (because the metacharacters *, ? and [ ] can be used here as well), from a string, Bash and Korn shell offer some interesting functions. Because they are somewhat awkward to use, they are used rather rarely (see Table 2.4).
Table 2.4: String functions of Bash and Korn shell
| Function | Returns ... |
| --- | --- |
| ${var#pattern} | ... the value of var without the smallest left-hand substring covered by pattern. If there is no match, the content of var is returned. |
| ${var##pattern} | ... the value of var without the largest left-hand substring covered by pattern. If there is no match, the content of var is returned. |
| ${var%pattern} | ... the value of var without the smallest right-hand substring covered by pattern. If there is no match, the content of var is returned. |
| ${var%%pattern} | ... the value of var without the largest right-hand substring covered by pattern. If there is no match, the content of var is returned. |
Here is a shell script that demonstrates these functions:
# Name : acut
var1="1234567890"
var2="/home/you/Dokuments/shell/kapitel2.txt"
pfad=${var2%/*}
file=${var2##*/}
echo "Komplette Angabe: $var2"
echo "Pfad : $pfad"
echo "Datei : $file"
# cut off 2 characters on the right
echo ${var1%??}
# cut off 2 characters on the left
echo ${var1#??}
# spelled out, without metacharacters
echo ${var2%/kapitel2.txt}
The script when executed:
you@host > ./acut
Komplette Angabe: /home/you/Dokuments/shell/kapitel2.txt
Pfad : /home/you/Dokuments/shell
Datei : kapitel2.txt
12345678
34567890
/home/you/Dokuments/shell
You can use the metacharacters *, ? and [ ] here exactly as you know them from file name expansion. Admittedly, these functions are not exactly easy to read, but in combination with a path name, as seen in the example, they are quite workable; a short sketch follows below.
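For instance, these pattern operators are handy for splitting a file name into its name and extension. The following is only a sketch; the file name and the variable are made up:
you@host > datei="bericht.2005.tar.gz"
you@host > echo ${datei%%.*}    # remove the largest right-hand match of ".*"
bericht
you@host > echo ${datei%.*}     # remove the smallest right-hand match: strip the last extension
bericht.2005.tar
you@host > echo ${datei##*.}    # remove the largest left-hand match of "*.": keep only the last extension
gz
you@host > echo ${datei#*.}     # remove the smallest left-hand match
2005.tar.gz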
# Truncating a string on the right or left (Korn shell only)
The Korn shell lets you cut something off a string from the right or the left side. For this you once again use the typeset command. Of course, typeset already hints that nothing is really cut off in the strict sense; you merely declare the length of a variable, that is, how many characters it may hold. For example:
you@host > typeset -L5 atext
you@host > atext=1234567890
you@host > echo $atext
12345
Here you set the length of the variable »atext« to 5 characters. The option -L stands for left-justified alignment, which has no visible effect on an empty string. In practice, -Ln keeps the first n characters of a string and discards the rest. The counterpart for the right side is -Rn: it keeps the last n characters, cutting off everything before them. Here is typeset in action:
you@host > zeichenkette=1234567890
you@host > typeset -L5 zeichenkette
you@host > echo $zeichenkette
12345
you@host > typeset -R3 zeichenkette
you@host > echo $zeichenkette
345
# Cutting (partial) strings out of a string (Bash only)
The Bash is the only one of these shells that offers a really convenient function for cutting parts out of a string (similar to the cut command). Its use is also quite readable. The syntax:
${var:start:laenge}
${var:start}
This cuts laenge characters out of the variable var, starting at position start. If laenge is omitted, everything from position start to the end is copied.
you@host > zeichenkette=1234567890
you@host > echo ${zeichenkette:3:6}
456789
you@host > echo ${zeichenkette:5}
67890
you@host > neu=${zeichenkette:5:3}
you@host > echo $neu
678
you@host > mehr=${zeichenkette:5:1}_und_${zeichenkette:8:2}
you@host > echo $mehr
6_und_90
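As a small practical sketch of this substring syntax, you could split a compact date string into its components; the variable names and the date value are made up for illustration:
you@host > datum=20050207
you@host > jahr=${datum:0:4}
you@host > monat=${datum:4:2}
you@host > tag=${datum:6:2}
you@host > echo "$tag.$monat.$jahr"
07.02.2005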
# 2.4 Quoting and command substitution
Now to a long overdue topic: quoting, which also includes command substitution (or command replacement). Quoting was used frequently in the earlier examples without ever being discussed in more detail.
Quoting is the collective term for the various kinds of quotation marks (quotes). In general one distinguishes between single quotes ('), double quotes (") and back quotes (`). See Table 2.5:
Table 2.5: The different quotes
| Character | Meaning |
| --- | --- |
| ' | Single quote (simple quotation mark) |
| " | Double quote (double quotation mark) |
| ` | Back quote (backtick, grave accent, reversed single quotation mark) |
### 2.4.1 Single and double quoting
To understand the point of quoting, consider the following echo output:
you@host > echo *** Hier wird $SHELL ausgeführt ***
aawk acut ased bin chmlib-0.35.tgz datei.csv Desktop Documents gedicht.txt namen.txt nummer.txt OpenOffice.org1.1 public_html Shellbuch zusammen.txt Hier wird /bin/bash ausgeführt aawk acut ased bin chmlib-0.35.tgz datei.csv Desktop Documents gedicht.txt namen.txt nummer.txt OpenOffice.org1.1 public_html Shellbuch zusammen.txt
That was certainly not the desired and expected output. Because the metacharacter * was used several times, all file names in the directory are printed for every asterisk. With what you know so far, you could counter the problem with several backslashes.
you@host > echo \*\*\* Hier wird $SHELL ausgeführt \*\*\*
*** Hier wird /bin/bash ausgeführt ***
In the long run, however, this is cumbersome and also very error-prone. Try the whole example again by placing the text to be printed in single quotes.
you@host > echo '*** Hier wird $SHELL ausgeführt ***'
*** Hier wird $SHELL ausgeführt ***
From this you can conclude that single quotes switch off the metacharacters (here *). But in the example the environment variable $SHELL was quoted away as well. So if you want to print variables literally (including the dollar sign), you can achieve that with single quotes. Single quotes also help when multiplying with the expr command.
you@host > expr 10 * 10
expr: Syntaxfehler
you@host > expr 10 \* 10
100
you@host > expr 10 '*' 10
100
Back to our text output. If you also want the value of a variable (SHELL in the example) to be printed, you should use double quotes.
you@host > echo "*** Hier wird $SHELL ausgeführt ***"
*** Hier wird /bin/bash ausgeführt ***
It may seem confusing at first that double quoting shows the content of a variable but still disables the special meaning of the metacharacter *. The reason: double quotes disable all metacharacters except the dollar sign (and therefore variables), the back quote, and the backslash.
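The following short sketch puts the three cases side by side; the directory contents shown are, of course, only an example:
you@host > echo $HOME *
/home/you aawk acut ased ...
you@host > echo '$HOME *'
$HOME *
you@host > echo "$HOME *"
/home/you *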
# Spaces and line breaks
So that you do not run into formatting problems with your output when using single and double quotes, here are a few more notes. The topic of spaces was already touched on briefly in the description of the echo command.
you@host > echo Name    ID-Nr.    Passwort
Name ID-Nr. Passwort
Several spaces were used in the example, but they do not show up in the output. This may surprise one or the other reader, but spaces, too, have a special meaning: in a shell, a space is simply a separator between two words. To strip this separator meaning from the character, you again have to use quoting.
you@host > echo 'Name    ID-Nr.    Passwort'
Name    ID-Nr.    Passwort
you@host > echo "Name    ID-Nr.    Passwort"
Name    ID-Nr.    Passwort
Of course, nothing stops you from quoting only the spaces themselves.
you@host > echo Name'    'ID-Nr.'    Passwort'
Name    ID-Nr.    Passwort
Things look similar with a line break:
you@host > echo Ein Zeilenwechsel \
> wird gemacht
Ein Zeilenwechsel wird gemacht
you@host > echo 'Ein Zeilenwechsel \
> wird gemacht'
Ein Zeilenwechsel \
wird gemacht
you@host > echo 'Ein Zeilenwechsel
> wird gemacht'
Ein Zeilenwechsel
wird gemacht
you@host > echo "Ein Zeilenwechsel
> wird gemacht"
Ein Zeilenwechsel
wird gemacht
you@host > echo "Ein Zeilenwechsel \
> wird gemacht"
Ein Zeilenwechsel wird gemacht
If, for example, you use the backslash before a line break inside single quotes, it loses its effect and is printed like a normal character. With double quoting, on the other hand, the backslash keeps its special meaning at a line break. Apart from that, when single or double quotes are used, no backslash is needed at all to output a line break; you simply continue the quoted string on the next line.
### 2.4.2 Command substitution: back quotes
Command substitution is THE feature of a shell. Without it, shell programming would be worth only half as much. It is what makes it possible to store the result of a command in a variable instead of printing it to the screen (or a file). In practice there are two uses of command substitution:
variable=`kommando`
grep `kommando` file    (or: grep suche `kommando`)
In the first variant the output of a command is stored in a variable. In the second version the output of one command is used by a further command. The shell recognizes such a command substitution by one or more commands being enclosed in back quotes (`). The simplest and endlessly recurring example is the date command:
you@host > datum=`date`
you@host > echo $datum
Mo Feb 7 07:16:43 CET 2005
you@host > tag=`date +%A`
you@host > echo "Heute ist $tag"
Heute ist Montag
For you as a script programmer, command substitution is, so to speak, the air you breathe, so you are well advised to get to know it thoroughly. That also makes it essential to deal with the Linux/UNIX commands themselves when programming the shell. Here is a simple shell script:
# Demonstrates command substitution
# Name : asubstit
tag=`date +%A`
datum=`date +%d.%m.%Y`
count=`ls -l | wc -l`
echo "Heute ist $tag der $datum"
echo "Sie befinden sich in $HOME (Inhalt: $count Dateien)"
The script when executed:
you@host > ./asubstit
Heute ist Montag der 07.02.2005
Sie befinden sich in /home/tot (Inhalt: 17 Dateien)
In practice, however, command substitution is often used in yet another way. With the help of the back quotes you can also use the output of one command for another command; one command thus receives the result of another command. That sounds worse than it is.
you@host > echo "Heute ist `date +%A`"
Heute ist Montag
As you can see in the example, command substitution keeps its meaning between double quotes as well. You can strip the back quotes of their power again with a backslash.
you@host > echo "Heute ist \`date +%A\`"
Heute ist `date +%A`
The order in which several command substitutions are executed is best determined yourself with the shell option -x.
you@host > set -x
you@host > echo "Format1: `date` Format2: `date +%d.%m.%Y`"
++ date
++ date +%d.%m.%Y
+ echo 'Format1: Mo Feb 7 12:59:53 CET 2005 Format2: 07.02.2005'
Format1: Mo Feb 7 12:59:53 CET 2005 Format2: 07.02.2005
Note: As with a pipe, the error message of a command is lost in a command substitution as well.
Beginners quickly get the idea of equating command substitution with a pipe. Here is an example to prove the opposite:
you@host > find . -name "*.txt" | ls -l
What is intended here should be obvious: the attempt is to pick all files ending in ».txt« out of the current working directory and push them to the standard input of ls -l. When you run it, however, you will notice that ls -l completely ignores the input from find (ls does not read file names from standard input); find has no effect here at all. The same thing done with command substitution, however, shows the desired effect.
you@host > ls -l `find . -name "*.txt"`
If you apply this principle to the script »asubstit«, for example, it can be reduced to two lines. Here is the new version of »asubstit«:
# Demonstrates command substitution
# Name : asubstit2
echo "Heute ist `date +%A` der `date +%d.%m.%Y`"
echo "Sie befinden sich in $HOME " \
"(Inhalt:`ls -l | wc -l` Dateien)"
When executed, the script »asubstit2« behaves exactly like »asubstit«.
# Extended syntax (Bash and Korn shell only)
In addition to the back quotes, Bash and Korn shell offer you an alternative syntax. Instead of, for example,
you@host > echo "Heute ist `date`"
Heute ist Mo Feb 7 13:21:43 CET 2005
you@host > count=`ls -l | wc -l`
you@host > echo $count
18
you@host > ls -l `find . -name "*.txt"`
you can use the following syntax here:
you@host > echo "Heute ist $(date)"
Heute ist Mo Feb 7 13:24:03 CET 2005
you@host > count=$(ls -l | wc -l)
you@host > echo $count
18
you@host > ls -l $(find . -name "*.txt")
Using $(...) instead of `...` is, of course, mainly an optical advantage: the form $(...) is considerably easier to read, but that is of little use if the Bourne shell does not understand it. If you can be sure that your scripts will always run in a Bash or a Korn shell, you can happily embrace it. At least the back quote then drops out of the quoting business, and you only have to keep an eye on single and double quotes.
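One practical advantage of $(...) is that command substitutions can be nested without further ado, whereas nested back quotes have to be escaped. A small sketch (the path shown may differ on your system):
# nested command substitution with $(...)
you@host > echo "Verzeichnis: $(dirname $(which ls))"
Verzeichnis: /bin
# the same with back quotes: the inner pair must be escaped
you@host > echo "Verzeichnis: `dirname \`which ls\``"
Verzeichnis: /bin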
# 2.5 Arrays (Bash and Korn shell only)
Arrays give you the possibility of storing and processing an ordered sequence of values of a certain type. Arrays are also called vectors, fields, or sequences. In shell programming, however, you can only create and use one-dimensional arrays. The individual elements of an array are accessed with ${variable[x]}, where x is the number (a non-negative integer) of the field you want to address (also called the field index). Arrays are preferably used in loops: instead of resorting to a whole set of individual variables, you use an array.
There are still a few drops of bitterness, though: the Bourne shell knows nothing of the kind, and the Korn shell partly disagrees with the Bash as well. Arrays themselves are used in the same way in both shells, but when it comes to assigning arrays the two again go in different directions.
### 2.5.1 Assigning values to arrays
If you want to write a value into one field of an array, you only need to specify the field index.
you@host > array[3]=drei
Here you have assigned the string »drei« to field three of »array«. Readers with some programming experience will now surely ask what happens to the elements before it. Rest assured: in shell programming the fields may contain gaps. For the sake of clarity one usually starts at field index 0, but it also works differently.
you@host > array[0]=null
you@host > array[1]=eins
you@host > array[2]=zwei
you@host > array[3]=drei
...
Note: The first element in an array always has the field index 0!
Note: In many books an array is declared in the Bash with typeset -a array, i.e. as an array without a special type. Since arrays are created automatically in the Bash, this is unnecessary (but not wrong).
Usually, when creating an array, you will want to fill it with several values at once. Unfortunately, the shells go their own ways here.
### 2.5.2 Assigning a list of values to an array (Bash)
The syntax:
array=(null eins zwei drei vier fuenf ...)
With this assignment the field index automatically starts at 0, so array[0] would hold the content »null«. It is nevertheless possible to start such an assignment at a different position.
array=([2]=zwei drei)
Even so, this does not help you append something to the end of an array; an existing array is still completely overwritten. The number of elements you can pass to an array is unlimited in the Bash. A space serves as the separator between the individual elements. (A workaround for appending is sketched below.)
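If you nevertheless want to extend an existing array at the end in the Bash, a common workaround (sketched here with made-up values) is to rebuild the array from its old elements plus the new one. Note that this renumbers the indices consecutively, so any gaps are lost:
you@host > array=(null eins zwei)
you@host > array=("${array[@]}" drei)
you@host > echo ${array[*]}
null eins zwei drei
you@host > echo ${#array[*]}
4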
### 2.5.3 Assigning a list of values to an array (Korn shell)
In the Korn shell you use the set command with the option -A (for array) to assign a whole list of values to an array. The syntax:
set -A array null eins zwei drei vier fuenf ...
Unlike in the Bash, you cannot specify a particular field index from which the assignment of the array should start. The Korn shell also has a restriction on the number of elements an array may hold: usually 512, 1024 or at most 4096 values are possible.
### 2.5.4 Accessing the individual elements of an array
Here, fortunately, the two shells agree again. A single element of an array is accessed with the following syntax:
${array[n]}
Here n is the field index whose value you want to access. Unfortunately, access to an array has to be written in curly braces, because the characters [ ] are metacharacters and the shell would otherwise perform an expansion.
# Printing all elements of an array at once, and the number of elements
You can print all elements of an array with the following notations:
you@host > echo ${array[*]}
you@host > echo ${array[@]}
The difference between these two versions ($* and $@) is discussed in Chapter 3, Parameters and arguments.
You get the number of occupied elements in the array with:
you@host > echo ${#array[*]}
If you want to find out how long an entry within the array is, proceed exactly as when determining the number of occupied elements in the array, except that you use the corresponding field index.
you@host > echo ${#array[1]}
# Deleting individual elements or the whole array
When deleting an element in the array, or even the whole array, the same path is taken as with variables: the unset command helps you out again. You delete the complete array with:
you@host > unset array
Individual elements can be removed with the following syntax (in the example, the element with field index 2):
you@host > unset array[2]
# Copying a complete array
If you want to copy a complete array, you can either walk through every single element in a loop, check it, and assign it to another array. Or you use a construct similar to listing all elements of an array, combined with assigning a list of values to an array, which means that Bash and Korn shell again want (and have) to be treated separately.
In the Bash you do it like this:
array_kopie=(${array_quelle[*]})
And the Korn shell wants it like this:
set -A array_kopie ${array_quelle[*]}
Caution: there is a nasty pitfall lurking here. If, for example, you delete array[1] in the Bash with unset and then copy the array, array_kopie[1] will contain the content of array[2], because copying renumbers the remaining elements consecutively. A short sketch follows below.
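A minimal sketch of this pitfall, with made-up values:
you@host > array=(null eins zwei drei)
you@host > unset array[1]
you@host > array_kopie=(${array[*]})
you@host > echo ${array[2]}
zwei
you@host > echo ${array_kopie[2]}
drei
you@host > echo ${array_kopie[1]}
zwei
The original array keeps its gap (index 2 is still »zwei«), while in the copy everything has moved up by one position.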
# String manipulations
Of course, the string manipulations you got to know in this chapter are also available here. If you want to use them, you have to consider whether the manipulation is to apply to a single field (${array[n]}) or whether you want to use the whole array (${array[*]}) for it. The cut-like feature available in the Bash can also be used, for example ${array[n]:1:2} (or, for all elements, ${array[*]:1:2}).
Admittedly, quite a bit more could be written about arrays, but here I confine myself to what is needed for everyday use. Arrays in combination with loops are taken up again in a later chapter.
For a better understanding I would like to show you a somewhat larger script that demonstrates all the array features described here once more in practice. A loop is already used here in anticipation. The example gives precedence to the Bash version; if you want to test it in a Korn shell, you must remove the comment character # in the relevant lines (assigning and copying the array) and comment out the corresponding lines of the Bash version.
# Demonstrates the use of arrays
# Name : aarray
# Store a list of values in an array
# Version: Korn shell (commented out)
# set -A array null eins zwei drei vier fuenf
# Version: Bash
array=( null eins zwei drei vier fuenf )
# Access the individual elements
echo ${array[0]} # null
echo ${array[1]} # eins
echo ${array[5]} # fuenf
# Print all elements
echo ${array[*]}
# Determine the length of the third element
echo ${#array[2]} # 4
# Print the number of elements
echo ${#array[*]}
# Add a new element
array[6]="sechs"
# Print the number of elements
echo ${#array[*]}
# Print all elements
echo ${array[*]}
# Delete an element
unset array[4]
# Print all elements
echo ${array[*]}
# Copy the array
# Version: ksh (commented out)
# set -A array_kopie ${array[*]}
# Version: Bash
array_kopie=(${array[*]})
# Print all elements
echo ${array_kopie[*]}
# Apply write protection
typeset -r array_kopie
# Attempt to modify it
array_kopie[1]=nixda
# Looking ahead: an array in a loop
# Make i an integer
typeset -i i=0 max=${#array[*]}
while (( i < max ))
do
  echo "Feld $i: ${array[$i]}"
  i=i+1
done
The example when executed:
you@host > ./aarray
null
eins
fuenf
null eins zwei drei vier fuenf
4
6
7
null eins zwei drei vier fuenf sechs
null eins zwei drei fuenf sechs
null eins zwei drei fuenf sechs
./aarray: line 48: array_kopie: readonly variable
Feld 0: null
Feld 1: eins
Feld 2: zwei
Feld 3: drei
Feld 4:
Feld 5: fuenf
## 2.5 Arrays (Bash und Korn-Shell only)Â
Mit Arrays haben Sie die Möglichkeit, eine geordnete Folge von Werten eines bestimmten Typs abzuspeichern und zu bearbeiten. Arrays werden auch als Vektoren, Felder oder Reihungen bezeichnet. Allerdings können Sie in der Shell-Programmierung nur eindimensionale Arrays erzeugen bzw. verwenden. Der Zugriff auf die einzelnen Elemente des Arrays erfolgt mit ${variable[x]}. x steht hier für die Nummer (nicht negative, ganze Zahl) des Feldes (wird auch als Feldindex bezeichnet), welches Sie ansprechen wollen. Bevorzugt werden solche Arrays in Schleifen verwendet. Anstatt nämlich auf einen ganzen Satz von einzelnen Variablen zurückzugreifen, nutzt man hier bevorzugt Arrays.
Einige Wermutstropfen gibt es allerdings doch noch: Die Bourne-Shell kennt nichts dergleichen und auch die Korn-Shell kommt teilweise nicht mit der Bash überein. Zwar erfolgt die Anwendung von Arrays selbst bei den beiden Shells in gleicher Weise, aber bei der Zuweisung von Arrays gehen auch hier beide Vertreter in eine andere Richtung.
### 2.5.1 Werte an Arrays zuweisenÂ
Wenn Sie einen Wert in ein Feld-Array schreiben wollen, müssen Sie lediglich den Feldindex angeben.
> you@host > array[3]=drei
Hier haben Sie das dritte Feld von »array« mit der Zeichenkette »drei« belegt. Jetzt stellt sich bei den etwas programmiererprobten Lesern sicherlich die Frage, was mit den anderen Elementen davor ist. Hier kann ich Sie beruhigen, in der Shell-Programmierung dürfen die Felder Lücken enthalten. Zwar beginnt man der Übersichtlichkeit zuliebe bei dem Feldindex 0, aber es geht eben auch anders.
> you@host > array[0]=null you@host > array[1]=zwei you@host > array[2]=zwei you@host > array[3]=drei ...
Gewöhnlich werden Sie beim Anlegen eines Arrays dieses mit mehreren Werten auf einmal versehen wollen. Leider gehen hier die Shells ihre eigenen Wege.
### 2.5.2 Eine Liste von Werten an ein Array zuweisen (Bash)Â
Die Syntax:
> array=(null eins zwei drei vier fuenf ...)
Bei der Zuweisung beginnt der Feldindex automatisch bei 0, somit wäre array[0] mit dem Inhalt »null« beschrieben. Dennoch ist es auch möglich, eine solche Zuweisung an einer anderen Position zu starten.
> array=([2]=zwei drei)
Trotzdem hilft Ihnen diese Möglichkeit nicht, an das Array hinten etwas anzuhängen. Hierbei wird ein existierendes Array trotzdem komplett überschrieben. Die Anzahl der Elemente, die Sie einem Array übergeben können, ist bei der Bash beliebig. Als Trennzeichen zwischen den einzelnen Elementen fungiert ein Leerzeichen.
### 2.5.3 Eine Liste von Werten an ein Array zuweisen (Korn-Shell)Â
In der Korn-Shell bedienen Sie sich des Kommandos set und der Option âA (für Array), um eine ganze Liste von Werten einem Array zuzuweisen. Hier die Syntax:
> set -A array null eins zwei drei vier fuenf ...
Im Gegensatz zur Bash können Sie hier allerdings keinen bestimmten Feldindex angeben, von dem aus die Zuweisung des Arrays erfolgen soll. Ebenso finden Sie in der Korn-Shell eine Einschränkung bezüglich der Anzahl von Elementen vor, die ein Array belegen darf. Gewöhnlich sind hierbei 512, 1024 oder maximal 4096 Werte möglich.
### 2.5.4 Zugreifen auf die einzelnen Elemente eines ArraysÂ
Hier kommen die beiden Shells zum Glück wieder auf einen Nenner. Der Zugriff auf ein einzelnes Element im Array erfolgt über folgende Syntax:
> ${array[n]}
n entspricht dabei dem Feldindex, auf dessen Wert im Array Sie zurückgreifen wollen. Der Zugriff auf ein Array muss leider in geschweiften Klammern erfolgen, da hier ja mit den Zeichen [ ] Metazeichen verwendet wurden und sonst eine Expansion der Shell durchgeführt würde.
# Alle Elemente eines Arrays auf einmal und die Anzahl der Elemente ausgeben
Alle Elemente eines Arrays können Sie mit folgenden Schreibweisen ausgeben lassen:
> you@host > echo ${array[*]} you@host > echo ${array[@]}
Auf den Unterschied dieser beiden Versionen ($* und $@) wird in Kapitel 3, Parameter und Argumente, eingegangen.
Die Anzahl der belegten Elemente im Array erhalten Sie mit:
> you@host > echo ${#array[*]}
Wollen Sie feststellen, wie lang der Eintrag innerhalb eines Arrays ist, so gehen Sie genauso vor wie beim Ermitteln der Anzahl belegter Elemente im Array, nur dass Sie hierbei den entsprechenden Feldindex verwenden.
> you@host > echo ${#array[1]}
# Löschen einzelner Elemente oder des kompletten Arrays
Beim Löschen eines Elements im Array oder gar des kompletten Arrays wird derselbe Weg wie schon bei den Variablen beschritten. Hier hilft Ihnen das Kommando unset wieder aus. Das komplette Array löschen Sie mit:
> you@host > unset array
Einzelne Elemente können Sie mit folgender Syntax herauslöschen (im Beispiel das zweite Element):
> you@host > unset array[2]
# Komplettes Array kopieren
Wollen Sie ein komplettes Array kopieren, dann können Sie entweder jedes einzelne Element in einer Schleife durchlaufen, überprüfen und einem anderen Array zuweisen. Oder aber Sie verwenden ein ähnliches Konstrukt wie beim Auflisten aller Elemente in einem Array, verbunden mit dem Zuweisen einer Liste mit Werten an ein Array â was bedeutet, dass hier wieder Bash und Korn-Shell extra behandelt werden möchten (müssen).
Bei der Bash realisieren Sie dies so:
> array_kopie=(${array_quelle[*]})
Und die Korn-Shell will das so haben:
> set -A array_kopie ${array_quelle[*]}
# String-Manipulationen
Selbstverständlich stehen Ihnen hierzu auch die String-Manipulationen zur Verfügung, die Sie in diesem Kapitel kennen gelernt haben. Wollen Sie diese einsetzen, müssen Sie aber überdenken, ob sich die Manipulation auf ein einzelnes Feld beziehen soll (${array[n]) oder ob Sie das komplette Array (${array[*]}) dafür heranziehen wollen. Auch die in der Bash vorhandene cut-ähnliche Funktion ${array[n]1:2 (oder alle Elemente: ${array[*]1:2}) steht Ihnen bereit.
Zugegeben, es ließe sich noch einiges mehr zu den Arrays schreiben, doch hier begnüge ich mich mit dem für den Hausgebrauch Nötigen. Auf Arrays in Zusammenhang mit Schleifen wird noch in Kapitel eingegangen.
Zum besseren Verständnis möchte ich Ihnen ein etwas umfangreicheres Script zeigen, das Ihnen alle hier beschriebenen Funktionalitäten eines Arrays nochmals in der Praxis zeigt. Vorwegnehmend wurde hierbei auch eine Schleife eingesetzt. Im Beispiel wurde der Bash-Version der Vortritt gegeben. Sofern Sie das Beispiel in einer Korn-Shell testen wollen, müssen Sie in den entsprechenden Zeilen (Zuweisung eines Arrays und das Kopieren) das Kommentarzeichen # entfernen und die entsprechenden Zeilen der Bash-Version auskommentieren.
> # Demonstriert den Umgang mit Arrays # Name : aarray # Liste von Werten in einem Array speichern # Version: Korn-Shell (auskommentiert) set -A array null eins zwei drei vier fuenf # Version: Bash array=( null eins zwei drei vier fuenf ) # Zugriff auf die einzelnen Elemente echo ${array[0]} # null echo ${array[1]} # eins echo ${array[5]} # fuenf # Alle Elemente ausgeben echo ${array[*]} # Länge von Element 3 ermitteln echo ${#array[2]} # 4 # Anzahl der Elemente ausgeben echo ${#array[*]} # Neues Element hinzufügen array[6]="sechs" # Anzahl der Elemente ausgeben echo ${#array[*]} # Alle Elemente ausgeben echo ${array[*]} # Element löschen unset array[4] # Alle Elemente ausgeben echo ${array[*]} # Array kopieren # Version: ksh (auskommentiert) #set -A array_kopie=${array[*]} # Version: Bash array_kopie=(${array[*]}) # Alle Elemente ausgeben echo ${array_kopie[*]} # Schreibschutz verwenden typeset -r array_kopie # Versuch, darauf zuzugreifen array_kopie[1]=nixda # Vorweggenommen â ein Array in einer for-Schleife # Einen Integer machen typeset -i i=0 max=${#array[*]} while (( i < max )) do echo "Feld $i: ${array[$i]}" i=i+1 done
Das Beispiel bei der Ausführung:
> you@host > ./aarray
> null
> eins
> fuenf
> null eins zwei drei vier fuenf
> 4
> 6
> 7
> null eins zwei drei vier fuenf sechs
> null eins zwei drei fuenf sechs
> null eins zwei drei fuenf sechs
> ./aarray: line 48: array_kopie: readonly variable
> Feld 0: null
> Feld 1: eins
> Feld 2: zwei
> Feld 3: drei
> Feld 4:
> Feld 5: fuenf
## 2.6 Variablen exportieren
Definieren Sie Variablen in Ihrem Shellscript, so sind diese gewöhnlich nur zur Ausführzeit verfügbar. Nach der Beendigung des Scripts werden die Variablen wieder freigegeben. Manches Mal ist es allerdings nötig, dass mehrere Scripts oder Subshells mit einer einzelnen Variablen arbeiten. Damit das jeweilige Script bzw. die nächste Subshell von der Variable Kenntnis nimmt, wird diese exportiert.
Variablen, die Sie in eine neue Shell übertragen möchten, exportieren Sie mit dem Kommando export:
> export variable
Beim Start der neuen Shell steht die exportierte Variable dann auch zur Verfügung – natürlich mitsamt dem Wert, den diese in der alten Shell belegt hat. In der Bourne-Shell müssen Sie die Zuweisung und das Exportieren einer Variablen getrennt ausführen.
> you@host > variable=1234 you@host > export variable
In der Bash und der Korn-Shell können Sie diesen Vorgang zusammenlegen:
> you@host > export variable=1234
Natürlich können auch mehr Variablen auf einmal exportiert werden:
> you@host > export var1 var2 var3 var4
Die weiteren Variablen, die Sie auf einmal exportieren wollen, müssen mit einem Leerzeichen voneinander getrennt sein.
# Die Shell vererbt eine Variable an ein Script (Subshell)
Der einfachste Fall ist gegeben, wenn eine in der Shell definierte Variable auch für ein Shellscript vorhanden sein soll. Da die Shell ja zum Starten von Shellscripts in der Regel eine Subshell startet, weiß die Subshell bzw. das Shellscript nichts mehr von den benutzerdefinierten Variablen. Das Beispiel:
> you@host > wichtig="Wichtige Daten"
Das Shellscript:
> # Demonstriert den Umgang mit export
> # Name : aexport1
> echo "aexport1: $wichtig"
Beim Ausführen von »aexport1« wird bei Verwendung der Variablen »wichtig« ein leerer String ausgegeben. Hier sollte man in der Shell, in der das Script gestartet wird, einen Export durchführen.
> you@host > export wichtig you@host > ./aexport1 aexport1: Wichtige Daten
Und schon steht dem Script die benutzerdefinierte Variable »wichtig« zur Verfügung.
# Ein Shellscript vererbt eine Variable an ein Shellscript (Sub-Subshell)
Wenn Sie aus einem Script ein weiteres Script aufrufen und auch hierbei dem aufzurufenden Script eine bestimmte Variable des aufrufenden Scripts zur Kenntnis geben wollen, müssen Sie im aufrufenden Script die entsprechende Variable exportieren.
> # Demonstriert den Umgang mit export
> # Name : aexport1
> wichtig="Noch wichtigere Daten"
> echo "aexport1: $wichtig"
> # Variable im Shellscript exportieren
> export wichtig
> ./aexport2
In diesem Script »aexport1« wird ein weiteres Script namens »aexport2« aufgerufen. Damit diesem Script auch die benutzerdefinierte Variable »wichtig« zur Verfügung steht, muss diese im Script zuvor noch exportiert werden. Hier das Script »aexport2«:
> # Demonstriert den Umgang mit export
> # Name : aexport2
> echo "aexport2: $wichtig"
Die Shellscripts bei der Ausführung:
> you@host > ./aexport1
> aexport1: Noch wichtigere Daten
> aexport2: Noch wichtigere Daten
> you@host > echo $wichtig
> you@host >
Hier wurde außerdem versucht, in der Shell, die das Shellscript »aexport1« aufgerufen hat, die benutzerdefinierte Variable »wichtig« auszugeben. Das ist logischerweise ein leerer String, da die Shell nichts von einer Variablen wissen kann, die in einer Subshell definiert und ausgeführt wird.
Bei dem Beispiel von eben stellt sich die Frage, was passiert, wenn eine Subshell den Inhalt einer vom Elternprozess benutzerdefinierten exportierten Variable verändert und die Ausführung des Scripts in der übergeordneten Shell fortgeführt wird. Um auf das Anwendungsbeispiel zurückzukommen: Im Script »aexport2« soll die Variable »wichtig« verändert werden. Hierzu nochmals beide Shellscripts etwas umgeschrieben. Zuerst das Script »aexport1«:
> # Demonstriert den Umgang mit export
> # Name : aexport1
> wichtig="Noch wichtigere Daten"
> echo "aexport1: $wichtig"
> # Variable im Shellscript exportieren
> export wichtig
> ./aexport2
> # Nach der Ausführung von aexport2
> echo "aexport1: $wichtig"
Jetzt noch »aexport2«:
> # Demonstriert den Umgang mit export
> # Name : aexport2
> echo "aexport2: $wichtig"
> wichtig="Unwichtig"
> echo "aexport2: $wichtig"
Die beiden Scripts wieder bei der Ausführung:
> you@host > ./aexport1 aexport1: Noch wichtigere Daten aexport2: Noch wichtigere Daten aexport2: Unwichtig aexport1: Noch wichtigere Daten
An der Ausführung der Scripts konnten Sie ganz klar erkennen, dass es auf herkömmlichem Weg unmöglich ist, die Variablen einer Eltern-Shell zu verändern.
# Starten eines Scripts in der aktuellen Shell – Punkte-Kommando
Wollen Sie trotzdem, dass die Eltern-Shell betroffen ist, wenn Sie eine Variable in einer Subshell verändern, dann können Sie das Punkte-Kommando verwenden. Dieses wurde beim Starten von Shellscripts bereits erwähnt. Mit dem Punkte-Kommando vor dem auszuführenden Shellscript veranlassen Sie, dass das Shellscript nicht in einer Subshell ausgeführt wird, sondern von der aktuellen Shell.
Auf das Beispiel »aexport1« bezogen müssen Sie nur Folgendes ändern:
> # Demonstriert den Umgang mit export
> # Name : aexport1
> wichtig="Noch wichtigere Daten"
> echo "aexport1: $wichtig"
> # Script in der aktuellen Shell ausführen (Punkte-Kommando)
> . ./aexport2
> # Nach der Ausführung von aexport2
> echo "aexport1: $wichtig"
Beim Ausführen der Scripts ist jetzt auch die Veränderung der Variablen »wichtig« im Script »aexport2« beim Script »aexport1« angekommen.
Das Hauptanwendungsgebiet des Punkte-Kommandos ist aber das Einlesen von Konfigurationsdateien. Eine Konfigurationsdatei wird häufig verwendet, wenn Sie ein Script für mehrere Systeme oder unterschiedlichste Optionen (bspw. mehrere Sprachen, eingeschränkte oder unterschiedliche Funktionen) anbieten wollen. In solch einer Konfigurationsdatei wird dann häufig die Variablenzuweisung vorgenommen. Dadurch können die Variablen vom eigentlichen Script getrennt werden – was ein Script erheblich flexibler macht. Durch die Verwendung des Punkte-Kommandos bleiben Ihnen diese Variablen in der aktuellen Shell erhalten. Als Beispiel folgende Konfigurationsdatei:
> # aconf.conf
> lang="deutsch"
> prg="Shell"
Und jetzt noch das entsprechende Script, das diese Konfigurationsdatei verwendet:
> # Name : apoint
> # Konfigurationsdaten einlesen
> . aconf.conf
> echo "Spracheinstellung: $lang; ($prg)"
Das Script bei der Ausführung:
> you@host > ./apoint Spracheinstellung: deutsch; (Shell)
Jetzt kann durch ein Verändern der Konfigurationsdatei aconf.conf die Ausgabe des Scripts nach Bedarf verändert werden. Im Beispiel ist dieser Fall natürlich recht belanglos.
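Wie solch eine Trennung in der Praxis aussehen könnte, deutet die folgende kleine, rein hypothetische Skizze an: Das Script liest wahlweise eine andere Konfigurationsdatei ein. Vorweggenommen wird dabei bereits ein Kommandozeilenargument ($1, siehe Kapitel 3); der Scriptname apoint2 und die Vorgabe mit ${1:-...} sind frei erfunden.
> # Skizze – Name (hypothetisch): apoint2
> # Konfigurationsdatei als erstes Argument, sonst aconf.conf verwenden
> conf=${1:-aconf.conf}
> . $conf
> echo "Spracheinstellung: $lang; ($prg)"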
Es gibt außerdem noch zwei Anwendungsfälle, in denen die benutzerdefinierten Variablen einem Script auch ohne einen Export sichtbar sind. Dies geschieht bei der Verwendung der Kommando-Substitution `...` und bei einer Gruppierung von Befehlen mit (...). In beiden Fällen bekommt die Subshell (bzw. das Shellscript) eine komplette Kopie aller Variablen der Eltern-Shell.
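Eine kleine, ungetestete Skizze dazu – die Variable wird hier bewusst nicht exportiert und ist in der Gruppierung bzw. in der Kommando-Substitution trotzdem sichtbar:
> # Skizze: unexportierte Variable in Subshell-Kopien sichtbar
> wichtig="ohne export"
> ( echo "Gruppierung           : $wichtig" )
> echo `echo "Kommando-Substitution : $wichtig"`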
Hinweis   Anstelle des Punkte-Operators können Sie in der Bash auch das Builtin-Kommando source verwenden.
# Variablen exportieren – extra (Bash und Korn-Shell only)
Intern wird beim Exportieren einer Variablen ein Flag gesetzt. Dank dieses Flags kann eine Variable an weitere Subshells vererbt werden, ohne dass hierbei weitere export-Aufrufe erforderlich sind. In der Bash oder der Korn-Shell können Sie dieses Flag mithilfe des Kommandos typeset setzen oder wieder entfernen. Um mit typeset eine Variable zu exportieren, wird die Option -x verwendet. Hier die Syntax:
> typeset -x variable
Natürlich kann auch hier wieder mehr als nur eine Variable exportiert werden. Ebenso können Sie die Zuweisungen und das Exportieren zusammen ausführen. typeset ist deshalb so interessant für das Exportieren von Variablen, weil Sie hierbei jederzeit mit der Option +x das Export-Flag wieder löschen können.
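Eine kleine Skizze (Bash/Korn-Shell), wie sich Zuweisung, Export und das spätere Löschen des Export-Flags kombinieren lassen; die Variablennamen sind frei gewählt:
> # Zuweisen und exportieren in einem Schritt
> typeset -x var1=wert1 var2=wert2
> # Export-Flag von var1 wieder entfernen
> typeset +x var1
> # Alle derzeit exportierten Variablen anzeigen
> typeset -x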
# Umgebungsvariablen exportieren (Bourne-Shell only)
Zwar werden die Umgebungsvariablen noch gesondert behandelt, doch bezüglich der Exportierung hier schon ein paar Bemerkungen. Wichtig ist vor allem zu erwähnen, dass Umgebungsvariablen bei jedem Aufruf einer Shell neu angelegt werden (natürlich als Kopie der Eltern-Shell). Des Weiteren werden bei der Bash und der Korn-Shell die Umgebungsvariablen immer automatisch exportiert und müssen somit nicht mehr explizit exportiert werden. Geschieht dies trotzdem, ist das kein Fehler – zum Glück, denn die Bourne-Shell exportiert die Umgebungsvariablen leider nicht automatisch.
Hier ein solcher Vorgang in der Bourne-Shell:
> sh-2.05b$ echo $USER you sh-2.05b$ USER=neu sh-2.05b$ echo $USER neu sh-2.05b$ sh sh-2.05b$ echo $USER you
Damit in einer Bourne-Shell auch die aktuellen Umgebungsvariablen bei einer neuen Shell gültig sind, müssen Sie diese explizit exportieren.
> sh-2.05b$ echo $USER you sh-2.05b$ USER=neu sh-2.05b$ echo $USER neu sh-2.05b$ export USER sh-2.05b$ sh sh-2.05b$ echo $USER neu
# Anzeigen exportierter Variablen
Wenn Sie das Kommando export ohne irgendwelche Argumente verwenden, bekommen Sie alle exportierten Variablen zurück. Bei der Bash und Korn-Shell finden Sie darin auch die Umgebungsvariablen wieder, weil diese hier ja immer als »exportiert« markiert sind.
Bei der Bash bzw. der Korn-Shell können Sie sich auch die exportierten Variablen mit dem Kommando typeset und der Option -x (keine weiteren Argumente) ansehen.
## 2.7 Umgebungsvariablen eines Prozesses
Wenn Sie ein Script, eine neue Shell oder ein Programm starten, so wird diesem Programm eine Liste von Zeichenketten (genauer Array von Zeichenketten) übergeben. Diese Liste wird als Umgebung des Prozesses bezeichnet. Gewöhnlich enthält eine solche Umgebung zeilenweise Einträge in Form von:
> variable=wert
Somit sind Umgebungsvariablen zunächst nichts anderes als global mit dem Kommando export oder typeset -x definierte Variablen. Trotzdem gibt es eine Trennung zwischen benutzerdefinierten und von der Shell vordefinierten Umgebungsvariablen. Normalerweise werden die von der Shell vordefinierten Umgebungsvariablen großgeschrieben – aber dies ist keine feste Regel. Eine Variable bleibt eine Umgebungsvariable, solange sie der kompletten Umgebung eines Prozesses zur Verfügung steht.
Vordefinierte Umgebungsvariablen werden benötigt, um das Verhalten der Shell oder der Kommandos zu beeinflussen. Hier als Beispiel die Umgebungsvariable HOME, welche gewöhnlich das Heimverzeichnis des eingeloggten Benutzers beinhaltet.
> you@host > echo $HOME
> /home/you
> you@host > cd /usr/include
> you@host :/usr/include> cd
> you@host > pwd
> /home/you
> you@host > backup=$HOME
> you@host > HOME=/usr
> you@host :/home/you> cd
> you@host > pwd
> /usr
> you@host > echo $HOME
> /usr
> you@host > HOME=$backup
> you@host :/usr> cd
> you@host > pwd
> /home/you
Wenn also die Rede von Umgebungsvariablen ist, dann sind wohl meistens die von der Shell vordefinierten Umgebungsvariablen gemeint und nicht die benutzerdefinierten. Selbstverständlich gilt in Bezug auf die Weitervererbung an Subshells für die vordefinierten Umgebungsvariablen dasselbe wie für die benutzerdefinierten, so wie es auf den vorherigen Seiten beschrieben wurde.
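Werfen Sie bei Interesse selbst einen Blick auf die Umgebung eines Prozesses – etwa mit dem Kommando env, das alle Einträge zeilenweise in der Form variable=wert ausgibt. Eine kleine Skizze:
> # Komplette Umgebung der aktuellen Shell ausgeben
> env
> # Nur den Eintrag der Umgebungsvariablen HOME herausfiltern
> env | grep '^HOME='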
## 2.9 Automatische Variablen der Shell
Nach dem Aufruf eines Shellscripts versorgt Sie die Shell außerdem noch mit einer Reihe von Variablen, die Informationen zum laufenden Prozess liefern.
### 2.9.1 Der Name des Shellscripts – $0
Den Namen des aufgerufenen Shellscripts finden Sie in der Variablen $0. Bspw. folgendes Shellscript:
> # Name : ichbin
> echo "Mein Kommandoname ist $0"
Führen Sie dieses Script aus, bekommen Sie folgende Ausgabe zurück:
> you@host > ./ichbin Mein Kommandoname ist ./ichbin you@host > $HOME/ichbin Mein Kommandoname ist /home/you/ichbin
Diese Variable wird häufig für Fehlermeldungen verwendet, zum Beispiel um anzuzeigen, wie man ein Script richtig anwendet bzw. aufruft. Dafür ist selbstverständlich auch der Kommandoname von Bedeutung.
Hinweis   Die Formulierung »der Name des aufgerufenen Shellscripts« trifft die Sache eigentlich nicht ganz genau. Wird der echo-Befehl bspw. direkt in der Konsole eingegeben, bekommt man den Namen der Shell zurück. Somit könnte man wohl auch vom Namen des Elternprozesses des ausgeführten Befehls echo sprechen. Aber wir wollen hier nicht kleinlich werden.
Mit dieser Variablen können Sie übrigens auch ermitteln, ob ein Script mit einem vorangestellten Punkt gestartet wurde und entsprechend darauf reagieren:
> you@host > . ./ichbin Mein Kommandoname ist /bin/bash you@host > sh sh-2.05b$ . ./ichbin Mein Kommandoname ist sh sh-2.05b$ ksh $ . ./ichbin Mein Kommandoname ist ksh
Wird also ein Shellscript mit einem vorangestellten Punkt aufgerufen, so enthält die Variable $0 den Namen des Kommando-Interpreters.
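Wie ein typischer Einsatz von $0 in einer solchen Fehlermeldung aussehen könnte, zeigt die folgende kleine, ungetestete Skizze. Vorweggenommen werden dabei bereits die Variable $# und eine if-Abfrage (siehe Kapitel 3 und 4); der Scriptname ausage ist frei erfunden.
> # Skizze – Name (hypothetisch): ausage
> if [ $# -lt 1 ]
> then
>    echo "Usage: $0 datei" >&2
>    exit 1
> fi
> echo "Verarbeite Datei $1 ..."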
### 2.9.2 Die Prozessnummer des Shellscripts – $$
Die Variable $$ wird von der Shell durch die entsprechende Prozessnummer des Shellscripts ersetzt. Bspw. folgendes Shellscript:
> # Name : mypid
> echo "Meine Prozessnummer ($0) lautet $$"
Das Shellscript bei der Ausführung:
> you@host > ./mypid Meine Prozessnummer (./mypid) lautet 4902 you@host > . ./mypid Meine Prozessnummer (/bin/bash) lautet 3234 you@host > ps PID TTY TIME CMD 3234 pts/38 00:00:00 bash 4915 pts/38 00:00:00 ps
Durch ein Voranstellen des Punkte-Operators können Sie hierbei auch die Prozessnummer der ausführenden Shell bzw. des Kommando-Interpreters ermitteln.
### 2.9.3 Der Beendigungsstatus eines Shellscripts – $?
Diese Variable wurde bereits in Zusammenhang mit exit behandelt. In dieser Variablen finden Sie den Beendigungsstatus des zuletzt ausgeführten Kommandos (oder eben auch Shellscripts).
> you@host > cat gibtesnicht cat: gibtesnicht: Datei oder Verzeichnis nicht gefunden you@host > echo $? 1 you@host > ls -l | wc -l 31 you@host > echo $? 0
Ist der Wert der Variablen $? ungleich 0, ist beim letzten Kommandoaufruf ein Fehler aufgetreten. Wenn $? gleich 0 ist, deutet dies auf einen fehlerlosen Kommandoaufruf hin.
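Eine kleine Skizze, wie sich $? direkt in einem Script auswerten lässt; die if-Abfrage wird erst in Kapitel 4 behandelt, und der gesuchte Benutzername dient hier nur als Beispiel.
> # Skizze: Beendigungsstatus des letzten Kommandos auswerten
> grep "^you:" /etc/passwd > /dev/null
> if [ $? -eq 0 ]
> then
>    echo "Benutzer you ist in /etc/passwd eingetragen"
> else
>    echo "Benutzer you wurde nicht gefunden"
> fi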
### 2.9.4 Die Prozessnummer des zuletzt gestarteten Hintergrundprozesses – $!
Wenn Sie in der Shell ein Kommando oder ein Shellscript im Hintergrund ausführen lassen (&), wird die Prozessnummer in die Variable $! gelegt. Anstatt also nach der Nummer des zuletzt gestarteten Hintergrundprozesses zu suchen, können Sie auch einfach die Variable $! verwenden. Im folgenden Beispiel wird diese Variable verwendet, um den zuletzt gestarteten Hintergrundprozess zu beenden.
> you@host > find / -print > ausgabe 2> /dev/null & [1] 5845 you@host > kill $! you@host > [1]+ Beendet find / -print >ausgabe 2>/dev/null
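Neben kill lässt sich $! bspw. auch an das Kommando wait übergeben, um auf das Ende des Hintergrundprozesses zu warten – eine kleine, ungetestete Skizze (der Dateiname riesige_datei ist frei erfunden):
> # Skizze: auf den zuletzt gestarteten Hintergrundprozess warten
> sort riesige_datei > sortiert &
> hintergrund_pid=$!
> echo "Sortierung läuft als Prozess $hintergrund_pid ..."
> wait $hintergrund_pid
> echo "Hintergrundprozess $hintergrund_pid ist beendet"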
### 2.9.5 Weitere vordefinierte Variablen der Shell
Es gibt noch weitere automatische Variablen der Shell, die allerdings erst in Kapitel 3, Parameter und Argumente, ausführlicher behandelt werden. Tabelle 2.10 gibt bereits einen Überblick.
Variable(n) | Bedeutung |
| --- | --- |
$1 bis $n | Argumente aus der Kommandozeile |
$* | Alle Argumente aus der Kommandozeile in einer Zeichenkette |
$@ | Alle Argumente aus der Kommandozeile als einzelne Zeichenketten (Array von Zeichenketten) |
$# | Anzahl aller Argumente in der Kommandozeile |
$_ | (Bash only) Letztes Argument in der Kommandozeile des zuletzt aufgerufenen Kommandos |
### 2.9.6 Weitere automatische Variablen für Bash und Korn-Shell
Hierzu finden Sie im Folgenden die Tabelle 2.11 bis Tabelle 2.13, die Ihnen weitere automatische Variablen vorstellen, die ständig von der Shell neu gesetzt werden.
# Automatische Variablen für Bash und Korn-Shell
Variable | Bedeutung |
| --- | --- |
LINENO | Diese Variable enthält immer die aktuelle Zeilennummer im Shellscript. Wird die Variable innerhalb einer Scriptfunktion aufgerufen, entspricht der Wert von LINENO den bis zum Aufruf innerhalb der Funktion ausgeführten einfachen Kommandos. Außerhalb von Shellscripts ist diese Variable nicht sinnvoll belegt. Wird die LINENO-Shell-Variable mit unset gelöscht, kann sie nicht wieder mit ihrer automatischen Funktion erzeugt werden. |
OLDPWD | Der Wert ist das zuvor besuchte Arbeitsverzeichnis; wird vom Kommando cd gesetzt. |
OPTARG | Der Wert ist das Argument der zuletzt von getopts ausgewerteten Option. |
OPTIND | Enthält die Nummer (Index) der zuletzt von getopts ausgewerteten Option |
PPID | Prozess-ID des Elternprozesses (Parent Process ID = PPID); eine Subshell, die als Kopie einer Shell erzeugt wird, setzt PPID nicht. |
PWD | Aktuelles Arbeitsverzeichnis |
RANDOM | Pseudo-Zufallszahl zwischen 0 und 32767; weisen Sie RANDOM einen neuen Wert zu, so wird der Zufallsgenerator damit neu initialisiert. |
REPLY | Wird vom Shell-Kommando read gesetzt, wenn keine andere Variable als Rückgabeparameter benannt ist; bei Menüs (select) enthält REPLY die ausgewählte Nummer. |
SECONDS | Enthält die Anzahl von Sekunden, die seit dem Start (Login) der aktuellen Shell vergangen ist. Wird SECONDS ein Wert zugewiesen, erhöht sich dieser Wert jede Sekunde automatisch um eins. |
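Zwei dieser Variablen lassen sich schnell ausprobieren – eine kleine Skizze für Bash und Korn-Shell (die Arithmetik-Erweiterung $((...)) wird hier vorweggenommen):
> # Skizze: RANDOM und SECONDS in Aktion
> echo "Würfelwurf: $(( RANDOM % 6 + 1 ))"
> echo "Diese Shell läuft seit $SECONDS Sekunden"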
# Automatische Variablen nur für die Korn-Shell
Variable | Bedeutung |
| --- | --- |
ERRNO | Fehlernummer des letzten fehlgeschlagenen Systemaufrufs |
# Automatische Variablen nur für die Bash
Variable | Bedeutung |
| --- | --- |
BASH | Kompletter Pfadname der aktuellen Shell |
BASH_VERSION | Versionsnummer der Shell |
EUID | Beinhaltet die effektive Benutzerkennung des Anwenders. Diese Nummer wird während der Ausführung von Programmen, bei denen das SUID-Bit aktiviert ist, gesetzt. |
HISTCMD | Enthält die Nummer des aktuellen Kommandos aus der Historydatei |
HOSTTYPE | Typ des Rechners. Für Linux kommen u. a. die Typen i386 oder i486 in Frage. |
OSTYPE | Name des Betriebssystems. Da allerdings die Variable OSTYPE den aktuellen Wert zum Übersetzungszeitpunkt der Bash anzeigt, ist dieser Wert nicht zuverlässig. Re-kompilieren Sie bspw. alles neu, ändert sich dieser Wert nicht mehr. Zuverlässiger ist da wohl das Kommando uname. |
PROMPT_COMMAND | Hier kann ein Kommando angegeben werden, das vor jeder Eingabeaufforderung automatisch ausgeführt wird. |
SHLVL | Steht für den Shell-Level. Bei jedem Aufruf einer neuen Shell in der Shell wird der Shell-Level um eins erhöht; der Wert 2 kann z. B. innerhalb eines Scripts bestehen, das aus einer Login-Shell gestartet wurde. Eine Möglichkeit, zwischen den Levels zu wechseln, gibt es nicht. |
UID | Die User-ID des Anwenders. Diese Kennung ist in der Datei /etc/passwd dem Benutzernamen zugeordnet. |
## 3.2 Kommandozeilenparameter $1 bis $9
Innerhalb Ihres Shellscripts können Sie ohne weiteres mit den speziellen Variablen $1 bis $9 (auch Positionsparameter genannt) auf die Argumente der Kommandozeile zugreifen (siehe Abbildung 3.1). Hierbei werden die Argumente in der Kommandozeile in einzelne Teil-Strings zerlegt (ohne den Scriptnamen, der befindet sich weiterhin in $0). Als Begrenzungszeichen wird der in der Shell-Variable IFS angegebene Trenner verwendet (zu IFS finden Sie in Abschnitt 5.3.6 mehr).
Als Beispiel ein einfaches Shellscript, das die ersten drei Argumente in der Kommandozeile berücksichtigt:
> # Beachtet die ersten drei Argumente der Kommandozeile
> # Name: aargument
> echo "Erstes Argument: $1"
> echo "Zweites Argument: $2"
> echo "Drittes Argument: $3"
Das Shellscript bei der Ausführung:
> you@host > ./aargument Erstes Argument: Zweites Argument: Drittes Argument: you@host > ./aargument test1 test2 Erstes Argument: test1 Zweites Argument: test2 Drittes Argument: you@host > ./aargument test1 test2 test3 Erstes Argument: test1 Zweites Argument: test2 Drittes Argument: test3 you@host > ./aargument test1 test2 test3 test4 Erstes Argument: test1 Zweites Argument: test2 Drittes Argument: test3
Geben Sie weniger oder gar keine Argumente an, ist der jeweilige Positionsparameter mit einem leeren String belegt. Sofern Sie mehr Argumente eingeben, als vom Script berücksichtigt werden, werden die überflüssigen ignoriert.
## 3.3 Besondere Parameter
Die hier beschriebenen Variablen wurden zwar bereits kurz in Kapitel 2, Variablen, angesprochen, aber sie passen doch eher in dieses Kapitel. Daher werden diese Variablen jetzt genau erläutert.
### 3.3.1 Die Variable $*
In der Variablen $* werden alle Argumente in der Kommandozeile (ausgenommen der Scriptname = $0) als eine einzige Zeichenkette gespeichert.
> # Beachtet alle Argumente der Kommandozeile
> # Name: aargumstr
> echo "Scriptname : $0"
> echo "Die restlichen Argumente : $*"
Das Script bei der Ausführung:
> you@host > ./aargumstr test1 test2 test3 Scriptname : ./aargumstr Die restlichen Argumente : test1 test2 test3 you@host > ./aargumstr Viel mehr Argumente aber ein String Scriptname : ./aargumstr Die restlichen Argumente : Viel mehr Argumente aber ein String
Die Variable $* wird gern bei Shellscripts verwendet, die eine variable Anzahl von Argumenten erwarten. Wenn Sie nicht genau wissen, wie viele Argumente in der Kommandozeile eingegeben werden, müssten Sie Ihr Script sonst immer um weitere Positionsparameter ($1 bis $n) erweitern. Verwenden Sie hingegen $*, ist die Anzahl der Argumente unwichtig, weil hierbei alle in $* zusammengefasst werden. Dies lässt sich z. B. hervorragend in einer for-Schleife verwenden.
> # Eine variable Anzahl von Argumenten
> # Name: avararg
> for i in $*
> do
>    echo '$*:' $i
> done
Das Script bei der Ausführung:
> you@host > ./avararg eine variable Anzahl von Argumenten $*: eine $*: variable $*: Anzahl $*: von $*: Argumenten you@host > ./avararg egal wie viele oder wenig $*: egal $*: wie $*: viele $*: oder $*: wenig
Die for-Schleife wird in Abschnitt 4.10 ausführlich behandelt. Beachten Sie aber bei dem Beispiel »avararg«: Wenn Sie dem Script ein Argument wie folgt übergeben
> you@host > ./avararg "eine variable Anzahl von Argumenten"
sieht die Ausgabe genauso aus wie ohne die doppelten Anführungszeichen, obwohl ja eigentlich nur ein Argument ($1) übergeben wurde. Die Ursache ist hierbei die for-Schleife, welche das Argument anhand der Variablen IFS (hier anhand der Leerzeichen) auftrennt.
Würden Sie in der for-Schleife die Variable $* in doppelte Anführungszeichen setzen, so würde die anschließende Ausgabe wieder zu einer Zeichenkette zusammengefasst:
> # Eine variable Anzahl von Argumenten
> # Name: avararg2
> for i in "$*"
> do
>    echo '$*:' $i
> done
Wenn Sie dies nun mit "$*" anstatt $* ausführen, sieht die Ausgabe wie folgt aus:
> you@host > ./avararg2 eine variable Anzahl von Argumenten $*: eine variable Anzahl von Argumenten
### 3.3.2 Die Variable $@
Im Gegensatz zur Variablen $* fasst die Variable $@ die Argumente nicht zu einer einzigen Zeichenkette zusammen, sondern behandelt sie als einzelne Zeichenketten. Verwendet wird sie ähnlich wie $*, nur eben mit diesem Unterschied. Das Anwendungsgebiet dieser Variablen liegt ebenfalls vorwiegend in Schleifen, weshalb in Kapitel 4, Kontrollstrukturen, nochmals darauf eingegangen wird.
Merke   Alle Argumente (auch mehr als 9) sind durch $* oder $@ erreichbar. $* liefert sie als ein Wort, verkettet mit Leerzeichen, und $@ liefert sie als ein Argument pro Wort.
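Der Unterschied lässt sich mit einer kleinen, ungetesteten Skizze verdeutlichen; die for-Schleife wird erst in Kapitel 4 behandelt, und der Scriptname astarat ist frei erfunden.
> # Skizze: "$@" behandelt jedes Argument als eigene Zeichenkette
> # Name (hypothetisch): astarat
> for i in "$@"
> do
>    echo '"$@":' $i
> done
> # Aufruf:  ./astarat "ein Argument" zwei
> # Ergebnis: zwei Durchläufe – »ein Argument« und »zwei«;
> # mit "$*" gäbe es dagegen nur einen einzigen Durchlauf.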
### 3.3.3 Die Variable $#
Die Variable $# enthält die Anzahl der Argumente, die beim Aufruf des Shellscripts mit angegeben wurden. Als Beispiel dient folgendes Shellscript:
> # Anzahl von Argumenten
> # Name: acountarg
> echo $*
> echo "Das sind $# Argumente"
Das Beispiel bei der Ausführung:
> you@host > ./acountarg Das sind 0 Argumente you@host > ./acountarg test1 test2 test3 test1 test2 test3 Das sind 3 Argumente
Der häufigste Einsatz von $# erfolgt bei einer if-Entscheidungsanweisung, ob die vorgegebene Anzahl von Argumenten übergeben wurde oder nicht. Wenn nicht, können Sie mit einer Fehlermeldung antworten. Vorweggenommen, ohne genauer darauf einzugehen, finden Sie hier einen solch typischen Fall:
> # Überprüft die Anzahl von Argumenten
> # Name: achkarg
> # Wurden weniger als zwei Argumente eingegeben?
> if [ $# -lt 2 ]   # lt = less than
> then
>    echo "Mindestens zwei Argumente erforderlich ..."
>    echo "$0 arg1 arg2 ... arg_n"
>    exit 1
> fi
> echo "Anzahl erforderlicher Argumente erhalten"
Das Shellscript bei der Ausführung:
> you@host > ./achckarg Mindestens zwei Argumente erforderlich ... ./acheckarg arg1 arg2 ... arg_n you@host > ./achckarg test1 test2 Anzahl erforderlicher Argumente erhalten
Mehr zu den Entscheidungsanweisungen mit if erfahren Sie im nächsten Kapitel 4, Kontrollstrukturen.
## 3.4 Der Befehl shift
Mit shift können Sie die Positionsparameter von $1 bis $n jeweils um eine Stelle nach links verschieben. So können Sie die Argumente aus der Liste, die für weitere Verarbeitungsschritte nicht mehr benötigt werden, entfernen. Hier die Syntax zu shift:
> shift [n]
Setzen Sie shift (ohne weitere Argumente) ein, wird bspw. der Inhalt von $2 nach $1 übertragen. Befand sich in $3 ein Inhalt, so entspricht dieser nun dem Inhalt von $2. Es geht immer der erste Wert ($1) des Positionsparameters verloren. Sie »schneiden« quasi die Argumente der Kommandozeile von links nach rechts ab, sodass nacheinander alle Argumente zwangsläufig in $1 landen. Geben Sie hingegen bei shift für n eine ganze Zahl an, so wird nicht um eine, sondern um n Anzahl von Stellen geschoben. Natürlich sind durch einen Aufruf von shift auch die Variablen $*, $@ und $# betroffen. $* und $@ werden um einige Zeichen erleichtert und $# wird um den Wert eins dekrementiert.
Eine solche Verarbeitung wird recht gern verwendet, wenn bei Argumenten optionale Parameter angegeben werden dürfen, die sich bspw. irgendwo in der Parameterliste befinden. Mit shift kann man die Parameterliste dann einfach bis zur gewünschten Position weiterschieben.
Hierzu ein Script, welches shift bei seiner Ausführung demonstriert:
> # Demonstriert das Kommando shift
> # Name: ashifter
> echo "Argumente (Anzahl) : $* ($#)"
> echo "Argument \$1 : $1"
> shift
> echo "Argumente (Anzahl) : $* ($#)"
> echo "Argument \$1 : $1"
> shift
> echo "Argumente (Anzahl) : $* ($#)"
> echo "Argument \$1 : $1"
> shift
Das Script bei der Ausführung:
> you@host > ./ashifter ein paar Argumente gefällig Argumente (Anzahl) : ein paar Argumente gefällig (4) Argument $1 : ein Argumente (Anzahl) : paar Argumente gefällig (3) Argument $1 : paar Argumente (Anzahl) : Argumente gefällig (2) Argument $1 : Argumente
Sicherlich erscheint Ihnen das Ganze nicht sonderlich elegant oder sinnvoll, aber bspw. in Schleifen eingesetzt, können Sie hierbei hervorragend alle Argumente der Kommandozeile zur Verarbeitung von Optionen heranziehen. Als Beispiel ein kurzer theoretischer Code-Ausschnitt, wie so etwas in der Praxis realisiert werden kann:
> # Demonstriert das Kommando shift in einer Schleife
> # Name: ashifter2
> while [ $# -ne 0 ]   # ne = not equal
> do
>    # Tue was mit dem 1. Parameter
>    # Hier einfach eine Ausgabe ...
>    echo $1   # immer 1. Parameter verarbeiten
>    shift     # weiterschieben ...
> done
Das Beispiel bei der Ausführung:
> you@host > ./ashifter2 wieder ein paar Argumente wieder ein paar Argumente
Auch hier wurde mit while wieder auf ein Schleifen-Konstrukt zurückgegriffen, dessen Verwendung erst in Kapitel 4, Kontrollstrukturen, erläutert wird.
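Wie sich shift auf diese Weise für eine einfache Optionsverarbeitung nutzen lässt, deutet die folgende kleine, ungetestete Skizze an; die case-Anweisung wird ebenfalls erst in Kapitel 4 behandelt, und Optionsnamen wie Scriptname sind frei erfunden.
> # Skizze – Name (hypothetisch): aoptionen
> while [ $# -ne 0 ]
> do
>    case "$1" in
>       -v) echo "Option -v gesetzt" ;;
>       -o) shift                          # Argument der Option nachrücken lassen
>           echo "Option -o mit Argument: $1" ;;
>        *) echo "Normales Argument: $1" ;;
>    esac
>    shift
> done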
## 3.5 Argumente und Leerzeichen
Die Shell erkennt anhand der Shell-Variablen IFS, wann ein Argument endet und das nächste beginnt. Soweit ist das kein Problem, wenn man nicht für ein Argument zwei oder mehrere Zeichenketten verwenden will. Ein einfaches Beispiel, das zeigt, worauf ich hinauswill:
> # Argumente mit einem Leerzeichen
> # Name: awhitespacer
> echo "Vorname : $1"
> echo "Name : $2"
> echo "Alter : $3"
Das Script bei der Ausführung:
> you@host > ./awhitespacer Jürgen von Braunschweig 30 Vorname : Jürgen Name : von Alter : Braunschweig
Hier war eigentlich beabsichtigt, dass beim Nachnamen (Argument $2) »von Braunschweig« stehen sollte. Die Shell allerdings behandelt dies richtigerweise als zwei Argumente. Diese »Einschränkung« zu umgehen, ist nicht sonderlich schwer, aber eben eine recht häufig gestellte Aufgabe. Sie müssen nur entsprechende Zeichenketten in zwei doppelte Anführungszeichen setzen. Das Script nochmals bei der Ausführung:
> you@host > ./awhitespacer Jürgen "von Braunschweig" 30 Vorname : Jürgen Name : von Braunschweig Alter : 30
Jetzt werden die Daten auch korrekt am Bildschirm angezeigt.
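Müssen solche Argumente an ein weiteres Script durchgereicht werden, bietet sich "$@" an, da so die ursprüngliche Aufteilung der Argumente erhalten bleibt – eine kleine Skizze (der Scriptname awrapper ist frei erfunden):
> # Skizze – Name (hypothetisch): awrapper
> # reicht alle Argumente unverändert an awhitespacer weiter
> ./awhitespacer "$@"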
# 3.6 Argumente jenseits von $9Â
3.6 Argumente jenseits von $9Â
Bisher wurde nur die Möglichkeit behandelt, wie neun Argumente in der Kommandozeile ausgewertet werden können. Eine simple Technik, die Ihnen in allen Shells zur Verfügung steht, ist der Befehl shift, welchen Sie ja bereits kennen gelernt haben (siehe auch das Script-Beispiel »ashifter2« aus Abschnitt 3.4).
Neben shift gibt es noch zwei weitere gängige Methoden, mit den Variablen $* oder $@ zu arbeiten. Auch hierbei können Sie in einer for-Schleife sämtliche Argumente abgrasen, egal, wie viele Argumente vorhanden sind. Wenn Sie sich fragen, wozu das gut sein soll, so viele Argumente zu behandeln, kann ich Ihnen als Stichwort »Metazeichen« oder »Datei-Expansion« nennen. Als Beispiel folgendes Script:
# Beliebig viele Argumente in der Kommandozeile auswerten
# Name: aunlimited
i=1
for argument in $*
do
   echo "$i. Argument : $argument"
   i=`expr $i + 1`
done
Das Script bei der Ausführung:
you@host > ./aunlimited A B C D E F G H I J K 1. Argument : A 2. Argument : B 3. Argument : C 4. Argument : D 5. Argument : E 6. Argument : F 7. Argument : G 8. Argument : H 9. Argument : I 10. Argument : J 11. Argument : K
Das Script arbeitet zwar jetzt beliebig viele Argumente ab, aber es wurde immer noch nicht demonstriert, wofür so etwas gut sein soll. Rufen Sie doch das Script nochmals folgendermaßen auf:
you@host > ./aunlimited /usr/include/*.h 1. Argument : /usr/include/af_vfs.h 2. Argument : /usr/include/aio.h 3. Argument : /usr/include/aliases.h 4. Argument : /usr/include/alloca.h 5. Argument : /usr/include/ansidecl.h ... ... 235. Argument : /usr/include/xlocale.h 236. Argument : /usr/include/xmi.h 237. Argument : /usr/include/zconf.h 238. Argument : /usr/include/zlib.h 239. Argument : /usr/include/zutil.h
Das dürfte Ihre Fragen nach dem Sinn beantworten. Durch die Datei-Expansion wurden aus einem Argument auf einmal 239 Argumente.
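Ein Unterschied zwischen $* und $@ zeigt sich übrigens erst, wenn die Variablen in doppelten Anführungszeichen stehen: "$@" liefert jedes ursprüngliche Argument als eigenes Wort, "$*" dagegen eine einzige zusammengesetzte Zeichenkette. Wer Argumente mit Leerzeichen (wie in Abschnitt 3.5) unverändert durchreichen will, verwendet daher besser "$@". Eine kleine Skizze dazu (der Scriptname »aunlimited2« ist frei gewählt):

# Beliebig viele Argumente auswerten; Leerzeichen in Argumenten bleiben erhalten
# Name: aunlimited2
i=1
for argument in "$@"
do
   echo "$i. Argument : $argument"
   i=`expr $i + 1`
done

Beim Aufruf ./aunlimited2 eins "zwei drei" vier werden so nur drei Argumente gezählt; mit einem unquotierten $* wären es vier.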
## 3.6.1 Beliebig viele Argumente (Bash und Korn-Shell only)
In der Bash und der Korn-Shell steht Ihnen noch eine weitere Alternative zur Verfügung, um auf ein Element jenseits von neun zurückzugreifen. Hierbei können Sie alles wie gehabt nutzen (also $1, $2 ... $9), nur dass Sie nach der neunten Position den Wert in geschweifte Klammern (${n}) setzen müssen. Wollen Sie bspw. auf das 20. Argument zurückgreifen, gehen Sie folgendermaßen vor:
# Argument 20
echo "Das 20. Argument: ${20}"
# Argument 99
echo ${99}
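Zum Ausprobieren muss man die 20 Argumente nicht von Hand eintippen; mit set (siehe Abschnitt 3.7) lassen sie sich künstlich erzeugen. Eine kleine Skizze:

# 25 Argumente setzen und gezielt auf das 20. zugreifen
set -- a b c d e f g h i j k l m n o p q r s t u v w x y
echo "Das 20. Argument: ${20}"    # gibt t aus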
# 3.7 Argumente setzen mit set und Kommando-Substitution
Es ist neben dem Kommandoaufruf auch noch ein anderer Weg möglich, die Positionsparameter $1 bis $n mit Werten zu belegen. Dies lässt sich mit dem Kommando set realisieren. Vorwiegend wird diese Technik dazu benutzt, Zeichenketten in einzelne Worte zu zerlegen. Ein Aufruf von set überträgt die Argumente seines Aufrufs nacheinander an die Positionsparameter $1 bis $n. Dies nehmen selbstverständlich auch die Variablen $#, $* und $@ zur Kenntnis (siehe Abbildung 3.3).
Zur Demonstration folgender Vorgang in einer Shell:
you@host > set test1 test2 test3 test4 you@host > echo $1 test1 you@host > echo $2 test2 you@host > echo $3 test3 you@host > echo $4 test4 you@host > echo $# 4 you@host > echo $* test1 test2 test3 test4
Hier werden die einzelnen Argumente durch den Befehl set an die Positionsparameter $1 bis $4 übergeben. Als Trenner zwischen den einzelnen Argumenten muss hier mindestens ein Leerzeichen (siehe Variable IFS) verwendet werden.
Nun aber noch zu folgendem Problem:
you@host > set -a -b -c bash: set: -c: invalid option set: usage: set [--abefhkmnptuvxBCHP] [-o option] [arg ...]
Hier hätte ich gern die Optionen -a, -b und -c an $1, $2 und $3 übergeben. Aber die Shell verwendet hier das Minuszeichen für »echte« Optionen, also Optionen, mit denen Sie das Kommando set beeinflussen können. Wollen Sie dem entgegnen, müssen Sie vor den neuen Positionsparametern zwei Minuszeichen (--) angeben.
you@host > set -- -a -b -c you@host > echo $1 $2 $3 -a -b -c
Hinweis   Bitte beachten Sie, dass Sie mit set -- ohne weitere Angaben von Argumenten alle Positionsparameter löschen. Dies gilt allerdings nur für die Bash und die Korn-Shell. In der Bourne-Shell können Sie hierfür einen leeren String (set "") angeben.
Wie bereits erwähnt wurde, erfolgt der Einsatz von set eher nicht bei der Übergabe von Parametern, sondern der Zerlegung von Zeichenketten, insbesondere der Zeichenketten, die von Kommandos zurückgegeben werden. Ein einfaches Beispiel einer solchen Anwendung (natürlich wird hierzu die Kommando-Substitution herangezogen):
# Positionsparameter mit set und Kommando-Substitution
# auserinfo
set `who | grep $1`
echo "User : $1"
echo "Bildschirm : $2"
echo "Login um : $5"
Das Script bei der Ausführung:
you@host > ./auserinfo you User : you Bildschirm : tty03 Login um : 23:05 you@host > ./auserinfo tot User : tot Bildschirm : :0 Login um : 21:05
Um zu wissen, wie die einzelnen Positionsparameter zu Stande kommen, muss man selbstverständlich mit dem entsprechenden Kommando vertraut sein. Hier wurde who verwendet. Mit einer Pipe wurde die Standardausgabe auf die Standardeingabe von grep weitergeleitet und filtert hierbei nur noch den entsprechenden String aus, den Sie als erstes Argument beim Aufruf mitgeben. Um zu sehen, welche Parameter set übergeben werden, können Sie einfach mal den Befehl who | grep you eingeben (für »you« geben Sie einen auf Ihrem Rechner vorhandenen Benutzernamen ein). Zwar wurde hierbei der Benutzername verwendet, aber es hindert Sie keiner daran, Folgendes vorzunehmen:
you@host > ./auserinfo 23:* User : you Bildschirm : tty03 Login um : 23:05
Damit wird nach einem User gesucht, der sich ab 23 Uhr eingeloggt hat. Zurück zu den Positionsparametern; ein Aufruf von who verschafft Klarheit:
you@host > who tot :0 Feb 16 21:05 (console) you tty03 Feb 16 23:05
Daraus ergeben sich folgende Positionsparameter (siehe Abbildung 3.4):
Durch den Aufruf von
set `who | grep $1`
werden jetzt die einzelnen Positionsparameter den Variablen $1 bis $5 zugewiesen (optional könnten Sie hier mit »(console)« auch den sechsten Parameter mit einbeziehen), die Sie anschließend in Ihrem Script verarbeiten können – im Beispiel wurde eine einfache Ausgabe mittels echo verwendet. Dass dies derart funktioniert, ist immer der Shell-Variablen IFS zu verdanken, die das Trennzeichen für solche Aktionen beinhaltet. Sollten bei einem Kommando keine Leerzeichen als Trenner zurückgegeben werden, so müssen Sie die Variable IFS mit einem neuen Wert versehen.
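Ein typischer Fall dafür ist das Zerlegen einer Zeile aus /etc/passwd, deren Felder durch Doppelpunkte getrennt sind. Eine kleine Skizze dazu (der Benutzername »you« ist hier nur beispielhaft angenommen):

# IFS sichern, auf ":" umstellen und nach dem Zerlegen wieder zurücksetzen
OLDIFS=$IFS
IFS=:
set -- `grep "^you:" /etc/passwd`
IFS=$OLDIFS
echo "Login-Name       : $1"
echo "Home-Verzeichnis : $6"
echo "Login-Shell      : $7"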
Ein weiteres simples und häufig zitiertes Beispiel mit date:
you@host > date Do Feb 17 01:05:15 CET 2005 you@host > set `date` you@host > echo "$3.$2.$6 um $4" 17.Feb.2005 um 01:05:22
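Zu bedenken ist dabei allerdings, dass set die bisherigen Positionsparameter vollständig überschreibt. Werden die ursprünglichen Argumente des Scripts später noch gebraucht, sollte man sie sich vorher sichern. Eine einfache Skizze, die in der Bourne-Shell ohne Arrays auskommt:

# Ursprüngliche Argumente retten, bevor set sie überschreibt
script_args="$*"   # einfachste Variante; Leerzeichen in Argumenten gehen dabei als Trenner verloren
set `date`
echo "Heute ist der $3.$2.$6"    # abhängig vom Ausgabeformat von date, wie im Beispiel oben
echo "Ursprüngliche Argumente: $script_args"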
# 3.8 getopts – Kommandozeilenoptionen auswerten

Um die Funktion getopts richtig einzusetzen, müssen Sie folgende Regeln beachten:
kommando -o datei.txt # Richtig
kommando datei.txt -o # Falsch
kommando -o -x datei.txt # Richtig
kommando -x -o datei.txt # Auch richtig
Optionen können auch zusammengefasst werden:
kommando -ox datei.txt # Richtig
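Die allgemeine Form des getopts-Aufrufs sieht so aus (die Platzhalter entsprechen der folgenden Beschreibung):

getopts Optionen Variable [Argument ...]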
Wird getopts aufgerufen, überprüft dieses Kommando, ob sich eine Option (eingeleitet mit einem »-«) in den Positionsparametern (von links nach rechts) befindet. Der Parameter, der als Nächstes bearbeitet werden soll, wird bei einer Shell in der automatischen Variable OPTIND verwaltet. Der Wert dieser Variablen beträgt beim Aufruf erst einmal 1 – wird aber bei jedem weiteren getopts-Aufruf um 1 erhöht. Wenn eine Kommandozeile mehrfach eingelesen werden soll, muss der Index manuell zurückgesetzt werden.
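Das manuelle Zurücksetzen ist dabei ein Einzeiler:

OPTIND=1    # getopts beginnt die Auswertung wieder beim ersten Positionsparameter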
Wird eine Option gefunden, dann wird dieses Zeichen der Variablen »Variable« zugewiesen. In der Zeichenkette »Optionen« werden alle Schalter mit einem Kennbuchstaben angegeben. Schalter, die zusätzliche Argumente erhalten, bekommen einen Doppelpunkt dahinter angehängt. Die Optionen, die ein weiteres Argument erwarten (also die, die mit einem Doppelpunkt markiert sind), werden in der Shell-Variablen OPTARG gespeichert.
Wenn nicht ausdrücklich ein »Argument« beim Aufruf von getopts übergeben wird, verwendet das Shell-Kommando die Positionsparameter, also die Argumente von der Kommandozeile des Scripts.
Wenn getopts 0 zurückliefert, deutet dies auf eine gefundene Option hin. Bei einem Wert ungleich 0 wurde das Ende der Argumente erreicht oder es ist ein Fehler aufgetreten. Wollen Sie eine Fehlermeldungsausgabe vermeiden, können Sie einen Doppelpunkt als erstes Zeichen verwenden oder die Shell-Variable OPTERR auf 0 setzen.
Gewöhnlich wird getopts in einer Schleife mit einer case-Konstruktion ausgewertet. Zwar wurde beides bisher noch nicht behandelt (erst im nächsten Kapitel), aber trotzdem will ich es an dieser Stelle nicht bei der trockenen Theorie belassen. Hier ein mögliches Beispiel:
# Demonstriert getopts
# Name: agetopter
while getopts abc:D: opt
do
   case $opt in
      a) echo "Option a";;
      b) echo "Option b";;
      c) echo "Option c : ($OPTARG)";;
      D) echo "Option D : ($OPTARG)";;
   esac
done
Das Script bei der Ausführung:
you@host > ./agetopter -a Option a you@host > ./agetopter -b Option b you@host > ./agetopter -c ./agetopter: option requires an argument -- c you@host > ./agetopter -c Hallo Option c : (Hallo) you@host > ./agetopter -D ./agetopter: option requires an argument -- D you@host > ./agetopter -D Nochmals Option D : (Nochmals) you@host > ./agetopter -ab Option a Option b you@host > ./agetopter -abD Option a Option b ./agetopter: option requires an argument -- D you@host > ./agetopter -abD Hallo Option a Option b Option D : (Hallo) you@host > ./agetopter -x ./agetopter: illegal option -- x
Im Beispiel konnten Sie außerdem auch gleich die Auswirkungen des Doppelpunktes hinter einem Optionszeichen erkennen. Geben Sie bei Verwendung einer solchen Option kein weiteres Argument an, wird eine Fehlermeldung zurückgegeben. Selbst ein falsches Argument wertet getopts hier aus und meldet den Fehler.
Wollen Sie die Fehlerausgabe selbst behandeln, also nicht eine von getopts produzierte Fehlermeldung verwenden, dann müssen Sie die Ausgabe von getopts in das Datengrab (/dev/null) schieben und als weitere Option in case ein Fragezeichen auswerten. Im Fehlerfall liefert Ihnen nämlich getopts ein Fragezeichen zurück. Dasselbe Script, jetzt ohne getopts-Fehlermeldungen:
# Demonstriert getopts
# Name: agetopter2
while getopts abc:D: opt 2>/dev/null
do
   case $opt in
      a) echo "Option a";;
      b) echo "Option b";;
      c) echo "Option c : ($OPTARG)";;
      D) echo "Option D : ($OPTARG)";;
      ?) echo "($0): Ein Fehler bei der Optionsangabe"
   esac
done
Das Script bei einer fehlerhaften Ausführung:
you@host > ./agetopter2 -D (./agetopter2): Ein Fehler bei der Optionsangabe you@host > ./agetopter2 -x (./agetopter2): Ein Fehler bei der Optionsangabe
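Alternativ zur Umleitung nach /dev/null lässt sich die getopts-eigene Fehlermeldung auch mit einem Doppelpunkt als erstem Zeichen der Optionen-Zeichenkette abschalten. getopts liefert dann bei einer unbekannten Option ein »?« und bei einem fehlenden Optionsargument ein »:« in der Variablen zurück; das betroffene Optionszeichen steht jeweils in OPTARG. Eine Skizze dazu (gleiche Optionen wie oben, der Scriptname ist frei gewählt):

# Stiller Modus über den führenden Doppelpunkt
# Name: agetopter3
while getopts :abc:D: opt
do
   case $opt in
      a)  echo "Option a";;
      b)  echo "Option b";;
      c)  echo "Option c : ($OPTARG)";;
      D)  echo "Option D : ($OPTARG)";;
      :)  echo "($0): Der Option -$OPTARG fehlt das Argument";;
      \?) echo "($0): Unbekannte Option -$OPTARG";;
   esac
done
# Danach lassen sich die bereits verarbeiteten Optionen wegschieben,
# um an eventuell folgende »normale« Argumente zu kommen:
shift `expr $OPTIND - 1`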
# 3.9 Vorgabewerte für Variablen
Da Sie sich nicht immer darauf verlassen können, dass die Anwender Ihrer Scripts schon das Richtige eingeben werden, gibt es so genannte Vorgabewerte für Variablen. Dass ich hier nicht »Vorgabewerte für Argumente« schreibe, deutet schon darauf hin, dass dieses Anwendungsgebiet nicht nur für die Kommandozeile gilt, sondern auch für Variablen im Allgemeinen. Neben den Positionsparametern können Sie damit also auch jegliche Art von Benutzereingaben bearbeiten.
Wenn Sie zu Kapitel 4, Kontrollstrukturen, kommen, wird Ihnen auffallen, dass die Verwendung von Vorgabewerten den if-then-else-Konstrukten ähnelt. Hierzu ein simples Beispiel. Es sollen aus einem Verzeichnis alle Verzeichnisse, die sich darin befinden, ausgegeben werden. Nehmen wir als Scriptnamen »lsdirs«. Rufen Sie dieses Script ohne ein Argument auf, wird durch einen Standardwert (im Beispiel einfach das aktuelle Arbeitsverzeichnis pwd) das aufzulistende Verzeichnis vorgegeben. Hier das Shellscript:
# Vorgabewerte setzen
# Name: lsdirs
directory=${1:-`pwd`}
ls -ld $directory | grep ^d
Das Script bei der Ausführung:
you@host > ./lsdirs drwxr-xr-x 2 tot users 72 2005-02-07 10:29 bin drwx------ 3 tot users 424 2005-02-07 11:29 Desktop drwxr-xr-x 2 tot users 112 2005-02-17 08:11 Documents drwxr-xr-x 4 tot users 208 2005-02-07 10:29 HelpExplorer drwxr-xr-x 2 tot users 80 2005-02-05 15:03 public_html drwxr-xr-x 3 tot users 216 2004-09-04 19:55 Setup drwxr-xr-x 4 tot users 304 2005-02-15 07:19 Shellbuch you@host > ./lsdirs /home/tot/Shellbuch drwxr-xr-x 2 tot users 2712 2005-02-09 03:57 chm_pdf drwxr-xr-x 2 tot users 128 2005-02-05 15:15 Planung you@host > ./lsdirs /home drwxr-xr-x 27 tot users 2040 2005-02-18 00:30 tot drwxr-xr-x 45 you users 2040 2005-01-28 02:32 you
Zugegeben, das mit dem grep ^d hätte man auch mit einem einfachen test-Kommando realisieren können, aber hier müsste ich wieder auf ein Thema vorgreifen, was bisher noch nicht behandelt wurde. Durch ^d werden einfach alle Zeilen von ls -ld herausgezogen, die mit einem d (hier für die Dateiart directory) anfangen. Mit der Zeile
directory=${1:-`pwd`}
übergeben Sie der Variablen »directory« entweder den Wert des Positionsparameters $1 oder – wenn diese Variable leer ist – es wird stattdessen eine Kommando-Substitution durchgeführt (hier pwd) und deren Wert in »directory« abgelegt. Es gibt noch mehr solcher Konstruktionen für Standardwerte von Variablen, wie sie hier mit ${var:-wort} verwendet wurde. Tabelle 3.1 nennt alle Möglichkeiten:
Tabelle 3.1  Vorgabewerte für Variablen

Vorgabewert-Konstrukt | Bedeutung |
| --- | --- |
${var:-wort} | Ist var mit einem Inhalt besetzt (nicht Null), wird var zurückgegeben. Ansonsten wird die Variable wort verwendet. |
${var:+wort} | Hier wird wort zurückgegeben, wenn die var nicht (!) leer ist. Ansonsten wird ein Null-Wert zurückgegeben. Praktisch das Gegenteil von ${var:-wort}. |
${var:=wort} | Ist var nicht gesetzt oder entspricht var einem Null-Wert, dann setze var=wort. Ansonsten wird var zurückgegeben. |
${var:?wort} | Gibt var zurück, wenn var nicht Null ist. Ansonsten: Ist ein wort gesetzt, dann eben wort ausgeben. Wenn kein wort angegeben wurde, einen vordefinierten Text verwenden und das Shellscript verlassen. |
Anmerkung   wort kann hier in allen Ausdrücken entweder ein String sein oder eben ein Ausdruck (wie im Beispiel eine Kommando-Substitution).
Hinweis   Lassen Sie bei diesen Ausdrücken den Doppelpunkt weg, ändert sich die erste Abfrage so, dass nur überprüft wird, ob diese Variable definiert ist oder nicht.
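Ein kleines Testscript, das diese Konstrukte nacheinander durchspielt, könnte etwa so aussehen. Hier nur eine Skizze mit frei gewählten Variablen- und Vorgabenamen, die sich an der unten gezeigten Ausgabe orientiert:

# Vorgabewerte nacheinander ausprobieren
# Name: adefaultvar
# Ohne erstes Argument wird der Vorgabewert ausgegeben
echo ${1:-Alternatives_erstes_Argument}
# var2 ist leer bzw. nicht gesetzt -> wort2 wird verwendet
echo ${var2:-wort2}
# var3 ist leer -> var3 wird auf wort3 gesetzt und ausgegeben
echo ${var3:=wort3}
# var4 und var5 sind mit Inhalt belegt -> :+ liefert das Ersatzwort
var4=inhalt4 ; var5=inhalt5
echo ${var4:+var4} ${var5:+var5}
# var6 ist nicht gesetzt -> Fehlermeldung »wort6« und Abbruch des Scripts
echo ${var6:?wort6}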
Das Script bei der Ausführung:
> you@host > ./adefaultvar Alternatives_erstes_Argument wort2 wort3 var4 var5 ./adefaultvar: line 22: var6: wort6 you@host > ./adefaultvar mein_Argument mein_Argument wort2 wort3 var4 var5 ./adefaultvar: line 22: var6: wort6
# 4.2 Die else-Alternative für eine if-Verzweigung
Oft will man der if-Verzweigung noch eine Alternative anbieten. Das heißt, wenn die Überprüfung des Kommandos (oder auch der test-Aufruf) fehlgeschlagen ist, soll in einen anderen Codeabschnitt verzweigt werden. Hier die Syntax zur else-Verzweigung:
if Kommando_erfolgreich
then
   # Ja, Kommando war erfolgreich
   # ... hier Befehle für erfolgreiches Kommando verwenden
else
   # Nein, Kommando war nicht erfolgreich
   # ... hier die Befehle bei einer erfolglosen
   # Kommandoausführung setzen
fi
Gegenüber der if-Verzweigung kommt hier ein alternatives else hinzu. War die Kommandoausführung erfolglos, geht es hinter else bis fi mit der Programmausführung weiter (siehe Abbildung 4.3).
Natürlich können Sie auch hier die unleserlichere Form der if-else-Verzweigung verwenden:
if Kommando_erfolgreich ; then befehl(e) ; else befehl(e) ; fi
Umgeschrieben auf das Beispiel »aif1« vom Anfang dieses Kapitels sieht das Script mit else folgendermaßen aus:
# Demonstriert eine alternative Verzweigung mit else
# Name: aelse
# Benutzer in /etc/passwd suchen ...
if grep "^$1" /etc/passwd > /dev/null
then
   # grep erfolgreich
   echo "User $1 ist bekannt auf dem System"
else
   # grep erfolglos
   echo "User $1 gibt es hier nicht"
fi
Siehe dazu auch Abbildung 4.4:
Gegenüber dem Script aif1 hat sich bei der Ausführung nicht viel geändert. Nur wenn grep erfolglos war, kann jetzt in die else-Alternative verzweigt werden und das Script muss nicht mehr mit exit beendet werden.
# 4.3 Mehrfache Alternative mit elif
Es ist auch möglich, mehrere Bedingungen zu testen. Hierzu verwendet man elif. Die Syntax:
if Kommando1_erfolgreich
then
   # Ja, Kommando1 war erfolgreich
   # ... hier Befehle für erfolgreiches Kommando1 verwenden
elif Kommando2_erfolgreich
then
   # Ja, Kommando2 war erfolgreich
   # ... hier Befehle für erfolgreiches Kommando2 verwenden
else
   # Nein, kein Kommando war erfolgreich
   # ... hier die Befehle bei einer erfolglosen
   # Kommandoausführung setzen
fi
Liefert die Kommandoausführung von if 0 zurück, werden wieder entsprechende Kommandos nach then bis zum nächsten elif ausgeführt. Gibt if hingegen einen Wert ungleich 0 zurück, dann findet die Ausführung in der nächsten Verzweigung, hier mit elif, statt. Liefert elif hier einen Wert von 0 zurück, werden die Kommandos im darauf folgenden then bis zu else ausgeführt. Gibt elif hingegen ebenfalls einen Wert ungleich 0 zurück, findet die weitere Ausführung in der else-Alternative statt. else ist hier optional und muss nicht verwendet werden. Selbstverständlich können Sie hier beliebig viele elif-Verzweigungen einbauen. Sobald eines der Kommandos erfolgreich war, wird entsprechender Zweig abgearbeitet (siehe Abbildung 4.5).
Alternativ zu dem etwas längeren elif-Konstrukt wird häufig die case-Fallunterscheidung verwendet (sehr empfehlenswert).
Ein einfaches Anwendungsbeispiel ist es, zu ermitteln, ob ein bestimmtes Kommando auf dem System vorhanden ist. Im Beispiel soll ermittelt werden, was für ein make sich auf dem System befindet. Bspw. gibt es neben make auch gmake, welches von einigen Programmen benutzt wird. Auf neueren Systemen ist häufig ein entsprechender symbolischer Link gesetzt. Beim Verwenden von gmake etwa unter Linux wird behauptet, dass dieses Kommando hier tatsächlich existiert. Ein Blick auf dieses Kommando zeigt uns aber einen symbolischen Link auf make.
you@host > which gmake /usr/bin/gmake you@host > ls -l /usr/bin/gmake lrwxrwxrwx 1 root root 4 2005-02-05 14:21 /usr/bin/gmake -> make
Hier ein solches Shellscript, das testet, ob das Kommando zum Bauen hier xmake (erfunden), make oder gmake heißt.
# Demonstriert eine elif-Verzweigung
# Name: aelif
if which xmake > /dev/null 2>&1
then
   echo "xmake vorhanden"
   # Hier die Kommandos für xmake
elif which make > /dev/null 2>&1
then
   echo "make vorhanden"
   # Hier die Kommandos für make
elif which gmake > /dev/null 2>&1
then
   echo "gmake vorhanden"
   # Hier die Kommandos für gmake
else
   echo "Kein make auf diesem System vorhanden"
   exit 1
fi
Das Script bei der Ausführung:
you@host > ./aelif make vorhanden
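Ein Hinweis am Rande: In der Bash und anderen POSIX-konformen Shells lässt sich für solche Existenz-Tests auch das eingebaute Kommando command -v verwenden, etwa so:

if command -v make > /dev/null 2>&1
then
   echo "make vorhanden"
fi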
# 4.4 Das Kommando test
Wie Sie im Verlauf bereits festgestellt haben, scheint das test-Kommando das ultimative Werkzeug für viele Shellscripts zu sein. Und in der Tat wäre das Shell-Leben ohne dieses Kommando nur halb so einfach. Das test-Kommando wird überall dort eingebaut, wo ein Vergleich von Zeichenketten oder Zahlenwerten und eine Überprüfung von Zuständen einer Datei erforderlich sind. Da die if-Verzweigung eigentlich für Kommandos konzipiert wurde, ist es nicht so ohne weiteres möglich, einfach zwei Zeichenketten oder Zahlen mit einem if zu überprüfen. Würden Sie dies dennoch tun, würde die Shell entsprechende Zeichenketten oder Zahlenwerte als Befehl erkennen und versuchen, diesen auszuführen.
Damit Sie also solche Vergleiche durchführen können, benötigen Sie ein weiteres Kommando, das test-Kommando. Erst mit test ist es Ihnen möglich, verschiedene Ausdrücke zu formulieren und auszuwerten. Hier die Syntax des test-Kommandos:
if test Ausdruck
then
   # Ausdruck ist wahr, der Rückgabewert von test 0
   # hier die weiteren Kommandos bei erfolgreichem Ausdruck
fi
Das test-Kommando wird vorwiegend in seiner »symbolischen« Form mit den eckigen Klammern eingesetzt.
if [ Ausdruck ]
then
   # Ausdruck ist wahr, der Rückgabewert von test 0
   # hier die weiteren Kommandos bei erfolgreichem Ausdruck
fi
Hinweis   Bitte beachten Sie, dass der Befehl test Leerzeichen zwischen den einzelnen Ausdrücken erwartet. Dies gilt übrigens auch hinter jedem sich öffnenden [ und vor jedem schließenden ].
Dass bei einem korrekten Ausdruck wieder 0 zurückgegeben wird, liegt daran, dass test letztendlich auch nichts anderes als ein Kommando ist.
## 4.4.1 Ganze Zahlen vergleichen
Um ganze Zahlen mit test zu vergleichen, stehen Ihnen folgende Operatoren zur Verfügung (siehe Tabelle 4.1):
Tabelle 4.1  Ganze Zahlen mit test vergleichen

Ausdruck | Bedeutung | Liefert wahr (0) zurück, wenn ... |
| --- | --- | --- |
[ var1 -eq var2 ] | (eq = equal) | var1 gleich var2 ist |
[ var1 -ne var2 ] | (ne = not equal) | var1 ungleich var2 ist |
[ var1 -lt var2 ] | (lt = less than) | var1 kleiner als var2 ist |
[ var1 -gt var2 ] | (gt = greater than) | var1 größer als var2 ist |
[ var1 -le var2 ] | (le = less equal) | var1 kleiner oder gleich var2 ist |
[ var1 -ge var2 ] | (ge = greater equal) | var1 größer oder gleich var2 ist |
Hier ein einfaches Script, das Ihnen die Zahlenvergleiche in der Praxis demonstriert:
# Demonstriert das test-Kommando mit Zahlenwerten
# Name: avalue
a=6; b=7
if [ $a -eq $b ]
then
   echo "\$a ($a) ist gleich mit \$b ($b)"
else
   echo "\$a ($a) ist nicht gleich mit \$b ($b)"
fi
if [ $a -gt $b ]
then
   echo "\$a ($a) ist größer als \$b ($b)"
elif [ $a -lt $b ]
then
   echo "\$a ($a) ist kleiner als \$b ($b)"
else
   echo "\$a ($a) ist gleich mit \$b ($b)"
fi
if [ $a -ne 5 ]
then
   echo "\$a ($a) ist ungleich 5"
fi
if [ 7 -eq $b ]
then
   echo "\$b ist gleich 7"
fi
Das Script bei der Ausführung:
you@host > ./avalue $a (6) ist nicht gleich mit $b (7) $a (6) ist kleiner als $b (7) $a (6) ist ungleich 5 $b ist gleich 7
Dass hier tatsächlich numerische Zahlenvergleiche und keine String-Vergleiche stattfinden, ist den Ausdrücken -eq, -ne, -lt, -gt, -le und -ge zu verdanken. Steht einer dieser Ausdrücke zwischen zwei Zahlen, wandelt der Befehl test die Zeichenketten vorher noch in Zahlen um. Dabei ist das test-Kommando sehr »intelligent« und erkennt selbst folgende Werte als Zahlen an:
you@host > a="00001"; b=1 you@host > [ $a -eq $b ] you@host > echo $? 0 you@host > a=" 1"; b=1 you@host > [ $a -eq $b ] you@host > echo $? 0
Hier wurde auch die Zeichenkette "00001" und " 1" von test in eine numerische 1 umgewandelt.
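Umgekehrt quittiert test einen Vergleich mit einer Zeichenkette, die sich nicht in eine Zahl umwandeln lässt, mit einer Fehlermeldung und einem Rückgabewert größer als 1 (der genaue Wortlaut der Meldung hängt von der verwendeten Shell ab):

you@host > [ abc -eq 1 ]
bash: [: abc: integer expression expected
you@host > echo $?
2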
Argumente aus der Kommandozeile überprüfen
Eine fast immer verwendete Aktion des test-Kommandos ist das Überprüfen, ob die richtige Anzahl von Argumenten in der Kommandozeile eingegeben wurde. Die Anzahl der Argumente finden Sie in der Variablen $# – mit dem test-Kommando können Sie jetzt entsprechend reagieren.
# Überprüft die richtige Anzahl von Argumenten
# aus der Kommandozeile
# Name: atestarg
if [ $# -ne 2 ]
then
   echo "Hier sind mindestens 2 Argumente erforderlich"
   echo "usage: $0 arg1 arg2 ... [arg_n]"
   exit 1
else
   echo "Erforderliche Anzahl Argumente erhalten"
fi
Das Script bei der Ausführung:
you@host > ./atestarg Hier sind mindestens 2 Argumente erforderlich usage: atestarg arg1 arg2 ... [arg_n] you@host > ./atestarg Hallo Welt Erforderliche Anzahl Argumente erhalten
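Streng genommen prüft [ $# -ne 2 ] auf genau zwei Argumente. Sollen, wie es die Meldung und die usage-Zeile andeuten, auch mehr als zwei Argumente erlaubt sein, müsste die Bedingung etwa so lauten:

if [ $# -lt 2 ]
then
   echo "Hier sind mindestens 2 Argumente erforderlich"
   echo "usage: $0 arg1 arg2 ... [arg_n]"
   exit 1
fi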
## 4.4.2 Ganze Zahlen vergleichen mit let (Bash und Korn-Shell only)
In der Bash und der Korn-Shell steht Ihnen noch eine weitere Alternative zum Vergleichen von Zahlen zur Verfügung. Hier kommen auch die programmiertypischen Operatoren für den Vergleich zum Einsatz (siehe Tabelle 4.2).
Tabelle 4.2  Ganze Zahlen mit test vergleichen (Bash und Korn-Shell)

Ausdruck | Operator | Liefert wahr (0) zurück, wenn ... |
| --- | --- | --- |
(( var1 == var2 )) | == | var1 gleich var2 ist |
(( var1 != var2 )) | != | var1 ungleich var2 ist |
(( var1 < var2 )) | < | var1 kleiner als var2 ist |
(( var1 > var2 )) | > | var1 größer als var2 ist |
(( var1 >= var2 )) | >= | var1 größer oder gleich var2 ist |
(( var1 <= var2 )) | <= | var1 kleiner oder gleich var2 ist |
Wer sich vielleicht noch an den let-Abschnitt erinnert, dem dürfte auffallen, was wir hier haben – richtig, bei der doppelten Klammerung (( ... )) handelt es sich um nichts anderes als um eine symbolische Form für das let-Kommando, welches Sie bereits bei den Variablen mit arithmetischen Ausdrücken verwendet haben. Natürlich können Sie hierbei auch anstatt der doppelten Klammerung das Kommando let verwenden. So kann beispielsweise statt
if (( $a > $b ))
auch let verwendet werden:
if let "$a > $b"
Im Gegensatz zur Verwendung von -eq, -ne usw. sind die Leerzeichen bei den Zahlenvergleichen hier nicht von Bedeutung und können bei Bedarf unleserlich zusammengequetscht werden ;-). Des Weiteren kann, wie Sie von let vielleicht noch wissen, das $-Zeichen vor den Variablen beim Vergleich in doppelter Klammerung entfallen.
Wollen Sie bspw. aus dem Listing »atestarg« den Test, ob die richtige Anzahl von Argumenten in der Kommandozeile eingegeben wurde, umschreiben auf die alternative Schreibweise der Bash und Korn-Shell, so müssen Sie nur die Zeile
if [ $# -ne 2 ]
umschreiben in
if (( $# != 2 ))
## 4.4.3 Zeichenketten vergleichen
Das Vergleichen von Zeichenketten mit test funktioniert ähnlich wie bei Zahlenwerten. Hierbei übergeben Sie dem Kommando test eine Zeichenkette, einen Operanden und eine weitere Zeichenkette. Auch hier müssen Sie wieder jeweils (mindestens) ein Leerzeichen dazwischen einschieben. In Tabelle 4.3 sind die Operatoren zum Vergleichen von Zeichenketten aufgelistet:
Tabelle 4.3  Zeichenketten vergleichen

Ausdruck | Operator | Liefert wahr (0) zurück, wenn ... |
| --- | --- | --- |
[ "$var1" = "$var2" ] | = | var1 gleich var2 ist |
[ "$var1" != "$var2" ] | != | var1 ungleich var2 ist |
[ -z "$var" ] | -z | var leer ist |
[ -n "$var" ] | -n | var nicht leer ist |
Hinweis   Auch wenn es nicht vorgeschrieben ist, sollten Sie bei einem test mit Zeichenketten diese immer zwischen zwei doppelte Anführungszeichen setzen. Dies hilft Ihnen zu vermeiden, dass beim Vergleich einer Variablen, die nicht existiert oder keinen Inhalt enthält, Fehler auftreten.
Auch hierzu ein einfaches Shellscript, das verschiedene Vergleiche von Zeichenketten durchführt.
# Demonstriert einfache Zeichenkettenvergleiche
# ateststring
name1=juergen
name2=jonathan
if [ $# -lt 1 ]
then
   echo "Hier ist mindestens ein Argument erforderlich"
   echo "usage: $0 Zeichenkette"
   exit 1
fi
if [ "$1" = "$name1" ]
then
   echo "Hallo $name1"
elif [ "$1" = "$name2" ]
then
   echo "Hallo $name2"
else
   echo "Hier wurde weder $name1 noch $name2 verwendet"
fi
if [ -n "$2" ]
then
   echo "Hier wurde auch ein zweites Argument verwendet ($2)"
else
   echo "Hier wurde kein zweites Argument verwendet"
fi
if [ -z "$name3" ]
then
   echo "Der String \$name3 ist leer oder existiert nicht"
elif [ "$name3" != "you" ]
then
   echo "Bei \$name3 handelt es sich nicht um \"you\""
else
   echo "Hier ist doch \"you\" gemeint"
fi
Das Script bei der Ausführung:
you@host > ./ateststring Hier ist mindestens ein Argument erforderlich usage: ./ateststring Zeichenkette you@host > ./ateststring test Hier wurde weder juergen noch jonathan verwendet Hier wurde kein zweites Argument verwendet Der String $name3 ist leer oder existiert nicht you@host > ./ateststring juergen Hallo juergen Hier wurde kein zweites Argument verwendet Der String $name3 ist leer oder existiert nicht you@host > ./ateststring juergen wolf Hallo juergen Hier wurde auch ein zweites Argument verwendet (wolf) Der String $name3 ist leer oder existiert nicht you@host > name3=wolf you@host > export name3 you@host > ./ateststring jonathan wolf Hallo jonathan Hier wurde auch ein zweites Argument verwendet (wolf) Bei $name3 handelt es sich nicht um "you" you@host > export name3=you you@host > ./ateststring jonathan wolf Hallo jonathan Hier wurde auch ein zweites Argument verwendet (wolf) Hier ist doch "you" gemeint
Wer jetzt immer noch denkt, man könne mit den bisherigen Mitteln noch kein vernünftiges Shellscript schreiben, für den soll hier ein einfaches Backup-Script geschrieben werden. Das folgende Script soll Ihnen zwei Möglichkeiten bieten. Zum einen eine Eingabe wie:
you@host > ./abackup1 save $HOME/Shellbuch
Hiermit soll der komplette Inhalt vom Verzeichnis $HOME/Shellbuch mittels tar archiviert werden (mitsamt den Meldungen des kompletten Verzeichnisbaums). Das Backup soll in einem extra erstellten Verzeichnis mit einem extra erstellten Namen (im Beispiel einfach TagMonatJahr.tar, bspw. 21Feb2005.tar) erstellt werden (natürlich komprimiert).
Auf der anderen Seite soll es selbstverständlich auch möglich sein, den Inhalt dieser Meldungen, der archivierten Backup-Datei, zu lesen, was gerade bei vielen Backup-Dateien auf der Platte unverzichtbar ist. Dies soll mit einem Aufruf wie
you@host > ./abackup1 read $HOME/backup/21Feb2005.tar
erreicht werden. Mit grep hinter einer Pipe können Sie nun nach einer bestimmten Datei im Archiv suchen. Dies könnte man natürlich auch im Script extra einbauen. Aber der Umfang soll hier nicht ins Unermessliche wachsen. Hier ein einfaches, aber anspruchsvolles Backup-Script:
# Ein einfaches Backup-Script
# Name: abackup1
# Beispiel: ./abackup1 save Verzeichnis
# Beispiel: ./abackup1 read (backupfile).tar
BACKUPDIR=$HOME/backup
DIR=$2
if [ $# != 2 ]
then
   echo "Hier sind 2 Argumente erforderlich"
   echo "usage: $0 Option Verzeichnis/Backupfile"
   echo
   echo "Mögliche Angaben für Option:"
   echo "save = Führt Backup vom kompletten Verzeichnis durch"
   echo "       Verzeichnis wird als zweites Argument angegeben"
   echo "read = Liest den Inhalt eines Backupfiles"
   echo "       Backupfile wird als zweites Argument angegeben"
   exit 1
fi
# Falls Verzeichnis für Backup nicht existiert ...
if ls $BACKUPDIR > /dev/null
then
   echo "Backup-Verzeichnis ($BACKUPDIR) existiert"
elif mkdir $BACKUPDIR > /dev/null
then
   echo "Backup-Verzeichnis angelegt ($BACKUPDIR)"
else
   echo "Konnte kein Backup-Verzeichnis anlegen"
   exit 1
fi
# Wurde save oder read als erstes Argument verwendet ...
if [ "$1" = "save" ]
then
   set `date`
   BACKUPFILE="$3$2$6"
   if tar czvf ${BACKUPDIR}/${BACKUPFILE}.tar $DIR
   then
      echo "Backup für $DIR erfolgreich in $BACKUPDIR angelegt"
      echo "Backup-Name : ${BACKUPFILE}.tar"
   else
      echo "Backup wurde nicht durchgeführt !!!"
   fi
elif [ "$1" = "read" ]
then
   echo "Inhalt von $DIR : "
   tar tzf $DIR
else
   echo "Falsche Scriptausführung!!!"
   echo "usage: $0 option Verzeichnis/Backupfile"
   echo
   echo "Mögliche Angaben für Option:"
   echo "save = Führt ein Backup eines kompletten Verzeichnisses durch"
   echo "       Verzeichnis wird als zweites Argument angegeben"
   echo "read = Liest den Inhalt eines Backupfiles"
   echo "       Backupfile wird als zweites Argument angegeben"
fi
Das Script bei der Ausführung:
you@host > ./abackup1 save $HOME/Shellbuch ls: /home/you/backup: Datei oder Verzeichnis nicht gefunden Backup-Verzeichnis angelegt (/home/you/backup) tar: Entferne führende `/' von Archivnamen. home/you/Shellbuch/ home/you/Shellbuch/Planung_und_Bewerbung/ home/you/Shellbuch/Planung_und_Bewerbung/shellprogrammierung.doc home/you/Shellbuch/Planung_und_Bewerbung/shellprogrammierung.sxw home/you/Shellbuch/kap004.txt home/you/Shellbuch/Kap003.txt~ home/you/Shellbuch/kap004.txt~ ... Backup für /home/you/Shellbuch erfolgreich in /home/you/backup angelegt Backup-Name : 21Feb2005.tar you@host > ./abackup1 read $HOME/backup/21Feb2005.tar | \ > grep Kap002 home/tot/Shellbuch/Kap002.doc home/tot/Shellbuch/Kap002.sxw you@host > ./abackup1 read $HOME/backup/21Feb2005.tar | wc -l 50
Hier wurde ein Backup vom kompletten Verzeichnis $HOME/Shellbuch durchgeführt. Anschließend wurde mit der Option read und grep nach »Kap002« gesucht, welches hier in zweifacher Ausführung vorhanden ist. Ebenso einfach können Sie hiermit die Anzahl von Dateien in einem Archiv (hier mit wc -l) ermitteln.
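Wer das Umkopieren der Positionsparameter mit set `date` vermeiden möchte, kann den Backup-Namen auch direkt über das Ausgabeformat von date erzeugen, zum Beispiel so (POSIX-konformes date vorausgesetzt):

# Liefert z. B. »21Feb2005«; %b ist der abgekürzte (ggf. lokalisierte) Monatsname
BACKUPFILE=`date +%d%b%Y`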
## 4.4.4 Zeichenketten vergleichen (Bash und Korn-Shell only)

In der Bash und der Korn-Shell stehen Ihnen noch weitere alternative Möglichkeiten zur Verfügung, um Zeichenketten zu vergleichen (siehe Tabelle 4.4). Besonders interessant erscheint mir in diesem Zusammenhang, dass hiermit jetzt auch echte Mustervergleiche möglich sind.
Tabelle 4.4  Zeichenketten vergleichen (Bash und Korn-Shell)

Ausdruck | Operator | Liefert wahr (0) zurück, wenn ... |
| --- | --- | --- |
[[ "$var1" == "$var2" ]] | == | var1 gleich var2 ist |
[[ "$var1" != "$var2" ]] | != | var1 ungleich var2 ist |
[[ -z "$var" ]] | -z | var leer ist |
[[ -n "$var" ]] | -n | var nicht leer ist |
[[ "$var1" > "$var2" ]] | > | var1 alphabetisch größer als var2 ist |
[[ "$var1" < "$var2" ]] | < | var1 alphabetisch kleiner als var2 ist |
[[ "$var" == pattern ]] | == | var entspricht dem Muster pattern |
[[ "$var" != pattern ]] | != | var entspricht nicht dem Muster pattern |
Das wirklich sehr interessante Feature bei den Vergleichen von Zeichenketten in Bash und Korn-Shell ist die Möglichkeit, Muster beim Vergleich zu verwenden. Hierbei gilt es zu beachten, dass sich das Muster auf der rechten Seite befindet und nicht zwischen Anführungsstrichen stehen darf. Zur Verwendung von Mustern stehen Ihnen wieder die Metazeichen *, ? und [ ] zur Verfügung, deren Bedeutung und Verwendung Sie bereits in Abschnitt 1.10.6 kennen gelernt haben. Natürlich können Sie auch die Konstruktionen für alternative Muster nutzen, welche Sie in Abschnitt 1.10.8 verwendet haben. Hier ein Beispiel mit einfachen Mustervergleichen:
# Demonstriert erweiterte Zeichenkettenvergleiche
# ateststring2
if [ $# -lt 1 ]
then
   echo "Hier ist mindestens ein Argument erforderlich"
   echo "usage: $0 Zeichenkette"
   exit 1
fi
if [[ "$1" = *ist* ]]
then
   echo "$1 enthält die Textfolge \"ist\""
elif [[ "$1" = ?art ]]
then
   echo "$1 enthält die Textfolge \"art\""
elif [[ "$1" = kap[0-9] ]]
then
   echo "$1 enthält die Textfolge \"kap\""
else
   echo "Erfolgloser Mustervergleich"
fi
Das Script bei der Ausführung:
you@host > ./ateststring2 Bauart Erfolgloser Mustervergleich you@host > ./ateststring2 zart zart enthält die Textfolge "art" you@host > ./ateststring2 kap7 kap7 enthält die Textfolge "kap" you@host > ./ateststring2 kapa Erfolgloser Mustervergleich you@host > ./ateststring2 Mistgabel Mistgabel enthält die Textfolge "ist"
Eine weitere interessante Erneuerung ist der Vergleich auf größer bzw. kleiner als. Hiermit werden Zeichenketten alphabetisch in lexikografischer Anordnung verglichen. Hierzu ein Script:
# Demonstriert erweiterte Zeichenkettenvergleiche
# ateststring3
var1=aaa
var2=aab
var3=aaaa
var4=b
if [[ "$var1" > "$var2" ]]
then
   echo "$var1 ist größer als $var2"
else
   echo "$var1 ist kleiner als $var2"
fi
if [[ "$var2" < "$var3" ]]
then
   echo "$var2 ist kleiner als $var3"
else
   echo "$var2 ist größer als $var3"
fi
if [[ "$var3" < "$var4" ]]
then
   echo "$var3 ist kleiner als $var4"
else
   echo "$var3 ist größer als $var4"
fi
Das Script bei der Ausführung:
you@host > ./ateststring3 aaa ist kleiner als aab aab ist größer als aaaa aaaa ist kleiner als b
An diesem Script können Sie sehr gut erkennen, dass hier nicht die Länge der Zeichenkette zwischen dem größer bzw. kleiner entscheidet, sondern das Zeichen, womit die Zeichenkette beginnt. Somit ist laut Alphabet das Zeichen a kleiner als das Zeichen b. Beginnen beide Zeichenketten mit dem Buchstaben a, wird das nächste Zeichen verglichen â eben so, wie Sie dies von einem Lexikon her kennen.
Ihre Meinung
Wie hat Ihnen das Openbook gefallen? Wir freuen uns immer über Ihre Rückmeldung. Schreiben Sie uns gerne Ihr Feedback als E-Mail an <EMAIL>.
## 4.4 Das Kommando testÂ
Wie Sie im Verlauf bereits festgestellt haben, scheint das test-Kommando das ultimative Werkzeug für viele Shellscripts zu sein. Und in der Tat wäre das Shell-Leben ohne dieses Kommando nur halb so einfach. Das test-Kommando wird überall dort eingebaut, wo ein Vergleich von Zeichenketten oder Zahlenwerten und eine Überprüfung von Zuständen einer Datei erforderlich sind. Da die if-Verzweigung eigentlich für Kommandos konzipiert wurde, ist es nicht so ohne weiteres möglich, einfach zwei Zeichenketten oder Zahlen mit einem if zu überprüfen. Würden Sie dies dennoch tun, würde die Shell entsprechende Zeichenketten oder Zahlenwerte als Befehl erkennen und versuchen, diesen auszuführen.
Damit Sie also solche Vergleiche durchführen können, benötigen Sie ein weiteres Kommando, das test-Kommando. Erst mit test ist es Ihnen möglich, verschiedene Ausdrücke zu formulieren und auszuwerten. Hier die Syntax des test-Kommandos:
> if test Ausdruck then # Ausdruck ist wahr, der Rückgabewert von test 0 # hier die weiteren Kommandos bei erfolgreichem Ausdruck fi
Das test-Kommando wird vorwiegend in seiner »symbolischen« Form mit den eckigen Klammern eingesetzt.
> if [ Ausdruck ] then # Ausdruck ist wahr, der Rückgabewert von test 0 # hier die weiteren Kommandos bei erfolgreichem Ausdruck fi
Dass bei einem korrekten Ausdruck wieder 0 zurückgegeben wird, liegt daran, dass test letztendlich auch nichts anderes als ein Kommando ist.
### 4.4.1 Ganze Zahlen vergleichenÂ
Um ganze Zahlen mit test zu vergleichen, stehen Ihnen folgende Operatoren zur Verfügung (siehe Tabelle 4.1):
Ausdruck | Bedeutung | Liefert wahr (0) zurück, wenn ... |
| --- | --- | --- |
[ var1 âeq var2 ] | (eq = equal) | var1 gleich var2 ist |
[ var1 âne var2 ] | (ne = not equal) | var1 ungleich var2 ist |
[ var1 âlt var2 ] | (lt = less than) | var1 kleiner als var2 ist |
[ var1 âgt var2 ] | (gt = greater than) | var1 größer als var2 ist |
[ var1 âle var2 ] | (le = less equal) | var1 kleiner oder gleich var2 ist |
[ var1 âge var2 ] | (ge = greater equal) | var1 größer oder gleich var2 ist |
Hier ein einfaches Script, das Ihnen die Zahlenvergleiche in der Praxis demonstriert:
> # Demonstriert das test-Kommando mit Zahlenwerten # Name: avalue a=6; b=7 if [ $a -eq $b ] then echo "\$a ($a) ist gleich mit \$b ($b)" else echo "\$a ($a) ist nicht gleich mit \$b ($b)" fi if [ $a -gt $b ] then echo "\$a ($a) ist größer als \$b ($b)" elif [ $a -lt $b ] then echo "\$a ($a) ist kleiner als \$b ($b)" else echo "\$a ($a) ist gleich mit \$b ($b)" fi if [ $a -ne 5 ] then echo "\$a ($a) ist ungleich 5" fi if [ 7 -eq $b ] then echo "\$b ist gleich 7" fi
Das Script bei der Ausführung:
> you@host > ./avalue $a (6) ist nicht gleich mit $b (7) $a (6) ist kleiner als $b (7) $a (6) ist ungleich 5 $b ist gleich 7
Dass hier tatsächlich numerische Zahlenvergleiche und keine String-Vergleiche stattfinden, ist den Ausdrücken âeq, âne, âlt, âgt, âle und âge zu verdanken. Steht einer dieser Ausdrücke zwischen zwei Zahlen, wandelt der Befehl test die Zeichenketten vorher noch in Zahlen um. Dabei ist das test-Kommando sehr »intelligent« und erkennt selbst folgende Werte als Zahlen an:
> you@host > a="00001"; b=1 you@host > [ $a -eq $b ] you@host > echo $? 0 you@host > a=" 1"; b=1 you@host > [ $a -eq $b ] you@host > echo $? 0
Hier wurde auch die Zeichenkette "00001" und " 1" von test in eine numerische 1 umgewandelt.
# Argumente aus der Kommandozeile überprüfen
Eine fast immer verwendete Aktion des test-Kommandos ist das Überprüfen, ob die richtige Anzahl von Argumenten in der Kommandozeile eingegeben wurde. Die Anzahl der Argumente finden Sie in der Variablen $# â mit dem test-Kommando können Sie jetzt entsprechend reagieren.
> # Überprüft die richtige Anzahl von Argumenten # aus der Kommandozeile # Name: atestarg if [ $# -ne 2 ] then echo "Hier sind mindestens 2 Argumente erforderlich" echo "usage: $0 arg1 arg2 ... [arg_n]" exit 1 else echo "Erforderliche Anzahl Argumente erhalten" fi
Das Script bei der Ausführung:
> you@host > ./atestarg Hier sind mindestens 2 Argumente erforderlich usage: atestarg arg1 arg2 ... [arg_n] you@host > ./atestarg Hallo Welt Erforderliche Anzahl Argumente erhalten
### 4.4.2 Ganze Zahlen vergleichen mit let (Bash und Korn-Shell only)Â
In der Bash und der Korn-Shell steht Ihnen noch eine weitere Alternative zum Vergleichen von Zahlen zur Verfügung. Hier kommen auch die programmiertypischen Operatoren für den Vergleich zum Einsatz (siehe Tabelle 4.2).
Ausdruck | Operator | Liefert wahr (0) zurück, wenn ... |
| --- | --- | --- |
(( var1 == var2 )) | == | var1 gleich var2 ist |
(( var1 != var2 )) | != | var1 ungleich var2 ist |
(( var1 < var2 )) | < | var1 kleiner als var2 ist |
(( var1 > var2 )) | > | var1 größer als var2 ist |
(( var1 >= var2 )) | >= | var1 größer oder gleich var2 ist |
(( var1 <= var2 )) | <= | var1 kleiner oder gleich var2 ist |
Wer sich vielleicht noch an den let-Abschnitt erinnert, dem dürfte auffallen, was wir hier haben: Richtig, bei der doppelten Klammerung (( ... )) handelt es sich um nichts anderes als um eine symbolische Form für das let-Kommando, welches Sie bereits bei den Variablen mit arithmetischen Ausdrücken verwendet haben. Natürlich können Sie hierbei auch anstatt der doppelten Klammerung das Kommando let verwenden. So kann beispielsweise statt
> if (( $a > $b ))
auch let verwendet werden:
> if let "$a > $b"
Im Gegensatz zur Verwendung von -eq, -ne usw. sind die Leerzeichen bei den Zahlenvergleichen hier nicht von Bedeutung und können bei Bedarf unleserlich zusammengequetscht werden ;-). Des Weiteren kann, wie Sie von let vielleicht noch wissen, das $-Zeichen vor den Variablen beim Vergleich in doppelter Klammerung entfallen.
Wollen Sie bspw. aus dem Listing »atestarg« den Test, ob die richtige Anzahl von Argumenten in der Kommandozeile eingegeben wurde, umschreiben auf die alternative Schreibweise der Bash und Korn-Shell, so müssen Sie nur die Zeile
> if [ $# -ne 2 ]
umschreiben in
> if (( $# != 2 ))
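Wie die beiden Schreibweisen im Zusammenspiel aussehen, zeigt die folgende kleine Skizze (der Dateiname anumber2 ist hier frei gewählt); sie sollte so in der Bash und in der Korn-Shell laufen:

```
# Vergleicht zwei Zahlen mit (( ... )) und mit let
# Name: anumber2 (frei gewählter Name)
a=6; b=7

# Variante 1: doppelte Klammerung, das $-Zeichen darf entfallen
if (( a < b ))
then
   echo "(( )): $a ist kleiner als $b"
fi

# Variante 2: dieselbe Prüfung mit dem Kommando let
if let "a < b"
then
   echo "let  : $a ist kleiner als $b"
fi
```

Beide Abfragen liefern hier dasselbe Ergebnis; welche Schreibweise Sie verwenden, ist letztlich Geschmackssache.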
### 4.4.3 Zeichenketten vergleichen
Das Vergleichen von Zeichenketten mit test funktioniert ähnlich wie bei Zahlenwerten. Hierbei übergeben Sie dem Kommando test eine Zeichenkette, einen Operanden und eine weitere Zeichenkette. Auch hier müssen Sie wieder jeweils (mindestens) ein Leerzeichen dazwischen einschieben. In Tabelle 4.3 sind die Operatoren zum Vergleichen von Zeichenketten aufgelistet:
Ausdruck | Operator | Liefert wahr (0) zurück, wenn ... |
| --- | --- | --- |
[ "$var1" = "$var2" ] | = | var1 gleich var2 ist |
[ "$var1" != "$var2" ] | != | var1 ungleich var2 ist |
[ -z "$var" ] | -z | var leer ist |
[ -n "$var" ] | -n | var nicht leer ist |
Auch hierzu ein einfaches Shellscript, das verschiedene Vergleiche von Zeichenketten durchführt.
> # Demonstriert einfache Zeichenkettenvergleiche # ateststring name1=juergen name2=jonathan if [ $# -lt 1 ] then echo "Hier ist mindestens ein Argument erforderlich" echo "usage: $0 Zeichenkette" exit 1 fi if [ "$1" = "$name1" ] then echo "Hallo $name1" elif [ "$1" = "$name2" ] then echo "Hallo $name2" else echo "Hier wurde weder $name1 noch $name2 verwendet" fi if [ -n "$2" ] then echo "Hier wurde auch ein zweites Argument verwendet ($2)" else echo "Hier wurde kein zweites Argument verwendet" fi if [ -z "$name3" ] then echo "Der String \$name3 ist leer oder existiert nicht" elif [ "$name3" != "you" ] then echo "Bei \$name3 handelt es sich nicht um \"you\"" else echo "Hier ist doch \"you\" gemeint" fi
Das Script bei der Ausführung:
> you@host > ./ateststring Hier ist mindestens ein Argument erforderlich usage: ./ateststring Zeichenkette you@host > ./ateststring test Hier wurde weder juergen noch jonathan verwendet Hier wurde kein zweites Argument verwendet Der String $name3 ist leer oder existiert nicht you@host > ./ateststring juergen Hallo juergen Hier wurde kein zweites Argument verwendet Der String $name3 ist leer oder existiert nicht you@host > ./ateststring juergen wolf Hallo juergen Hier wurde auch ein zweites Argument verwendet (wolf) Der String $name3 ist leer oder existiert nicht you@host > name3=wolf you@host > export name3 you@host > ./ateststring juergen wolf Hallo juergen Hier wurde auch ein zweites Argument verwendet (wolf) Bei $name3 handelt es sich nicht um "you" you@host > export name3=you you@host > ./ateststring juergen wolf Hallo juergen Hier wurde auch ein zweites Argument verwendet (wolf) Hier ist doch "you" gemeint
Wer jetzt immer noch denkt, man könne mit den bisherigen Mitteln noch kein vernünftiges Shellscript schreiben, für den soll hier ein einfaches Backup-Script geschrieben werden. Das folgende Script soll Ihnen zwei Möglichkeiten bieten. Zum einen eine Eingabe wie:
> you@host > ./abackup1 save $HOME/Shellbuch
Hiermit soll der komplette Inhalt vom Verzeichnis $HOME/Shellbuch mittels tar archiviert werden (mitsamt den Meldungen des kompletten Verzeichnisbaums). Das Backup soll in einem extra erstellten Verzeichnis mit einem extra erstellten Namen (im Beispiel einfach TagMonatJahr.tar, bspw. 21Feb2005.tar) erstellt werden (natürlich komprimiert).
Auf der anderen Seite soll es selbstverständlich auch möglich sein, den Inhalt der archivierten Backup-Datei (also die Liste der gesicherten Dateien) wieder zu lesen, was gerade bei vielen Backup-Dateien auf der Platte unverzichtbar ist. Dies soll mit einem Aufruf wie
> you@host > ./abackup1 read $HOME/backup/21Feb2005.tar
erreicht werden. Mit grep hinter einer Pipe können Sie nun nach einer bestimmten Datei im Archiv suchen. Dies könnte man natürlich auch im Script extra einbauen. Aber der Umfang soll hier nicht ins Unermessliche wachsen. Hier ein einfaches, aber anspruchsvolles Backup-Script:
> # Ein einfaches Backup-Script # Name: abackup1 # Beispiel: ./abackup1 save Verzeichnis # Beispiel: ./abackup1 read (backupfile).tar BACKUPDIR=$HOME/backup DIR=$2 if [ $# != 2 ] then echo "Hier sind 2 Argumente erforderlich" echo "usage: $0 Option Verzeichnis/Backupfile" echo echo "Mögliche Angaben für Option:" echo "save = Führt Backup vom kompletten Verzeichnis durch" echo " Verzeichnis wird als zweites Argument angegeben" echo "read = Liest den Inhalt eines Backupfiles" echo " Backupfile wird als zweites Argument angegeben" exit 1 fi # Falls Verzeichnis für Backup nicht existiert ... if ls $BACKUPDIR > /dev/null then echo "Backup-Verzeichnis ($BACKUPDIR) existiert" elif mkdir $BACKUPDIR > /dev/null then echo "Backup-Verzeichnis angelegt ($BACKUPDIR)" else echo "Konnte kein Backup-Verzeichnis anlegen" exit 1 fi # Wurde save oder read als erstes Argument verwendet ... if [ "$1" = "save" ] then set `date` BACKUPFILE="$3$2$6" if tar czvf ${BACKUPDIR}/${BACKUPFILE}.tar $DIR then echo "Backup für $DIR erfolgreich in $BACKUPDIR angelegt" echo "Backup-Name : ${BACKUPFILE}.tar" else echo "Backup wurde nicht durchgeführt !!!" fi elif [ "$1" = "read" ] then echo "Inhalt von $DIR : " tar tzf $DIR else echo "Falsche Scriptausführung!!!" echo "usage: $0 option Verzeichnis/Backupfile" echo echo "Mögliche Angaben für Option:" echo "save = Führt ein Backup eines kompletten Verzeichnisses durch" echo " Verzeichnis wird als zweites Argument angegeben" echo "read = Liest den Inhalt eines Backupfiles" echo " Backupfile wird als zweites Argument angegeben" fi
Das Script bei der Ausführung:
> you@host > ./abackup1 save $HOME/Shellbuch ls: /home/you/backup: Datei oder Verzeichnis nicht gefunden Backup-Verzeichnis angelegt (/home/you/backup) tar: Entferne führende `/' von Archivnamen. home/you/Shellbuch/ home/you/Shellbuch/Planung_und_Bewerbung/ home/you/Shellbuch/Planung_und_Bewerbung/shellprogrammierung.doc home/you/Shellbuch/Planung_und_Bewerbung/shellprogrammierung.sxw home/you/Shellbuch/kap004.txt home/you/Shellbuch/Kap003.txt~ home/you/Shellbuch/kap004.txt~ ... Backup für /home/you/Shellbuch erfolgreich in /home/you/backup angelegt Backup-Name : 21Feb2005.tar you@host > ./abackup1 read $HOME/backup/21Feb2005.tar | \ > grep Kap002 home/tot/Shellbuch/Kap002.doc home/tot/Shellbuch/Kap002.sxw you@host > ./abackup1 read $HOME/backup/21Feb2005.tar | wc -l 50
Hier wurde ein Backup vom kompletten Verzeichnis $HOME/Shellbuch durchgeführt. Anschließend wurde mit der Option read und grep nach »Kap002« gesucht, welches hier in zweifacher Ausführung vorhanden ist. Ebenso einfach können Sie hiermit die Anzahl von Dateien in einem Archiv (hier mit wc -l) ermitteln.
### 4.4.4 Zeichenketten vergleichen (Bash und Korn-Shell only)
In der Bash und der Korn-Shell stehen Ihnen noch weitere alternative Möglichkeiten zur Verfügung, um Zeichenketten zu vergleichen (siehe Tabelle 4.4). Besonders interessant erscheint mir in diesem Zusammenhang, dass hiermit jetzt auch echte Mustervergleiche möglich sind.
Ausdruck | Operator | Liefert wahr (0) zurück, wenn ... |
| --- | --- | --- |
[[ "$var1" == "$var2" ]] | == | var1 gleich var2 ist |
[[ "$var1" != "$var2" ]] | != | var1 ungleich var2 ist |
[[ -z "$var" ]] | -z | var leer ist |
[[ -n "$var" ]] | -n | var nicht leer ist |
[[ "$var1" > "$var2" ]] | > | var1 alphabetisch größer als var2 ist |
[[ "$var1" < "$var2" ]] | < | var1 alphabetisch kleiner als var2 ist |
[[ "$var" == pattern ]] | == | var entspricht dem Muster pattern |
[[ "$var" != pattern ]] | != | var nicht dem Muster pattern entspricht |
Das wirklich sehr interessante Feature bei den Vergleichen von Zeichenketten in Bash und Korn-Shell ist die Möglichkeit, Muster beim Vergleich zu verwenden. Hierbei gilt es zu beachten, dass sich das Muster auf der rechten Seite befindet und nicht zwischen Anführungsstrichen stehen darf. Zur Verwendung von Mustern stehen Ihnen wieder die Metazeichen *, ? und [ ] zur Verfügung, deren Bedeutung und Verwendung Sie bereits in Abschnitt 1.10.6 kennen gelernt haben. Natürlich können Sie auch die Konstruktionen für alternative Muster nutzen, welche Sie in Abschnitt 1.10.8 verwendet haben. Hier ein Beispiel mit einfachen Mustervergleichen:
> # Demonstriert erweiterte Zeichenkettenvergleiche # ateststring2 if [ $# -lt 1 ] then echo "Hier ist mindestens ein Argument erforderlich" echo "usage: $0 Zeichenkette" exit 1 fi if [[ "$1" = *ist* ]] then echo "$1 enthält die Textfolge \"ist\"" elif [[ "$1" = ?art ]] then echo "$1 enthält die Textfolge \"art\"" elif [[ "$1" = kap[0-9] ]] then echo "$1 enthält die Textfolge \"kap\"" else echo "Erfolgloser Mustervergleich" fi
Das Script bei der Ausführung:
> you@host > ./ateststring2 Bauart Erfolgloser Mustervergleich you@host > ./ateststring2 zart zart enthält die Textfolge "art" you@host > ./ateststring2 kap7 kap7 enthält die Textfolge "kap" you@host > ./ateststring2 kapa Erfolgloser Mustervergleich you@host > ./ateststring2 Mistgabel Mistgabel enthält die Textfolge "ist"
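Auch die in Abschnitt 1.10.8 erwähnten alternativen Muster wie @(...|...) lassen sich innerhalb von [[ ]] einsetzen. Die folgende kleine Skizze (der Dateiname ateststring4 ist frei gewählt) zeigt das Prinzip; in der Bash muss dafür gegebenenfalls zuerst die Shell-Option extglob eingeschaltet werden, in der Korn-Shell entfällt diese Zeile:

```
# Demonstriert alternative Muster in [[ ]] (Skizze)
# Name: ateststring4 (frei gewählter Name)
shopt -s extglob   # in der Bash für @(...) nötig, in der Korn-Shell entfernen

if [[ "$1" == @(ja|Ja|JA) ]]
then
   echo "Antwort: Ja"
elif [[ "$1" == @(nein|Nein|NEIN) ]]
then
   echo "Antwort: Nein"
else
   echo "Unbekannte Antwort: $1"
fi
```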
Eine weitere interessante Neuerung ist der Vergleich auf größer bzw. kleiner als. Hiermit werden Zeichenketten alphabetisch in lexikografischer Anordnung verglichen. Hierzu ein Script:
> # Demonstriert erweiterte Zeichenkettenvergleiche # ateststring3 var1=aaa var2=aab var3=aaaa var4=b if [[ "$var1" > "$var2" ]] then echo "$var1 ist größer als $var2" else echo "$var1 ist kleiner als $var2" fi if [[ "$var2" < "$var3" ]] then echo "$var2 ist kleiner als $var3" else echo "$var2 ist größer als $var3" fi if [[ "$var3" < "$var4" ]] then echo "$var3 ist kleiner als $var4" else echo "$var3 ist größer als $var4" fi
Das Script bei der Ausführung:
> you@host > ./ateststring3 aaa ist kleiner als aab aab ist größer als aaaa aaaa ist kleiner als b
An diesem Script können Sie sehr gut erkennen, dass hier nicht die Länge der Zeichenkette darüber entscheidet, was größer bzw. kleiner ist, sondern das Zeichen, womit die Zeichenkette beginnt. Somit ist laut Alphabet das Zeichen a kleiner als das Zeichen b. Beginnen beide Zeichenketten mit dem Buchstaben a, wird das nächste Zeichen verglichen, ebenso wie Sie dies von einem Lexikon her kennen.
## 4.5 Status von Dateien erfragen
Um den Status von Dateien abzufragen, bietet Ihnen test eine Menge Optionen an. Dies ist ebenfalls ein tägliches Geschäft in der Shell-Programmierung. Die Verwendung des test-Kommandos in Bezug auf Dateien sieht wie folgt aus:
> if [ -Operator Datei ] then # ... fi
Die Verwendung ist recht einfach, einem Operator folgt immer genau ein Datei- bzw. Verzeichnisname. In der Tabelle 4.5 bis Tabelle 4.7 finden Sie einen Überblick zu verschiedenen Operatoren, mit denen Sie einen Dateitest durchführen können. Aufteilen lassen sich diese Tests in Dateitypen, Zugriffsrechte auf eine Datei und charakteristische Dateieigenschaften.
Operator | Bedeutung |
| --- | --- |
-b DATEI | Datei existiert und ist ein block special device (Gerätedatei). |
-c DATEI | Datei existiert und ist ein character special file (Gerätedatei). |
-d DATEI | Datei existiert und ist ein Verzeichnis. |
-f DATEI | Datei existiert und ist eine reguläre Datei. |
-h DATEI | Datei existiert und ist ein symbolischer Link (dasselbe wie -L). |
-L DATEI | Datei existiert und ist ein symbolischer Link (dasselbe wie -h). |
-p DATEI | Datei existiert und ist eine named Pipe. |
-S DATEI | Datei existiert und ist ein (UNIX-Domain-)Socket (Gerätedatei im Netzwerk). |
-t [FD] | Ein Filedescriptor (FD) ist auf einem seriellen Terminal geöffnet. |
Operator | Bedeutung |
| --- | --- |
-g DATEI | Datei existiert und das setgid-Bit ist gesetzt. |
-k DATEI | Datei existiert und das sticky-Bit ist gesetzt. |
-r DATEI | Datei existiert und ist lesbar. |
-u DATEI | Datei existiert und das setuid-Bit ist gesetzt. |
-w DATEI | Datei existiert und ist beschreibbar. |
-x DATEI | Datei existiert und ist ausführbar. |
-O DATEI | Datei existiert und der Benutzer des Scripts ist der Eigentümer (owner) der Datei. |
-G DATEI | Datei existiert und der Benutzer des Scripts hat dieselbe GID wie die Datei. |
Operator | Bedeutung |
| --- | --- |
-e DATEI | Datei existiert. |
-s DATEI | Datei existiert und ist nicht leer. |
DATEI1 -ef DATEI2 | Datei1 und Datei2 haben dieselbe Geräte- und Inodennummer und sind somit Hardlinks. |
DATEI1 -nt DATEI2 | Datei1 ist neueren Datums (Modifikationsdatum, nt = newer time) als Datei2. |
DATEI1 -ot DATEI2 | Datei1 ist älter (Modifikationsdatum, ot = older time) als Datei2. |
Hinweis   In der Korn-Shell besteht auch die Möglichkeit, die doppelten eckigen Klammerungen für den Dateitest zu verwenden ([[ -Operator DATEI ]]).
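Das zugehörige Beispielscript »afiletester« fehlt an dieser Stelle. Die folgende Skizze zeigt, wie ein solches Script aussehen könnte; sie ist anhand der nachfolgenden Ausgabe rekonstruiert (die Namen atestdir und atestfile.txt stammen von dort):

```
# Rekonstruierte Skizze zu afiletester (Original fehlt hier)
# Demonstriert einige Dateitest-Operatoren
dir=atestdir
file=atestfile.txt

# Verzeichnis anlegen, falls es noch nicht existiert
if [ -d $dir ]
then
   echo "$dir existiert bereits als Verzeichnis"
elif [ -f $dir ]
then
   echo "$dir existiert bereits, ist aber eine reguläre Datei"
else
   mkdir $dir && echo "Verzeichnis $dir erfolgreich angelegt"
fi

# Datei anlegen, falls sie noch nicht existiert
if [ -e $file ]
then
   echo "$file existiert bereits"
else
   touch $file && echo "$file erfolgreich angelegt"
fi

# Zugriffsrechte der Datei testen
echo "$file ist ..."
[ -r $file ] && echo "... lesbar" || echo "... nicht lesbar"
[ -w $file ] && echo "... schreibbar" || echo "... nicht schreibbar"
[ -x $file ] && echo "... ausführbar" || echo "... nicht ausführbar"
```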
Das Script bei der Ausführung:
> you@host > ./afiletester Verzeichnis atestdir erfolgreich angelegt atestfile.txt erfolgreich angelegt atestfile.txt ist ... ... lesbar ... schreibbar ... nicht ausführbar you@host > rm -r atestdir you@host > touch atestdir you@host > chmod 000 atestfile.txt you@host > ./afiletester atestdir existiert bereits, ist aber eine reguläre Datei atestfile.txt existiert bereits atestfile.txt ist ... ... nicht lesbar ... nicht schreibbar ... nicht ausführbar
## 4.6 Logische Verknüpfung von Ausdrücken
Operator | Logischer Wert | Gibt wahr (0) zurück, wenn ... |
| --- | --- | --- |
ausdruck1 -a ausdruck2 | (and) UND | beide Ausdrücke wahr zurückgeben. |
ausdruck1 -o ausdruck2 | (or) ODER | mindestens einer der beiden Ausdrücke wahr ist. |
! ausdruck | Negation | der Ausdruck falsch ist. |
Operator | Logischer Wert | Gibt wahr (0) zurück, wenn ... |
| --- | --- | --- |
ausdruck1 && ausdruck2 | (and) UND | beide Ausdrücke wahr zurückgeben. |
ausdruck1 || ausdruck2 | (or) ODER | mindestens einer der beiden Ausdrücke wahr ist. |
### 4.6.1 Negationsoperator !
Den Negationsoperator ! können Sie vor jeden Ausdruck setzen. Als Ergebnis des Tests erhalten Sie immer das Gegenteil. Alles, was ohne ! wahr wäre, ist falsch und alles, was falsch ist, wäre dann wahr.
> # Demonstriert Dateitest mit Negation # anegation file=atestfile.txt # Eine Datei anlegen if [ ! -e $file ] then touch $file if [ ! -e $file ] then echo "Konnte $file nicht anlegen" exit 1 fi fi echo "$file angelegt/vorhanden!"
Im Beispiel wird zuerst ausgewertet, ob die Datei atestfile.txt bereits existiert. Existiert sie nicht, wird ein Wert ungleich 0 zurückgegeben und es wird nicht in die if-Verzweigung gesprungen. Allerdings wurde hier durch das Voranstellen des Negationsoperators ! der Ausdruck umgekehrt. Und somit wird in den if-Zweig gesprungen, wenn die Datei nicht existiert. Bei Nicht-Existenz entsprechender Datei wird diese neu erzeugt (mit touch). Anschließend wird selbige Überprüfung nochmals durchgeführt.
Hinweis   Bitte beachten Sie, dass die echte Bourne-Shell den Negationsoperator außerhalb des test-Kommandos nicht kennt.
### 4.6.2 Die UND-Verknüpfung (-a und &&)
Bei einer UND-Verknüpfung müssen alle verwendeten Ausdrücke wahr sein, damit der komplette Ausdruck ebenfalls wahr (0) wird. Sobald ein Ausdruck einen Wert ungleich 0 zurückgibt, gilt der komplette Ausdruck als falsch (siehe Abbildung 4.6).
Ein einfaches Beispiel:
> # Demonstriert die logische UND-Verknüpfung # aandtest file=atestfile.txt # Dateityp und Schreibrecht prüfen if [ -f $file -a -w $file ] then echo "Datei $file ist eine reguläre Datei und beschreibbar" fi
Hier wird überprüft, ob die Datei atestfile.txt eine reguläre Datei UND beschreibbar ist. Gleiches mit dem alternativen UND-Operator && in der Bash oder der Korn-Shell wird wie folgt geschrieben:
> if [ -f $file ] && [ -w $file ]
Wollen Sie überprüfen, ob eine Zahl einen Wert zwischen 1 und 10 besitzt, kann der UND-Operator wie folgt verwendet werden (hier kann der Wert als erstes Argument der Kommandozeile mit übergeben werden, ansonsten wird als Alternative einfach der Wert 5 verwendet):
> # Demonstriert den UND-Operator # aandnumber number=${1:-"5"} # Liegt der Wert zwischen 1 und 10? if [ $number -gt 0 -a $number -lt 11 ] then echo "Wert liegt zwischen 1 und 10" else echo "Wert liegt nicht zwischen 1 und 10" fi
Das Script bei der Ausführung:
> you@host > ./aandnumber 9 Wert liegt zwischen 1 und 10 you@host > ./aandnumber 0 Wert liegt nicht zwischen 1 und 10 you@host > ./aandnumber 11 Wert liegt nicht zwischen 1 und 10 you@host > ./aandnumber 10 Wert liegt zwischen 1 und 10
Natürlich dürfen Sie auch hier wieder die alternative Syntax der Bash bzw. der Korn-Shell verwenden:
> if (( $number > 0 )) && (( $number < 11 ))
### 4.6.3 Die ODER-Verknüpfung (-o und ||)
Eine ODER-Verknüpfung liefert bereits wahr zurück, wenn nur einer der Ausdrücke innerhalb der Verknüpfung wahr ist (siehe Abbildung 4.7).
> if (( $number == 1 )) || (( $number == 2 ))
Hier wird bspw. wahr zurückgeliefert, wenn der Wert von »number« 1 oder 2 ist. Gleiches in der Bourne-Shell:
> if [ $number -eq 1 -o $number -eq 2 ]
Ähnliches wird häufig verwendet, um zu überprüfen, ob ein Anwender »j« oder »ja« bzw. »n« oder »nein« zur Bestätigung einer Abfrage eingegeben hat:
> if [ $answer = "j" -o $answer = "ja" ] if [ $answer = "n" -o $answer = "nein" ]
Alternativ für »neuere« Shells:
> if [[ $answer == "j" ]] || [[ $answer == "ja" ]] if [[ $answer == "n" ]] || [[ $answer == "nein" ]]
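Eine vollständige kleine Skizze dazu (der Dateiname ayesno ist frei gewählt), die die Antwort mit read einliest und anschließend per ODER-Verknüpfung auswertet, könnte etwa so aussehen:

```
# Ja/Nein-Abfrage mit ODER-Verknüpfung (Skizze)
# Name: ayesno (frei gewählter Name)
printf "Weitermachen? (j/n) : "
read answer

if [ "$answer" = "j" -o "$answer" = "ja" ]
then
   echo "Es geht weiter ..."
elif [ "$answer" = "n" -o "$answer" = "nein" ]
then
   echo "Abbruch"
   exit 1
else
   echo "Bitte nur j/ja oder n/nein eingeben"
fi
```

Die Anführungszeichen um $answer verhindern hier übrigens einen Syntaxfehler, falls der Anwender nur die Eingabetaste drückt.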
### 4.6.4 Klammerung und mehrere logische Verknüpfungen
Die Auswertung von Ausdrücken kann entweder innerhalb oder außerhalb der Klammern erfolgen, wobei dies in der Bash bzw. der Korn-Shell das Gleiche bedeutet. Mithilfe von Klammerungen können Sie die Reihenfolge bei der Auswertung von logischen Ausdrücken bestimmen. Dies natürlich nur, solange sich die Ausdrücke innerhalb von runden (( )), doppelt eckigen [[ ]] oder ganz außerhalb von Klammern befinden.
> # Klammerung muss außerhalb von [[ ]] stattfinden if ([[ $var1 == "abc" ]] && [[ $var2 == "cde" ]]) || \ ( [[ $var3 == "abc" ]] )
Hier überprüfen Sie beispielsweise, ob »var1« den Wert »abc« und »var2« den Wert »cde« oder aber »var3« den Wert »abc« enthält. Hier würde also wahr zurückgegeben, wenn der erste Ausdruck, der ja aufgeteilt wurde in zwei Ausdrücke, oder der zweite Ausdruck (hinter dem ODER) wahr zurückgibt. Natürlich können Sie Gleiches auch mit der Bourne-Shell-Kompatibilität vornehmen:
> # Innerhalb von eckigen Klammern müssen runde Klammern # ausgeschaltet werden if [ \( "$var1" = "abc" -a "$var2" = "cde" \) -o \ "$var3" = "abc" ]
Hierbei (also innerhalb von eckigen Klammern) müssen Sie allerdings die ungeschützten runden Klammern durch einen Backslash ausschalten, denn sonst würde versucht werden, eine Subshell zu öffnen. Doch sollte nicht unerwähnt bleiben, dass die Klammerung hierbei auch entfallen könnte, da folgende Auswertung zum selben Ziel geführt hätte:
> # Bash und Korn-Shell if [[ $var1 == "abc" ]] && [[ $var2 == "cde" ]] || \ [[ $var3 == "abc" ]] # alle Shells if [ "$var1" = "abc" -a "$var2" = "cde" -o "$var3" = "abc" ]
Gleiches gilt bei der Klammerung von Zahlenwerten in logischen Ausdrücken:
> # Bash und Korn-Shell if (( $var1 == 4 )) || ( (( $var2 == 2 )) && (( $var3 == 3 )) ) # alle Shells if [ $var1 -eq 4 -o \( $var2 -eq 2 -a $var3 -eq 3 \) ]
Das Gleiche würde man hier auch ohne Klammerung wie folgt erreichen:
> # Bash und Korn-Shell if (( $var1 == 4 )) || (( $var2 == 2 )) && (( $var3 == 3 )) # alle Shells if [ $var1 -eq 4 -o $var2 -eq 2 -a $var3 -eq 3 ]
Hier würde der Ausdruck daraufhin überprüft, ob entweder »var1« gleich 4 ist oder »var2« gleich 2 und »var3« gleich 3. Trifft einer dieser Ausdrücke zu, wird wahr zurückgegeben. In Tabelle 4.10 nochmals die Syntax zu den verschiedenen Klammerungen:
Ausdruck für | Ohne Klammern | Klammerung | Shell |
| --- | --- | --- | --- |
Zeichenketten | [[ Ausdruck ]] | ([[ Ausdruck ]]) | Bash und Korn |
Zeichenketten | [ Ausdruck ] | [ \( Ausdruck \) ] | alle Shells |
Zahlenwerte | (( Ausdruck )) | ( (( Ausdruck )) ) | Bash und Korn |
Zahlenwerte | [ Ausdruck ] | [ \( Ausdruck \) ] | alle Shells |
Dateitest | [ Ausdruck ] | [ \( Ausdruck \) ] | alle Shells |
Dateitest | [[ Ausdruck ]] | ([[ Ausdruck ]]) | nur Korn |
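Wer die behauptete Gleichwertigkeit der Schreibweisen selbst nachvollziehen möchte, kann dazu eine kleine Skizze wie die folgende verwenden (der Dateiname aklammer und die Beispielwerte sind frei gewählt); beide Tests sollten hier dasselbe Ergebnis liefern, da -a stärker bindet als -o:

```
# Klammerung bei logischen Verknüpfungen nachvollziehen (Skizze)
# Name: aklammer (frei gewählter Name)
var1=1; var2=2; var3=3

if [ $var1 -eq 4 -o \( $var2 -eq 2 -a $var3 -eq 3 \) ]
then
   echo "Mit Klammern : wahr"
else
   echo "Mit Klammern : falsch"
fi

if [ $var1 -eq 4 -o $var2 -eq 2 -a $var3 -eq 3 ]
then
   echo "Ohne Klammern: wahr"
else
   echo "Ohne Klammern: falsch"
fi
```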
## 4.7 Short Circuit-Tests - ergebnisabhängige Befehlsausführung
Eine sehr interessante Besonderheit vieler Skriptsprachen besteht darin, dass man die logischen Operatoren nicht nur für Ausdrücke verwenden kann, sondern auch zur logischen Verknüpfung von Anweisungen. So wird zum Beispiel bei einer Verknüpfung zweier Anweisungen mit dem ODER-Operator || die zweite Anweisung nur dann ausgeführt, wenn die erste fehlschlägt. Ein einfaches Beispiel in der Kommandozeile:
> you@host > rm afile 2>/dev/null || \ > echo "afile konnte nicht gelöscht werden" afile konnte nicht gelöscht werden
Kann im Beispiel die Anweisung rm nicht ausgeführt werden, wird die Anweisung hinter dem ODER-Operator ausgeführt. Ansonsten wird eben nur rm zum Löschen der Datei afile ausgeführt. Man spricht hierbei von der Short-Circuit-Logik der logischen Operatoren. Beim Short-Circuit wird mit der Auswertung eines logischen Operators aufgehört, sobald das Ergebnis feststeht. Ist im Beispiel oben der Rückgabewert von rm wahr (0), steht das Ergebnis bei einer ODER-Verknüpfung bereits fest. Liefert der erste Befehl dagegen einen Fehler (ungleich 0) zurück, wird im Fall einer ODER-Verknüpfung die zweite Anweisung ausgeführt, im Beispiel die Fehlermeldung mittels echo. Man nutzt hier also schlicht den Rückgabewert von Kommandos, um Befehle abhängig von deren Ergebnis miteinander zu verknüpfen.
In Tabelle 4.11 finden Sie die Syntax der ergebnisabhängigen Befehlsausführung:
Verknüpfung | Bedeutung |
| --- | --- |
befehl1 && befehl2 | Hier wird befehl2 nur dann ausgeführt, wenn der befehl1 erfolgreich ausgeführt wurde, sprich den Rückgabewert 0 zurückgegeben hat. |
befehl1 || befehl2 | Hier wird befehl2 nur dann ausgeführt, wenn beim Ausführen von befehl1 ein Fehler auftrat, sprich einen Rückgabewert ungleich 0 zurückgegeben hat. |
Das Ganze dürfte Ihnen wie bei einer if-Verzweigung vorkommen. Und in der Tat können Sie dieses Konstrukt als eine verkürzte if-Verzweigung verwenden. Bspw.:
> befehl1 || befehl2
entspricht
> if befehl1 then # Ja, befehl1 ist es else befehl2 fi
Und die ergebnisabhängige Verknüpfung
> befehl1 && befehl2
entspricht folgender if-Verzweigung:
> if befehl1 then befehl2 fi
Ein einfaches Beispiel:
> you@host > ls atestfile.txt > /dev/null 2>&1 && vi atestfile.txt
Hier »überprüfen« Sie praktisch, ob die Datei atestfile.txt existiert. Wenn diese existiert, gibt die erste Anweisung 0 bei Erfolg zurück, somit wird auch die zweite Anweisung ausgeführt. Dies wäre das Laden der Textdatei in den Editor vi. Dies können Sie nun auch mit dem test-Kommando bzw. seiner symbolischen Form verkürzen:
> you@host > [ -e atestfile.txt ] && vi atestfile.txt
Diese Form entspricht dem Beispiel zuvor. Solche Dateitests werden übrigens sehr häufig in den Shellscripts eingesetzt. Natürlich können Sie hierbei auch eine Flut von && und || starten. Aber aus Übersichtlichkeitsgründen sei von Übertreibungen abgeraten. Meiner Meinung nach wird es bei mehr als drei Befehlen recht unübersichtlich.
> you@host > who | grep tot > /dev/null && \ > echo "User tot ist aktiv" || echo "User tot ist nicht aktiv" User tot ist aktiv
Dieses Beispiel ist hart an der Grenze des Lesbaren. Bitte überdenken Sie das, wenn Sie anfangen, ganze Befehlsketten-Orgien auszuführen. Im Beispiel haben Sie außerdem gesehen, dass Sie auch && und || mischen können. Aber auch hier garantiere ich Ihnen, dass Sie bei wilden Mixturen schnell den Überblick verlieren werden. Das Verfahren wird auch sehr gern zum Beenden von Shellscripten eingesetzt:
> # Datei lesbar... wenn nicht, beenden [ -r $file ] || exit 1 # Datei anlegen ... wenn nicht, beenden touch $file || [ -e $file ] || exit 2
## 4.8 Die Anweisung case
Die Anweisung case wird oft als Alternative zu mehreren if-elif-Verzweigungen verwendet. Allerdings überprüft case im Gegensatz zu if oder elif nicht den Rückgabewert eines Kommandos. case vergleicht einen bestimmten Wert mit einer Liste von anderen Werten. Findet hierbei eine Übereinstimmung statt, können einzelne oder mehrere Kommandos ausgeführt werden. Außerdem bietet eine case-Abfrage im Gegensatz zu einer if-elif-Abfrage erheblich mehr Übersicht (siehe Abbildung 4.10). Hier die Syntax, wie Sie eine Fallunterscheidung mit case formulieren:
> case "$var" in muster1) kommando ... kommando ;; muster2) kommando ... kommando ;; mustern) kommando ... kommando ;; esac
Die Zeichenkette »var« nach dem Schlüsselwort case wird nun von oben nach unten mit den verschiedensten Mustern verglichen, bis ein Muster gefunden wurde, das auf »var« passt oder eben nicht. In der Syntaxbeschreibung wird bspw. zuerst »muster1« mit »var« verglichen. Stimmt »muster1« mit »var« überein, werden entsprechende Kommandos ausgeführt, die sich hinter dem betroffenen Muster befinden. Stimmt »var« nicht mit »muster1« überein, wird das nächste Muster »muster2« mit »var« verglichen. Dies geht so lange weiter, bis eine entsprechende Übereinstimmung gefunden wird. Wird keine Übereinstimmung gefunden, werden alle Befehle der case-Anweisung ignoriert und die Ausführung hinter esac fortgeführt.
Eine sehr wichtige Rolle spielen auch die doppelten Semikolons (;;). Denn stimmt ein Muster mit »var« überein, werden die entsprechenden Befehle dahinter bis zu diesen doppelten Semikolons ausgeführt. Wenn die Shell auf diese Semikolons stößt, wird aus der case-Verzweigung herausgesprungen (oder der Muster-Befehlsblock beendet) und hinter esac mit der Scriptausführung fortgefahren. Würden hier keine doppelten Semikolons stehen, so würde die Shell weiter fortfahren (von oben nach unten), Muster zu testen. Ebenfalls von Bedeutung: Jedes Muster wird mit einer runden Klammer abgeschlossen, um es von den folgenden Kommandos abzugrenzen. Es darf aber auch das Muster zwischen zwei Klammern ( muster ) gesetzt werden. Dies verhindert, dass bei Verwendung von case in einer Kommando-Substitution ein Syntaxfehler auftritt. Abgeschlossen wird die ganze case-Anweisung mit esac (case rückwärts geschrieben).
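Wie die Schreibweise mit beiden Klammern innerhalb einer Kommando-Substitution aussehen kann, zeigt die folgende kleine Skizze (die Variablennamen sind hier frei gewählt):

```
# case innerhalb einer Kommando-Substitution (Skizze):
# das Muster steht hier jeweils zwischen zwei Klammern
tag=`date +%a`
art=$(case "$tag" in
   (Sa|Sat|So|Sun) echo "Wochenende" ;;
   (*)             echo "Werktag"    ;;
esac)
echo "Heute: $art"
```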
Das Muster selbst kann ein String sein oder eines der bereits bekannten Metazeichen *, ? oder [ ]. Ebenso können Sie eine Muster-Alternative wie *(...|...|...); @(...|...|...) usw. der Bash und der Korn-Shell aus Abschnitt 1.10.8 einsetzen. Sollten Sie diese Sonderzeichen in Ihren Mustern setzen, so dürfen die Muster nicht zwischen Anführungszeichen stehen.
Der Wert von »var« muss nicht zwangsläufig eine Zeichenkette, sondern kann auch ein Ausdruck sein, der eine Zeichenkette zurückliefert. Es empfiehlt sich allerdings immer, »var« zwischen Double Quotes einzuschließen, um eventuell einen Syntaxfehler bei leeren oder nicht gesetzten Variablen zu vermeiden.
Ein simples Beispiel:
> # Demonstriert die case-Anweisung # acase1 tag=`date +%a` case "$tag" in Mo) echo "Mo : Backup Datenbank machen" ;; Di) echo "Di : Backup Rechner Saurus" ;; Mi) echo "Mi : Backup Rechner Home" ;; Do) echo "Do : Backup Datenbank machen" ;; Fr) echo "Fr : Backup Rechner Saurus" ;; Sa) echo "Sa : Backup Rechner Home" ;; So) echo "So : Sämtliche Backups auf CD-R sichern" ;; esac
Im Beispiel wurde mit einer Kommando-Substitution der aktuelle Tag an die Variable tag übergeben. Anschließend wird in der case-Anweisung entsprechender Tag ausgewertet und ausgegeben. Hängen Sie jetzt hierbei statt einer simplen Ausgabe einige sinnvolle Befehle an und lassen jeden Tag den cron-Daemon darüber laufen, haben Sie ein echtes Backup-Script, das Ihnen Woche für Woche die ganze Arbeit abnimmt. Natürlich muss dies nicht unbedingt ein Backup-Script sein, es können auch bestimmte Log-Dateien ausgewertet werden.
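Ein passender crontab-Eintrag dafür könnte etwa so aussehen (Uhrzeit, Pfad und Logdatei sind hier frei gewählte Annahmen; eingetragen wird die Zeile mit crontab -e):

```
# jeden Tag um 01:30 Uhr das Backup-Script ausführen
30 1 * * * /home/you/bin/acase1 >> /home/you/backup.log 2>&1
```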
### 4.8.1 Alternative Vergleichsmuster
Um gleich wieder auf das Script »acase1« zurückzukommen: Ein Problem, das hierbei auftreten kann, ist, dass die deutsche Schreibweise für den Wochentag verwendet wurde. Was aber, wenn die Umgebung in einer englischsprachigen Welt liegt, in der eben die Wochentage »Mon«, »Tue«, »Wed«, »Thu«, »Fri«, »Sat« und »Sun« heißen? Hierzu bietet Ihnen case in der Auswahlliste alternative Vergleichsmuster an. Sie können mehrere Muster zur Auswahl stellen. Die einzelnen Muster werden dabei mit dem Zeichen | (ODER) voneinander getrennt.
> case "$var" in muster1a|muster1b|muster1c) kommando ... kommando ;; muster2a|muster2b|muster2c) kommando ... kommando ;; muster3a|muster3b|muster3c) kommando ... kommando ;; esac
Wenden Sie dies nun auf das Script »acase1« an, so haben Sie ein Script erstellt, das sowohl die Ausgabe des deutschen als auch des englischsprachigen Wochentags akzeptiert.
> # Demonstriert die case-Anweisung und die # alternativen Vergleichsmuster # acase2 tag=`date +%a` case "$tag" in Mo|Mon) echo "Mo : Backup Datenbank machen" ;; Di|Tue) echo "Di : Backup Rechner Saurus" ;; Mi|Wed) echo "Mi : Backup Rechner Home" ;; Do|Thu) echo "Do : Backup Datenbank machen" ;; Fr|Fri) echo "Fr : Backup Rechner Saurus" ;; Sa|Sat) echo "Sa : Backup Rechner Home" ;; So|Sun) echo "So : Sämtliche Backups auf CD-R sichern" ;; esac
Das Beispiel kann man nochmals verkürzen, da ja am Montag und Donnerstag, Dienstag und Freitag sowie Mittwoch und Samstag dieselbe Arbeit gemacht wird.
> # Demonstriert die case-Anweisung und die alternativen Vergleichsmuster # acase3 tag=`date +%a` case "$tag" in Mo|Mon|Do|Thu) echo "$tag : Backup Datenbank machen" ;; Di|Tue|Fr|Fri) echo "$tag : Backup Rechner Saurus" ;; Mi|Wed|Sa|Sat) echo "$tag : Backup Rechner Home" ;; So|Sun) echo "So : Sämtliche Backups auf CD-R sichern" ;; esac
### 4.8.2 case und Wildcards
Bei den Mustern der case-Anweisung sind auch alle Wildcard-Zeichen erlaubt, die Sie von der Dateinamen-Substitution her kennen. Die Muster-Alternativen wie bspw. *(...|...|...); @(...|...|...) bleiben weiterhin nur der Bash und der Korn-Shell vorbehalten. Auch hier kann wieder z. B. das Script »acase3« einspringen. Wenn alle Tage überprüft wurden, ist es eigentlich nicht mehr erforderlich, den Sonntag zu überprüfen. Hierfür könnte man jetzt theoretisch auch das Wildcard-Zeichen * verwenden.
> # Demonstriert die case-Anweisung und die # alternativen Vergleichsmuster mit Wildcards # acase4 tag=`date +%a` case "$tag" in Mo|Mon|Do|Thu) echo "$tag : Backup Datenbank machen" ;; Di|Tue|Fr|Fri) echo "$tag : Backup Rechner Saurus" ;; Mi|Wed|Sa|Sat) echo "$tag : Backup Rechner Home" ;; *) echo "So : Sämtliche Backups auf CD-R sichern" ;; esac
Das Wildcard-Zeichen wird in einer case-Anweisung immer als letztes Alternativ-Muster eingesetzt, wenn bei den Mustervergleichen zuvor keine Übereinstimmung gefunden wurde. Somit führt der Vergleich mit * immer zum Erfolg und wird folglich auch immer ausgeführt, wenn keiner der Mustervergleiche zuvor gepasst hat. Das * entspricht damit der else-Alternative einer if-elif-Verzweigung.
Die Wildcards [] lassen sich bspw. hervorragend einsetzen, wenn Sie nicht sicher sein können, ob der Anwender Groß- und/oder Kleinbuchstaben bei der Eingabe verwendet. Wollen Sie bspw. überprüfen, ob der Benutzer für eine Ja-Antwort »j«, »J«, »ja«, »Ja« oder »JA« verwendet hat, können Sie dies folgendermaßen mit den Wildcards [] vornehmen.
> # Demonstriert die case-Anweisung mit Wildcards # acase5 # Als erstes Argument angeben case "$1" in [jJ] ) echo "Ja!" ;; [jJ][aA]) echo "Ja!" ;; [nN]) echo "Nein!" ;; [nN][eE][iI][nN]) echo "Nein!" ;; *) echo "Usage $0 [ja] [nein]" ;; esac
Das Script bei der Ausführung:
> you@host > ./acase1 Usage ./acase1 [ja] [nein] you@host > ./acase1 j Ja! you@host > ./acase1 jA Ja! you@host > ./acase1 nEIn Nein! you@host > ./acase1 N Nein! you@host > ./acase1 n Nein! you@host > ./acase1 Ja Ja!
Das Ganze lässt sich natürlich mit Vergleichsmustern wiederum erheblich verkürzen:
> # Demonstriert die case-Anweisung mit Wildcards und # alternativen Mustervergleichen # acase6 # Als erstes Argument angeben case "$1" in [jJ]|[jJ][aA]) echo "Ja!" ;; [nN]|[nN][eE][iI][nN]) echo "Nein!" ;; *) echo "Usage $0 [ja] [nein]" ;; esac
### 4.8.3 case und Optionen
Zu guter Letzt eignet sich case hervorragend zum Auswerten von Optionen in der Kommandozeile. Ein einfaches Beispiel:
> # Demonstriert die case-Anweisung zum Auswerten von Optionen # acase7 # Als erstes Argument angeben case "$1" in -[tT]|-test) echo "Option \"test\" aufgerufen" ;; -[hH]|-help|-hilfe) echo "Option \"hilfe\" aufgerufen" ;; *) echo "($1) Unbekannte Option aufgerufen!" esac
Das Script bei der Ausführung:
> you@host > ./acase1 -t Option "test" aufgerufen you@host > ./acase1 -test Option "test" aufgerufen you@host > ./acase1 -h Option "hilfe" aufgerufen you@host > ./acase1 -hilfe Option "hilfe" aufgerufen you@host > ./acase1 -H Option "hilfe" aufgerufen you@host > ./acase1 -zzz (-zzz) Unbekannte Option aufgerufen!
Sie können hierfür wie gewohnt die Optionen der Kommandozeile mit dem Kommando getopts (siehe Abschnitt 3.8) auswerten, zum Beispiel:
> # Demonstriert die case-Anweisung zum Auswerten von # Optionen mit getopts # acase8 while getopts tThH opt 2>/dev/null do case $opt in t|T) echo "Option test";; h|H) echo "Option hilfe";; ?) echo "($0): Ein Fehler bei der Optionsangabe" esac done
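Ein möglicher Aufruf könnte dann etwa so aussehen (der Scriptname acase8 ist hier angenommen; die Ausgaben ergeben sich aus den echo-Zeilen des Scripts):

```
you@host > ./acase8 -t
Option test
you@host > ./acase8 -H
Option hilfe
you@host > ./acase8 -x
(./acase8): Ein Fehler bei der Optionsangabe
```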
## 4.10 for-Schleife
Die for-Schleife gehört zu der Familie der Aufzählschleifen. Damit wird eine Befehlsfolge für Wörter in einer angegebenen Liste ausgeführt. Das heißt, die for-Schleife benötigt eine Liste von Parametern. Die Anzahl der Wörter in dieser Liste bestimmt dann, wie oft die Schleife ausgeführt bzw. durchlaufen wird. Hier zunächst die Syntax der for-Schleife:
for var in liste_von_parameter do kommando ... kommando done
Die Funktionsweise von for ist folgende: Vor jedem Eintritt in die Schleife wird das nächste Wort der Parameterliste in die Variable »var« gelegt. Die Liste der Parameter hinter dem Schlüsselwort in besteht aus Wörtern, die jeweils von mindestens einem Leerzeichen (bzw. auch abhängig von der Variablen IFS) getrennt sein müssen. Dies kann bspw. so aussehen:
for var in wort1 wort2 wort3 do ... done
Somit würde beim ersten Schleifendurchlauf der Wert von »wort1« in die Variable »var« übertragen. Jetzt werden die Kommandos zwischen den Schlüsselworten do und done ausgeführt. Wurden alle Kommandos abgearbeitet, beginnt der nächste Schleifendurchlauf, bei dem jetzt »wort2« in die Variable »var« übertragen wird. Dieser Vorgang wird so lange wiederholt, bis alle Elemente der Parameterliste abgearbeitet wurden. Anschließend wird mit der Ausführung hinter dem Schlüsselwort done fortgefahren (siehe Abbildung 4.11).
4.10.1 Argumente bearbeiten mit forÂ
Wenn Sie sich die for-Schleife mit der Parameterliste ansehen, dürften Sie wohl gleich auf die Auswertung der Argumente in der Kommandozeile kommen. for scheint wie geschaffen, diese Argumente zu verarbeiten. Hierzu müssen Sie für die Parameterliste lediglich die Variable $@ verwenden. Das folgende Script überprüft alle Dateien, die Sie in der Kommandozeile mit angeben auf Ihren Typ.
# Demonstriert die Verwendung von for mit Argumenten # afor1 for datei in "$@" do [ -f $datei ] && echo "$datei: Reguläre Datei" [ -d $datei ] && echo "$datei: Verzeichnis" [ -b $datei ] && echo "$datei: Gerätedatei(block special)" [ -c $datei ] && echo "$datei: Gerätedatei(character special)" [ -t $datei ] && echo "$datei: serielles Terminal" [ ! -e $datei ] && echo "$datei: existiert nicht" done
Das Script bei der Ausführung:
you@host > ./afor1 Shellbuch backup afor1 /dev/tty \ > /dev/cdrom gibtsnicht Shellbuch: Verzeichnis backup: Verzeichnis afor1: Reguläre Datei /dev/tty: Gerätedatei (character special) /dev/cdrom: Gerätedatei (block special) gibtsnicht: existiert nicht
Zwar könnten Sie in Ihrem Script auch die Positionsparameter $1 bis $n verwenden, aber Sie sollten dies wenn möglich vermeiden und stattdessen die Variable $@ benutzen. Damit können Sie sichergehen, dass kein Argument aus der Kommandozeile vergessen wurde.
Eine Besonderheit bezüglich der Variablen $@ in der for-Schleife gibt es dann doch noch. Lassen Sie den Zusatz in "$@" weg, setzt die for-Schleife diesen automatisch ein:
# Demonstriert die Verwendung von for mit Argumenten # afor1 for datei do [ -f $datei ] && echo "$datei: Reguläre Datei" [ -d $datei ] && echo "$datei: Verzeichnis" [ -b $datei ] && echo "$datei: Gerätedatei(block special)" [ -c $datei ] && echo "$datei: Gerätedatei(character special)" [ -t $datei ] && echo "$datei: serielles Terminal" [ ! -e $datei ] && echo "$datei: existiert nicht" done
4.10.2 for und die Dateinamen-SubstitutionÂ
Das Generieren von Dateinamen mit den Metazeichen *, ? und [] kennen Sie ja bereits zur Genüge. Dieser Ersetzungsmechanismus wird standardmäßig von der Shell ausgeführt, sobald ein solches Sonderzeichen erkannt wird. Diese Dateinamen-Substitution können Sie auch in der for-Schleife verwenden. Dadurch wird eine Liste von Dateinamen für die Parameterliste der for-Schleife erzeugt. Die Frage, wie Sie ein ganzes Verzeichnis auslesen können, wird Ihnen hiermit so beantwortet:
# Demonstriert die Verwendung von for und der Datei-Substitution # afor2 # Gibt das komplette aktuelle Arbeitsverzeichnis aus for datei in * do echo $datei done
Wenn Sie das Script ausführen, wird das komplette aktuelle Arbeitsverzeichnis ausgegeben. Selbstverständlich können Sie das Ganze auch einschränken. Wollen Sie z. B. nur alle Dateien mit der Endung ».txt« ausgeben, können Sie folgendermaßen vorgehen:
# Demonstriert die Verwendung von for und der Datei-Substitution # afor3 # Gibt alle Textdateien des aktuellen Arbeitsverzeichnisses aus for datei in *.txt do echo $datei [ -r $datei ] && echo "... ist lesbar" [ -w $datei ] && echo "... ist schreibbar" done
Sie können alle Metazeichen *, ? und [] verwenden, wie Sie dies von der Shell her kennen. Einige Beispiele:
# Nur Dateien mit der Endung *.txt und *.c berücksichtigen for datei in *.txt *.c # Nur die Dateien log1.txt log2.txt ... log9.txt berücksichtigen for datei in log[1â9].txt # Nur versteckte Dateien beachten (beginnend mit einem Punkt) for datei in * .*
Kann in der for-Schleife ein Suchmuster nicht aufgelöst werden, führt das Script gewöhnlich keine Aktion durch. Solch einen Fall sollte man aber auch berücksichtigen, sofern Ihr Script benutzerfreundlich sein soll. Hierzu bietet sich eine case-Anweisung an, mit der Sie überprüfen, ob die entsprechende Variable (die in for ihren Wert von der Parameterliste erhält) auf das gewünschte Muster (Parameterliste) passt. Ganz klar, dass auch hierbei das Wildcard-Zeichen * verwendet wird.
# Demonstriert die Verwendung von for und der Datei-Substitution # afor4 # Bei erfolgloser Suche soll ein entsprechender Hinweis # ausgegeben werden for datei in *.jpg do case "$datei" in *.jpg) echo "Keine Datei zum Muster *.jpg vorhanden" ;; *) echo $datei ;; esac done
Das Script bei der Ausführung:
you@host > ./afor4 Keine Datei zum Muster *.jpg vorhanden
Die case-Verzweigung *.jpg) wird immer dann ausgeführt, wenn kein Suchmuster aufgelöst werden konnte. Ansonsten wird die Ausführung bei *) weitergeführt, was hier wieder eine simple Ausgabe des Dateinamens darstellt.
4.10.3 for und die Kommando-SubstitutionÂ
Mittlerweile dürften Sie schon überzeugt sein vom mächtigen Feature der for-Schleife. Ein weiteres unverzichtbares Mittel der Shell-Programmierung aber erhalten Sie aus der for-Schleife, die ihre Parameter aus einer Kommando-Substitution erhält. Dazu könnten Sie die Ausgabe von jedem beliebigen Kommando für die Parameterliste verwenden. Allerdings setzt die Verwendung der Kommando-Substitution immer gute Kenntnisse der entsprechenden Kommandos voraus â insbesondere über deren Ausgabe.
Hier ein einfaches Beispiel, bei dem Sie mit dem Kommando find alle Grafikdateien im HOME-Verzeichnis mit der Endung ».png« suchen und in ein separat dafür angelegtes Verzeichnis kopieren. Dabei sollen auch die Namen der Dateien entsprechend neu durchnummeriert werden.
# Demonstriert die Verwendung von for und der Kommando-Substitution # afor5 count=1 DIR=$HOME/backup_png # Verzeichnis anlegen mkdir $DIR 2>/dev/null # Überprüfen, ob erfolgreich angelegt oder überhaupt # vorhanden, ansonsten Ende [ ! -e $DIR ] && exit 1 for datei in `find $HOME -name "*.png" -print 2>/dev/null` do # Wenn Leserecht vorhanden, können wir die Datei kopieren if [ -r $datei ] then # PNG-Datei ins entsprechende Verzeichnis kopieren # Als Namen bild1.png, bild2.png ... bildn.png verwenden cp $datei $DIR/bild${count}.png # Zähler um eins erhöhen count=`expr $count + 1` fi done echo "$count Bilder erfolgreich nach $DIR kopiert"
Das Script bei der Ausführung:
you@host > ./afor5 388 Bilder erfolgreich nach /home/tot/backup_png kopiert you@host > ls backup_png bild100.png bild15.png bild218.png bild277.png bild335.png bild101.png bild160.png bild219.png bild278.png bild336.png ...
Da die Parameterliste der for-Schleife von der Variablen IFS abhängt, können Sie standardmäßig hierbei auch zeilenweise mit dem Kommando cat etwas einlesen und weiterverarbeiten â denn in der Variable IFS finden Sie auch das Newline-Zeichen wieder. Es sei bspw. folgende Datei namens .userlist mit einer Auflistung aller User, die bisher auf dem System registriert sind, gegeben:
you@host > cat .userlist tot you rot john root martin
Wollen Sie jetzt an alle User, die hier aufgelistet und natürlich im Augenblick eingeloggt sind, eine Nachricht senden, gehen Sie wie folgt vor:
# Demonstriert die Verwendung von for und # der Kommando-Substitution # afor6 # Komplette Argumentenliste für News an andere User verwenden NEU="$*" for user in `cat .userlist` do if who | grep ^$user > /dev/null then echo $NEU | write $user echo "Verschickt an $user" fi done
Das Script bei der Ausführung:
you@host > ./afor6 Tolle Neuigkeiten: Der Chef ist weg! Verschickt an tot Verschickt an martin
Im Augenblick scheinen hierbei die User »tot« und »martin« eingeloggt zu sein. Bei diesen sollte nun in der Konsole Folgendes ausgegeben werden:
tot@host > Message from you@host on pts/40 at 04:33 Tolle Neuigkeiten: Der Chef ist weg! EOF tot@host >
Wenn beim anderen User keine Ausgabe erscheint, kann es sein, dass er die entsprechende Option abgeschaltet hat. Damit andere User Ihnen mit dem write-Kommando Nachrichten zukommen lassen können, müssen Sie dies mit
tot@host > mesg y
einschalten. Mit mesg n schalten Sie es wieder ab.
Variablen-Interpolation
Häufig werden bei der Kommando-Substitution die Kommandos recht lang und kompliziert, weshalb sich hierfür eine Variablen-Interpolation besser eignen würde. Also, anstatt die Kommando-Substitution in die Parameterliste von for zu quetschen, können Sie den Wert der Kommando-Substitution auch in einer Variablen abspeichern und diese Variable an for übergeben. Um bei einem Beispiel von eben zu bleiben (afor5), würde man mit
for datei in `find $HOME -name "*.png" -print 2>/dev/null` do ... done
dasselbe erreichen wie mit der folgenden Variablen-Interpolation:
PNG=`find $HOME -name "*.png" -print 2>/dev/null` for datei in $PNG do ... done
Nur hat man hier ganz klar den Vorteil, dass dies übersichtlicher ist als die »direkte« Version.
4.10.4 for und Array (Bash und Korn Shell only)Â
Selbstverständlich eignet sich die for-Schleife hervorragend für die Arrays. Die Parameterliste lässt sich relativ einfach mit ${array[*]} realisieren.
# Demonstriert die Verwendung von for und der Arrays # afor7 # Liste von Werten in einem Array speichern # Version: Korn-Shell (auskommentiert) #set -A array 1 2 3 4 5 6 7 8 9 # Version: bash array=( 1 2 3 4 5 6 7 8 9 ) # Alle Elemente im Array durchlaufen for value in ${array[*]} do echo $value done
Das Script bei der Ausführung:
you@host > ./afor7 1 2 3 4 5 6 7 8 9
4.10.5 for-Schleife mit Schleifenzähler (Bash only)Â
Ab Version 2.0.4 wurde der Bash eine zweite Form der for-Schleife spendiert. Die Syntax ist hierbei an die der Programmiersprache C angelehnt.
for (( var=Anfangswert ; Bedingung ; Zähler )) do kommando1 ... kommanodn done
Diese Form der for-Schleife arbeitet mit einem Schleifenzähler. Bei den einzelnen Parametern handelt es sich um arithmetische Substitutionen. Der erste Wert, hier »var«, wird gewöhnlich für die Zuweisung eines Anfangswertes an eine Schleifenvariable verwendet. Der Anfangswert (»var«) wird hierbei nur einmal beim Eintritt in die Schleife ausgewertet. Die Bedingung dient als Abbruchbedingung und der Zähler wird verwendet, um die Schleifenvariable zu erhöhen oder zu verringern. Die Bedingung wird vor jedem neuen Schleifendurchlauf überprüft. Der Zähler hingegen wird nach jedem Schleifendurchlauf verändert. Den Zähler können Sie folgendermaßen ausdrücken (siehe Tabelle 4.12):
Tabelle 4.12 Â Zähler einer Schleifenvariablen verändern
Zähler verändern
Bedeutung
var++
Den Wert nach jedem Schleifendurchlauf um 1 erhöhen (inkrementieren)
varââ
Den Wert nach jedem Schleifendurchlauf um 1 verringern (dekrementieren)
((var=var+n))
Den Wert nach jedem Schleifendurchlauf um n erhöhen
((var=varân))
Den Wert nach jedem Schleifendurchlauf um n verringern
## 4.10 The for loop

The for loop belongs to the family of enumerating loops. It executes a sequence of commands for the words in a given list; in other words, the for loop needs a list of parameters, and the number of words in this list determines how many times the loop is run. First, the syntax of the for loop:
> for var in liste_von_parameter do kommando ... kommando done
for works as follows: before each pass through the loop, the next word of the parameter list is placed in the variable »var«. The list of parameters after the keyword in consists of words that must be separated by at least one space (or, more precisely, by a character from the variable IFS). This can look like this:
> for var in wort1 wort2 wort3 do ... done
On the first pass through the loop, the value »wort1« is placed in the variable »var«, and the commands between the keywords do and done are executed. Once all commands have been processed, the next pass begins, this time with »wort2« in »var«. This is repeated until all elements of the parameter list have been consumed; execution then continues after the keyword done (see Figure 4.11).

### 4.10.1 Processing arguments with for

Looking at the for loop with its parameter list, evaluating the command-line arguments probably comes to mind immediately; for seems made for processing them. All you have to do is use the variable $@ as the parameter list. The following script checks the type of every file you pass on the command line.
> # Demonstriert die Verwendung von for mit Argumenten # afor1 for datei in "$@" do [ -f $datei ] && echo "$datei: Reguläre Datei" [ -d $datei ] && echo "$datei: Verzeichnis" [ -b $datei ] && echo "$datei: Gerätedatei(block special)" [ -c $datei ] && echo "$datei: Gerätedatei(character special)" [ -t $datei ] && echo "$datei: serielles Terminal" [ ! -e $datei ] && echo "$datei: existiert nicht" done
The script in action:
> you@host > ./afor1 Shellbuch backup afor1 /dev/tty \ > /dev/cdrom gibtsnicht Shellbuch: Verzeichnis backup: Verzeichnis afor1: Reguläre Datei /dev/tty: Gerätedatei (character special) /dev/cdrom: Gerätedatei (block special) gibtsnicht: existiert nicht
You could also use the positional parameters $1 to $n in your script, but you should avoid that where possible and use the variable $@ instead; that way you can be sure that no command-line argument is overlooked.

There is one more special feature concerning $@ in the for loop: if you omit the in "$@" part, the for loop inserts it automatically:
> # Demonstriert die Verwendung von for mit Argumenten # afor1 for datei do [ -f $datei ] && echo "$datei: Reguläre Datei" [ -d $datei ] && echo "$datei: Verzeichnis" [ -b $datei ] && echo "$datei: Gerätedatei(block special)" [ -c $datei ] && echo "$datei: Gerätedatei(character special)" [ -t $datei ] && echo "$datei: serielles Terminal" [ ! -e $datei ] && echo "$datei: existiert nicht" done
### 4.10.2 for and filename substitution

You are already familiar with generating file names using the metacharacters *, ? and []. The shell performs this substitution by default as soon as it recognizes such a special character, and you can use it in the for loop as well: it produces a list of file names as the loop's parameter list. This also answers the question of how to read an entire directory:
> # Demonstriert die Verwendung von for und der Datei-Substitution # afor2 # Gibt das komplette aktuelle Arbeitsverzeichnis aus for datei in * do echo $datei done
Running the script prints the complete current working directory. You can of course narrow this down; if you only want to list the files ending in ».txt«, for example, you can proceed as follows:
> # Demonstriert die Verwendung von for und der Datei-Substitution # afor3 # Gibt alle Textdateien des aktuellen Arbeitsverzeichnisses aus for datei in *.txt do echo $datei [ -r $datei ] && echo "... ist lesbar" [ -w $datei ] && echo "... ist schreibbar" done
You can use all the metacharacters *, ? and [] just as you know them from the shell. A few examples:

> # Nur Dateien mit der Endung *.txt und *.c berücksichtigen for datei in *.txt *.c # Nur die Dateien log1.txt log2.txt ... log9.txt berücksichtigen for datei in log[1-9].txt # Nur versteckte Dateien beachten (beginnend mit einem Punkt) for datei in * .*
If a search pattern in the for loop cannot be resolved, the pattern is passed through literally, so the script usually does nothing useful. If your script is meant to be user-friendly, you should handle this case as well. A case statement is a good fit: it checks whether the variable (which receives its value from for's parameter list) still matches the literal pattern. Naturally, the wildcard character * is used here as well.
> # Demonstriert die Verwendung von for und der Datei-Substitution # afor4 # Bei erfolgloser Suche soll ein entsprechender Hinweis # ausgegeben werden for datei in *.jpg do case "$datei" in *.jpg) echo "Keine Datei zum Muster *.jpg vorhanden" ;; *) echo $datei ;; esac done
The script in action:
> you@host > ./afor4 Keine Datei zum Muster *.jpg vorhanden
The case branch *.jpg) is executed whenever the search pattern could not be resolved, because the loop variable then still contains the literal pattern *.jpg. Otherwise execution continues at *), which here simply prints the file name.
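In Bash there is also a shell option that makes an unresolved pattern expand to nothing, so the loop body is simply skipped; a small sketch (Bash only):

> # Bash only: let a non-matching pattern expand to an empty list
> shopt -s nullglob
> for datei in *.jpg
> do
>    echo $datei        # not reached if no *.jpg file exists
> done
> shopt -u nullglob     # restore the default behaviour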
### 4.10.3 for and command substitution

By now you should be convinced of how powerful the for loop is. Another indispensable tool of shell programming is the for loop that takes its parameters from a command substitution: you can use the output of any command as the parameter list. Using command substitution does, however, require good knowledge of the command in question, in particular of its output.

Here is a simple example in which you use the find command to locate all image files ending in ».png« in the HOME directory and copy them into a directory created specifically for this purpose. The file names are renumbered along the way.
> # Demonstriert die Verwendung von for und der Kommando-Substitution # afor5 count=1 DIR=$HOME/backup_png # Verzeichnis anlegen mkdir $DIR 2>/dev/null # Überprüfen, ob erfolgreich angelegt oder überhaupt # vorhanden, ansonsten Ende [ ! -e $DIR ] && exit 1 for datei in `find $HOME -name "*.png" -print 2>/dev/null` do # Wenn Leserecht vorhanden, können wir die Datei kopieren if [ -r $datei ] then # PNG-Datei ins entsprechende Verzeichnis kopieren # Als Namen bild1.png, bild2.png ... bildn.png verwenden cp $datei $DIR/bild${count}.png # Zähler um eins erhöhen count=`expr $count + 1` fi done echo "$count Bilder erfolgreich nach $DIR kopiert"
The script in action:
> you@host > ./afor5 388 Bilder erfolgreich nach /home/tot/backup_png kopiert you@host > ls backup_png bild100.png bild15.png bild218.png bild277.png bild335.png bild101.png bild160.png bild219.png bild278.png bild336.png ...
Because the for loop's parameter list depends on the variable IFS, you can by default also read something line by line with the cat command and process it further, since IFS contains the newline character as well. Suppose, for example, the following file .userlist lists all users registered on the system so far:
> you@host > cat .userlist tot you rot john root martin
If you now want to send a message to every user who is listed here and, of course, currently logged in, you proceed as follows:
> # Demonstriert die Verwendung von for und # der Kommando-Substitution # afor6 # Komplette Argumentenliste für News an andere User verwenden NEU="$*" for user in `cat .userlist` do if who | grep ^$user > /dev/null then echo $NEU | write $user echo "Verschickt an $user" fi done
The script in action:
> you@host > ./afor6 Tolle Neuigkeiten: Der Chef ist weg! Verschickt an tot Verschickt an martin
At the moment the users »tot« and »martin« appear to be logged in. On their consoles the following should now appear:

> tot@host > Message from you@host on pts/40 at 04:33 Tolle Neuigkeiten: Der Chef ist weg! EOF tot@host >

If no output appears on the other user's terminal, that user may have disabled the corresponding option. To allow other users to send you messages with the write command, you have to switch this on with
> tot@host > mesg y
With mesg n you switch it off again.

**Variable interpolation**

With command substitution the commands often become rather long and complicated, which is where variable interpolation is the better fit: instead of squeezing the command substitution into for's parameter list, you can store its result in a variable and pass that variable to for. To stay with the earlier example (afor5), writing
> for datei in `find $HOME -name "*.png" -print 2>/dev/null` do ... done
achieves exactly the same as the following variable interpolation:
> PNG=`find $HOME -name "*.png" -print 2>/dev/null` for datei in $PNG do ... done
The clear advantage is that this is easier to read than the »direct« version.

### 4.10.4 for and arrays (Bash and Korn shell only)

Naturally the for loop is also an excellent fit for arrays. The parameter list can be produced quite simply with ${array[*]}.
> # Demonstriert die Verwendung von for und der Arrays # afor7 # Liste von Werten in einem Array speichern # Version: Korn-Shell (auskommentiert) #set -A array 1 2 3 4 5 6 7 8 9 # Version: bash array=( 1 2 3 4 5 6 7 8 9 ) # Alle Elemente im Array durchlaufen for value in ${array[*]} do echo $value done
The script in action:
> you@host > ./afor7 1 2 3 4 5 6 7 8 9
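If the array elements may contain spaces, the unquoted ${array[*]} splits them apart; "${array[@]}" in double quotes keeps each element intact (Bash syntax, the array contents below are made up):

> # Bash: preserve elements that contain spaces
> array=( "New York" London "Buenos Aires" )
> for value in "${array[@]}"
> do
>    echo "$value"      # prints each element on its own line, spaces intact
> done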
### 4.10.5 The for loop with a loop counter (Bash only)

As of version 2.0.4, Bash gained a second form of the for loop. Its syntax is modeled on the C programming language.

> for (( var=Anfangswert ; Bedingung ; Zähler )) do kommando1 ... kommandon done

This form of the for loop works with a loop counter; the individual parameters are arithmetic substitutions. The first part, »var«, is usually used to assign an initial value (Anfangswert) to a loop variable; it is evaluated only once, on entry into the loop. The condition (Bedingung) serves as the termination test, and the counter (Zähler) increases or decreases the loop variable. The condition is checked before each new pass through the loop, whereas the counter is modified after each pass. The counter can be expressed as follows (see Table 4.12):
Table 4.12 Modifying the loop counter

| Modify counter | Meaning |
| --- | --- |
| var++ | Increase the value by 1 after each pass through the loop (increment) |
| var-- | Decrease the value by 1 after each pass through the loop (decrement) |
| ((var=var+n)) | Increase the value by n after each pass through the loop |
| ((var=var-n)) | Decrease the value by n after each pass through the loop |
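The script afor8 whose output is shown next is not part of this excerpt. Based on that output and on the script awhile1 in the following section (which the text says behaves identically), a plausible reconstruction looks like this:

> # Demonstriert die zweite Form der for-Schleife (rekonstruiert)
> # afor8
> # Liste von Argumenten in einem Array speichern
> array=( $* )
> # Alle Argumente über ihren Index ausgeben
> for (( i=0 ; i<$# ; i++ ))
> do
>    echo "Argument $i ist ${array[$i]}"
> done
> # Countdown
> for (( i=5 ; i>0 ; i-- ))
> do
>    echo $i
>    sleep 1
> done
> echo "...go"
> # Auch andere arithmetische Ausdrücke als Zähler möglich
> for (( i=100 ; i>0 ; i=i/2 ))
> do
>    echo $i
> done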
The script in action:
> you@host > ./afor8 hallo welt wie gehts Argument 0 ist hallo Argument 1 ist welt Argument 2 ist wie Argument 3 ist gehts 5 4 3 2 1 ...go 100 50 25 12 6 3 1
## 4.11 The while loop

If you have been eyeing Bash's second form of the for loop with a little envy, rest assured that the same can be done with the while loop (see Figure 4.12). The syntax:

> while [bedingung] # oder natürlich auch: while-kommando do kommando_1 ... kommando_n done

Or the short form:
> while [bedingung] ; do kommando_1 ; kommando_n ; done
In a while loop, the commands between do and done are executed as long as the condition is true (exit status 0, or true). As soon as the condition no longer holds and is false (non-zero, or false), the loop ends and the script continues after done.

Here is the example whose behaviour is the same as that of the script »afor8«, except that the initial value is assigned before the while loop, the condition is checked by the while loop itself, and the counter is increased or decreased in the loop body. Because an array is used, the script only runs in Bash and the Korn shell.
> # Demonstriert die Verwendung einer while-Schleife # awhile1 [ $# -lt 1 ] && echo "Mindestens ein Argument" && exit 1 # Liste von Argumenten in einem Array speichern # Version: Korn-Shell (auskommentiert) #set -A array $* # Version: bash array=( $* ) i=0 while [ $i -lt $# ] do echo "Argument $i ist ${array[$i]}" i=`expr $i + 1` done # Countdown i=5 while [ $i -gt 0 ] do echo $i sleep 1 # Eine Sekunde anhalten i=`expr $i - 1` done echo "...go" # Auch andere arithmetische Ausdrücke als Zähler möglich i=100 while [ $i -gt 0 ] do echo $i i=`expr $i / 2` done
The output of the script is the same as that of afor8 in the previous section.

Although one of the main uses of the while loop is running through a range of numbers, a while loop is also frequently used for user input. The following example uses the read command, which is described in more detail in Chapter 5, Terminal I/O, which deals with user input and output.
> # Demonstriert die Verwendung einer while-Schleife mit Benutzereingabe # awhile2 while [ "$input" != "ende" ] do # eventuell Befehle zum Abarbeiten hierhin ... echo "Weiter mit ENTER oder aufhören mit ende" read input done echo "Das Ende ist erreicht"
The script in action:
> you@host > ./awhile2 Weiter mit ENTER oder aufhören mit ende (ENTER) Weiter mit ENTER oder aufhören mit ende ende Das Ende ist erreicht
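A while loop combined with read is also the usual way to process a file line by line; unlike the `for user in \`cat .userlist\`` construct from Section 4.10.3, it does not split each line at every IFS character. A small sketch using the .userlist file from the earlier example:

> # Read a file line by line without word splitting
> while read -r zeile
> do
>    echo "Zeile: $zeile"        # each line arrives intact, spaces and all
> done < .userlist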
## 4.12 The until loop

In contrast to the while loop, the until loop keeps running for as long as the command after until returns a value other than 0, in other words false; the loop terminates as soon as that command succeeds (returns 0). The syntax:
> until [bedingung] falsch # oder auch: until-kommando falsch do kommando1 ... kommandon done
So as long as the condition or the command after until returns false (non-zero), the commands between do and done are executed; as soon as the return value is true (0), execution continues after done (see Figure 4.13).

Some will ask what the point of the until loop is, since everything you can do with it can also be achieved with the negation operator ! or by rephrasing the condition of a while loop, for example in the script awhile2:
> while [ "$input" != "ende" ] do ... done
If you write instead
> while [ ! "$input" = "ende" ] do ... done
you achieve the same as with the following until loop:
> until [ "$input" = "ende" ] do ... done
The same applies to counting through ranges of numbers. The reason the until loop exists anyway, and is in fact needed, is that the real Bourne shell does not know !. Expressions such as [ ! "$input" = "ende" ] do work in the Bourne shell, but only because you are using the test command there, and test knows what to do with !. If, however, you try to negate the return value of a command with !, the real Bourne shell will complain.

A simple example: your script needs a command called »abc« that is not installed on every machine by default. Instead of merely printing an error message that the tool »abc« is missing, you can offer the user an installation routine right away, for example by shipping the tool's source code and having it compiled with the appropriate options and installed. You could of course also have the package fetched and installed by a suitable package manager (e.g. rpm or apt). You do not necessarily have to install a new application; often you just want to place a new script in the appropriate paths. There are hardly any limits here. In the script below, the command cat is simply copied to a directory with cp. The example uses the directory $HOME/bin, which on my system is also listed in PATH; that is a prerequisite if you want to call the command afterwards without prefixing ./ to its name.
> # Demonstriert die Verwendung einer until-Schleife # auntil # Hier soll ein Kommando namens "abc" installiert werden until abc /dev/null > /dev/null 2>&1 do echo "Kommando abc scheint hier nicht zu existieren" # Jetzt können Sie "abc" selbst nachinstallieren ... # Wir verwenden hierbei einfach ein Hauswerkzeug mit cat new=`which cat` # kompletten Pfad zu cat cp $new $HOME/bin/abc done # ... den eigenen Quellcode ausgeben abc $0
The script in action:
> you@host > ./auntil # Demonstriert die Verwendung einer until-Schleife # auntil # Hier soll ein Kommando namens "abc" installiert werden until abc /dev/null > /dev/null 2>&1 ... you@host > abc > test.txt Hallo ein Test you@host > abc test.txt Hallo ein Test
You definitely cannot reproduce this example in a real Bourne shell with a while loop. The emphasis is on »real«, because no such thing exists on Linux: there, every use of the Bourne shell ends up in Bash, so the negation operator ! can be used as well.

Other typical examples along the lines of the »auntil« script are self-extracting installers that unpack and install the compressed source code when invoked.
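Another everyday use of until is waiting until a command finally succeeds, for example until a host answers a ping. A small sketch (the host name and the 10-second pause are made up; -c 1 is the Linux option for sending a single ping):

> # Wait until the host responds, then carry on
> until ping -c 1 www.example.com > /dev/null 2>&1
> do
>    echo "Host not reachable yet, retrying in 10 seconds ..."
>    sleep 10
> done
> echo "Host is reachable"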
## 4.13 Controlled jumps

Note that exit, unlike break and continue, is not a loop-specific statement. continue and break therefore deserve a closer look.

### 4.13.1 The continue command

The continue command terminates only the current iteration of the loop: from the point where continue is called in the loop body, all remaining statements are skipped and execution jumps back to the loop for the next pass.
> # Demonstriert die Verwendung von continue # acontinue1 i=1 while [ $i -lt 20 ] do j=`expr $i % 2` if [ $j -ne 0 ] then i=`expr $i + 1` continue fi echo $i i=`expr $i + 1` done
The script in action:
> you@host > ./acontinue1 2 4 6 8 10 12 14 16 18
This script checks whether the variable »i« holds an odd number: if »i« modulo 2 is not 0, i.e. a remainder is returned, the number is odd. Since we only want to print even numbers, a continue statement is executed in the if block between then and fi; the echo output is skipped and the script moves straight on to the next pass of the loop (here: the next check). This saves you the else branch.

Note: the modulo operator % yields the remainder of an integer division.

Of course, continue can be used in any other kind of loop as well. If, for example, you want to check all command-line arguments for something useful, you could proceed as follows:
> # Demonstriert die Verwendung von continue # acontinue2 # Durchläuft alle Argumente der Kommandozeile nach dem Wort "automatik" for var in "$@" do if [ "$var" != "automatik" ] then continue fi echo "Ok, \"$var\" vorhanden ...!" done
The script in action:
> you@host > ./acontinue2 test1 test2 test3 test4 you@host > ./acontinue2 test1 test2 test3 test4 automatik \ > test5 testx Ok, "automatik" vorhanden ...!
In this example the command-line arguments are traversed one by one, checking whether the string »automatik« is among them. If the current argument is not that word, continue jumps straight back up to the next pass of the loop.

### 4.13.2 The break command

In contrast to continue, break lets you terminate a loop early, so that execution continues immediately after the keyword done. This is called a controlled loop exit. You use break whenever a loop has already reached its goal ahead of time or an unexpected error has occurred, so that continuing the script is no longer worthwhile, and of course in endless loops.

Coming back to the script acontinue2, you could place a break after the echo output if finding the string once is enough. That saves you from running through all remaining arguments even though the one you needed has long been found. Here is the script with break:
> # Demonstriert die Verwendung von break # abreak # Durchläuft alle Argumente der Kommandozeile nach # dem Wort "automatik" for var in "$@" do if [ "$var" = "automatik" ] then echo "Ok, \"$var\" vorhanden ...!" break fi done # Hier gehts nach einem break weiter ...
Like continue, break can of course be used in any kind of loop. A common mistake is placing break in a nested loop: if break is used in the innermost loop, only that loop is terminated. Nesting several loops is not advisable for readability reasons, but since it often cannot be avoided, it is worth pointing out. An example:
> # Demonstriert die Verwendung von break in einer Verschachtelung # abreak2 i=1 for var in "$@" do while [ $i -lt 5 ] do echo $i i=`expr $i + 1` break echo "Wird nie ausgegeben" done echo $var done
The script in action:
> you@host > ./abreak2 test1 test2 test3 test4 1 test1 2 test2 3 test3 4 test4
Although a break is used here, all command-line arguments are still printed. That break does work is proven by the fact that »Wird nie ausgegeben« never appears: break applies only inside the while loop and therefore terminates only that loop.
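If you really do want to leave more than one loop at once, break accepts a numeric argument in Bash, the Korn shell and other POSIX shells: break n terminates n enclosing loops (continue n works analogously). A variation of abreak2 as a sketch:

> # Leave the inner while loop AND the outer for loop in one go
> i=1
> for var in "$@"
> do
>    while [ $i -lt 5 ]
>    do
>       echo $i
>       break 2                  # 2 = also terminate the surrounding for loop
>    done
>    echo "Wird nie ausgegeben"  # never reached, break 2 has left both loops
> done
> echo "Hier geht es nach break 2 weiter"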
## 4.14 Endless loops

Sometimes you need a construct that runs forever; for this you normally use an endless loop. In an endless loop the commands between do and done are executed over and over without the loop ever terminating on its own. Keep in mind that such a script never finishes and never returns to the calling shell, which is why scripts with an endless loop are usually run in the background with & (./scriptname &).

There are several ways to create an endless loop; in the end the condition simply has to come out the same on every check. For a while loop the condition must therefore always be true, for an until loop always false. For exactly this purpose the commands true and false exist. With while you get an endless loop like this:

> while true do # Kommandos der Endlosschleife done

The same with until:

> until false do # Kommandos der Endlosschleife done

Here is such an endless loop in practice:

> # Demonstriert eine Endlosschleife mit while # aneverending while true do echo "In der Endlosschleife" sleep 5 # 5 Sekunden warten done

The script in action:

> you@host > ./aneverending In der Endlosschleife In der Endlosschleife In der Endlosschleife (Strg)+(C) you@host >

You have to kill this script more or less by force with the signal SIGINT, generated by the key combination (Strg)+(C), i.e. Ctrl+C.

Endless loops are put to proper use in scripts that monitor certain services or perform certain actions at fixed intervals (e.g. with sleep), such as checking the mailbox or the availability of a server. In practice endless loops are used quite happily, but you should also work with the break command: whenever some limit, error or other condition occurs, break is used to leave the endless loop. This is frequently done like this:

> while true do # Viele Kommandos if [ Bedingung ] then break fi done # eventuell noch einige Aufräumarbeiten ...
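A concrete sketch of such a monitoring loop; the process name sshd and the 60-second interval are just placeholders:

> # Check every 60 seconds whether sshd is still running
> while true
> do
>    if ps -e | grep "[s]shd" > /dev/null   # [s] keeps grep from matching itself
>    then
>       sleep 60                            # still running, check again later
>    else
>       echo "sshd is no longer running!"
>       break                               # leave the endless loop, clean up below
>    fi
> done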
Note: many examples in this chapter copy or read files. It should be mentioned that this frequently leads to problems with the variable IFS (see Section 5.3.6), because many users like to put spaces in file or directory names. This is a nuisance that is hard to control; a possible solution can be found in the practice part of the book (see Section 15.2.1). Catching this error is very important in order to avoid an inconsistent backup.
## 5.2 Output

The following pages take a somewhat more detailed look at the output functions, which you will be using extensively in your scripts.

### 5.2.1 The echo command

You have already used the echo command very often, and in principle there is not much more to say about it. Yet echo behaves incompatibly across the various shells. The syntax of echo:
echo [-option] argument1 argument2 ... argument_n
The arguments you pass to echo must be separated by at least one space; on screen, echo likewise prints a single space between the arguments. An echo output ends with a newline character. If you want to drop that newline, you can suppress it with the -n option:
# Demonstriert den echo-Befehl # aecho1 echo -n "Hier wird das Newline-Zeichen unterdrückt" echo ":TEST"
The script in action:
you@host > ./aecho1 Hier wird das Newline-Zeichen unterdrückt:TEST
So far so good, except that the »real« Bourne shell does not know any options for echo at all. That is not a problem, though, because you can use the escape sequence \c instead; \c also suppresses the trailing newline. Here is the same script again:
# Demonstriert den echo-Befehl # aecho2 echo "Hier wird das Newline-Zeichen unterdrückt\c" echo ":TEST"
In the Bourne and Korn shells all seems well now, but Bash prints the following:
you@host > ./aecho2 Hier wird das Newline-Zeichen unterdrückt\c :TEST
Bash does not seem to understand the escape sequence. To make it work anyway, you have to use the -e option:
# Demonstriert den echo-Befehl # aecho3 echo -e "Hier wird das Newline-Zeichen unterdrückt\c" echo ":TEST"
Unfortunately, you are then back to the problem that the »real« Bourne shell (not the symbolic link to Bash!) does not know any options. So there is no way to get a portable echo for all shells along these lines. You have to keep the following rules in mind:
Table 5.1 Control characters in the ASCII character set

| Dec | Hex | Abbr. | ASCII | Meaning |
| --- | --- | --- | --- | --- |
| 0 | 0 | NUL | ^@ | Null (no connection) |
| 1 | 1 | SOH | ^A | Start of header |
| 2 | 2 | STX | ^B | Start of text |
| 3 | 3 | ETX | ^C | End of text |
| 4 | 4 | EOT | ^D | End of transmission |
| 5 | 5 | ENQ | ^E | Enquiry |
| 6 | 6 | ACK | ^F | Positive (successful) response |
| 7 | 7 | BEL | ^G | Audible signal (beep) |
| 8 | 8 | BS | ^H | Backspace |
| 9 | 9 | HT | ^I | Horizontal tab |
| 10 | A | NL | ^J | Line feed (newline) |
| 11 | B | VT | ^K | Vertical tab |
| 12 | C | NP | ^L | Form feed (new page) |
| 13 | D | CR | ^M | Carriage return |
| 14 | E | SO | ^N | Shift out |
| 15 | F | SI | ^O | Shift in |
| 16 | 10 | DLE | ^P | Data link escape |
| 17 | 11 | DC1 | ^Q | Device control character 1 |
| 18 | 12 | DC2 | ^R | Device control character 2 |
| 19 | 13 | DC3 | ^S | Device control character 3 |
| 20 | 14 | DC4 | ^T | Device control character 4 |
| 21 | 15 | NAK | ^U | Negative (unsuccessful) response |
| 22 | 16 | SYN | ^V | Synchronous mode |
| 23 | 17 | ETB | ^W | End of block |
| 24 | 18 | CAN | ^X | Cancel (invalid) |
| 25 | 19 | EM | ^Y | End of medium |
| 26 | 1A | SUB | ^Z | Substitute |
| 27 | 1B | ESCAPE | ESC | Escape |
| 28 | 1C | FS | ^\ | File separator |
| 29 | 1D | GS | ^] | Group separator |
| 30 | 1E | RS | ^^ | Record separator |
| 31 | 1F | US | ^_ | Unit (word) separator |
Which of these control characters actually has the stated effect depends heavily on the terminal type (TERM) and the current system configuration. Using a control character in text is no problem for (ENTER) and the tabulator key, since dedicated keys exist for them. Other control characters are triggered with a key combination of (Ctrl)+letter: instead of (ENTER) you could, for example, trigger a line feed with (Ctrl)+(J). On the other hand, (Ctrl)+(C) raises the signal SIGINT, with which you would, for instance, abort a running write process.

Fortunately, commands such as echo support writing such control characters as escape sequences (which, by the way, has nothing to do with the (ESC) character). An escape sequence consists of two characters, the first of which is always a backslash (see Table 5.2).
Table 5.2 Escape sequences

| Escape sequence | Meaning |
| --- | --- |
| \a | Alert tone (beep) |
| \b | Backspace; go back one character |
| \c | continue; suppress the newline character |
| \f | Form feed; skip ahead a few lines |
| \n | Newline; line break |
| \r | return; back to the beginning of the line |
| \t | Tab (horizontal) |
| \v | Tab (vertical); usually one line ahead |
| \\ | Print the backslash character |
| \0nnn | ASCII character in octal form (sh and ksh only); e.g. \0102 becomes B (decimal 66) |
| \nnn | ASCII character in octal form (Bash only); e.g. \102 becomes B (decimal 66) |
Note: as already mentioned, Bash honours these escape sequences with echo only if you also pass the -e option.

Another example of the escape sequences:
# Demonstriert den echo-Befehl # aecho4 echo "Mehrere Newline-Zeichen\n\n\n" echo "Tab1\tTab2\tEnde" echo "bach\bk" # Alternativ für die bash mit Option -e # echo -e "Mehrere Newline-Zeichen\n\n" # echo -e "Tab1\tTab2\tEnde" # echo -e "bach\bk"
The script in action:
you@host > ./aecho4 Mehrere Newline-Zeichen Tab1 Tab2 Ende zurück
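Since the handling of options and escape sequences differs from shell to shell, a portable way out is printf, which is covered in Section 5.2.3. A small sketch of the echo examples above rewritten with printf, which behaves the same in the Bourne shell, the Korn shell and Bash:

> # Newline suppressed without any echo options
> printf "Hier wird das Newline-Zeichen unterdrückt"
> printf ":TEST\n"
> # Escape sequences work without -e
> printf "Tab1\tTab2\tEnde\n"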
### 5.2.2 print (Korn shell only)

With print (not to be confused with printf) the Korn shell offers a function almost identical to echo, though with a few more options. Here are the options of the Korn shell's print function (see Table 5.3):
Table 5.3 Options for the print command (Korn shell only)

| Option | Meaning |
| --- | --- |
| -n | Suppress the newline |
| -r | Ignore escape sequences |
| -u n | Output of print goes to file descriptor n |
| -p | Output of print goes to a co-process |
| - | Arguments beginning with - are not treated as options |
| -R | Like -, except for the -n option |
### 5.2.3 The printf command

Anyone who has ever been in contact with C programming has dealt more than once with the printf command for formatted output. Unlike the printf of the C library, this printf also exists as a real program independent of the shell (although many shells, Bash among them, additionally provide it as a builtin command). The syntax:
printf format argument1 argument2 ... argument_n
»format« is the format string that describes how the arguments »argument1« to »argument_n« are to be formatted; in effect it is a template for the output that dictates how the remaining parameters are printed. If the format string contains a format specifier (e.g. %s for a string), it is replaced by printf's first argument (argument1) and printed accordingly; a second specifier is replaced by the second argument (argument2), and so on. If the format string contains more specifiers than there are arguments, the remaining ones are formatted as if an empty string "" had been supplied.

A format specifier is introduced by the % character and terminated by a further letter. With %s, for instance, you state that a string of arbitrary length is printed here. Table 5.4 lists the available specifiers:
Tabelle 5.4  Zeichenmuster für printf und deren Bedeutung

| Zeichenmuster | Typ | Beschreibung |
| --- | --- | --- |
| %c | Zeichen | Gibt ein Zeichen entsprechend dem Parameter aus. |
| %s | Zeichenkette | Gibt eine Zeichenkette beliebiger Länge aus. |
| %d oder %i | Ganzzahl | Gibt eine Ganzzahl mit Vorzeichen aus. |
| %u | Ganzzahl | Gibt eine positive Ganzzahl aus. Negative Werte werden dann in der positiven CPU-Darstellung ausgegeben. |
| %f | reelle Zahl | Gibt eine Gleitpunktzahl aus. |
| %e oder %E | reelle Zahl | Gibt eine Gleitpunktzahl in der Exponentialschreibweise aus. |
| %x oder %X | Ganzzahl | Gibt eine Ganzzahl in hexadezimaler Form aus. |
| %g oder %G | reelle Zahl | Ist der Exponent kleiner als -4, wird das Format %e verwendet, ansonsten %f. |
| %% | Prozentzeichen | Gibt das Prozentzeichen aus. |
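Bevor es an das eigentliche Beispiel geht, hier vorab eine kleine, frei gewählte Gegenüberstellung einiger Zeichenmuster (nur als Skizze gedacht):

# Einige Zeichenmuster von printf im Schnelldurchlauf – eigene Skizze
printf "Zeichen     : %c\n" Abc      # gibt nur das A aus
printf "String      : %s\n" "Hallo Welt"
printf "Ganzzahl    : %d\n" 255
printf "Hexadezimal : %x\n" 255      # ergibt ff
printf "Exponential : %e\n" 12345
printf "Prozent     : 100%%\n"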
Hier ein einfaches Beispiel, wie Sie eine Textdatei mit folgendem Format
you@host > cat bestellung.txt
J.Wolf::10::Socken
P.Weiss::5::T-Shirts
U.Hahn::3::Hosen
Z.Walter::6::Handschuhe
in lesbarer Form formatiert ausgeben lassen können:
# Demonstriert den printf-Befehl
# aprintf1
FILE=bestellung.txt
TRENNER=::
for data in `cat $FILE`
do
   kunde=`echo $data | tr $TRENNER ' '`
   set $kunde
   printf "Kunde: %-10s Anzahl: %-5d Gegenstand: %15s\n" $1 $2 $3
done
Das Script liest zunächst zeilenweise mit cat aus der Textdatei bestellung.txt ein. Nach do werden die Trennzeichen »::« durch ein Leerzeichen ersetzt und mittels set auf die einzelnen Positionsparameter $1, $2 und $3 verteilt. Diese Positionsparameter werden anschließend mit printf als Argument für die Zeichenmuster verwendet. Das Script bei der Ausführung:
you@host > ./aprintf1
Kunde: J.Wolf     Anzahl: 10    Gegenstand:          Socken
Kunde: P.Weiss    Anzahl: 5     Gegenstand:        T-Shirts
Kunde: U.Hahn     Anzahl: 3     Gegenstand:           Hosen
Kunde: Z.Walter   Anzahl: 6     Gegenstand:      Handschuhe
Wie Sie im Beispiel außerdem sehen, können hierbei auch die Escape-Sequenzen wie \n usw. im Formatstring verwendet werden. Ein Vorteil ist es zugleich, dass Sie in der Bash die Escape-Zeichen ohne eine Option wie -e bei echo anwenden können.
Im Beispiel wurde beim Formatstring das Zeichenmuster mit einer erweiterten Angabe formatiert. So können Sie zum Beispiel durch eine Zahlenangabe zwischen % und dem Buchstaben für das Format die minimale Breite der Ausgabe bestimmen. Besitzt ein Argument weniger Zeichen, als Sie mit der minimalen Breite vorgegeben haben, so werden die restlichen Zeichen mit einem Leerzeichen aufgefüllt.
Auf welcher Seite hier mit Leerzeichen aufgefüllt wird, hängt davon ab, ob sich vor der Ganzzahl noch ein Minuszeichen befindet. Setzen Sie nämlich noch ein Minuszeichen vor die Breitenangabe, wird die Ausgabe linksbündig justiert.
Hier können Sie noch näher spezifizieren, indem Sie einen Punkt hinter der Feldbreite gefolgt von einer weiteren Ganzzahl verwenden. Dann können Sie die Feldbreite von der Genauigkeit trennen (siehe anschließendes Beispiel). Bei einer Zeichenkette definieren Sie dabei die Anzahl von Zeichen, die maximal ausgegeben werden. Bei einer Fließkommazahl wird hiermit die Anzahl der Ziffern nach dem Komma definiert und bei Ganzzahlen die minimale Anzahl von Ziffern, die ausgegeben werden sollen – wobei Sie niemals eine Ganzzahl wie etwa 1234 durch %.2d auf zwei Ziffern kürzen können. Verwenden Sie hingegen eine größere Ziffer, als die Ganzzahl Stellen erhält, werden die restlichen Ziffern mit Nullen aufgefüllt. Hier ein Script, das die erweiterten Formatierungsmöglichkeiten demonstrieren soll:
# Demonstriert den printf-Befehl
# aprintf2
text=Kopfstand
a=3
b=12345
printf "|01234567890123456789|\n"
printf "|%s|\n" $text
printf "|%20s|\n" $text
printf "|%-20s|\n" $text
printf "|%20.4s|\n" $text
printf "|%-20.4s|\n\n" $text
printf "Fließkommazahl: %f\n" $a
printf "gekürzt       : %.2f\n" $a
printf "Ganzzahl      : %d\n" $b
printf "gekürzt       : %.2d\n" $b
printf "erweitert     : %.8d\n" $b
Das Script bei der Ausführung:
you@host > ./aprintf2
|01234567890123456789|
|Kopfstand|
|           Kopfstand|
|Kopfstand           |
|                Kopf|
|Kopf                |

Fließkommazahl: 3,000000
gekürzt       : 3,00
Ganzzahl      : 12345
gekürzt       : 12345
erweitert     : 00012345
5.2.4 Der Befehl tput – Terminalsteuerung
Häufig will man neben der gewöhnlichen Ausgabe auch das Terminal oder den Bildschirm steuern: etwa den Bildschirm löschen, Farben verwenden oder die Höhe und Breite des Bildschirms ermitteln. Dies und noch vieles mehr können Sie mit tput realisieren. tput ist übrigens auch ein eigenständiges Programm (kein Builtin) und somit unabhängig von der Shell. Zum Glück kennt tput die Terminalbibliothek terminfo (früher auch /etc/termcap), in der viele Attribute zur Steuerung des Terminals abgelegt sind. Dort finden Sie eine Menge Funktionen, von denen Sie hier nur die nötigsten kennen lernen. Einen umfassenderen Einblick können Sie über die entsprechende Manual-Page von terminfo gewinnen.
Wie sieht ein solcher Eintrag in terminfo aus? Verwenden wir doch als einfachstes Beispiel clear, womit Sie den Bildschirm des Terminals löschen können.
you@host > tput clear | tr '\033' 'X' ; echo
X[HX[2J
Hier erhalten Sie die Ausgabe der Ausführung von clear im Klartext. Es muss allerdings das Escape-Zeichen (\033) durch ein »X« oder Ähnliches ausgetauscht werden, weil ein Escape-Zeichen nicht darstellbar ist. Somit lautet der Befehl zum Löschen des Bildschirms hier:
\033[H\033[2J
Mit folgender Eingabe könnten Sie den Bildschirm löschen:
you@host > echo '\033[H\033[2J'
oder in der Bash:
you@host > echo -en '\033[H\033[2J'
Allerdings gilt dieses Beispiel nur, wenn »echo $TERM« die xterm ist. Es gibt nämlich noch eine Reihe weiterer Terminaltypen, weshalb Sie mit dieser Methode wohl nicht sehr weit kommen werden – und dies ist auch gar nicht erforderlich, da es hierfür ja die Bibliothek terminfo mit der Funktion tput gibt. Die Syntax von tput lautet:
tput Terminaleigenschaft
Wenn Sie sich die Mühe machen, die Manual-Page von terminfo durchzublättern, so finden Sie neben den Informationen zum Verändern von Textattributen und einigen steuernden Terminal-Eigenschaften auch Kürzel, mit denen Sie Informationen zum aktuellen Terminal abfragen können.
Tabelle 5.5  Informationen zum laufenden Terminal

| Kürzel | Bedeutung |
| --- | --- |
| cols | Aktuelle Anzahl der Spalten |
| lines | Aktuelle Anzahl der Zeilen |
| colors | Aktuelle Anzahl der Farben, die das Terminal unterstützt |
| pairs | Aktuelle Anzahl der Farbenpaare (Schriftfarbe und Hintergrund), die dem Terminal zur Verfügung stehen |
Diese Informationen lassen sich in einem Script folgendermaßen ermitteln:
# Demonstriert den tput-Befehl
# tput1
spalten=`tput cols`
zeilen=`tput lines`
farben=`tput colors`
paare=`tput pairs`
echo "Der Typ des Terminals $TERM hat im Augenblick"
echo " + $spalten Spalten und $zeilen Zeilen"
echo " + $farben Farben und $paare Farbenpaare"
Das Script bei der Ausführung:
you@host > ./tput1
Der Typ des Terminals xterm hat im Augenblick
 + 80 Spalten und 29 Zeilen
 + 8 Farben und 64 Farbenpaare
Zum Verändern der Textattribute finden Sie in tput ebenfalls einen guten Partner. Ob hervorgehoben, invers oder unterstrichen, auch dies ist mit der Bibliothek terminfo möglich. Tabelle 5.6 nennt hierzu einige Kürzel und deren Bedeutung.
Tabelle 5.6  Verändern von Textattributen

| Kürzel | Bedeutung |
| --- | --- |
| bold | Etwas hervorgehobenere (fette) Schrift |
| boldoff | Fettschrift abschalten |
| blink | Text blinken |
| rev | Inverse Schrift verwenden |
| smul | Text unterstreichen |
| rmul | Unterstreichen abschalten |
| sgr0 | Alle Attribute wiederherstellen (also normale Standardeinstellung) |
Auch zu den Textattributen soll ein Shellscript geschrieben werden:
# Demonstriert den tput-Befehl
# tput2
fett=`tput bold`
invers=`tput rev`
unterstrich=`tput smul`
reset=`tput sgr0`
echo "Text wird ${fett}Hervorgehoben${reset}"
echo "Oder gerne auch ${invers}Invers${reset}"
echo "Natürlich auch mit ${unterstrich}Unterstrich${reset}"
Das Script bei der Ausführung zeigt die drei Textattribute direkt im Terminal; die Hervorhebungen lassen sich in gedrucktem Text nicht sinnvoll wiedergeben.
Wenn Ihnen tput colors mehr als den Wert 1 zurückgibt, können Sie auch auf Farben zurückgreifen. Meistens handelt es sich dabei um acht Farben. Je acht Farben für den Hintergrund und acht für den Vordergrund ergeben zusammen 64 mögliche Farbenpaare, welche Sie mit
# Vordergrundfarbe setzen
tput setf Nummer
# Hintergrundfarbe setzen
tput setb Nummer
aktivieren können. Für Nummer können Sie einen Wert zwischen 0 und 7 verwenden, wobei die einzelnen Nummern gewöhnlich den folgenden Farben entsprechen:
Tabelle 5.7  Farbenwerte für den Vorder- und Hintergrund

| Nummer | Farbe |
| --- | --- |
| 0 | Schwarz |
| 1 | Blau |
| 2 | Grün |
| 3 | Gelb |
| 4 | Rot |
| 5 | Lila |
| 6 | Cyan |
| 7 | Grau |
In der Praxis können Sie die Farben zum Beispiel dann so einsetzen:
# Demonstriert den tput-Befehl
# tput3
Vgruen=`tput setf 2`
Vblau=`tput setf 1`
Hschwarz=`tput setb 0`
Hgrau=`tput setb 7`
reset=`tput sgr0`
# Farbenpaar Schwarz-Grün erstellen
Pschwarzgruen=`echo ${Vgruen}${Hschwarz}`
echo $Pschwarzgruen
echo "Ein klassischer Fall von Schwarz-Grün"
echo ${Vblau}${Hgrau}
echo "Ein ungewöhnliches Blau-Grau"
# Alles wieder rückgängig machen
echo $reset
Natürlich finden Sie in der Bibliothek terminfo auch steuernde Terminal-Eigenschaften, nach denen häufig bevorzugt gefragt wird. Mit steuernden Eigenschaften sind hier Kürzel in terminfo gemeint, mit denen Sie die Textausgabe auf verschiedensten Positionen des Bildschirms erreichen können.
Tabelle 5.8  Steuernde Terminal-Eigenschaften

| Kürzel | Bedeutung |
| --- | --- |
| home | Cursor auf die linke obere Ecke des Bildschirms setzen |
| cup n m | Cursor auf die n-te Zeile in der m-ten Spalte setzen |
| dl1 | Aktuelle Zeile löschen |
| il1 | Zeile an aktueller Position einfügen |
| dch1 | Ein Zeichen in der aktuellen Position löschen |
| clear | Bildschirm löschen |
Hinweis   Dieser Abschnitt stellt natürlich nur einen kurzen Einblick in die Funktionen von terminfo dar. Die Funktionsvielfalt (bei einem Blick auf die Manual-Page) ist erschlagend, dennoch werden Sie in der Praxis das meiste nicht benötigen, und für den Hausgebrauch sind Sie hier schon recht gut gerüstet.
5.3 Eingabe
Neben der Ausgabe werden Sie relativ häufig auch die Benutzereingaben benötigen, um ein Script entsprechend zu steuern. Darauf soll jetzt näher eingegangen werden.
5.3.1 Der Befehl read
Mit dem Befehl read können Sie die (Standard-)Eingabe von der Tastatur lesen und in einer Variablen abspeichern. Selbstverständlich wird bei einem Aufruf von read die Programmausführung angehalten, bis die Eingabe erfolgt ist und (ENTER) betätigt wurde. Die Syntax für read:
# Eingabe von Tastatur befindet sich in der variable
read variable
Beispielsweise:
you@host > read myname
Jürgen
you@host > echo $myname
Jürgen
you@host > read myname
<NAME>
you@host > echo $myname
<NAME>
Hier sehen Sie auch gleich, dass read die komplette Eingabe bis zum (ENTER) einliest, also inklusive Leerzeichen (Tab-Zeichen werden durch ein Leerzeichen ersetzt). Wollen Sie aber anstatt wie hier (Vorname, Nachname) beide Angaben in einer separaten Variablen speichern, müssen Sie read folgendermaßen verwenden:
read variable1 variable2 ... variable_n
read liest hierbei für Sie die Zeile ein und trennt die einzelnen Worte anhand des Trennzeichens in der Variablen IFS auf. Wenn mehr Wörter eingegeben wurden, als Variablen vorhanden sind, bekommt die letzte Variable den Rest einer Zeile zugewiesen, zum Beispiel:
you@host > read vorname nachname <NAME> you@host > echo $vorname Jürgen you@host > echo $nachname Wolf you@host > read vorname nachname <NAME> und noch mehr Text you@host > echo $vorname Jürgen you@host > echo $nachname Wolf und noch mehr Text
Sie haben außerdem die Möglichkeit, read mit einer Option zu verwenden:
read -option variable
Hierzu stehen Ihnen sehr interessante Optionen zur Verfügung, die in Tabelle 5.9 aufgelistet sind.
Tabelle 5.9  Optionen für das Kommando read

| Option | Bedeutung |
| --- | --- |
| -n anzahl | Hier muss die Eingabe nicht zwangsläufig mit (ENTER) abgeschlossen werden. Sobald anzahl Zeichen erreicht sind, wird auch ohne (ENTER) in die Variable geschrieben. |
| -s | (s = silent) Hierbei ist die Eingabe am Bildschirm nicht sichtbar, wie dies etwa bei einer Passworteingabe der Fall ist. |
| -t sekunden | Hiermit können Sie ein Timeout für die Eingabe vorgeben. Wenn read binnen sekunden Sekunden keine Eingabe erhält, wird das Programm fortgesetzt. Der Rückgabewert bei einer Eingabe ist 0; wurde binnen der vorgegebenen Zeit keine Eingabe vorgenommen, wird 1 zurückgegeben. Zur Überprüfung können Sie die Variable $? verwenden. |
Auf den folgenden Seiten werden Sie verblüfft sein, wie vielseitig das Kommando read wirklich ist. Für die Eingabe (nicht nur) von der Tastatur ist read somit fast das Nonplusultra.
5.3.2 (Zeilenweise) Lesen einer Datei mit read
Eine wirklich bemerkenswerte Funktion erfüllt read mit dem zeilenweisen Auslesen von Dateien, beispielsweise:
you@host > cat zitat.txt
Des Ruhmes Würdigkeit verliert an Wert,
wenn der Gepriesene selbst mit Lob sich ehrt.
you@host > read variable < zitat.txt
you@host > echo $variable
Des Ruhmes Würdigkeit verliert an Wert,
Damit hierbei allerdings auch wirklich alle Zeilen ausgelesen werden (Sie ahnen es schon), ist eine Schleife erforderlich. Trotzdem sieht die Verwendung von read garantiert anders aus, als Sie jetzt sicher vermuten:
while read var < datei
do
   ...
done
Würden Sie read auf diese Weise verwenden, so wäre es wieder nur eine Zeile, die gelesen wird. Hier müssen Sie die Datei in das komplette while-Konstrukt umleiten:
while read var
do
   ...
done < datei
Ohne großen Aufwand können Sie so zeilenweise eine Datei einlesen:
# Demonstriert den Befehl read zum zeilenweisen Lesen einer Datei
# Name : areadline
if [ $# -lt 1 ]
then
   echo "usage: $0 datei_zum_lesen"
   exit 1
fi
# Argument $1 soll zeilenweise eingelesen werden
while read variable
do
   echo $variable
done < $1
Das Script bei der Ausführung:
you@host > ./areadline zitat.txt
Des Ruhmes Würdigkeit verliert an Wert,
wenn der Gepriesene selbst mit Lob sich ehrt.
Diese Methode wird sehr häufig verwendet, um aus größeren Dateien einzelne Einträge herauszufiltern. Des Weiteren ist sie eine tolle Möglichkeit, um die Variable IFS in Schleifen zu überlisten. Damit können Sie Schleifen nicht mehr einmal pro Wort, sondern einmal pro Zeile ausführen lassen. Es ist der Hack schlechthin, wenn etwa Dateien, deren Namen ein Leerzeichen enthalten, verarbeitet werden sollen.
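Wie ein solcher Einsatz aussehen könnte, zeigt die folgende Skizze – das Verzeichnis und die Dateinamen sind hier frei gewählt:

# Dateinamen mit Leerzeichen zeilenweise verarbeiten – eigene Skizze
ls -1 > /tmp/dateiliste
while read dateiname
do
   echo "Verarbeite: $dateiname"
done < /tmp/dateiliste

So landet jeder Dateiname komplett – inklusive eventueller Leerzeichen – in der Variablen dateiname, statt wortweise zerlegt zu werden.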
5.3.3 Zeilenweise mit einer Pipe aus einem Kommando lesen (read)
Wenn das Umlenkungszeichen (wie im Abschnitt zuvor gesehen) in einer Schleife mit read funktioniert, um zeilenweise aus einer Datei zu lesen, sollte Gleiches auch mit einer Pipe vor einer Schleife und einem Kommando gelingen. Sie schieben so quasi die Standardausgabe eines Kommandos in die Standardeingabe der Schleife – immer vorausgesetzt natürlich, hier wird read verwendet, das ja auch etwas von der Standardeingabe erwartet. Und auch hier funktioniert das Prinzip so lange, bis read keine Zeile mehr lesen kann und somit 1 zurückgibt, was gleichzeitig das Ende der Schleife bedeutet. Hier die Syntax eines solchen Konstrukts:
kommando | while read line
do
   # Variable line bearbeiten
done
Um beim Beispiel »areadline« vom Abschnitt zuvor zu bleiben, sieht das Script mit einer Pipe nun wie folgt aus:
# Demonstriert den Befehl read zum zeilenweisen Lesen einer Datei
# Name : areadline2
if [ $# -lt 1 ]
then
   echo "usage: $0 datei_zum_lesen"
   exit 1
fi
# Argument $1 soll zeilenweise eingelesen werden
cat $1 | while read variable
do
   echo $variable
done
Ausgeführt macht das Beispiel dasselbe wie schon das Script »areadline« zuvor. Beachten Sie allerdings, dass die Schleife hier am Ende einer Pipe und damit in einer Subshell läuft: Variablen, die Sie innerhalb der Schleife setzen oder verändern, sind nach dem Ende der Schleife wieder verworfen.
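Das lässt sich mit einer kleinen Skizze leicht nachvollziehen:

# Variablen in einer per Pipe gefütterten Schleife – eigene Skizze
anzahl=0
cat $1 | while read zeile
do
   anzahl=`expr $anzahl + 1`
done
# anzahl ist hier wieder 0, da die Schleife in einer Subshell lief
echo "Gezählte Zeilen: $anzahl"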
5.3.4 Here-Dokumente (Inline-Eingabeumleitung)
Bisher wurde noch nicht auf das Umleitungszeichen << eingegangen. Sie kennen bereits das Umleitungszeichen <, mit dem Sie die Standardeingabe umlenken können. Das Umlenkungssymbol >> der Ausgabe hat die Ausgabe ja ans Ende einer Datei angehängt. Bei der Standardeingabe ergibt so etwas keinen Sinn. Mit einfachen Worten ist diese Umlenkung schwer zu beschreiben, daher hier zunächst einmal die Syntax:
kommando <<TEXT_MARKE
...
TEXT_MARKE
Das Kommando nutzt in diesem Fall durch das doppelte Umlenkungszeichen alle nachfolgenden Zeilen für die Standardeingabe. Dies geschieht so lange, bis es auf das Wort trifft, welches hinter dem Umlenkungszeichen << folgt (in der Syntaxbeschreibung ist dies das Wort TEXT_MARKE). Beachten Sie außerdem, dass beide Textmarken absolut identisch sein müssen.
Ein Beispiel:
# Demonstriert die Verwendung von Here-Dokumenten
# Name : ahere1
cat <<TEXT_MARKE
Heute ist `date`, Sie befinden sich
im Verzeichnis `pwd`.
Ihr aktuelles Terminal ist `echo -n $TERM` und
Ihr Heimverzeichnis befindet sich in $HOME.
TEXT_MARKE
Das Script bei der Ausführung:
you@host > ./ahere1
Heute ist Fr Mär 4 00:36:37 CET 2005, Sie befinden sich
im Verzeichnis /home/tot.
Ihr aktuelles Terminal ist xterm und
Ihr Heimverzeichnis befindet sich in /home/you.
In diesem Beispiel liest das Kommando cat so lange die nachfolgenden Zeilen, bis es auf das Wort TEXT_MARKE stößt. Hierbei können auch Kommando-Substitutionen (was durchaus ein anderes Shellscript sein darf) und Variablen innerhalb des Textes verwendet werden. Die Shell kümmert sich um entsprechende Ersetzung. Der Vorteil von »Here-Dokumenten« ist, dass Sie einen Text direkt an Befehle weiterleiten können, ohne diesen vorher in einer Datei unterbringen zu müssen. Gerade bei der Ausgabe von etwas umfangreicheren Texten oder Fehlermeldungen fällt einem die Ausgabe hier wesentlich leichter.
Wollen Sie allerdings nicht, dass eine Kommando-Substitution oder eine Variable von der Shell interpretiert wird, müssen Sie nur zwischen dem Umlenkungszeichen << und der Textmarke einen Backslash setzen:
# Demonstriert die Verwendung von Here-Dokumenten
# Name : ahere2
cat <<\TEXT_MARKE
Heute ist `date`, Sie befinden sich
im Verzeichnis `pwd`.
Ihr aktuelles Terminal ist `echo -n $TERM` und
Ihr Heimverzeichnis befindet sich in $HOME.
TEXT_MARKE
Dann sieht die Ausgabe folgendermaßen aus:
you@host > ./ahere2
Heute ist `date`, Sie befinden sich
im Verzeichnis `pwd`.
Ihr aktuelles Terminal ist `echo -n $TERM` und
Ihr Heimverzeichnis befindet sich in $HOME.
Neben der Möglichkeit mit einem Backslash zwischen dem Umlenkungszeichen << und der Textmarke, existiert auch die Variante, die Textmarke zwischen Single Quotes zu stellen:
kommando <<'MARKE'
...
MARKE
Befinden sich außerdem im Text führende Tabulatorzeichen, können Sie diese mit einem Minuszeichen zwischen dem Umlenkungszeichen und der Textmarke entfernen (<<-MARKE).
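Eine kleine Skizze dazu – wichtig ist, dass die Einrückung im Script tatsächlich aus Tabulatorzeichen besteht:

# Here-Dokument mit <<- : führende Tabulatoren werden entfernt (eigene Skizze)
cat <<-MARKE
	Diese Zeile ist im Script mit einem Tabulator eingerückt,
	erscheint in der Ausgabe aber ganz links.
	MARKE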
Nichts hält Sie übrigens davon ab, die Standardeingabe mittels << an eine Variable zu übergeben:
# Demonstriert die Verwendung von Here-Dokumenten
# Name : ahere3
count=`cat <<TEXT_MARKE
\`ls -l | wc -l\`
TEXT_MARKE`
echo "Im Verzeichnis $HOME befinden sich $count Dateien"
Das Script bei der Ausführung:
you@host > ./ahere3
Im Verzeichnis /home/tot befinden sich 40 Dateien
Damit in der Variablen »count« nicht die Textfolge ls -l | wc -l steht, sondern auch wirklich eine Kommando-Substitution durchgeführt wird, müssen Sie die Kommandos durch einen Backslash schützen, weil hier bereits die Ausgabe der »Textmarke« als Kommando-Substitution verwendet wird.
Ich persönlich verwende diese Inline-Eingabeumleitung gern in Verbindung mit Fließkommaberechnungen und dem Kommando bc (siehe auch Abschnitt 2.2.3). Verwenden Sie hierbei zusätzlich noch eine Kommando-Substitution, können Sie das Ergebnis der Berechnung in einer Variablen speichern, wie Sie dies von expr her kennen. Außerdem können Sie mit der mathematischen Bibliothek (anzugeben mit der Option -l) weitere Funktionen wie z. B. Sinus (s()) oder Kosinus (c()) nutzen. Hierzu das Script, womit der Berechnung komplexer Ausdrücke nichts mehr im Wege steht:
# Demonstriert die Verwendung von Here-Dokumenten
# Name : ahere4
if [ $# == 0 ]
then
   echo "usage: $0 Ausdruck"
   exit 1
fi
# Option -l für die mathematische Bibliothek
bc -l <<CALC
$*
quit
CALC
Das Script bei der Ausführung:
you@host > ./ahere4 123.12/5
24.62400000000000000000
you@host > ./ahere4 1234.3*2
2468.6
you@host > ./ahere4 '(12.34-8.12)/2'
2.11000000000000000000
you@host > ./ahere4 sqrt\(24\)
4.89897948556635619639
you@host > var=`./ahere4 1234.1234*2*1.2`
you@host > echo $var
2961.89616
5.3.5 Here-Dokumente mit read verwenden
Wie beim Auslesen einer Datei mit read können Sie selbstverständlich auch Here-Dokumente verwenden. Dies verhält sich dann so, als würde zeilenweise aus einer Datei gelesen. Und damit nicht nur die erste Zeile gelesen wird, geht man exakt genauso wie beim zeilenweise Einlesen einer Datei vor. Hier die Syntax:
while read line
do
   ...
done <<TEXT_MARKE
line1
line2
...
line_n
TEXT_MARKE
In der Praxis lässt sich diese Inline-Umlenkung mit read folgendermaßen einsetzen:
# Demonstriert die Verwendung von Here-Dokumenten und read
# Name : ahere5
i=1
while read line
do
   echo "$i. Zeile : $line"
   i=`expr $i + 1`
done <<TEXT
Eine Zeile
`date`
Homeverzeichnis $HOME
Das Ende
TEXT
Das Script bei der Ausführung:
you@host > ./ahere5
1. Zeile : Eine Zeile
2. Zeile : Fr Mär 4 03:23:22 CET 2005
3. Zeile : Homeverzeichnis /home/tot
4. Zeile : Das Ende
5.3.6 Die Variable IFS
Die Variable IFS (Internal Field Separator) aus Ihrer Shell-Umgebung wurde in den letzten Abschnitten schon öfters erwähnt. Insbesondere scheint IFS eine spezielle Bedeutung bei der Ein- und Ausgabe von Daten als Trennzeichen zu haben. Wenn Sie allerdings versuchen, den Inhalt der Variablen mit echo auszugeben, werden Sie wohl nicht besonders schlau daraus. Für Abhilfe kann hierbei das Kommando od sorgen, mit dem Sie den Inhalt der Variablen in hexadezimaler oder ASCII-Form betrachten können.
you@host > echo -n "$IFS" | od -x
0000000 0920 000a
0000003
0000000 ist hierbei das Offset der Zahlenreihe, in der sich drei hexadezimale Werte befinden, die IFS beinhaltet. Damit das Newline von echo nicht mit enthalten ist, wurde hier die Option -n verwendet. Die Werte sind:
09 20 00 0a
Der Wert 00 hat hier keine Bedeutung. Jetzt können Sie einen Blick auf eine ASCII-Tabelle werfen, um festzustellen, für welches Sonderzeichen diese hexadezimalen Werte stehen.
Das Ganze gelingt mit od auch im ASCII-Format, nur dass das Leerzeichen als ein »Leerzeichen« angezeigt wird:
you@host > echo -n "$IFS" | od -c
0000000      \t  \n
0000003
Diese voreingestellten Trennzeichen der Shell dienen der Trennung der Eingabe für den Befehl read, der Variablen- sowie auch der Kommando-Substitution. Wenn Sie also z. B. read folgendermaßen verwenden
you@host > read nachname vorname alter <NAME> 30 you@host > echo $vorname Jürgen you@host > echo $nachname Wolf you@host > echo $alter 30
so verdanken Sie es der Variablen IFS, dass hierbei die entsprechenden Eingaben an die dafür vorgesehenen Variablen übergeben wurden.
Häufig war aber in den Kapiteln zuvor die Rede davon, die Variable IFS den eigenen Bedürfnissen anzupassen â sprich: das oder die Trennzeichen zu verändern. Wollen Sie beispielsweise, dass bei der Eingabe von read ein Semikolon statt eines Leerzeichens als Trennzeichen dient, so lässt sich dies einfach erreichen:
you@host > BACKIFS="$IFS" you@host > IFS=\; you@host > echo -n "$IFS" | od -c 0000000 ; 0000001 you@host > read nachname vorname alter Wolf;Jürgen;30 you@host > echo $vorname Jürgen you@host > echo $nachname Wolf you@host > echo $alter 30 you@host > IFS=$BACKIFS
Zuerst wurde eine Sicherung der Variablen IFS in BACKIFS vorgenommen. Der Vorgang, ein Backup von IFS zu erstellen, und das anschließende Wiederherstellen ist wichtiger, als dies auf den ersten Blick erscheint. Unterlässt man es, kann man sich auf das aktuelle Terminal nicht mehr verlassen, da einige Programme nur noch Unsinn fabrizieren.
Als Nächstes übergeben Sie im Beispiel der Variablen IFS ein Semikolon (geschützt mit einem Backslash). Daraufhin folgt dasselbe Beispiel wie zuvor, nur mit einem Semikolon als Trennzeichen. Am Ende stellen wir den ursprünglichen Wert von IFS wieder her.
Natürlich spricht auch nichts dagegen, IFS mit mehr als einem Trennzeichen zu versehen, beispielsweise:
you@host > IFS=$BACKIFS you@host > IFS=:, you@host > read nachname vorname alter Wolf,Jürgen:30 you@host > echo $vorname Jürgen you@host > echo $nachname Wolf you@host > echo $alter 30 you@host > IFS=$BACKIFS
Im Beispiel wurde IFS mit den Trennzeichen : und , definiert. Wollen Sie bei einer Eingabe mit read, dass IFS nicht immer die führenden Leerzeichen (falls vorhanden) entfernt, so müssen Sie IFS nur mittels IFS= auf »leer« setzen:
you@host > IFS=$BACKIFS you@host > read var Hier sind führende Leerzeichen vorhanden you@host > echo $var Hier sind führende Leerzeichen vorhanden you@host > IFS= you@host > read var Hier sind führende Leerzeichen vorhanden you@host > echo $var Hier sind führende Leerzeichen vorhanden you@host > IFS=$BACKIFS
Gleiches wird natürlich auch gern bei einer Variablen-Substitution verwendet. Im folgenden Beispiel wird die Variable durch ein Minuszeichen als Trenner zerteilt und mittels set auf die einzelnen Positionsparameter zerstückelt.
# Demonstriert die Verwendung von IFS
# Name : aifs1
# IFS sichern
BACKIFS="$IFS"
# Minuszeichen als Trenner
IFS=-
counter=1
var="Wolf-Jürgen-30-Bayern"
# var anhand von Trennzeichen in IFS auftrennen
set $var
# Ein Zugriff auf $1, $2, ... wäre hier auch möglich
for daten in "$@"
do
   echo "$counter. $daten"
   counter=`expr $counter + 1`
done
IFS=$BACKIFS
Das Script bei der Ausführung:
you@host > ./aifs1
1. Wolf
2. Jürgen
3. 30
4. Bayern
Wenn Sie das Script ein wenig umschreiben, können Sie hiermit zeilenweise alle Einträge der Shell-Variablen PATH ausgeben lassen, welche ja durch einen Doppelpunkt getrennt werden.
# Demonstriert die Verwendung von IFS
# Name : aifs2
# IFS sichern
BACKIFS="$IFS"
# Doppelpunkt als Trenner
IFS=:
# PATH anhand von Trennzeichen in IFS auftrennen
set $PATH
for path in "$@"
do
   echo "$path"
done
IFS=$BACKIFS
Das Script bei der Ausführung:
you@host > ./aifs2
/home/tot/bin
/usr/local/bin
/usr/bin
/usr/X11R6/bin
/bin
/usr/games
/opt/gnome/bin
/opt/kde3/bin
/usr/lib/java/jre/bin
Natürlich hätte man dies wesentlich effektiver mit folgender Zeile lösen können:
you@host > echo $PATH | tr ':' '\n'
Zu guter Letzt sei erwähnt, dass die Variable IFS noch recht häufig in Verbindung mit einer Kommando-Substitution verwendet wird. Wollen Sie etwa mittels grep nach einem bestimmten User in /etc/passwd suchen, wird meistens eine Ausgabe wie folgt genutzt:
you@host > grep you /etc/passwd
you:x:1001:100::/home/you:/bin/bash
Dies in die Einzelteile zu zerlegen, sollte Ihnen jetzt mit der Variablen IFS nicht mehr schwer fallen.
# Demonstriert die Verwendung von IFS
# Name : aifs3
# IFS sichern
BACKIFS="$IFS"
# Doppelpunkt als Trenner
IFS=:
if [ $# -lt 1 ]
then
   echo "usage: $0 User"
   exit 1
fi
# Ausgabe anhand von Trennzeichen in IFS auftrennen
set `grep ^$1 /etc/passwd`
echo "User             : $1"
echo "User-Nummer      : $3"
echo "Gruppen-Nummer   : $4"
echo "Home-Verzeichnis : $6"
echo "Start-Shell      : $7"
IFS=$BACKIFS
Das Script bei der Ausführung:
you@host > ./aifs3 you User : you User-Nummer : 1001 Gruppen-Nummer : 100 Home-Verzeichnis : /home/you Start-Shell : /bin/bash you@host > ./aifs3 tot User : tot User-Nummer : 1000 Gruppen-Nummer : 100 Home-Verzeichnis : /home/tot Start-Shell : /bin/ksh you@host > ./aifs3 root User : root User-Nummer : 0 Gruppen-Nummer : 0 Home-Verzeichnis : /root Start-Shell : /bin/ksh
5.3.7 Arrays einlesen mit read (Bash und Korn-Shell only)
Arrays lassen sich genauso einfach einlesen wie normale Variablen, nur dass man hier den Index beachten muss. Hierzu nur ein Script, da die Arrays bereits in Abschnitt 2.5 ausführlich behandelt wurden.
# Demonstriert die Verwendung von read mit Arrays
# Name : ararray
typeset -i i=0
while [ $i -lt 5 ]
do
   printf "Eingabe machen : "
   read valarr[$i]
   i=i+1
done
echo "Hier, die Eingaben ..."
i=0
while [ $i -lt 5 ]
do
   echo ${valarr[$i]}
   i=i+1
done
Das Script bei der Ausführung:
you@host > ./ararray Eingabe machen : Testeingabe1 Eingabe machen : Noch eine Eingabe Eingabe machen : Test3 Eingabe machen : Test4 Eingabe machen : Und Ende Hier, die Eingaben ... Testeingabe1 Noch eine Eingabe Test3 Test4 Und Ende
Selbstverständlich lassen sich Arrays ganz besonders im Zusammenhang mit zeilenweisem Einlesen aus einer Datei mit read (siehe Abschnitt 5.3.2) verwenden:
typeset -i i=0
while read vararray[$i]
do
   i=i+1
done < datei_zum_einlesen
Damit können Sie anschließend bequem mithilfe des Feldindexes auf den Inhalt einzelner Zeilen zugreifen und diesen weiterverarbeiten.
In der Bash haben Sie außerdem die Möglichkeit, mit read einzelne Zeichenketten zu zerlegen und diese Werte direkt in eine Variable zu legen. Die einzelnen Zeichenketten (abhängig wieder vom Trennzeichen IFS) werden dann in extra Felder (einem Array) aufgeteilt. Hierfür müssen Sie read allerdings mit der Option âa aufrufen:
read -a array <<TEXTMARKE
$variable
TEXTMARKE
Mit diesem Beispiel können Sie ganze Zeichenketten in ein Array aufsplitten (immer abhängig davon, welche Trennzeichen Sie verwenden).
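Eine kleine Skizze dazu (Variablen- und Arrayname sind frei gewählt):

# Zeichenkette mit read -a in ein Array zerlegen (Bash) – eigene Skizze
variable="rot grün blau gelb"
read -a farben <<TEXTMARKE
$variable
TEXTMARKE
echo "Erstes Feld  : ${farben[0]}"
echo "Zweites Feld : ${farben[1]}"
echo "Anzahl Felder: ${#farben[*]}"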
5.3.8 Shell-abhängige Anmerkungen zu read
Da read ein Builtin-Kommando der Shell ist, gibt es hier zwischen den einzelnen Shells noch einige unterschiedliche Funktionalitäten.
Ausgabetext in read integrieren (Bash)
Verwenden Sie bei der Funktion read die Option -p, können Sie den Ausgabetext (für die Frage) direkt in die read-Abfrage integrieren. Die Syntax:
read -p Ausgabetext var1 var2 var3 ... var_n
In der Praxis sieht dies so aus:
you@host > read -p "Ihr Vorname : " vorname
Ihr Vorname : Jürgen
you@host > echo $vorname
Jürgen
Ausgabetext in read integrieren (Korn-Shell)
Auch die Korn-Shell bietet Ihnen die Möglichkeit, eine Benutzerabfrage direkt in den read-Befehl einzubauen.
read var?"Ausgabetext"
In der Praxis:
you@host > read name?"Ihr Name bitte: " Ihr Name bitte: <NAME> you@host > echo $name <NAME>
Default-Variable für read (Bash und Korn-Shell only)
Verwenden Sie read, ohne eine Variable als Parameter, wird die anschließende Eingabe in der Default-Variablen REPLY gespeichert.
you@host > read <NAME> you@host > echo $REPLY <NAME>
5.3.9 Einzelnes Zeichen abfragen
Eine häufig gestellte Frage lautet, wie man einen einzelnen Tastendruck abfragen kann, ohne (ENTER) zu drücken. Das Problem an dieser Sache ist, dass ein Terminal gewöhnlich zeilengepuffert ist. Bei einer Zeilenpufferung wartet das Script z. B. bei einer Eingabeaufforderung mit read so lange mit der Weiterarbeit, bis die Zeile mit einem (ENTER) bestätigt wird.
Da ein Terminal mit unzähligen Eigenschaften versehen ist, ist es am einfachsten, all diese Parameter mit dem Kommando stty und der Option raw auszuschalten. Damit schalten Sie Ihr Terminal in einen rohen Modus, womit Sie praktisch »nackte« Zeichen behandeln können. Sobald Sie fertig sind, ein Zeichen einzulesen, sollten Sie diesen Modus auf jeden Fall wieder mit -raw verlassen!
Wenn Sie Ihr Terminal in einen rohen Modus geschaltet haben, benötigen Sie eine Funktion, welche ein einzelnes Zeichen lesen kann. Die Funktion read fällt wegen der Zeilenpufferung aus. Einzige Möglichkeit ist das Kommando dd (Dump Disk), worauf man zunächst gar nicht kommen würde. dd liest eine Datei und schreibt den Inhalt mit wählbarer Blockgröße und verschiedenen Konvertierungen. Anstatt für eine Datei kann dd genauso gut für das Kopieren der Standardeingabe in die Standardausgabe verwandt werden. Hierzu reicht es, die Anzahl der Datensätze (count hier 1) anzugeben, und welche Blockgröße Sie in Bytes verwenden wollen (bs, auch hier 1 Byte). Da dd seine Meldung auf die Standardfehlerausgabe vornimmt, können Sie diese Ausgabe ins Datengrab schicken.
Hier ein Script, das unermüdlich auf den Tastendruck »q« für Quit zum Beenden einer Schleife wartet.
# Demonstriert das Einlesen einzelner Zeichen
# Name : areadchar
echo "Bitte eine Taste betätigen – mit q beenden"
char=''
# Terminal in den "rohen" Modus schalten
stty raw -echo
# In Schleife überprüfen, ob 'q' gedrückt wurde
while [ "$char" != "q" ]
do
   char=`dd bs=1 count=1 2>/dev/null`
done
# Den "rohen" Modus wieder abschalten
stty -raw echo
echo "Die Taste war $char"
Das Script bei der Ausführung:
you@host > ./areadchar
Bitte eine Taste betätigen – mit q beenden
(Q) Die Taste war q
Hinweis In der Bash können Sie auch die Abfrage einzelner Zeichen mit read und der Option -n vornehmen. Schreiben Sie etwa read -n 1 var, befindet sich in var das einzelne Zeichen. Allerdings unterstützt dies nur darstellbare ASCII-Zeichen. Bei Zeichen mit Escape-Folgen (siehe nächster Abschnitt) taugt auch dies nicht mehr.
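In der Bash ließe sich das Warten auf die Taste »q« aus dem obigen Beispiel damit etwa so skizzieren – ganz ohne stty und dd:

# Auf die Taste q warten – Bash-Variante mit read, eigene Skizze
char=''
while [ "$char" != "q" ]
do
   read -n 1 -s char
done
echo "Die Taste war $char"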
5.3.10 Einzelne Zeichen mit Escape-Sequenzen abfragen
Im Beispiel zuvor haben Sie gesehen, wie Sie einzelne Tastendrücke mithilfe der Kommandos stty und dd abfragen können. Aber sobald Sie hierbei Tastendrücke wie (F1), (F2), (ESC) oder die Pfeiltasten abfragen wollen, wird dies nicht mehr funktionieren. Testen Sie am besten selbst unser abgespecktes Script von eben:
# Demonstriert das Einlesen einzelner Zeichen
# Name : areadchar2
echo "Bitte eine Taste betätigen"
stty raw -echo
char=`dd bs=1 count=1 2>/dev/null`
stty -raw echo
echo "Die Taste war $char"
Das Script bei der Ausführung:
you@host > ./areadchar2
Bitte eine Taste betätigen
(A) Die Taste war A
you@host > ./areadchar2
Bitte eine Taste betätigen
(ESC) Die Taste war
you@host > ./areadchar2
Bitte eine Taste betätigen
(F1) Die Taste war [A
Das Problem mit den Pfeil- oder Funktionstasten ist nun mal, dass hierbei Escape-Folgen zurückgegeben werden. Schlimmer noch, die Escape-Folgen sind häufig unterschiedlich lang und – um es noch schlimmer zu machen – abhängig vom Typ des Terminals und zusätzlich auch noch der nationalen Besonderheit (Umlaute und Ähnliches).
Somit scheint eine portable Lösung des Problems fast unmöglich, es sei denn, man kennt sich ein bisschen mit C aus. Da ich dies nicht voraussetzen kann, gebe ich Ihnen hier ein einfaches Rezept an die Hand, welches Sie bei Bedarf erweitern können.
Escape-Sequenzen des Terminals ermitteln
Zuerst müssen Sie sich darum kümmern, wie die Escape-Sequenzen für entsprechende Terminals aussehen. Welches Terminal Sie im Augenblick nutzen, können Sie mit echo $TERM in Erfahrung bringen. Im nächsten Schritt können Sie den Befehl infocmp verwenden, um vom entsprechenden Terminaltyp Informationen zu den Tasten zu erhalten.
you@host > echo $TERM xterm you@host > infocmp # Reconstructed via infocmp from file: /usr/share/terminfo/x/xterm xterm|xterm terminal emulator (X Window System), am, bce, km, mc5i, mir, msgr, npc, xenl, colors#8, cols#80, it#8, lines#24, pairs#64, bel=^G, blink=\E[5m, bold=\E[1m, cbt=\E[Z, civis=\E[?25l, clear=\E[H\E[2J, cnorm=\E[?12l\E[?25h, cr=^M, csr=\E[%i%p1 %d;%p2 %dr, cub=\E[%p1 %dD, cub1=^H, cud=\E[%p1 %dB, cud1=^J, cuf=\E[%p1 %dC, cuf1=\E[C, cup=\E[%i%p1 %d;%p2 %dH, cuu=\E[%p1 %dA, cuu1=\E[A, cvvis=\E[?12;25h, dch=\E[%p1 %dP, dch1=\E[P, dl=\E[%p1 %dM, dl1=\E[M, ech=\E[%p1 %dX, ed=\E[J, el=\E[K, el1=\E[1K, enacs=\E(B\E)0, flash=\E[?5h$<100/>\E[?5l, home=\E[H, hpa=\E[%i%p1 %dG, ht=^I, hts=\EH, ich=\E[%p1 %d@, il=\E[%p1 %dL, il1=\E[L, ind=^J, indn=\E[%p1 %dS, invis=\E[8m, is2=\E[!p\E[?3;4l\E[4l\E>, kDC=\E[3;2~, kEND=\E[1;2F, kHOM=\E[1;2H, kIC=\E[2;2~, kLFT=\E[1;2D, kNXT=\E[6;2~, kPRV=\E[5;2~, kRIT=\E[1;2C, kb2=\EOE, kbs=\177, kcbt=\E[Z, kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC, kcuu1=\EOA, kdch1=\E[3~, kend=\EOF, kent=\EOM, kf1=\EOP, kf10=\E[21~, kf11=\E[23~, kf12=\E[24~, kf13=\EO2P, ...
Hierbei werden Ihnen zunächst eine Menge Informationen um die Ohren gehauen, welche auf den ersten Blick wie Zeichensalat wirken. Bei genauerem Hinsehen aber können Sie alte Bekannte aus Abschnitt 5.2.4 zur Steuerung des Terminals mit tput wieder entdecken.
Hinweis   Wenn Sie über kein infocmp auf Ihrem Rechner verfügen, liegt das wahrscheinlich daran, dass das ncurses-devel-Paket nicht installiert ist. Diesem Paket liegt auch infocmp bei.
infocmp gibt Ihnen hier Einträge wie
cuu1=\E[A
zurück. Dass man daraus nicht sonderlich schlau wird, leuchtet ein. Daher sollten Sie die Manual-Page von terminfo(5) befragen (oder genauer absuchen), wofür denn hier cuu1 steht:
you@host > man 5 terminfo | grep cuu1 Formatiere terminfo(5) neu, bitte warten... cursor_up cuu1 up up one line key_up kcuu1 ku up-arrow key micro_up mcuu1 Zd Like cursor_up in kcuf1=\E[C, kcuu1=\E[A, kf1=\E[M, kf10=\E[V,
Also, die Pfeil-nach-oben-Taste (cursor_up) haben wir hier. Und diese wird bei der xterm mit \E[A »dargestellt«. Das \E ist hierbei das Escape-Zeichen und besitzt den ASCII-Codewert 27. Somit sieht der C-Quellcode zum Überprüfen der Pfeil-nach-oben-Taste wie folgt aus (fett hervorgehoben ist hier die eigentliche Überprüfung, die Sie bei Bedarf erweitern können):
/* getkey.c */ #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <termios.h> enum { ERROR=-1, SUCCESS, ONEBYTE }; /*Altes Terminal wiederherstellen*/ static struct termios BACKUP_TTY; /*Altes Terminal wiederherstellen*/ /* Eingabekanal wird so umgeändert, damit die Tasten einzeln */ /* abgefragt werden können */ int new_tty(int fd) { struct termios buffer; /* Wir fragen nach den Attributen des Terminals und übergeben */ /* diese dann an buffer. BACKUP_TTY dient bei Programmende zur */ /* Wiederherstellung der alten Attribute und bleibt unberührt. */ if((tcgetattr(fd, &BACKUP_TTY)) == ERROR) return ERROR; buffer = BACKUP_TTY; /* Lokale Flags werden gelöscht : */ /* ECHO = Zeichenausgabe auf Bildschirm */ /* ICANON = Zeilenorientierter Eingabemodus */ /* ISIG = Terminal Steuerzeichen */ buffer.c_lflag = buffer.c_lflag & ~(ECHO|ICANON|ISIG); /* VMIN=Anzahl der Bytes, die gelesen werden müssen, bevor */ /* read() zurückkehrt In unserem Beispiel 1Byte für 1 Zeichen */ buffer.c_cc[VMIN] = 1; /* Wir setzen jetzt die von uns gewünschten Attribute */ if((tcsetattr(fd, TCSAFLUSH, &buffer)) == ERROR) return ERROR; return SUCCESS; } /* Ursprüngliches Terminal wiederherstellen */ int restore_tty(int fd) { if((tcsetattr(fd, TCSAFLUSH, &BACKUP_TTY)) == ERROR) return ERROR; return SUCCESS; } int main(int argc, char **argv) { int rd; char c, buffer[10]; /* Setzen des neuen Modus */ if(new_tty(STDIN_FILENO) == ERROR) { printf("Fehler bei der Funktion new_tty()\n"); exit(EXIT_FAILURE); } /* Erste Zeichen lesen */ if(read(STDIN_FILENO, &c, 1) < ONEBYTE) { printf("Fehler bei read()\n"); restore_tty(STDIN_FILENO); exit(EXIT_FAILURE); } /* Haben wir ein ESC ('\E') gelesen? */ if(c == 27) { /* Jep eine Escape-Sequenz, wir wollen den Rest */ /* der Zeichen auslesen */ rd=read(STDIN_FILENO, buffer, 4); /*String terminieren*/ buffer[rd]='\0'; /* Hier erfolgt die Überprüfung des Tastendrucks*/ /* Wars der Pfeil-nach-oben \E[A */ if(strcmp(buffer,"[A") == SUCCESS) printf("Pfeil-nach-oben betätigt\n"); /* Nein, keine Escape-Sequenz */ } else { if((c < 32) || (c == 127)) printf("--> %d\n",c); /* Numerischen Wert ausgeben */ else printf("--> %c\n",c); /* Zeichen ausgeben */ } restore_tty(STDIN_FILENO); return EXIT_SUCCESS; }
Wird das Programm kompiliert, übersetzt und anschließend ausgeführt, erhalten Sie folgende Ausgabe:
you@host > gcc -Wall -o getkey getkey.c
you@host > ./getkey
(1) --> 1
you@host > ./getkey
(A) --> A
you@host > ./getkey
(Pfeil-nach-oben) Pfeil-nach-oben betätigt
Wie Sie jetzt im Beispiel vorgegangen sind, können Sie auch mit anderen Pfeil- und Funktionstasten vorgehen. Den Pfeil-nach-links (Escape-Folge \E[D) können Sie z. B. wie folgt implementieren:
if(strcmp(buffer,"[D") == SUCCESS) printf("Pfeil-nach-links\n");
Aber anstatt einer Ausgabe empfehle ich Ihnen, einen Rückgabewert mittels return zu verwenden, etwa:
... /* Hier erfolgt die Überprüfung des Tastendrucks*/ /* Wars der Pfeil-nach-oben \E[A */ if(strcmp(buffer,"[A") == SUCCESS) { restore_tty(STDIN_FILENO); return 10; /* Rückgabewert für unser Shellscript */ } /* Wars der Pfeil-nach-links */ if(strcmp(buffer,"[D") == SUCCESS) { restore_tty(STDIN_FILENO); return 11; /* Rückgabewert für unser Shellscript */ } ...
Damit können Sie das Beispiel auch mit einem Shellscript verwenden, indem Sie die Variable $? abfragen. Hier das Script:
# Demonstriert das Einlesen einzelner Zeichen
# Name : areadchar3
echo "Bitte eine Taste betätigen"
./getkey
ret=$?
if [ $ret -eq 0 ]
then
   echo "Keine Pfeiltaste"
fi
if [ $ret -eq 10 ]
then
   echo "Die Pfeil-nach-oben-Taste wars"
fi
if [ $ret -eq 11 ]
then
   echo "Die Pfeil-nach-links-Taste wars"
fi
Übersetzen Sie das C-Programm von Neuem mit den geänderten Zeilen Code und führen Sie das Shellscript aus:
you@host > ./areadchar3 Bitte eine Taste betätigen (d)--> d Keine Pfeiltaste you@host > ./areadchar3 Bitte eine Taste betätigen (Ë)Die Pfeil-nach-oben-Taste wars you@host > ./areadchar3 Bitte eine Taste betätigen (Ä)Die Pfeil-nach-links-Taste wars
Mit dem C-Programm haben Sie quasi Ihr eigenes Tool geschrieben, welches Sie am besten in ein PATH-Verzeichnis kopieren und auf welches Sie somit jederzeit zurückgreifen und natürlich erweitern können/sollen.
Hinweis   Natürlich wurde das Thema hier nur gestreift, aber es würde keinen Sinn ergeben, noch mehr einzusteigen, da ich von Ihnen nicht auch noch C-Kenntnisse verlangen kann. Aber Sie konnten hierbei schon sehen, dass Sie mit zusätzlichen Programmierkenntnissen noch viel weiter kommen können. Fehlt ein bestimmtes Tool, programmiert man eben selbst eins.
Und noch ein Hinweis   Noch mehr Lust auf die C-Programmierung bekommen? Ich könnte Ihnen mein Buch »C von A bis Z« empfehlen oder etwas zur Linux-UNIX-Programimerung. Auch hierzu finden Sie ein Buch aus meiner Feder (»Linux-Unix-Programmierung«). Beide Bücher sind bei Galileo Press erschienen.
5.3.11 PassworteingabeÂ
Wollen Sie in Ihrem Script eine Passworteingabe vornehmen und darauf verzichten, dass die Eingabe auf dem Bildschirm mit ausgegeben wird, können Sie auch hierfür stty verwenden. Hierbei reicht es aus, die Ausgabe (echo) abzuschalten und nach der Passworteingabe wieder zu aktivieren.
stty -echo # Passwort einlesen stty echo
Ein Script als Beispiel:
# Demonstriert das Einlesen einzelner Zeichen # Name : anoecho printf "Bitte Eingabe machen: " # Ausgabe auf dem Bildschirm abschalten stty -echo read passwort # Ausgabe auf dem Bildschirm wieder einschalten stty echo printf "\nIhre Eingabe lautete : %s\n" $passwort
Das Script bei der Ausführung:
you@host > ./anoecho Bitte Eingabe machen: Ihre Eingabe lautete: k3l5o6i8
Ihre Meinung
Wie hat Ihnen das Openbook gefallen? Wir freuen uns immer über Ihre Rückmeldung. Schreiben Sie uns gerne Ihr Feedback als E-Mail an <EMAIL>.
## 5.3 Input
Besides output, you will quite often also need user input to control a script. This section takes a closer look at how to read it.
### 5.3.1 The read command
With the read command you can read (standard) input from the keyboard and store it in a variable. Naturally, a call to read suspends the script until the input has been entered and confirmed with (ENTER). The syntax of read:
> # Eingabe von Tastatur befindet sich in der variable read variable
For example:
> you@host > read myname Jürgen you@host > echo $myname Jürgen you@host > read myname <NAME> you@host > echo $myname <NAME>
Here you can also see right away that read consumes the complete input up to (ENTER), including spaces (tab characters are replaced by a space). If, instead of keeping first and last name together as shown here, you want each value in a separate variable, use read as follows:
> read variable1 variable2 ... variable_n
read then reads the line for you and splits the individual words at the separator stored in the variable IFS. If more words are entered than there are variables, the last variable is assigned the rest of the line, for example:
> you@host > read vorname nachname <NAME> you@host > echo $vorname Jürgen you@host > echo $nachname Wolf you@host > read vorname nachname <NAME> und noch mehr Text you@host > echo $vorname Jürgen you@host > echo $nachname Wolf und noch mehr Text
You can also call read with an option:
> read -option variable
Several very useful options are available; they are listed in Table 5.9, and a short sketch demonstrating them follows the table.
Option | Meaning |
| --- | --- |
-n count | The input does not necessarily have to be finished with (ENTER): as soon as count characters have been read, they are written to the variable even without (ENTER). |
-s | (s = silent) The input is not echoed on the screen, as is appropriate for a password prompt, for example. |
-t seconds | Sets a timeout for the input. If read receives no input within seconds, the script continues. The return value is 0 if something was entered; if nothing was entered within the given time, 1 (a nonzero value) is returned, which you can check via the variable $?. |
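Here is a minimal bash sketch of these options; the prompt texts and variable names are purely illustrative:

> # read exactly one character, without waiting for (ENTER)
> read -n 1 -p "Continue? [y/n] : " answer ; echo
> # read a password without echoing it to the screen
> read -s -p "Password: " pw ; echo
> # wait at most 5 seconds; a timeout yields a nonzero return value in $?
> if read -t 5 -p "Your name (5 seconds): " name
> then echo "Hello $name"
> else echo ; echo "No input received in time"
> fi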
On the following pages you will be surprised how versatile the read command really is. For input (not only) from the keyboard, read is pretty much the tool of choice.
### 5.3.2 Reading a file (line by line) with read
A really remarkable use of read is reading files line by line, for example:
> you@host > cat zitat.txt Des Ruhmes Würdigkeit verliert an Wert, wenn der Gepriesene selbst mit Lob sich ehrt. you@host > read variable < zitat.txt you@host > echo $variable Des Ruhmes Würdigkeit verliert an Wert,
To actually read all lines this way (you probably guessed it), a loop is required. Still, the correct use of read looks different from what you might expect:
> while read var < datei do ... done
If you used read this way, again only a single line would be read. Instead, you have to redirect the file into the complete while construct:
> while read var do ... done < datei
With very little effort you can thus read a file line by line:
> # Demonstriert den Befehl read zum zeilenweisen Lesen einer Datei # Name : areadline if [ $# -lt 1 ] then echo "usage: $0 datei_zum_lesen" exit 1 fi # Argument $1 soll zeilenweise eingelesen werden while read variable do echo $variable done < $1
The script in action:
> you@host > ./areadline zitat.txt Des Ruhmes Würdigkeit verliert an Wert, wenn der Gepriesene selbst mit Lob sich ehrt.
This method is used very often to filter individual entries out of larger files. It is also a great way to outwit the IFS variable in loops: the loop body then runs once per line instead of once per word. It is the classic trick when, for example, files whose names contain spaces have to be processed (see the sketch below).
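A minimal sketch of that trick, assuming the current directory contains files whose names may include spaces (the temporary list file is purely illustrative):

> # one file name per line survives the loop intact, even with spaces
> ls > /tmp/filelist.$$
> while read file
> do
>     echo "processing: $file"
> done < /tmp/filelist.$$
> rm -f /tmp/filelist.$$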
### 5.3.3 Reading line by line from a command through a pipe (read)
If the redirection operator works with read in a loop to read a file line by line (as shown in the previous section), the same should also work with a command and a pipe in front of the loop. You effectively push the standard output of a command into the standard input of the loop, provided, of course, that read is used inside, since it expects something on standard input. Here too the principle works until read cannot fetch another line and returns 1, which ends the loop. The syntax of such a construct:
> kommando | while read line do # Variable line bearbeiten done
Sticking with the »areadline« example from the previous section, the script now looks like this with a pipe:
> # Demonstriert den Befehl read zum zeilenweisen Lesen einer Datei # Name : areadline2 if [ $# -lt 1 ] then echo "usage: $0 datei_zum_lesen" exit 1 fi # Argument $1 soll zeilenweise eingelesen werden cat $1 | while read variable do echo $variable done
When executed, this example does the same as the »areadline« script before. Note, however, that the loop on the right-hand side of the pipe usually runs in a subshell: once the pipe is closed at the end of the loop, any variables set inside it are discarded (see the sketch below).
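A minimal sketch of this pitfall, assuming a shell (such as bash) in which the loop on the right-hand side of a pipe runs in a subshell:

> count=0
> cat "$1" | while read line
> do
>     count=`expr $count + 1`
> done
> echo "$count"     # still 0: the subshell's value is lost
> count=0
> while read line
> do
>     count=`expr $count + 1`
> done < "$1"
> echo "$count"     # number of lines in the file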
### 5.3.4 Here documents (inline input redirection)
The redirection operator << has not been covered yet. You already know the operator <, which redirects standard input, and the output operator >>, which appends output to the end of a file. For standard input, appending makes no sense. This redirection is hard to describe in a few plain words, so here is the syntax first:
> kommando <<TEXT_MARKE ... TEXT_MARKE
Because of the double redirection operator, the command uses all following lines as its standard input. This continues until it encounters the word that follows the << operator (in the syntax description above, the word TEXT_MARKE). Note that both markers must be absolutely identical.
An example:
> # Demonstriert die Verwendung von Here-Dokumenten # Name : ahere1 cat <<TEXT_MARKE Heute ist `date`, Sie befinden sich im Verzeichnis `pwd`. Ihr aktuelles Terminal ist `echo -n $TERM` und Ihr Heimverzeichnis befindet sich in $HOME. TEXT_MARKE
The script in action:
> you@host > ./ahere1 Heute ist Fr Mär 4 00:36:37 CET 2005, Sie befinden sich im Verzeichnis /home/tot. Ihr aktuelles Terminal ist xterm und Ihr Heimverzeichnis befindet sich in /home/you.
In this example, the cat command reads the following lines until it hits the word TEXT_MARKE. Command substitutions (which may well be another shell script) and variables may be used inside the text; the shell takes care of the substitution. The advantage of here documents is that you can pass text directly to commands without having to store it in a file first. Especially for somewhat longer texts or error messages, this makes output considerably easier.
If you do not want a command substitution or a variable to be interpreted by the shell, simply put a backslash between the << operator and the marker:
> # Demonstriert die Verwendung von Here-Dokumenten # Name : ahere2 cat <<\TEXT_MARKE Heute ist `date`, Sie befinden sich im Verzeichnis `pwd`. Ihr aktuelles Terminal ist `echo -n $TERM` und Ihr Heimverzeichnis befindet sich in $HOME. TEXT_MARKE
The output then looks like this:
> you@host > ./ahere2 Heute ist `date`, Sie befinden sich im Verzeichnis `pwd`. Ihr aktuelles Terminal ist `echo -n $TERM` und Ihr Heimverzeichnis befindet sich in $HOME.
Besides using a backslash between the << operator and the marker, you can also place the marker in single quotes:
> kommando <<'MARKE' ... MARKE
If the text contains leading tab characters (for instance because the here document is indented inside a script), you can have them removed by putting a minus sign between the redirection operator and the marker (<<-MARKE). Note that only leading tabs are stripped, not spaces. A short sketch follows.
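A minimal sketch, assuming the indentation of the following lines consists of real tab characters (which <<- strips before cat sees the text):

> cat <<-MARKE
> 	Hello $LOGNAME,
> 	today is `date`.
> 	MARKE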
Incidentally, nothing prevents you from feeding standard input via << into a variable:
> # Demonstriert die Verwendung von Here-Dokumenten # Name : ahere3 count=`cat <<TEXT_MARKE \`ls -l | wc -l\` TEXT_MARKE` echo "Im Verzeichnis $HOME befinden sich $count Dateien"
The script in action:
> you@host > ./ahere3 Im Verzeichnis /home/tot befinden sich 40 Dateien
So that the variable »count« does not simply contain the text ls -l | wc -l but an actual command substitution is performed, you have to protect the inner backquotes with a backslash, because the output of the here document is itself already being captured by a command substitution.
Personally, I like to use this inline input redirection together with floating-point arithmetic and the bc command (see also Section 2.2.3). If you additionally wrap it in a command substitution, you can store the result of the calculation in a variable, just as you know it from expr. With the math library (enabled by the -l option) you also get trigonometric functions such as sine (s()) and cosine (c()). Here is the script, so nothing stands in the way of evaluating complex expressions:
> # Demonstriert die Verwendung von Here-Dokumenten # Name : ahere4 if [ $# == 0 ] then echo "usage: $0 Ausdruck" exit 1 fi # Option -l für die mathematische Bibliothek bc -l <<CALC $* quit CALC
The script in action:
> you@host > ./ahere4 123.12/5 24.62400000000000000000 you@host > ./ahere4 1234.3*2 2468.6 you@host > ./ahere4 '(12.34â8.12)/2' 2.11000000000000000000 you@host > ./ahere4 sqrt\(24\) 4.89897948556635619639 you@host > var=`./ahere4 1234.1234*2*1.2` you@host > echo $var 2961.89616
### 5.3.5 Using here documents with read
Just as with reading a file, you can of course combine read with here documents. This behaves as if the lines were read from a file. And so that not only the first line is read, you proceed exactly as with reading a file line by line. The syntax:
> while read line do ... done <<TEXT_MARKE line1 line2 ... line_n TEXT_MARKE
In practice, this inline redirection can be used with read like this:
> # Demonstriert die Verwendung von Here-Dokumenten und read # Name : ahere5 i=1 while read line do echo "$i. Zeile : $line" i=`expr $i + 1` done <<TEXT Eine Zeile `date` Homeverzeichnis $HOME Das Ende TEXT
The script in action:
> you@host > ./ahere5 1. Zeile : Eine Zeile 2. Zeile : Fr Mär 4 03:23:22 CET 2005 3. Zeile : Homeverzeichnis /home/tot 4. Zeile : Das Ende
### 5.3.6 The IFS variable
The shell environment variable IFS (Internal Field Separator) has already been mentioned several times in the last sections. IFS obviously plays a special role as a separator when reading and splitting data. If you try to print its content with echo, however, you will not learn much. The od command can help here: it lets you inspect the variable's content in hexadecimal or ASCII form.
> you@host > echo -n "$IFS" | od -x 0000000 0920 000a 0000003
Here 0000000 is the offset of the row containing the hexadecimal values that make up IFS. The -n option was used so that the newline produced by echo is not included. The values are:
> 09 20 00 0a
The value 00 has no meaning here. A look at an ASCII table tells you which special characters these hexadecimal values stand for: 0x09 is the tab, 0x20 the space and 0x0a the newline character.
The same works with od in ASCII format, except that the space is simply displayed as a blank:
> you@host > echo -n "$IFS" | od -c 0000000 \t \n 0000003
These default separators of the shell are used to split the input of the read command as well as the results of variable and command substitution. So if you use read like this, for example,
> you@host > read nachname vorname alter Wolf Jürgen 30 you@host > echo $vorname Jürgen you@host > echo $nachname Wolf you@host > echo $alter 30
it is thanks to the IFS variable that the individual inputs end up in the variables intended for them.
In the previous chapters it was often mentioned that you can adapt IFS to your own needs, in other words change the separator(s). If, for instance, you want a semicolon instead of a space to act as the separator for read, this is easily done:
> you@host > BACKIFS="$IFS" you@host > IFS=\; you@host > echo -n "$IFS" | od -c 0000000 ; 0000001 you@host > read nachname vorname alter Wolf;Jürgen;30 you@host > echo $vorname Jürgen you@host > echo $nachname Wolf you@host > echo $alter 30 you@host > IFS=$BACKIFS
First, IFS was backed up in BACKIFS. Creating a backup of IFS and restoring it afterwards is more important than it may appear at first glance: if you skip it, you can no longer rely on the current session, because some programs will start producing nonsense.
Next, the example assigns a semicolon (protected with a backslash) to IFS. Then the same example as before follows, only with a semicolon as the separator. At the end, the original value of IFS is restored (bash also lets you reset IFS directly, as the short sketch below shows).
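In bash (and ksh93) you can reset IFS to its default value without a backup variable; the $'...' quoting used here is not available in a plain POSIX sh:

> # restore the default separators: space, tab, newline
> IFS=$' \t\n'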
Of course, nothing prevents you from giving IFS more than one separator, for example:
> you@host > IFS=$BACKIFS you@host > IFS=:, you@host > read nachname vorname alter Wolf,Jürgen:30 you@host > echo $vorname Jürgen you@host > echo $nachname Wolf you@host > echo $alter 30 you@host > IFS=$BACKIFS
In this example, IFS was defined with the separators : and ,. If you do not want read to strip leading whitespace (if present) from the input, you only have to set IFS to »empty« with IFS=:
> you@host > IFS=$BACKIFS you@host > read var Hier sind führende Leerzeichen vorhanden you@host > echo $var Hier sind führende Leerzeichen vorhanden you@host > IFS= you@host > read var Hier sind führende Leerzeichen vorhanden you@host > echo $var Hier sind führende Leerzeichen vorhanden you@host > IFS=$BACKIFS
The same is of course also popular with variable substitution. In the following example, the variable is split at minus signs and distributed to the individual positional parameters with set.
> # Demonstriert die Verwendung von IFS # Name : aifs1 # IFS sichern BACKIFS="$IFS" # Minuszeichen als Trenner IFS=- counter=1 var="Wolf-Jürgen-30-Bayern" # var anhand von Trennzeichen in IFS auftrennen set $var # Ein Zugriff auf $1, $2, ... wäre hier auch möglich for daten in "$@" do echo "$counter. $daten" counter=`expr $counter + 1` done IFS=$BACKIFS
The script in action:
> you@host > ./aifs1 1. Wolf 2. Jürgen 3. 30 4. Bayern
If you rewrite the script slightly, you can use it to print all entries of the shell variable PATH line by line; they are separated by colons.
> # Demonstriert die Verwendung von IFS # Name : aifs2 # IFS sichern BACKIFS="$IFS" # Minuszeichen als Trenner IFS=: # PATH anhand von Trennzeichen in IFS auftrennen set $PATH for path in "$@" do echo "$path" done IFS=$BACKIFS
The script in action:
> you@host > ./aifs2 /home/tot/bin /usr/local/bin /usr/bin /usr/X11R6/bin /bin /usr/games /opt/gnome/bin /opt/kde3/bin /usr/lib/java/jre/bin
Of course, this could have been solved much more simply with the following line:
> you@host > echo $PATH | tr ':' '\n'
Last but not least, IFS is also used quite often together with a command substitution. If, for example, you want to look up a particular user in /etc/passwd with grep, the output usually looks like this:
> you@host > grep you /etc/passwd you:x:1001:100::/home/you:/bin/bash
Splitting this into its individual parts should no longer be difficult for you with the IFS variable.
> # Demonstriert die Verwendung von IFS # Name : aifs3 # IFS sichern BACKIFS="$IFS" # Minuszeichen als Trenner IFS=: if [ $# -lt 1 ] then echo "usage: $0 User" exit 1 fi # Ausgabe anhand von Trennzeichen in IFS auftrennen set `grep ^$1 /etc/passwd` echo "User : $1" echo "User-Nummer : $3" echo "Gruppen-Nummer : $4" echo "Home-Verzeichnis : $6" echo "Start-Shell : $7" IFS=$BACKIFS
The script in action:
> you@host > ./aifs3 you User : you User-Nummer : 1001 Gruppen-Nummer : 100 Home-Verzeichnis : /home/you Start-Shell : /bin/bash you@host > ./aifs3 tot User : tot User-Nummer : 1000 Gruppen-Nummer : 100 Home-Verzeichnis : /home/tot Start-Shell : /bin/ksh you@host > ./aifs3 root User : root User-Nummer : 0 Gruppen-Nummer : 0 Home-Verzeichnis : /root Start-Shell : /bin/ksh
### 5.3.7 Reading arrays with read (Bash and Korn shell only)
Arrays can be read just as easily as normal variables; you only have to keep track of the index. A single script should suffice here, since arrays were already covered in detail in Section 2.5.
> # Demonstriert die Verwendung von read mit Arrays # Name : ararray typeset -i i=0 while [ $i -lt 5 ] do printf "Eingabe machen : " read valarr[$i] i=i+1 done echo "Hier, die Eingaben ..." i=0 while [ $i -lt 5 ] do echo ${valarr[$i]} i=i+1 done
The script in action:
> you@host > ./ararray Eingabe machen : Testeingabe1 Eingabe machen : Noch eine Eingabe Eingabe machen : Test3 Eingabe machen : Test4 Eingabe machen : Und Ende Hier, die Eingaben ... Testeingabe1 Noch eine Eingabe Test3 Test4 Und Ende
Arrays are, of course, especially useful in combination with reading a file line by line with read (see Section 5.3.2):
> typeset -i i=0 while read vararray[$i] do i=i+1 done < datei_zum_einlesen
Afterwards you can conveniently access and process the content of individual lines via the array index, as the sketch below shows.
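A minimal sketch of such access, assuming the lines of datei_zum_einlesen were read into vararray as shown above:

> echo "first line : ${vararray[0]}"
> echo "third line : ${vararray[2]}"
> echo "lines read : $i"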
In bash you can also use read to split a string and store the pieces directly in an array. The individual words (again depending on the IFS separator) are then placed into separate elements of the array. For this you have to call read with the -a option:
> read -a array <<TEXTMARKE $variable TEXTMARKE
With this construct you can split whole strings into an array (always depending on which separators you use); a short sketch follows.
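A minimal bash sketch, assuming a colon-separated record; the IFS assignment in front of read applies only to this one command:

> variable="Wolf:Juergen:30:Bayern"
> IFS=: read -a felder <<TEXTMARKE
> $variable
> TEXTMARKE
> echo ${felder[0]}    # Wolf
> echo ${felder[3]}    # Bayern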
### 5.3.8 Shell-specific notes on read
Since read is a shell builtin, the individual shells offer a few different features.
# Integrating the prompt text into read (Bash)
If you use read with the -p option, you can integrate the prompt text directly into the read call. The syntax:
> read -p Ausgabetext var1 var2 var3 ... var_n
In practice this looks like this:
> you@host > read -p "Ihr Vorname : " vorname Ihr Vorname : Jürgen you@host > echo $vorname Jürgen
# Integrating the prompt text into read (Korn shell)
The Korn shell also lets you build a user prompt directly into the read command.
> read var?"Ausgabetext"
In practice:
> you@host > read name?"Ihr Name bitte: " Ihr Name bitte: <NAME> you@host > echo $name <NAME>
# Default variable for read (Bash and Korn shell only)
If you call read without a variable as a parameter, the input is stored in the default variable REPLY.
> you@host > read <NAME> you@host > echo $REPLY <NAME>
### 5.3.9 Reading a single character
A frequently asked question is how to read a single keystroke without pressing (ENTER). The problem is that a terminal is usually line-buffered. With line buffering, a script waiting for input with read does not continue until the line has been confirmed with (ENTER).
Since a terminal has countless properties, the simplest approach is to switch all these parameters off with the stty command and the raw option. This puts your terminal into raw mode, so you can handle »naked« characters. As soon as you have finished reading a character, you should definitely leave this mode again with -raw!
Once you have switched your terminal into raw mode, you need something that can read a single character. read is ruled out because of line buffering (in bash you could use read -n 1, as shown in Table 5.9, but the following works in any Bourne shell). One option is the dd command, which you would probably not think of at first. dd reads a file and writes its content with a selectable block size and various conversions; instead of a file, it can just as well copy standard input to standard output. It is sufficient to specify the number of records (count, here 1) and the block size in bytes (bs, also 1 here). Since dd prints its status messages on standard error, you can send that output to the bit bucket.
Here is a script that tirelessly waits for the keystroke »q« (for quit) to end a loop.
> # Demonstriert das Einlesen einzelner Zeichen # Name : areadchar echo "Bitte eine Taste betätigen â mit q beenden" char='' # Terminal in den "rohen" Modus schalten stty raw -echo # In Schleife überprüfen, ob 'q' gedrückt wurde while [ "$char" != "q" ] do char=`dd bs=1 count=1 2>/dev/null` done # Den "rohen" Modus wieder abschalten stty -raw echo echo "Die Taste war $char"
The script in action:
> you@host > ./areadchar Bitte eine Taste betätigen â mit q beenden (Q) Die Taste war q
### 5.3.10 Reading single keys that send escape sequences
In the previous example you saw how to read single keystrokes with the stty and dd commands. But as soon as you want to query keys such as (F1), (F2), (ESC) or the arrow keys, this no longer works. Best try it yourself with a stripped-down version of the script from above:
> # Demonstriert das Einlesen einzelner Zeichen # Name : areadchar2 echo "Bitte eine Taste betätigen" stty raw -echo char=`dd bs=1 count=1 2>/dev/null` stty -raw echo echo "Die Taste war $char"
The script in action:
> you@host > ./areadchar2 Bitte eine Taste betätigen (A) Die Taste war A you@host > ./areadchar2 Bitte eine Taste betätigen (ESC) Die Taste war you@host > ./areadchar2 Bitte eine Taste betätigen (F1) Die Taste war [A
The problem with arrow and function keys is that they return escape sequences. Worse, these escape sequences often differ in length and, to make matters worse, depend on the terminal type and additionally on national peculiarities (umlauts and the like).
A portable solution therefore seems almost impossible, unless you know a little C. Since I cannot assume that, I will give you a simple recipe here, which you can extend as needed.
# Determining the terminal's escape sequences
First you have to find out what the escape sequences look like for the terminal in question. Which terminal you are currently using can be determined with echo $TERM. In the next step you can use the infocmp command to obtain information about the keys of that terminal type.
> you@host > echo $TERM xterm you@host > infocmp # Reconstructed via infocmp from file: /usr/share/terminfo/x/xterm xterm|xterm terminal emulator (X Window System), am, bce, km, mc5i, mir, msgr, npc, xenl, colors#8, cols#80, it#8, lines#24, pairs#64, bel=^G, blink=\E[5m, bold=\E[1m, cbt=\E[Z, civis=\E[?25l, clear=\E[H\E[2J, cnorm=\E[?12l\E[?25h, cr=^M, csr=\E[%i%p1 %d;%p2 %dr, cub=\E[%p1 %dD, cub1=^H, cud=\E[%p1 %dB, cud1=^J, cuf=\E[%p1 %dC, cuf1=\E[C, cup=\E[%i%p1 %d;%p2 %dH, cuu=\E[%p1 %dA, cuu1=\E[A, cvvis=\E[?12;25h, dch=\E[%p1 %dP, dch1=\E[P, dl=\E[%p1 %dM, dl1=\E[M, ech=\E[%p1 %dX, ed=\E[J, el=\E[K, el1=\E[1K, enacs=\E(B\E)0, flash=\E[?5h$<100/>\E[?5l, home=\E[H, hpa=\E[%i%p1 %dG, ht=^I, hts=\EH, ich=\E[%p1 %d@, il=\E[%p1 %dL, il1=\E[L, ind=^J, indn=\E[%p1 %dS, invis=\E[8m, is2=\E[!p\E[?3;4l\E[4l\E>, kDC=\E[3;2~, kEND=\E[1;2F, kHOM=\E[1;2H, kIC=\E[2;2~, kLFT=\E[1;2D, kNXT=\E[6;2~, kPRV=\E[5;2~, kRIT=\E[1;2C, kb2=\EOE, kbs=\177, kcbt=\E[Z, kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC, kcuu1=\EOA, kdch1=\E[3~, kend=\EOF, kent=\EOM, kf1=\EOP, kf10=\E[21~, kf11=\E[23~, kf12=\E[24~, kf13=\EO2P, ...
At first this throws a lot of information at you that looks like character salad. On closer inspection, however, you will recognize old acquaintances from Section 5.2.4 on controlling the terminal with tput.
infocmp gives you entries such as
> cuu1=\E[A
back. Understandably, that alone is not very enlightening, so you should consult (or rather search) the manual page terminfo(5) to find out what cuu1 stands for:
> you@host > man 5 terminfo | grep cuu1 Formatiere terminfo(5) neu, bitte warten... cursor_up cuu1 up up one line key_up kcuu1 ku up-arrow key micro_up mcuu1 Zd Like cursor_up in kcuf1=\E[C, kcuu1=\E[A, kf1=\E[M, kf10=\E[V,
So this is the up-arrow key (cursor_up), which the xterm »represents« as \E[A. \E is the escape character and has the ASCII code 27. The C source code to check for the up-arrow key therefore looks as follows (the actual check, which you can extend as needed, is the strcmp against "[A"):
> /* getkey.c */ #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <termios.h> enum { ERROR=-1, SUCCESS, ONEBYTE }; /*Altes Terminal wiederherstellen*/ static struct termios BACKUP_TTY; /*Altes Terminal wiederherstellen*/ /* Eingabekanal wird so umgeändert, damit die Tasten einzeln */ /* abgefragt werden können */ int new_tty(int fd) { struct termios buffer; /* Wir fragen nach den Attributen des Terminals und übergeben */ /* diese dann an buffer. BACKUP_TTY dient bei Programmende zur */ /* Wiederherstellung der alten Attribute und bleibt unberührt. */ if((tcgetattr(fd, &BACKUP_TTY)) == ERROR) return ERROR; buffer = BACKUP_TTY; /* Lokale Flags werden gelöscht : */ /* ECHO = Zeichenausgabe auf Bildschirm */ /* ICANON = Zeilenorientierter Eingabemodus */ /* ISIG = Terminal Steuerzeichen */ buffer.c_lflag = buffer.c_lflag & ~(ECHO|ICANON|ISIG); /* VMIN=Anzahl der Bytes, die gelesen werden müssen, bevor */ /* read() zurückkehrt In unserem Beispiel 1Byte für 1 Zeichen */ buffer.c_cc[VMIN] = 1; /* Wir setzen jetzt die von uns gewünschten Attribute */ if((tcsetattr(fd, TCSAFLUSH, &buffer)) == ERROR) return ERROR; return SUCCESS; } /* Ursprüngliches Terminal wiederherstellen */ int restore_tty(int fd) { if((tcsetattr(fd, TCSAFLUSH, &BACKUP_TTY)) == ERROR) return ERROR; return SUCCESS; } int main(int argc, char **argv) { int rd; char c, buffer[10]; /* Setzen des neuen Modus */ if(new_tty(STDIN_FILENO) == ERROR) { printf("Fehler bei der Funktion new_tty()\n"); exit(EXIT_FAILURE); } /* Erste Zeichen lesen */ if(read(STDIN_FILENO, &c, 1) < ONEBYTE) { printf("Fehler bei read()\n"); restore_tty(STDIN_FILENO); exit(EXIT_FAILURE); } /* Haben wir ein ESC ('\E') gelesen? */ if(c == 27) { /* Jep eine Escape-Sequenz, wir wollen den Rest */ /* der Zeichen auslesen */ rd=read(STDIN_FILENO, buffer, 4); /*String terminieren*/ buffer[rd]='\0'; /* Hier erfolgt die Überprüfung des Tastendrucks*/ /* Wars der Pfeil-nach-oben \E[A */ if(strcmp(buffer,"[A") == SUCCESS) printf("Pfeil-nach-oben betätigt\n"); /* Nein, keine Escape-Sequenz */ } else { if((c < 32) || (c == 127)) printf("--> %d\n",c); /* Numerischen Wert ausgeben */ else printf("--> %c\n",c); /* Zeichen ausgeben */ } restore_tty(STDIN_FILENO); return EXIT_SUCCESS; }
If you compile the program and then run it, you get the following output:
> you@host > gcc -Wall -o getkey getkey.c you@host > ./getkey (1) --> 1 you@host > ./getkey (A) --> A you@host > ./getkey (Ë) Pfeil-nach-oben betätigt
You can proceed with other arrow and function keys just as you did in this example. The left-arrow key (\E[D), for instance, can be handled as follows:
> if(strcmp(buffer,"[D") == SUCCESS) printf("Pfeil-nach-links\n");
Instead of printing something, however, I recommend returning a value with return, for example:
> ... /* Hier erfolgt die Überprüfung des Tastendrucks*/ /* Wars der Pfeil-nach-oben \E[A */ if(strcmp(buffer,"[A") == SUCCESS) { restore_tty(STDIN_FILENO); return 10; /* Rückgabewert für unser Shellscript */ } /* Wars der Pfeil-nach-links */ if(strcmp(buffer,"[D") == SUCCESS) { restore_tty(STDIN_FILENO); return 11; /* Rückgabewert für unser Shellscript */ } ...
This way you can also use the example from a shell script by checking the variable $?. Here is the script:
> # Demonstriert das Einlesen einzelner Zeichen # Name : areadchar3 echo "Bitte eine Taste betätigen" ./getkey ret=$? if [ $ret -eq 0 ] then echo "Keine Pfeiltaste" fi if [ $ret -eq 10 ] then echo "Die Pfeil-nach-oben-Taste wars" fi if [ $ret -eq 11 ] then echo "Die Pfeil-nach-links-Taste wars" fi
Recompile the C program with the changed lines of code and run the shell script:
> you@host > ./areadchar3 Bitte eine Taste betätigen (d)--> d Keine Pfeiltaste you@host > ./areadchar3 Bitte eine Taste betätigen (Ë)Die Pfeil-nach-oben-Taste wars you@host > ./areadchar3 Bitte eine Taste betätigen (Ä)Die Pfeil-nach-links-Taste wars
With this C program you have effectively written your own little tool; it is best copied into a directory in your PATH so that you can fall back on it at any time and, of course, extend it.

Note   Of course, the topic has only been touched on here; going any deeper would make little sense, since I cannot also expect C knowledge from you. But you can already see that additional programming skills take you a good deal further: if a certain tool is missing, you simply write one yourself.

And another note   Got a taste for C programming? I can recommend my book »C von A bis Z«, or something on Linux/UNIX programming; there is also a book of mine on that topic (»Linux-Unix-Programmierung«). Both are published by Galileo Press.
### 5.3.11 Password input
If you want to read a password in your script without the input being echoed on the screen, you can again use stty. It is sufficient to switch the echo off and to re-enable it after the password has been entered.
> stty -echo # Passwort einlesen stty echo
A script as an example:
> # Demonstriert das Einlesen einzelner Zeichen # Name : anoecho printf "Bitte Eingabe machen: " # Ausgabe auf dem Bildschirm abschalten stty -echo read passwort # Ausgabe auf dem Bildschirm wieder einschalten stty echo printf "\nIhre Eingabe lautete : %s\n" $passwort
The script in action:
> you@host > ./anoecho Bitte Eingabe machen: Ihre Eingabe lautete: k3l5o6i8
## 5.4 Redirection with the exec command
With the exec command you can redirect all input and output of the commands in a shell script. The syntax:
> # Standardausgabe nach Ausgabedatei umlenken exec >Ausgabedatei # Standardausgabe nach Ausgabedatei umlenken # und ans Ende der Datei anfügen exec >>Ausgabedatei # Standardfehlerausgabe nach Ausgabedatei umlenken exec 2>Fehler_Ausgabedatei # Standardfehlerausgabe nach Ausgabedatei umlenken # und ans Ende der Datei anfügen exec 2>>Fehler_Ausgabedatei # Standardeingabe umlenken exec <Eingabedatei
A simple example:
> # Demonstriert eine Umlenkung mit exec # aexec1 # Wird noch auf dem Bildschirm ausgegeben echo "$0 wird ausgeführt" exec >ausgabe.txt # Alle Ausgaben ab hier in die Datei "ausgabe.txt" val=`ls -l | wc -l` echo "Im Verzeichnis $HOME befinden sich $val Dateien" echo "Hier der Inhalt: " ls -l
The script in action:
> you@host > ./aexec1 ./aexec1 wird ausgeführt you@host > cat ausgabe.txt Im Verzeichnis /home/tot befinden sich 44 Dateien Hier der Inhalt: insgesamt 1289 -rwxr--r-- 1 tot users 188 2005â02â24 04:31 acase1 -rw-r--r-- 1 tot users 277 2005â02â24 02:44 acase1~ -rwxr--r-- 1 tot users 314 2005â03â06 11:17 aecho1 ...
The script is easy to follow. The complete standard output is redirected here by
> exec >ausgabe.txt
into the file ausgabe.txt. Everything before this line is still printed on the screen as usual. If you also want any error messages that occur to be redirected to a file, you can extend exec as follows:
> exec >ausgabe.txt 2>fehlerausgabe.txt
Now any error messages are likewise written to a separate file named fehlerausgabe.txt.
Redirecting the input follows the same scheme. All subsequent input commands then get their data from the specified file.
> # Demonstriert eine Umlenkung mit exec # aexec2 # Alle Eingaben im Script werden hier von data.dat entnommen exec <data.dat printf "%-15s %-15s %-8s\n" "Nachname" "Vorname" "Telefon" printf "+%-15s+%-15s+%-8s\n" "--------" "-------" "-------" while read vorname nachname telefon do printf " %-15s %-15s %-8d\n" $nachname $vorname $telefon done
The script in action:
> you@host > cat data.dat <NAME> 1234 <NAME> 3213 Mike Katz 3213 Mike Mentzer 1343 you@host > ./aexec2 Nachname Vorname Telefon +-------- +------- +------- <NAME> 1234 <NAME> 3213 Katz Mike 3213 Mentzer Mike 1343
Please note, however, that each command continues at the position of the last read. If, for example, you broke out of the loop in the script »aexec2« after the first iteration and then started a second loop, that second loop would continue where the first one stopped; the slightly modified script »aexec3« therefore produces the same result as »aexec2«.
> # Demonstriert eine Umlenkung mit exec # aexec3 # Wird noch auf dem Bildschirm ausgegeben echo "$0 wird ausgeführt" # Alle Eingaben im Script werden hier von data.dat entnommen exec <data.dat printf "%-15s %-15s %-8s\n" "Nachname" "Vorname" "Telefon" printf "+%-15s+%-15s+%-8s\n" "--------" "-------" "-------" while read vorname nachname telefon do printf " %-15s %-15s %-8d\n" $nachname $vorname $telefon break # Hier wird testweise nach einem Durchlauf abgebrochen done while read vorname nachname telefon do printf " %-15s %-15s %-8d\n" $nachname $vorname $telefon done
The reason is the file descriptor, whose file table entry contains, among other things, the current read/write position. We will get to file descriptors in the next section.
The advantage of exec may not be obvious at first glance. But at the latest with somewhat longer scripts, in which many commands would need their output redirected, exec saves you a lot of typing: instead of appending the redirection operators for standard output and standard error to every single command line, one exec call redirects a file descriptor in a different direction for the rest of the script's execution.
## 5.5 File descriptors
The exec command is the foundation of this section. A file descriptor is a simple integer value that indexes a per-process table of open files maintained by the kernel. The values 0, 1 and 2 are predefined file descriptors referring to standard input (stdin), standard output (stdout) and standard error (stderr). These three descriptors are usually connected to the user's terminal (TTY) and can of course be redirected, which you have already done quite often.
So you have already worked with file descriptors, whether consciously or not. If you performed a redirection such as
ls -l > Ausgabedatei
then you effectively redirected channel 1, or more precisely file descriptor 1 for standard output, into that output file. The same applies to standard input with channel 0, more precisely file descriptor number 0:
write user <nachricht
In both cases you could also have used the alternative notation
ls -l 1> Ausgabedatei write user 0<nachricht
but for channels 0 and 1 this is optional and not required, because the system assumes these values when no digit is given. From channel 2, standard error, onwards the descriptor number must be stated, since the system would otherwise use channel 1 instead of channel 2. Nothing stops you, however, from using standard error (channel 2) instead of standard output (channel 1). You only have to tell the output command to send its standard output to channel 2:
you@host > echo "Hallo Welt auf stderr" >&2 Hallo Welt auf stderr
You have already done something similar quite often with 2>&1, where standard error (channel 2) is redirected to file descriptor 1, so that error output and standard output share the same destination. The syntax for reading from or writing to a file or file descriptor is therefore:
# Lesen aus einer Datei bzw. Filedeskriptor kommando <&fd # Schreiben in eine Datei bzw. Filedeskriptor kommando >&fd # Anhängen in eine Datei bzw. Filedeskriptor kommando >>&fd
### 5.5.1 Using a new file descriptor
Besides the standard channels you can of course create additional file descriptors. The numbers 3 to 9 are available as names. The exec command is used as follows (fd stands for a number between 3 and 9):
# Zusätzlicher Ausgabekanal exec fd> ziel # Zusätzlicher Ausgabekanal zum Anhängen exec fd>> ziel # Zusätzlicher Eingabekanal exec fd< ziel
In the following example we create a new output channel to the current console with the number 3:
you@host > exec 3> `tty` you@host > echo "Hallo neuer Kanal" >&3 Hallo neuer Kanal
When you no longer need a file descriptor, you should release it again. The exec command is used for this as well:
exec fd>&-
Applied to file descriptor 3, which you just created, this looks as follows:
you@host > exec 3>&- you@host > echo "Hallo neuer Kanal" >&3 bash: 3: Ungültiger Dateideskriptor
Note   Always keep in mind that a channel you create is also available to any subshells started afterwards.
Note   If you use exec >&- without specifying a channel, standard output is closed. The same applies to channels 0 and 2 with exec <&- (closes standard input) and exec 2>&- (closes standard error). A small sketch of saving and restoring standard output follows.
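A minimal sketch, assuming a bash- or ksh-compatible shell, that saves standard output in the spare descriptor 4, redirects it temporarily and then restores it (the file name log.txt is purely illustrative):

exec 4>&1              # duplicate stdout to descriptor 4
exec >log.txt          # from here on, stdout goes to log.txt
echo "this line ends up in log.txt"
exec 1>&4 4>&-         # restore stdout and release descriptor 4
echo "back on the terminal"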
It was already mentioned in the previous section that each command continues at the position of the last read/write access. In practice, this behaviour is exploited in three main use cases.
# Keeping files open
If, for example, you read twice from a file (or a command substitution) via a plain redirection, the same thing is read both times, for example:
you@host > who you tty2 Mar 6 14:09 tot :0 Mar 5 12:21 (console) you@host > who > user.dat you@host > read user1 <user.dat you@host > read user2 <user.dat you@host > echo $user1 you tty2 Mar 6 14:09 you@host > echo $user2 you tty2 Mar 6 14:09
It was probably not intended that the same user is read twice. If you use a new channel instead, the current position of the read pointer is preserved. The same with a new file descriptor:
you@host > who > user.dat you@host > exec 3< user.dat you@host > read user1 <&3 you@host > read user2 <&3 you@host > echo $user1 you tty2 Mar 6 14:09 you@host > echo $user2 tot :0 Mar 5 12:21 (console)
# Reading from several files
The principle is quite simple: you use several file descriptors, read the values line by line into variables (preferably in a loop) and then evaluate the data (an average, for instance).
... exec 3< file1 exec 4< file2 exec 5< file3 exec 6< file4 while true do read var1 <&3 read var2 <&4 read var3 <&5 read var4 <&6 # Hier die Variablen überprüfen und verarbeiten done
# Reading from standard input and a file
This is a common use case. If a file has been opened for reading and you also prompt the user at the same time, a plain read would draw from both the file and the user input, e.g.:
# Demonstriert eine Umlenkung mit exec # aexec4 exec 3< $1 while read line <&3 do echo $line printf "Eine weitere Zeile einlesen? [j/n] : " read [ "$REPLY" = "n" ] && break done
The script in action:
you@host > ./aexec4 zitat.txt Des Ruhmes Würdigkeit verliert an Wert, Eine weitere Zeile einlesen? [j/n] : n
Without a new file descriptor, the example just shown would start over with the first line of the file every time you press »j«. Best try it yourself.
Note   Of course, you can do more with a file descriptor than described here, but these are probably the main use cases. I already demonstrated another example when evaluating the errors of several commands in a pipeline (see Section 4.1.2).
### 5.5.2 The <> redirection
Besides the redirection operators you already know, there is one more, which is easiest to explain here in connection with file descriptors and the exec command. With this redirection, a file is opened for both reading and writing, and that is what makes it special: output continues exactly at the position where the read pointer currently is. The same applies to writing: all written data lands exactly where the write pointer currently is. One more peculiarity of writing: all data after the newly written data remains in place (the file is not truncated).
Here is a simple example demonstrating how to use a file descriptor opened with <> for reading and writing at the same time.
# Demonstriert eine Umlenkung mit exec und <> # aexec5 exec 3<> $1 while read line <&3 do echo $line printf "Hier eine neue Zeile einfügen? [j/n] : " read [ "$REPLY" = "j" ] && break done printf "Bitte hier die neue Zeile eingeben : " read echo $REPLY >&3
The script in action:
you@host > ./aexec5 zitat.txt Des Ruhmes Würdigkeit verliert an Wert, Hier eine neue Zeile einfügen? [j/n] : j Bitte hier die neue Zeile eingeben : Hier eine neue Zeile you@host > cat zitat.txt Des Ruhmes Würdigkeit verliert an Wert, Hier eine neue Zeile der selbst mit Lob sich ehrt.
## 5.6 Named Pipes
A rather rarely used concept available to you on the shell command line is the named pipe. Named pipes are very similar to ordinary pipes, but they have one decisive advantage: an already existing pipe can be used by arbitrary processes. Put simply, a named pipe is a construct that makes the output of one process available as the input of another. With ordinary pipes, the processes had to descend from the same parent process (in order to get at the file descriptors).
To create a named pipe, the command mknod or mkfifo is used:
mknod name p mkfifo name
Since the mknod command can create several kinds of files, the desired type has to be indicated by a letter; for a named pipe this is p. Named pipes are often also called FIFOs because they work according to the first-in-first-out principle. However, ordinary (temporary) pipes do so as well, so this is not really a distinguishing name.
you@host > mknod apipe p you@host > ls -l apipe prw-r--r-- 1 tot users 0 2005-03-07 06:10 apipe
In the file type column the named pipe shows up with a p. You can put something into the named pipe with:
you@host > echo "<NAME>" > apipe
Another process could now read the pipe back out as follows:
tot@linux > tail -f apipe Hallo User
As you will certainly notice here, the first process, which writes into the pipe, can only continue once a second process reads from this pipe, that is, once another process opens the read side of the pipe.
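You can observe this blocking in a single terminal by starting the reader in the background first. A small sketch (the pipe name apipe and the text are taken from the example above):
# Start a reader in the background, then write; echo returns immediately
cat apipe &
echo "Hallo User" > apipe
wait    # wait for the background cat to finish
# Without the background reader, the echo would block until someone opens the pipe for reading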
The named pipe naturally continues to exist until it is deleted like an ordinary file. For other users, and not just other processes, to be able to read from and write to a named pipe, you have to adjust the access permissions, because by default only the owner has write permission (depending on umask). Unlike an ordinary pipe, which is created in a single step (one inode entry, two file objects and the memory page) and is immediately ready for reading and writing, named pipes are opened and closed by the processes in user space. The kernel ensures that a pipe is open for reading before anything is written into it, and that a pipe is open for writing before it is also opened for reading. At the kernel level this is roughly the same as creating a device file.
It is possible for several processes to use the same named pipe, but data can only be exchanged in a 1:1 relationship. That is, data that one process writes into the pipe can only be read by one process. The lines are stored in the order of arrival, and the oldest line is delivered to the next reading process (the FIFO principle).
Also note that a pipe closes as soon as one of the processes (including echo and read) has finished. This becomes noticeable particularly when you intend to push several lines into a pipe from a script or to read several lines from it. To work around this problem, you have to use a redirection at the end of a loop, for example:
# Demonstriert das Lesen mehrerer Zeilen aus einer Named Pipe # areadpipe while read zeile do echo $zeile done < apipe
The counterpart:
# Demonstriert das Schreiben mehrerer Zeilen in eine Named Pipe # awritepipe while true do echo "Ein paar Zeilen für die Named Pipe" echo "Die zweite Zeile soll auch ankommen" echo "Und noch eine letzte Zeile" break done > apipe
The script in action:
you@host > ./awritepipe --- [Andere Konsole] --- tot@linux > ./areadpipe Ein paar Zeilen für die Named Pipe Die zweite Zeile soll auch ankommen Und noch eine letzte Zeile
## 5.7 Menus with select (Bash and Korn shell only)
With the select statement, Bash and the Korn shell provide you with a convenient construct for creating selection menus. The command works much like the for loop, except that the keyword select appears instead of for.
select variable in menu_items do command1 ... commandN done
select handles displaying the menu items and their selection automatically and returns the selected item. As with for, you can specify the menu items either as a list of values or directly as the individual items. Values and menu items should each be separated by at least one space (for values: depending on IFS). Of course you can also use the command-line arguments ($*), where that makes sense. After select is invoked, a list of the menu items is printed to standard error (!) with a running numbering (1 to n). You then make your selection via this numbering. The selected menu item is stored in »variable«. If you also need the number that was entered, you will find it in the variable REPLY. In the case of an invalid input, »variable« is set to empty.
Once the commands between do and done have been executed after a selection, the prompt of the select statement is displayed again. select is therefore an endless loop that you can only leave with the key combination (Ctrl)+(D) (EOF) or with the commands exit or break inside do and done (see Figure 5.2).
A simple script to illustrate this:
# Demonstriert die select-Anweisung # aselect1 select auswahl in Punkt1 Punkt2 Punkt3 Punkt4 do echo "Ihre Auswahl war : $auswahl" done
The script in action:
you@host > ./aselect1 1) Punkt1 2) Punkt2 3) Punkt3 4) Punkt4 #? 1 Ihre Auswahl war : Punkt1 #? 2 Ihre Auswahl war : Punkt2 #? 3 Ihre Auswahl war : Punkt3 #? 4 Ihre Auswahl war : Punkt4 #? 0 Ihre Auswahl war : #? (Strg)+(D) you@host >
The prompt displayed here is always the content of the variable PS3, which in most shells is preset to #?. If you want to use a different prompt, you only have to assign this variable a new value.
Of course, the script »aselect1« is only suitable for demonstration purposes. With a real menu you naturally want to evaluate the selection and react accordingly. What could be better suited for that than a case statement?
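A minimal sketch combining the two points just mentioned, PS3 as the prompt and REPLY as the entered number (the menu items are made up here):
PS3="Bitte eine Nummer eingeben: "
select eintrag in Punkt1 Punkt2 Punkt3
do
    echo "Nummer: $REPLY, Eintrag: $eintrag"
    break
done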
# Demonstriert die select-Anweisung # aselect2 # Ein neues Auswahl-Prompt PS3="Ihre Wahl : " select auswahl in Punkt1 Punkt2 Punkt3 Punkt4 Ende do case "$auswahl" in Ende) echo "Ende"; break ;; "") echo "Ungültige Auswahl" ;; *) echo "Sie haben $auswahl gewählt" esac done
The script in action:
you@host > ./aselect2 1) Punkt1 2) Punkt2 3) Punkt3 4) Punkt4 5) Ende Ihre Wahl : 1 Sie haben Punkt1 gewählt Ihre Wahl : 4 Sie haben Punkt4 gewählt Ihre Wahl : 5 Ende
As with the for loop, you can also have select generate the list of menu items with the wildcard characters *, ? and [] (filename generation) or with a command substitution. Here are a few examples:
# Listet alle Dateien mit der Endung *.c auf select var in *.c # Listet alle Dateien mit der Endung *.c, *.sh und *.txt auf select var in *.c *.sh *.txt # Listet alle aktiven User-Namen auf select var in `who | cut -c1-8 | grep -v $LOGNAME` # Listet alle Dateien im aktuellen Verzeichnis auf, # die mit a beginnen select var in `ls a*`
Here is a relatively practical example. Using the wildcard character *, all files in the current directory are listed. You are then asked which file you want to edit; it is subsequently opened with an editor of your choice (here vi) so that you can work on it. This script can of course be extended considerably:
# Demonstriert die select-Anweisung # aselect3 # Ein neues Auswahl-Prompt PS3="Datei zum Editieren auswählen : " # Hier den Editor Ihrer Wahl angeben EDIT=vi select auswahl in * Ende do case "$auswahl" in Ende) echo "Ende" ; break ;; "") echo "$REPLY: Ungültige Auswahl" ;; *) [ -d "$auswahl" ] && \ echo "Verzeichnis kann nicht editiert werden" &&\ continue $EDIT $auswahl break ;; esac done
The script in action:
you@host > ./aselect3 1) 1 27) awritepipe~ 2) acase1 28) bestellung.txt 3) acase1~ 29) bin 4) aecho1 30) data.dat ... ... 23) avalue~ 49) user.dat 24) awhile1 50) zitat.txt 25) awhile1~ 51) zitat.txt~ 26) awritepipe 52) Ende Datei zum Editieren auswählen : 2
And last but not least, you can also create submenus with select, i.e. nest several select statements. If you want to use a different prompt there, you have to assign PS3 a new value inside the nesting. Here is an example of such a submenu:
# Demonstriert die select-Anweisung # aselect4 # Ein neues Auswahl-Prompt PS3="Bitte wählen : " select auswahl in A B C Ende do case "$auswahl" in Ende) echo "Ende" ; break ;; "") echo "$REPLY: Ungültige Auswahl" ;; A) select auswahla in A1 A2 A3 do echo "Auswahl $auswahla" done ;; *) echo "Ihre Auswahl war : $auswahl" ;; esac done
The script in action:
you@host > ./aselect4 1) A 2) B 3) C 4) Ende Bitte wählen : 1 1) A1 2) A2 3) A3 Bitte wählen : 2 Auswahl A2 Bitte wählen : ...
## 5.8 dialog and Xdialog
With dialog and Xdialog you can build graphical (or semi-graphical) dialogs into your shell scripts. The tools provide a simple way to display (semi-)graphical dialog windows on the screen, so that user prompts in scripts become clearer and easier to design. The return values of the dialogs then determine how the shell script proceeds.
The tools work in principle like the usual Linux/UNIX commands and are controlled via command-line parameters and standard input, which influence appearance and content. The results (including errors) of the user's actions are returned via standard output or standard error and the exit status, and can be processed further by the script that started the dialog.
dialog runs exclusively in a text console and provides a kind of semi-graphical interface (based on ncurses). The fact that dialog has no real graphical interface (and thus does not need a running X server) and is therefore quite undemanding makes it a convenient tool for remote maintenance via SSH. dialog is controlled via the keyboard, but mouse control is also possible. dialog is usually included with your distribution and may already be installed as part of a standard installation.
Xdialog, in contrast to dialog, only runs when a running X server is available. Since this is rarely the case for remote maintenance, Xdialog is better suited for shell scripts and maintenance work on your desktop machine at home. Xdialog uses GTK+ as its toolkit and usually has to be obtained separately (gtk-devel greater than 1.2 must also be installed). Naturally, Xdialog can be controlled with the mouse, as is usual for a GUI.
Note   If you do not find one (or both) of the packages on your machine, here are the sources for the respective tools: http://hightek.org/dialog/ (dialog) and http://xdialog.dyns.net/ (Xdialog). You will probably have to compile the source code yourself, though (read INSTALL if necessary).
The great thing about dialog and Xdialog is that compatibility was a design goal when Xdialog was developed. You can convert scripts written with dialog into a fully graphical interface with Xdialog with minimal effort: you only have to change the command invocation from dialog to Xdialog. It should be mentioned, though, that Xdialog offers a few more features than dialog.
The general syntax of dialog and Xdialog looks like this:
[X]dialog [options] [box type] "text" [width] [height]
### 5.8.1 Yes/no question --yesno
With --yesno you can ask a question that is answered with »yes« or »no«. If the user answers »yes«, dialog returns the value 0 ($? = 0); for »no«, 1 is returned. The syntax:
dialog --yesno [text] [height] [width]
An example script:
# Demonstriert dialog --yesno # Name : dialog1 dialog --yesno "Möchten Sie wirklich abbrechen?" 0 0 # 0=ja; 1=nein antwort=$? # Bildschirm löschen clear # Ausgabe auf die Konsole if [ $antwort = 0 ] then echo "Die Antwort war JA." else echo "Die Antwort war NEIN." fi
The script in action (see Figure 5.3):
Now you will surely want to test whether the statements about dialog and Xdialog are true and you really only have to put a (capital) »X« in front of dialog to get a real graphical interface. Change the line
dialog --yesno "Möchten Sie wirklich abbrechen?" 0 0
in the script »dialog1« to
Xdialog --yesno "Möchten Sie wirklich abbrechen?" 0 0
and you get the result shown in Figure 5.4 (provided, of course, that you have Xdialog installed):
Note   If 0 is given for the height or width, the corresponding dimensions are automatically adjusted to the text.
### 5.8.2 Message box with confirmation --msgbox
With --msgbox an information box appears with arbitrary text that the user has to acknowledge with »OK«. The script is paused until the user presses the OK button. No return value is returned here.
[X]dialog --msgbox [text] [height] [width]
An example script:
# Demonstriert dialog --msgbox # Name : dialog2 dialog --yesno "Möchten Sie wirklich abbrechen?" 0 0 # 0=ja; 1=nein antwort=$? # Dialog-Bildschirm löschen dialog --clear # Ausgabe auf die Konsole if [ $antwort = 0 ] then dialog --msgbox "Die Antwort war JA." 5 40 else dialog --msgbox "Die Antwort war NEIN." 5 40 fi # Bildschirm löschen clear
The script in action (see Figure 5.5):
### 5.8.3 Notice window without confirmation --infobox
--infobox is equivalent to the --msgbox dialog just described, with the difference that this dialog does not wait for the user's confirmation, so the shell script continues running in the background.
[X]dialog --infobox [text] [height] [width]
A script as a demonstration:
# Demonstriert dialog --msgbox # Name : dialog3 dialog --yesno "Möchten Sie wirklich alles löschen?" 0 0 # 0=ja; 1=nein antwort=$? # Dialog-Bildschirm löschen dialog --clear # Ausgabe auf die Konsole if [ $antwort = 0 ] then dialog --infobox "Dieser Vorgang kann ein wenig dauern" 5 50 # Hier die Kommandos zur Ausführung zum Löschen sleep 5 # ... wir warten einfach 5 Sekunden dialog --clear # Dialog-Bildschirm löschen dialog --msgbox "Done! Alle Löschvorgänge ausgeführt" 5 50 fi # Bildschirm löschen clear
### 5.8.4 Text input line --inputbox
A text input line created with --inputbox lets the user enter text. Optionally you can also provide a default text. The output is then written to standard error.
[X]dialog --inputbox [text] [height] [width] [[default text]]
A script for demonstration purposes:
# Demonstriert dialog --inputbox # Name : dialog4 name=`dialog --inputbox "Wie heißen Sie?" 0 0 "Jürgen" \ 3>&1 1>&2 2>&3` # Dialog-Bildschirm löschen dialog --clear dialog --msgbox "Hallo $name, Willkommen bei $HOST!" 5 50 # Bildschirm löschen clear
The script in action (see Figure 5.6):
You may be wondering about the addition 3>&1 1>&2 2>&3 in the command substitution when the dialog command is processed in the script. This is one of the drawbacks of dialog: the result is always written to standard error instead of standard output. To get the dialog output into a variable for further processing, you have to proceed exactly like this (see also Section 5.5).
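The three redirections simply swap standard output and standard error for the duration of the command, so that the command substitution captures what dialog writes to stderr while the dialog itself is still drawn on the terminal. A commented sketch of the same pattern (the question text is arbitrary):
# Inside the backquotes, stdout is captured by the command substitution.
# 3>&1 : fd 3 now also points into the command substitution
# 1>&2 : stdout is redirected to where stderr goes (the screen), so the dialog is drawn
# 2>&3 : stderr (dialog's result) ends up in the command substitution via fd 3
name=`dialog --inputbox "Wie heißen Sie?" 0 0 "" 3>&1 1>&2 2>&3`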
### 5.8.5 A simple file viewer --textbox
This dialog displays the contents of a file passed as a parameter. If the text contains more lines or columns than can be displayed, it can be scrolled with the arrow keys.
[X]dialog --textbox [file] [height] [width]
A script for demonstration:
# Demonstriert dialog --textbox # Name : dialog5 # Zeigt den Inhalt des eigenen Quelltextes an dialog --textbox "$0" 0 0 # Dialog-Bildschirm löschen dialog --clear # Bildschirm löschen clear
The script in action (see Figure 5.7):
### 5.8.6 A menu --menu
This dialog gives you a real alternative to select. A list of entries is displayed (with a scrollbar if necessary), of which exactly one can be selected. The corresponding entry (its tag) is then written to the standard error channel; on cancellation, an empty string is returned instead.
[X]dialog --menu [text] [height] [width] [menu height] [tag1] \ [entry1] ...
The »text« is placed above the selection box. »Width« and »height« speak for themselves, although the »width« deserves a closer look, since longer menu entries are simply cut off. Once you have made a selection, pressing »OK« returns the selected entry.
A script for demonstration:
# Demonstriert dialog --menu # Name : dialog6 os=`dialog --menu "Betriebssystem wählen" 0 0 0 \ "Linux" "" "BSD" "" "Solaris" "" 3>&1 1>&2 2>&3` dialog --clear dialog --yesno "Bestätigen Sie Ihre Auswahl: $os" 0 0 dialog --clear clear
The script in action (see Figure 5.8):
### 5.8.7 Checklist --checklist
This is a list of entries of which you can mark (check) as many as you like. As with --menu, the tags of all selected entries are written to the standard error channel.
[X]dialog --checklist [text] [height] [width] [list height] \ [tag1] [entry1] [status1] ...
Here, too, the »text« is displayed above the list. »Height«, »width« and »list height« are again self-explanatory; you should always choose sensible values for them. If the first character of a tag is unique, you can also jump directly to that entry by pressing the corresponding key. As »status« you can preset an entry as selected (»on«) or deselected (»off«).
The following script demonstrates the checklist:
# Demonstriert dialog --checklist # Name : dialog7 pizza=`dialog --checklist "Pizza mit ..." 0 0 4 \ Käse "" on\ Salami "" off\ Schinken "" off\ Thunfisch "" off 3>&1 1>&2 2>&3` dialog --clear clear echo "Ihre Bestellung: Pizza mit $pizza"
The script in action (see Figure 5.9):
### 5.8.8 Radio buttons --radiolist
In contrast to --checklist, only one option from the list of entries can be marked with the space bar. Apart from that, this dialog behaves exactly like --checklist.
[X]dialog --radiolist [text] [height] [width] [list height] \ [tag1] [entry1] [status1] ...
A script for demonstration:
# Demonstriert dialog --radiolist # Name : dialog8 pizza=`dialog --radiolist "Pizza mit ..." 0 0 3 \ Salami "" off\ Schinken "" off\ Thunfisch "" off 3>&1 1>&2 2>&3` dialog --clear clear echo "Ihre Bestellung: Pizza mit $pizza"
The script in action (see Figure 5.10):
### 5.8.9 Displaying progress --gauge
With this you can build in a progress indicator, for example to show how far the process of copying files has progressed.
[X]dialog --gauge [text] [height] [width] [percent]
The text is again displayed above the progress bar. The starting value of the bar is given via percent. To update the display, this dialog expects further values from standard input; only when EOF is reached is gauge finished. In my opinion, a progress indicator is never very precise anyway; it serves mainly to show the user that something is still happening on the system. ;-)
A script for demonstration:
# Demonstriert dialog --gauge # Name : dialog9 DIALOG=dialog ( echo "10" ; sleep 1 echo "XXX" ; echo "Alle Daten werden gesichert"; echo "XXX" echo "20" ; sleep 1 echo "50" ; sleep 1 echo "XXX" ; echo "Alle Daten werden archiviert"; echo "XXX" echo "75" ; sleep 1 echo "XXX" ; echo "Daten werden ins Webverzeichnis hochgeladen"; echo "XXX" echo "100" ; sleep 3 ) | $DIALOG --title "Fortschrittszustand" --gauge "Starte Backup-Script" 8 30 $DIALOG --clear $DIALOG --msgbox "Arbeit erfolgreich beendet ..." 0 0 $DIALOG --clear clear
The script in action (see Figure 5.11):
Tip   While writing the last script it occurred to me that it is rather cumbersome to keep renaming dialog to Xdialog. A global variable is much better suited for this. If your script is meant for the console, write DIALOG=dialog; for a dialog with the X server, DIALOG=Xdialog.
Note   For scripts that run as root you should avoid Xdialog if possible, since root can only connect to the X server if the server has been opened up or root is logged in graphically, both of which should be avoided.
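Instead of fixed sleep values, the percentages can of course be computed from real work. A hedged sketch that copies all files from one directory to another and feeds the current percentage to --gauge (the directories are made-up placeholders, and file names without spaces are assumed):
# Sketch: copy files and report real progress to --gauge
SRC=/tmp/source; DEST=/tmp/backup
files=`ls "$SRC"`
total=`echo "$files" | wc -l`
n=0
( for f in $files
  do
      cp "$SRC/$f" "$DEST/"
      n=`expr $n + 1`
      echo `expr $n \* 100 / $total`    # percentage for the gauge
  done
) | dialog --gauge "Kopiere Dateien ..." 8 40 0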
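A small sketch of this idea that picks the variant automatically: if a DISPLAY is set and Xdialog is found, use it, otherwise fall back to dialog (purely an illustration, not taken from the book):
# Choose the dialog flavour once, use $DIALOG everywhere afterwards
if [ -n "$DISPLAY" ] && command -v Xdialog >/dev/null 2>&1
then
    DIALOG=Xdialog
else
    DIALOG=dialog
fi
$DIALOG --msgbox "Hallo" 0 0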
### 5.8.10 Changing appearance and output
There are a few more options with which you can influence the appearance and the output. Table 5.10 gives a brief overview.
Table 5.10   Further dialog options
Option | Description |
| --- | --- |
--title | Sets a title line for a dialog (caption at the top edge) |
--backtitle | Sets a title line at the edge of the screen (often the name of the script the dialog belongs to) |
--clear | Clears the dialog screen |
### 5.8.11 A small example
dialog and Xdialog can be used in so many places that any single example covers very little. Nevertheless, I want to show you a small one. The find command is to be used in such a way that even an inexperienced Linux/UNIX user reaches the goal immediately. The example uses a menu in which the user can search for files by name, user ID, size or permissions. In the input box that follows, you specify the search pattern, and at the end find does its work. You could also put the output of find into a dialog textbox, but with somewhat longer output the dialog box gives up and prints an error such as »argument list too long«. Here is the example script:
# Demonstriert dialog # Name : dialog10 myfind=`dialog --menu \ "Suchen nach Dateien - Suchkriterium auswählen" 0 0 0 \ "Dateinamen" "" \ "Benutzerkennung" "" \ "Grösse" "" \ "Zugriffsrechte" "" \ "Ende" "" 3>&1 1>&2 2>&3` dialog --clear case "$myfind" in Dateinamen) search=`dialog --inputbox \ "Dateinamen eingeben" 0 0 "" 3>&1 1>&2 2>&3` command="-name $search" ;; Benutzerkennung) kennung=`dialog --inputbox \ "Benutzerkennung eingeben" 0 0 "" 3>&1 1>&2 2>&3` command="-user $kennung" ;; Grösse) bsize=`dialog --inputbox \ "Dateigrösse (in block size) eingeben" 0 0 "" \ 3>&1 1>&2 2>&3` command="-size $bsize" ;; Zugriffsrechte) permission=`dialog --inputbox \ "Zugriffsrechte (oktal) eingeben" 0 0 "" 3>&1 1>&2 2>&3` command="-perm $permission" ;; Ende) dialog --clear; clear; exit 0 ;; esac find /home $command -print 2>/dev/null
The script in action (see Figure 5.12):
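If you do want to show the result in a dialog box despite the length problem mentioned above, one workaround is not to pass the output as an argument at all, but to write it to a temporary file and display that file with --textbox. A sketch (the temp file name is arbitrary; $command is the variable set by the script above):
# Write the result to a temporary file and page it with --textbox
tmp=/tmp/findresult.$$
find /home $command -print 2>/dev/null > "$tmp"
dialog --textbox "$tmp" 0 0
dialog --clear
rm -f "$tmp"
clear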
### 5.8.12 Summary
There is certainly more that could be said about dialog and especially Xdialog, but you now have the basics. Xdialog naturally offers a few more widgets. If you download the source package, you will find a number of interesting scripts in the samples directory that demonstrate additional Xdialog features.
Another widely used dialog interface is lxdialog. It is a modified version of dialog designed specifically for configuring the kernel. lxdialog is therefore used mainly by kernel developers and, unlike dialog, also supports submenus and dependencies between contents as well as preselection of other items. Unfortunately, lxdialog is very poorly documented, so it takes a lot of motivation to get to grips with it.
## 5.9 gnuplot – Visualizing Measurement Data
Note   Admittedly, gnuplot does not really have anything to do with terminal input/output, but it is a special »kind« of output after all, which is why I have included this section here. If you have always wanted to get to know gnuplot a little better, you can work through the next few pages at your leisure. Otherwise you can simply skip them and look them up when needed. The following section is therefore not a prerequisite for what comes later, but merely a simple add-on.
gnuplot is a command-line plotting program that by now is considered the tool for interactive scientific visualization of measurement data on Linux/UNIX. Both curves of x/y data pairs and 3-D objects can be produced with gnuplot. If you want a quick first impression, you can look at some demos at http://gnuplot.sourceforge.net/demo/.
Another advantage of gnuplot over other programs is that it is available on almost every kind of computer architecture. Besides Linux, gnuplot is available for all flavors of UNIX (IRIX, HP-UX, Solaris and Digital Unix), the BSD variants, and also for the Microsoft (wgnuplot) and Macintosh worlds. Even if here and there (especially on UNIX) no package is available, you can still compile the source code yourself. And another advantage for some: gnuplot is free of charge (e.g. via www.gnu.org; version 4.1 was current when this book was written).
Note   If you have to compile gnuplot yourself, you will often need one library or another, for example if you want GIF or PNG graphics as output formats. Problems can arise here, because the versions of the individual libraries have to match as well.
### 5.9.1 What is gnuplot used for?
The range of applications is enormous. Without going into gnuplot's specialist fields, it can be used wherever you want to display functions or measurement data in a two-dimensional Cartesian coordinate system or in three-dimensional space. Surfaces can be drawn as a wire-frame model in 3-D space or displayed in an x/y plane.
Your primary field of use will most likely be the two-dimensional display of statistics. Numerous styles are available for this, such as lines, points, bars, boxes, grids, lines-and-points and so on. Pie charts are theoretically possible too, but not exactly gnuplot's strength. The individual curves and axes can also be labeled with markers, titles or date and time information. Of course you can also use gnuplot for things like polynomials (including interpolation) and trigonometric functions, and last but not least, gnuplot also knows polar coordinate systems.
gnuplot can also be used for 3-D interpolation (gridding) between unevenly spaced data points using a simple weighting method. It must be said, however, that there is software that handles this particular aspect somewhat better. Even so, hardly anyone will hit gnuplot's limits any time soon.
### 5.9.2 Starting gnuplot
Since gnuplot is an interactive command-line tool (with its own prompt, gnuplot>), you can use it interactively or from within your shell script. gnuplot is invoked by its name and then waits at its prompt for plot commands, the definition of a function or a setting for formatting an axis, for example. You leave gnuplot with quit or exit. You get a (comprehensive) help system by typing help at the gnuplot prompt.
you@host > gnuplot G N U P L O T Version 4.0 patchlevel 0 last modified Thu Dec 12 13:00:00 GMT 2002 System: Linux 2.6.4-52-default ... Terminal type set to 'x11' gnuplot> quit you@host >
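Since this book is about shell scripting, here is a minimal sketch of how gnuplot can be driven non-interactively from a script using a here document (the output file name is made up; PNG output is assumed to be available):
#!/bin/sh
# Plot a sine curve into a PNG file without any interaction
gnuplot <<EOF
set terminal png
set output "sinus.png"
set xlabel "X-ACHSE"
set ylabel "Y-ACHSE"
plot sin(x)
EOF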
### 5.9.3 The plot command
Plotting is done with the command plot (for a 2-D view) or splot (for a 3-D view). gnuplot then draws a graph from a source, for example a function or numerical data stored in a file. The simplest example (see Figure 5.13):
gnuplot> plot sin(x)
Here a simple 2-D graph with x/y coordinates was plotted. No range for the x/y coordinates was specified in the example; in that case gnuplot chooses it automatically. The default for the x axis is -10 to +10, and the y axis is determined automatically. If you want to set the x axis to the range 0 to 4, for example, you can do it like this (see Figure 5.14):
gnuplot> plot [0:4] sin(x)
### 5.9.4 Variables and parameters for gnuplot
In the example above you saw how to change the range of the x axis in gnuplot. However, that form was not exactly self-explanatory, and you will also want to label the axes. gnuplot again offers you plenty of variables that you can change. You can (and should) get an overview of them with help set:
gnuplot> help set ... Subtopics available for set: angles arrow autoscale bar bmargin border boxwidth clabel clip cntrparam contour data dgrid3d dummy encoding format function grid hidden3d isosamples key label linestyle lmargin locale logscale mapping margin missing multiplot mx2tics mxtics my2tics mytics mztics noarrow noautoscale noborder noclabel noclip ...
Note   A simple (Enter) takes you back out of the help system prompt (or up to the next higher help menu).
You will find a lot of variables there that you can adjust to your needs at any time. For now we are interested in the variables xlabel and ylabel for labeling as well as xrange and yrange for the range of the individual axes. You can set all these variables with the set command:
set variable value
Coming back to the first plot example, you can change the individual values just mentioned as follows:
gnuplot> set xlabel "X-ACHSE" gnuplot> set ylabel "Y-ACHSE" gnuplot> set xrange [0:4] gnuplot> set yrange [-1:1] gnuplot> plot sin(x)
### 5.9.5 Redirecting gnuplot's output
So far in the examples, gnuplot's output always went to a window that opens separately (terminal is usually set to »x11«). You can of course redirect this target, for example to a PostScript file or to a (PostScript) printer. gnuplot ships with a lot of drivers that are platform-independent. You select the output with set terminal foo, which converts the output to the foo format. You can query which »terminals« your gnuplot supports for display and output with a plain set terminal:
gnuplot> set terminal ... kyo Kyocera Laser Printer with Courier font latex LaTeX picture environment mf Metafont plotting standard mif Frame maker MIF 3.00 format mp MetaPost plotting standard nec_cp6 NEC printer CP6, Epson LQ-800 [monocrome color draft] okidata OKIDATA 320/321 Standard pbm Portable bitmap [small medium large] pcl5 HP Designjet 750C, HP Laserjet III/IV, etc. png Portable Network Graphics [small medium large] postscript PostScript graphics language prescribe Prescribe - for the Kyocera Laser Printer pslatex LaTeX picture environment with PostScript \specials pstex plain TeX with PostScript \specials pstricks LaTeX picture environment with PSTricks macros ...
If you want the output in PostScript format instead of an x11 window, you only have to change the terminal type:
gnuplot> set terminal postscript Terminal type set to 'postscript'
The most common output devices (terminals) are the formats »postscript«, »latex« and »windows«, where »windows« in turn stands for output to the screen (similar to »x11«). Note, however, that if you specify another format (e.g. »postscript«), an output target must be defined, because otherwise the complete data stream ends up on your screen. You change the output target likewise with the set command:
set output "targetfile.extension"
To produce a real PostScript file from the simple sine curve in the example above, proceed as follows:
gnuplot> set terminal postscript Terminal type set to 'postscript' gnuplot> set output "testplot.ps" gnuplot> plot sin(x) gnuplot>
A look into the working directory should now reveal the PostScript file testplot.ps. Of course you can also use other output formats here, provided they are available, for example a PNG file for the web:
gnuplot> set terminal png Terminal type set to 'png' Options are ' small color' gnuplot> set output "testplot.png" gnuplot> plot sin(x)
If you use set output "PRN", the data is sent to the printer (provided the correct driver was previously selected with terminal).
Tip   Printing also works via a right-click of the mouse button from within an x11 window.
Tip   If you want to check quickly whether a corresponding file was really created in the current working directory, you can also use any shell command from within gnuplot. You only have to put a ! in front of the command. For example, ! ls -l lists the current directory from within gnuplot.
5.9.6 Variablen und eigene Funktionen definierenÂ
Variablen können Sie mit gnuplot genauso definieren, wie Sie dies schon von der Shell-Programmierung her kennen:
variable=wert
Wenn Sie den Wert einer Variablen kennen oder Berechnungen mit gnuplot ausgeben lassen wollen, kann hierfür das print-Kommando verwendet werden.
gnuplot> var=2 gnuplot> print var 2 gnuplot> var_a=1+var*sqrt(2) gnuplot> print var_a 3.82842712474619
Solche Variablen können auch als Wert für ein Plot-Kommando genutzt werden. Im folgenden Beispiel wird eine Variable Pi verwendet, um den Bezugsrahmen der x-Koordinate zu »berechnen«.
gnuplot> Pi=3.1415 gnuplot> set xrange [-2*Pi:2*Pi] gnuplot> a=0.6 gnuplot> plot a*sin(x)
Hier wird ein Graph gezeichnet aus a*sin(x) von â2*Pi bis 2*Pi für den gilt a=0.5.
Das Ganze lässt sich aber auch mit einer eigenen Funktion definieren:
gnuplot> func(x)=var*sin(x)
Diese Funktion können Sie nun mit dem Namen plot func(x) aufrufen bzw. plotten lassen. Da diese Funktion auch eine User-definierte Variable var enthält, erwartet diese auch eine solche Variable von Ihnen:
gnuplot> var=0.5 gnuplot> plot func(x) gnuplot> var=0.6 gnuplot> plot func(x) gnuplot> var=0.9 gnuplot> plot func(x)
Hierbei wurde der Parameter var ständig verändert, um einige Test-Plots mit veränderten Wert durchzuführen.
5.9.7 Interpretation von Daten aus einer DateiÂ
Im folgenden Beispiel soll eine Datei namens messdat.dat mit Temperaturwerten der ersten sechs Monate der letzten vier Jahre mit gnuplot ausgelesen und grafisch ausgegeben werden.
gnuplot> ! cat messdat.dat 1 â5 3 â7 4 2 8 â5 9 â6 3 10 8 13 11 4 16 12 19 18 5 12 15 20 13 6 21 22 20 15
Jede Zeile soll für einen Monat stehen. Die erste Zeile z. B. steht für den Januar und beinhaltet Daten von vier Jahren (wir nehmen einfach mal 1999â2002). Um jetzt diese Messdaten als eine Grafik ausgeben zu lassen, können Sie wie folgt vorgehen:
gnuplot> set xrange [0:6] gnuplot> set yrange [-20:40] gnuplot> set xlabel "Monat" gnuplot> set ylabel "Grad/Celcius"
Bis hierhin nichts Neues. Jetzt müssen Sie den Zeichenstil angeben, den Sie verwenden wollen (darauf wird noch eingegangen):
gnuplot> set data style lp
Jetzt erst kommt der Plot-Befehl ins Spiel. Das Prinzip ist verhältnismäßig einfach, da gnuplot bestens â wie bei einer Tabellenkalkulation â mit dem spaltenorientierten Aufbau von Messdaten zurechtkommt. Die Syntax:
using Xachse:Yachse
Damit geben Sie an, dass Sie ab der Zeile Xachse sämtliche Daten aus der YachseâSpalte erhalten wollen. Beispielsweise:
# alle Daten ab der ersten Zeile aus der zweiten Spalte using 1:2 # alle Daten ab der ersten Zeile aus der vierten Spalte using 1:4
Damit using auch weiß, von wo die Daten kommen, müssen Sie ihm diese mit plot zuschieben:
plot datei using Xachse:Yachse
Somit könnten Sie aus unserer Messdatei messdat.dat alle Daten ab der ersten Zeile in der zweiten Spalte folgendermaßen ausgeben lassen:
gnuplot> plot "messdat.dat" using 1:2
Die Art der Linien, die hier ausgegeben werden, haben Sie mit set data style lp festgelegt. Wie es sich für ein echtes Messprotokoll gehört, beschriftet man die Linien auch entsprechend â was mit einem einfachen t für title und einer Zeichenkette dahinter erledigt werden kann:
gnuplot> plot "messdat.dat" using 1:2 t "1999"
Wollen Sie dem Messprotokoll auch noch einen Titel verpassen, so können Sie diesen mit
set title "ein Titel"
angeben. Nochmals das vollständige Beispiel, welches die Datei messdat.dat auswertet und plottet:
gnuplot> set xrange [0:6] gnuplot> set yrange [-20:40] gnuplot> set xlabel "Monat" gnuplot> set ylabel "Grad/Celcius" gnuplot> set data style lp gnuplot> set title "Temperatur-Daten 1999â2002" gnuplot> plot "messdat.dat" using 1:2 t "1999" ,\ > "messdat.dat" using 1:3 t "2000" ,\ > "messdat.dat" using 1:4 t "2001" ,\ > "messdat.dat" using 1:5 t "2002"
Im Beispiel sehen Sie außerdem, dass mehrere Plot-Anweisungen mit einem Komma und Zeilenumbrüche mit einem Backslash realisiert werden.
5.9.8 Alles bitte nochmals zeichnen (oder besser speichern und laden)Â
Das Beispiel zum Auswerten der Messdaten hält sich hinsichtlich des Aufwands in Grenzen, aber sofern man hier das ein oder andere ändern bzw. die Ausgabe nochmals ausgeben will, ist der plot-Befehl schon ein wenig lang. Zwar gibt es auch hier eine Kommando-History, doch es geht mit dem Befehl replot noch ein wenig schneller.
gnuplot> replot
Damit wird der zuvor vorgenommene plot nochmals geplottet. replot wird gewöhnlich verwendet, wenn Sie einen plot auf ein Fenster vorgenommen haben und diesen jetzt auch in einer Ausgabedatei speichern wollen. Im folgenden Beispiel soll der vorherige Plot in einer Postscript-Datei wieder zu sehen sein. Nichts einfacher als das:
gnuplot> set terminal postscript Terminal type set to 'postscript' gnuplot> set output "messdat.ps" gnuplot> replot gnuplot> ! ls *.ps messdat.ps
Bevor Sie jetzt gnuplot beenden, können Sie auch den kompletten Plot (genauer: alle Befehle, Funktionen und Variablen) in einer Datei speichern.
gnuplot> save "messdat.plt" gnuplot> quit
Starten Sie jetzt beim nächsten Mal gnuplot, können Sie mithilfe von load Ihre gespeicherten Plot-Daten wieder auf dem Bildschirm oder wohin Sie es angeben plotten lassen.
gnuplot> load "messdat.plt"
Tipp: Wenn Sie wissen wollen, welche Linie welche Farbe bekommt und wie sonst alles standardmäßig auf dem Terminal aussieht, genügt ein einfacher test-Befehl. Wenn Sie test in gnuplot eintippen, bekommen Sie die aktuelle Terminal-Einstellung von gnuplot in einem Fenster zurück.
5.9.9 gnuplot aus einem Shellscript heraus starten (der Batch-Betrieb)Â
Hierbei unterscheidet man zwischen zwei Möglichkeiten. Entweder es existiert bereits eine Batch-Datei, welche mit save "file.dat" gespeichert wurde und die es gilt aufzurufen, oder Sie wollen den ganzen gnuplot-Vorgang aus einem Shellscript heraus starten.
Batch-Datei verwenden
Eine Möglichkeit ist es, die Batch-Datei als Argument von gnuplot anzugeben:
you@host > gnuplot messdat.plt
gnuplot führt dann die angegebene Datei bis zur letzten Zeile in der Kommandozeile aus. Allerdings beendet sich gnuplot nach dem Lesen der letzten Zeile gleich wieder. Hier können Sie gegensteuern, indem Sie in der letzten Zeile der entsprechenden Batch-Datei (hier bspw. messdat.plt) pause â1 einfügen. Die Ausgabe hält dann so lange an, bis Sie eine Taste drücken.
Allerdings ist diese Methode unnötig, weil gnuplot Ihnen hier mit der Option âpersist Ähnliches anbietet. Die Option âpersist wird verwendet, damit das Fenster auch im Script-Betrieb sichtbar bleibt.
you@host > gnuplot -persist messdat.plt
Außerdem können Sie die Ausgabe auch wie ein Shellscript von gnuplot interpretieren lassen. Ein Blick auf die erste Zeile der Batch-Datei bringt Folgendes ans Tageslicht:
you@host > head â1 messdat.plt #!/usr/bin/gnuplot -persist
Also machen Sie die Batch-Datei ausführbar und starten das Script wie ein gewöhnliches:
you@host > chmod u+x messdat.plt you@host > ./messdat.plt
gnuplot aus einem Shellscript starten
Um gnuplot aus einem Shellscript heraus zu starten, benötigen Sie ebenfalls die Option âpersist (es sei denn, Sie schreiben in der letzten Zeile pause â1). Zwei Möglichkeiten stehen Ihnen zur Verfügung: mit echo und einer Pipe oder über ein Here-Dokument. Zuerst die Methode mit echo:
you@host > echo 'plot "messdat.dat" using 1:2 t "1999" with lp'\ > | gnuplot -persist
Wobei Sie hier gleich erkennen können, dass sich die Methode mit echo wohl eher für kurze und schnelle Plots eignet. Mit der Angabe with lp musste noch der Style angegeben werden, da sonst nur Punkte verwendet würden. Wenn Sie noch weitere Angaben vornehmen wollen, etwa den Namen der einzelnen Achsen oder den Bezugsrahmen, ist die Möglichkeit mit echo eher unübersichtlich. Zwar ließe sich dies auch so erreichen:
you@host > var='plot "messdat.dat" using 1:2 t "1999" with lp' you@host > echo $var | gnuplot -persist
doch meiner Meinung nach ist das Here-Dokument die einfachere Lösung. Hier ein Shellscript für die Methode mit dem Here-Dokument:
# Demonstriert einen Plot mit gnuplot und dem Here-Dokument # Name : aplot1 # Datei zum Plotten FILE=messdat.dat echo "Demonstriert einen Plot mit gnuplot" gnuplot -persist <<PLOT set xrange [0:6] set yrange [-20:40] set xlabel "Monat" set ylabel "Grad/Celcius" set data style lp set title "Temperatur-Daten 1999â2002" # Falls Sie eine Postscript-Datei erstellen wollen ... # set terminal postscript # set output "messdat.ps" plot "$FILE" using 1:2 t "1999" ,\ "$FILE" using 1:3 t "2000" ,\ "$FILE" using 1:4 t "2001" ,\ "$FILE" using 1:5 t "2002" quit PLOT echo "Done ..."
5.9.10 Plot-Styles und andere Ausgaben festlegenÂ
Wollen Sie nicht, dass gnuplot bestimmt, welcher Plotstil (Style) verwendet wird, können Sie diesen auch selbst auswählen. Im Beispiel hatten Sie den Stil bisher mit
set data style lp
festgelegt. lp ist eine Abkürzung für linepoints. Sie können aber den Stil auch angeben, indem Sie an einen plot-Befehl das Schlüsselwort with gefolgt vom Stil Ihrer Wahl anhängen.
plot "datei" using 1:2 with steps # bspw. you@host > echo 'plot "messdat.dat" using 1:2 \ > t "1999" with steps' | gnuplot -persist
Hier geben Sie z. B. als Stil eine Art Treppenstufe an. Die einzelnen Stile hier genauer zu beschreiben, geht wohl ein wenig zu weit und ist eigentlich nicht nötig. Am besten probieren Sie die einzelnen Styles selbst aus. Welche möglich sind, können Sie mit einem Aufruf von
gnuplot> set data style
in Erfahrung bringen. Das folgende Script demonstriert Ihnen einige dieser Styles, wobei gleich ins Auge sticht, welche Stile für diese Statistik brauchbar sind und welche nicht.
# Demonstriert einen Plot mit gnuplot und dem Here-Dokument # Name : aplotstyles1 # Datei zum Plotten FILE=messdat.dat # Verschiedene Styles zum Testen STYLES="lines points linespoints dots impulses \ steps fsteps histeps boxes" for var in $STYLES do gnuplot -persist <<PLOT set xrange [0:6] set yrange [-20:40] set xlabel "Monat" set ylabel "Grad/Celcius" set data style $var set title "Temperatur-Daten 1999â2002" # Falls Sie eine Postscript-Datei erstellen wollen ... # set terminal postscript # set output "messdat.ps" plot "$FILE" using 1:2 t "1999" ,\ "$FILE" using 1:3 t "2000" ,\ "$FILE" using 1:4 t "2001" ,\ "$FILE" using 1:5 t "2002" quit PLOT done
Andere Styles wie
xerrorbars xyerrorbars boxerrorbars yerrorbars boxxyerrorbars vector financebars candlesticks
wiederum benötigen zum Plotten mehr Spalten als Informationen. Näheres entnehmen Sie hierzu bitte den Hilfsseiten von gnuplot.
Beschriftungen
Neben Titel, Legenden und der x/y-Achse, welche Sie bereits verwendet und beschriftet haben, können Sie auch einen Label an einer beliebigen Position setzen.
set label "Zeichenkette" at X-Achse,Y-Achse
Wichtig in diesem Zusammenhang ist natürlich, dass sich die Angaben der Achsen innerhalb von xrange und yrange befinden.
Zusammenfassend finden Sie die häufig verwendeten Beschriftungen in der folgenden Tabelle 5.11:
Tabelle 5.11 Â Häufig verwendete Beschriftungen von gnuplot
xlabel
Beschriftung der x-Achse
ylabel
Beschriftung der y-Achse
label
Beschriftung an einer gewünschten x/y-Position innerhalb von xrange und yrange
title
In Verbindung mit set title "abcd" wird der Text als Überschrift verwendet oder innerhalb eines Plot-Befehls hinter der Zeile und Spalte als Legende. Kann auch mit einem einfachen t abgekürzt werden.
Ein Anwendungsbeispiel zur Beschriftung in gnuplot:
gnuplot> set xlabel "X-ACHSE" gnuplot> set ylabel "Y-ACHSE" gnuplot> set label "Ich bin ein LABEL" at 2,20 gnuplot> set title "Ich bin der TITEL" gnuplot> plot "messdat.dat" using 1:2 title "LEGENDE" with impuls
Dies sieht dann so aus, wie in Abbildung 5.18 zu sehen.
Linien und Punkte
Wenn es Ihnen nicht gefällt, wie gnuplot standardmäßig die Linien und Punkte auswählt, können Sie diese auch mit dem folgenden Befehl selbst festlegen:
set linestyle [indexnummer] {linetype} {linewidth} {pointtype} \ {pointsize}
Beispielsweise:
set linestyle 1 linetype 3 linewidth 4
Hier definieren Sie einen Linienstil mit dem Index 1. Er soll den linetyp 3 (in meinem Fall eine blaue Linie) und eine Dicke (linewidth) von 3 bekommen. Einen Überblick zu den Linientypen erhalten Sie mit dem Befehl test bei gnuplot:
gnuplot> test
Wollen Sie den Linienstil Nummer 1, den Sie eben festgelegt haben, beim Plotten verwenden, müssen Sie ihn hinter dem Style mit angeben:
plot messdat.dat using 1:2 t "1999" with lp linestyle 1
Tabelle 5.12 gibt einen kurzen Überblick zu den möglichen Werten, mit denen Sie die Ausgabe von Linien und Punkten bestimmen können:
Tabelle 5.12 Â Werte für Linien und Punkte in gnuplot
Wert
Bedeutung
linetype (Kurzform lt)
Hier können Sie den Linienstil angeben. Gewöhnlich handelt es sich um die entsprechende Farbe und â falls verwendet â den entsprechenden Punkt. Welcher Linienstil wie aussieht, können Sie sich mit dem Befehl test in gnuplot anzeigen lassen.
linewidth (Kurzform lw)
Die Stärke der Line; je höher dieser Wert ist, desto dicker wird der Strich.
pointtype (Kurzform pt)
Wie linetype, nur dass Sie hierbei den Stil eines Punktes angeben. Wie die entsprechenden Punkte bei Ihnen aussehen, lässt sich auch hier mit test anzeigen.
pointsize (Kurzform ps)
Wie linewidth, nur dass Sie hierbei die Größe des Punktes angeben â je höher der Wert ist, desto größer ist der entsprechende Punktstil (pointtype).
Hierzu nochmals das Shellscript, welches die Temperaturdaten der ersten sechs Monate in den letzten vier Jahren auswertet â jetzt mit veränderten Linien und Punkten.
# Demonstriert einen Plot mit gnuplot und dem Here-Dokument # Name : aplot2 FILE=messdat.dat gnuplot -persist <<PLOT set linestyle 1 linetype 1 linewidth 4 set linestyle 2 linetype 2 linewidth 3 pointtype 6 pointsize 3 set linestyle 3 linetype 0 linewidth 2 pointsize 2 set linestyle 4 linetype 7 linewidth 1 pointsize 2 set xlabel "Monat" set ylabel "Grad/Celcius" set yrange [-10:40] set xrange [0:7] set label "6 Monate/1999â2003" at 1,30 set title "Temperaturdaten" plot "$FILE" using 1:2 t "1999" with lp linestyle 1 ,\ "$FILE" using 1:3 t "2000" with lp linestyle 2,\ "$FILE" using 1:4 t "2001" with lp linestyle 3,\ "$FILE" using 1:5 t "2002" with lp linestyle 4 PLOT
Größe bzw. Abstände der Ausgabe verändern
Häufig ist die standardmäßige Einstellung der Ausgabe zu groß und manchmal (eher selten) auch zu klein. Besonders wenn man die entsprechende Ausgabe in eine Postscript-Datei für ein Latex-Dokument vornehmen will, muss man häufig etwas anpassen. Diese Angabe können Sie mittels
set size Xval,Yval
verändern (im Fachjargon »skalieren«).
set size 1.0,1.0
ist dabei der Standardwert und gibt Ihre Ausgabe unverändert zurück. Wollen Sie den Faktor (und somit auch die Ausgabe) verkleinern, müssen Sie den Wert reduzieren:
set size 0.8,0.8
Weitere Werte, die Sie zum Verändern bestimmter Abstände verwenden können, sind (siehe Tabelle 5.13):
Tabelle 5.13 Â Verändern von Abständen in gnupot
set offset links,rechts,oben,unten
Hiermit stellen Sie den Abstand der Daten von den Achsen ein. Als Einheit (Wert für links, rechts, oben und unten) dient die Einheit (siehe xrange und yrange), die Sie für die jeweilige Achse verwenden.
set bmargin [wert]
Justiert den Abstand der Grafik vom unteren Fensterrand
set tmargin [wert]
Justiert den Abstand der Grafik vom oberen Fensterrand
»Auflösung« (Samplerate) verändern
Gnuplot berechnet von einer Funktion eine Menge von Stützpunkten und verbindet diese dann durch »Spline« bzw. Linienelemente. Bei 3-D-Plots ist allerdings die Darstellung oft recht grob, weshalb es sich manchmal empfiehlt, die Qualität der Ausgabe mit set sample wert (ein wert von bspw. 1000 ist recht fein) feiner einzustellen. Sie verändern so zwar nicht direkt die Auflösung im eigentlichen Sinne, jedoch können Sie hiermit die Anzahl von Stützpunkten erhöhen, wodurch die Ausgabe automatisch feiner wird (allerdings ggf. auch mehr Rechenzeit beansprucht).
5.9.11 Tricks für die AchsenÂ
Offset für die x-Achse
Manchmal benötigen Sie ein Offset für die Achsen, etwa wenn Sie Impulsstriche als Darstellungsform gewählt haben. Beim Beispiel mit den Temperaturdaten würde eine Verwendung von Impulsstrichen als Style so aussehen (siehe Abbildung 5.20).
Die Striche überlagern sich alle in der x-Achse. In diesem Beispiel mag die Verwendung von Impulsstrichen weniger geeignet sein, aber wenn Sie Punkte hierfür verwenden, werden häufig viele Punkte auf einem Fleck platziert. Egal welches Anwendungsgebiet, welcher Stil und welche Achse hiervon nun betroffen ist, auch die Achsen (genauer: das Offset der Achsen) können Sie mit einem einfachen Trick verschieben. Nehmen wir z. B. folgende Angabe, die die eben gezeigte Abbildung demonstriert:
plot "$FILE" using 1:2 t "1999" with impuls,\ "$FILE" using 1:3 t "2000" with impuls,\ "$FILE" using 1:4 t "2001" with impuls,\ "$FILE" using 1:5 t "2002" with impuls
Hier sind die Werte hinter using von Bedeutung:
using 1:2
Wollen Sie jetzt, dass die Impulsstriche minimal von der linken Seite der eigentlichen Position weg platziert werden, müssen Sie dies umändern in:
using (column(1)-.15):2
Jetzt wird der erste Strich um â0.15 von der eigentlichen Position der x-Achse nach links verschoben. Wollen Sie die Impulsstriche um 0.1 nach rechts verschieben, schreiben Sie dies so:
using (column(1)+.1):2
Auf das vollständige Script angewandt sieht dies folgendermaßen aus:
# Demonstriert einen Plot mit gnuplot und dem Here-Dokument # Name : aplot3 FILE=messdat.dat gnuplot -persist <<PLOT set xlabel "Monat" set ylabel "Grad/Celcius" set yrange [-10:40] set xrange [0:7] set label "6 Monate/1999â2002" at 2,20 set title "Temperaturdaten" plot "$FILE" using (column(1)-.15):2 t "1999" with impuls ,\ "$FILE" using (column(1)-.05):3 t "2000" with impuls ,\ "$FILE" using (column(1)+.05):4 t "2001" with impuls ,\ "$FILE" using (column(1)+.15):5 t "2002" with impuls PLOT
Das Script bei der Ausführung (siehe Abbildung 5.21):
Zeitdaten
Wenn der Zeitverlauf irgendeiner Messung dargestellt werden soll, muss gnuplot die x-Werte als Datum/Stunde/Minute etc. erkennen. Gerade als (angehender) Systemadministrator bzw. Webmaster bekommen Sie es regelmäßig mit Zeitdaten zu tun. Wollen Sie dem Kunden bzw. dem Angestellten zeigen, zu welcher Uhrzeit sich etwas Besonderes ereignet hat, können Sie hierzu ebenfalls auf gnuplot zählen. So zeigen Sie diesen Vorgang anschaulich, statt abschreckend mit einer Kolonne von Zahlen um sich zu werfen. Hier hilft es Ihnen auch wieder weiter, wenn Sie sich mit dem Kommando date auseinander gesetzt haben, denn das Format und die Formatierungszeichen sind dieselben. Folgende Daten sind z. B. vorhanden:
you@host > cat besucher.dat 10.03.05 655 408 11.03.05 838 612 12.03.05 435 345 13.03.05 695 509 14.03.05 412 333 15.03.05 905 765 16.03.05 355 208
Jede dieser Zeilen soll folgende Bedeutung haben:
[Datum] [Besucher] [Besucher mit unterschiedlicher IP-Adresse]
Eine Besucherstatistik einer Webseite also. Um hierbei gnuplot mitzuteilen, dass Sie an Daten mit Zeitwerten interessiert sind, müssen Sie dies mit
set xdata time
angeben. Damit gnuplot auch das Zeitformat kennt, geben Sie ihm die Daten mit set timefmt ' ... ' mit. Die Formatangabe entspricht hierbei der von date. Im Beispiel von besucher.dat also:
set timefmt '%d.%m.%y'
Zum Schluss müssen Sie noch angeben, welche Zeitinformationen als Achsenbeschriftung verwendet werden sollen. Dies wird mit einem einfachen
set format x ' ... '
erreicht. Soll beispielsweise das Format Tag/Monat auf der x-Achse erscheinen, dann schreibt man dies so:
set format x "%d/%m"
Oder das Format Jahr/Monat/Tag:
set format x "%y/%m%/%d"
Alles zusammengefasst: set xdata time definiert die x-Werte als Zeitangaben. set timefmt erklärt gnuplot, wie es die Daten zu interpretieren hat. set format x definiert, welche der Zeitinformationen als Achsenbeschriftung auftauchen soll.
Hier das Shellscript, welches die Besucherdaten auswertet und plottet:
# Demonstriert einen Plot mit gnuplot und dem Here-Dokument # Name : aplot4 FILE=besucher.dat gnuplot -persist <<PLOT set xdata time set timefmt '%d.%m.%y' set format x "%d/%m" set ylabel "Besucher" set title " --- Besucherstatistik vom 10.3. bis 16.3.2003 ---" set data style lp plot "$FILE" using (timecolumn(1)):2 t "Besucher" pointsize 2 , \ "$FILE" using (timecolumn(1)):3 t "Besucher mit untersch. IP" linewidth 2 pointsize 2 PLOT
Das Script bei der Ausführung (siehe Abbildung 5.22):
5.9.12 Die dritte DimensionÂ
Zwar werden Sie als Administrator eher selten mit dreidimensionalen Plottings zu tun haben, dennoch soll dieser Punkt nicht unerwähnt bleiben. Der Schritt zur dritten Dimension ist im Grunde nicht schwer: Hier kommt eine z-Achse hinzu, weshalb Sie gegebenenfalls noch entsprechende Werte belegen müssen/können (zrange, zlabel etc.). Ebenfalls festlegen können Sie Blickwinkel und -höhe (mit set view). Der Befehl zum Plotten von dreidimensionalen Darstellungen lautet splot. Hierzu eine Datei, mit der jetzt ein dreidimensionaler Plot vorgenommen werden soll.
you@host > cat data.dat 1 4 4609 2 4 3534 3 4 4321 4 4 6345 1 5 6765 2 5 3343 3 5 5431 4 5 3467 1 6 4321 2 6 5333 3 6 4342 4 6 5878 1 7 4351 2 7 4333 3 7 5342 4 7 4878
Die Bedeutung der einzelnen Spalten sei hierbei:
[Woche] [Monat] [Besucher]
Also wieder eine Auswertung einer Besucherstatistik, wobei hier die erste Spalte der Woche, die zweite Spalte dem Monat und die dritte der Zahl der Besucher gewidmet ist, die in der n-ten Woche im m-ten Monat auf die Webseite zugegriffen haben. Hier das Script, welches diese Daten auswertet und mit einem dreidimensionalen Plot ausgibt:
# Demonstriert einen Plot mit gnuplot und dem Here-Dokument # Name : aplot5 FILE=data.dat gnuplot -persist <<PLOT set view ,75,1,1 set xlabel "Woche" set ylabel "Monat" set zlabel "Besucher" set ymtics set title "Besucherstatistik pro Woche eines Monats" splot "$FILE" using (column(1)+.3):2:3 t "Besucher pro Woche" with impuls linewidth 3 PLOT
Das Script bei der Ausführung (siehe Abbildung 5.23):
Zuerst stellen Sie mit set view den Blickwinkel und die -höhe ein. Neu kommt hier die Beschriftung des zlabel und die Einstellung von ymtics hinzu. Damit wandeln Sie die Zahlen auf der Monatsachse (hier in der zweiten Spalte von data.dat) in eine Monatsbezeichnung um (Gleiches könnten Sie auch mit ydtics für Wochentage auf der y-Achse vornehmen; oder xdtics entspräche den Wochentagen auf der x-Achse). Natürlich setzt dies immer voraus, dass hierbei entsprechende Daten mit vernünftigen Werten übergeben werden.
5.9.13 ZusammenfassungÂ
Für den Standardgebrauch (Systemadministration) sind Sie gerüstet. Wollen Sie allerdings wissenschaftlich mit gnuplot arbeiten, werden Sie noch häufiger help eingeben müssen. Daher hierzu noch einige Anlaufstellen im Web, über die Sie noch tiefer in die Plot-Welt einsteigen können:
## 5.9 gnuplot – Visualizing Measurement Data

gnuplot is a command-line plotting program that has become the standard tool on Linux/UNIX for the interactive, scientific visualization of measurement data. Both curves from x/y data pairs and 3-D objects can be produced with it. For a first impression, have a look at the demos at http://gnuplot.sourceforge.net/demo/.

Another advantage of gnuplot over comparable programs is that it is available on almost every computer architecture: besides Linux there are versions for all flavors of UNIX (IRIX, HP-UX, Solaris and Digital Unix), for the BSD variants, and for the Microsoft (wgnuplot) and Macintosh worlds. Where no ready-made package exists (which still happens, especially on UNIX), you can compile the source code yourself. And, not least, gnuplot is free of charge (available for example via www.gnu.org; version 4.1 was current while this book was written).

### 5.9.1 What is gnuplot used for?

The range of applications is huge. Without going into individual disciplines: gnuplot can be used wherever you want to display functions or measurement data in a two-dimensional Cartesian coordinate system or in three-dimensional space. Surfaces can be rendered as a wire-frame model in 3-D space or shown in an x/y plane.

Your primary use case will probably be the two-dimensional display of statistics. Numerous styles are available for this, such as lines, points, bars, boxes, grids, and lines-and-points. Pie charts are possible in principle, but they are not exactly gnuplot's strength. Curves and axes can be annotated with markers, titles, and date or time labels. You can of course also use gnuplot for things like polynomials (including interpolation) and trigonometric functions, and, last but not least, gnuplot also understands polar coordinate systems.

gnuplot can likewise be used for 3-D interpolation (gridding) between irregularly spaced data points, based on a simple weighting scheme. Admittedly, there is software that handles this particular task somewhat better, but hardly anyone will run into gnuplot's limits that quickly.
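A minimal sketch of such gridding using gnuplot's dgrid3d option; the file name scatter.dat and the 30x30 grid resolution are just assumptions for this illustration:

> gnuplot> set dgrid3d 30,30        # interpolate the scattered points onto a 30x30 grid
> gnuplot> set hidden3d             # nicer surface rendering
> gnuplot> splot "scatter.dat" using 1:2:3 with lines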
### 5.9.2 Starting gnuplot

Since gnuplot is an interactive command-line tool (with its own prompt, gnuplot>), you can use it interactively or from within your shell scripts. gnuplot is started by its name and then waits at its prompt for plot commands, the definition of a function, or formatting instructions for an axis. You leave gnuplot with quit or exit. A (comprehensive) help system is available by typing help at the gnuplot prompt.

> you@host > gnuplot
>     G N U P L O T
>     Version 4.0 patchlevel 0
>     last modified Thu Dec 12 13:00:00 GMT 2002
>     System: Linux 2.6.4-52-default
>     ...
> Terminal type set to 'x11'
> gnuplot> quit
> you@host >

### 5.9.3 The plot command

Plotting is done with the command plot (for a 2-D display) or splot (for a 3-D display). gnuplot then draws a graph from a source, for example a function or numeric data stored in a file. The simplest example (see Figure 5.13):

> gnuplot> plot sin(x)

This plots a simple 2-D graph with x/y coordinates. No range was given for the x/y coordinates here, so gnuplot chooses one automatically: the default for the x axis is -10 to +10, and the y range is determined automatically. If you want to set the x axis to the range 0 to 4, you can do so as follows (see Figure 5.14):

> gnuplot> plot [0:4] sin(x)

### 5.9.4 Variables and parameters for gnuplot

In the example above you saw how to change the range of the x axis. That notation is not exactly self-explanatory, though, and you will usually also want to label the axes. gnuplot offers a whole set of variables for this that you can adjust; you can (and should) get an overview with help set:
> gnuplot> help set
> ...
> Subtopics available for set:
>     angles arrow autoscale bar bmargin border boxwidth clabel
>     clip cntrparam contour data dgrid3d dummy encoding format
>     function grid hidden3d isosamples key label linestyle lmargin
>     locale logscale mapping margin missing multiplot mx2tics mxtics
>     my2tics mytics mztics noarrow noautoscale noborder noclabel noclip
> ...

Note: A simple (ENTER) takes you back out of the help-system prompt (or up to the next higher help menu).

You will find plenty of variables here that you can adjust to your needs at any time. For now we are interested in xlabel and ylabel for the axis labels and xrange and yrange for the range of the individual axes. All of these variables are set with the set command:

> set variable value

Coming back to the first plot example, you can change the values just mentioned as follows:

> gnuplot> set xlabel "X-ACHSE"
> gnuplot> set ylabel "Y-ACHSE"
> gnuplot> set xrange [0:4]
> gnuplot> set yrange [-1:1]
> gnuplot> plot sin(x)
### 5.9.5 Redirecting gnuplot's output

So far the output in our examples always went to a separate window that opens automatically (terminal is usually set to "x11" in that case). You can of course redirect this target, for example to a PostScript file or to a (PostScript) printer. gnuplot ships with a whole range of platform-independent drivers. You select the output format with set terminal foo, which converts the output to the foo format. Which "terminals" your gnuplot supports for display and output can be queried with a plain set terminal:

> gnuplot> set terminal
> ...
>   kyo         Kyocera Laser Printer with Courier font
>   latex       LaTeX picture environment
>   mf          Metafont plotting standard
>   mif         Frame maker MIF 3.00 format
>   mp          MetaPost plotting standard
>   nec_cp6     NEC printer CP6, Epson LQ-800 [monocrome color draft]
>   okidata     OKIDATA 320/321 Standard
>   pbm         Portable bitmap [small medium large]
>   pcl5        HP Designjet 750C, HP Laserjet III/IV, etc.
>   png         Portable Network Graphics [small medium large]
>   postscript  PostScript graphics language
>   prescribe   Prescribe - for the Kyocera Laser Printer
>   pslatex     LaTeX picture environment with PostScript \specials
>   pstex       plain TeX with PostScript \specials
>   pstricks    LaTeX picture environment with PSTricks macros
> ...

If you want the output in PostScript format instead of an x11 window, you only have to change the terminal type with

> gnuplot> set terminal postscript
> Terminal type set to 'postscript'

The most common terminals are "postscript", "latex" and "windows", where "windows" again stands for output on the screen (similar to "x11"). Note, however, that when you select a file format such as "postscript" you must also define an output target, otherwise the complete data stream ends up on your screen. The output target is likewise changed with set:

> set output "targetfile.ext"

To turn the simple sine curve from the example above into a real PostScript file, proceed as follows:

> gnuplot> set terminal postscript
> Terminal type set to 'postscript'
> gnuplot> set output "testplot.ps"
> gnuplot> plot sin(x)
> gnuplot>

A look into the working directory should now reveal the PostScript file testplot.ps. You can of course use other output formats as well, provided they are available, for example a PNG file for the web:

> gnuplot> set terminal png
> Terminal type set to 'png'
> Options are ' small color'
> gnuplot> set output "testplot.png"
> gnuplot> plot sin(x)

If you use set output "PRN", the data is sent to the printer (provided the correct driver was selected with set terminal beforehand).

Tip: Printing also works via a right-click of the mouse button inside an x11 window.

Tip: If you quickly want to check whether the expected file was really created in the current working directory, you can run any shell command from within gnuplot. Simply prefix the command with a !; for example, ! ls -l lists the current directory from inside gnuplot.
### 5.9.6 Defining variables and your own functions

Variables are defined in gnuplot just as you know it from shell programming:

> variable=value

If you want to see the value of a variable or have gnuplot print the result of a calculation, use the print command.

> gnuplot> var=2
> gnuplot> print var
> 2
> gnuplot> var_a=1+var*sqrt(2)
> gnuplot> print var_a
> 3.82842712474619

Such variables can also be used as values in a plot command. In the following example a variable Pi is used to "calculate" the range of the x coordinate.

> gnuplot> Pi=3.1415
> gnuplot> set xrange [-2*Pi:2*Pi]
> gnuplot> a=0.6
> gnuplot> plot a*sin(x)

This draws the graph of a*sin(x) from -2*Pi to 2*Pi with a=0.6.

The same thing can also be wrapped in a function of your own:

> gnuplot> func(x)=var*sin(x)

You can now call, i.e. plot, this function by name with plot func(x). Since the function contains the user-defined variable var, it also expects that variable to be set:

> gnuplot> var=0.5
> gnuplot> plot func(x)
> gnuplot> var=0.6
> gnuplot> plot func(x)
> gnuplot> var=0.9
> gnuplot> plot func(x)

Here the parameter var was changed repeatedly to produce a few test plots with different values.
### 5.9.7 Interpreting data from a file

In the following example a file called messdat.dat, containing temperature values for the first six months of the last four years, is to be read and plotted with gnuplot.

> gnuplot> ! cat messdat.dat
> 1  -5   3  -7   4
> 2   8  -5   9  -6
> 3  10   8  13  11
> 4  16  12  19  18
> 5  12  15  20  13
> 6  21  22  20  15

Each line stands for one month; the first line, for example, represents January and contains data from four years (let's simply say 1999–2002). To plot these measurements as a graphic, you can proceed as follows:

> gnuplot> set xrange [0:6]
> gnuplot> set yrange [-20:40]
> gnuplot> set xlabel "Monat"
> gnuplot> set ylabel "Grad/Celcius"

Nothing new so far. Next you set the drawing style you want to use (more on this later):

> gnuplot> set data style lp

Only now does the plot command come into play. The principle is fairly simple, because gnuplot copes very well with the column-oriented layout of measurement data, much like a spreadsheet. The syntax:

> using Xaxis:Yaxis

This specifies which column supplies the values for the x axis and which column supplies the values for the y axis. For example:

> # column 1 as x values, column 2 as y values
> using 1:2
> # column 1 as x values, column 4 as y values
> using 1:4

So that using also knows where the data comes from, you pass the file to it with plot:

> plot file using Xaxis:Yaxis

So you could plot column 2 of our measurement file messdat.dat against column 1 like this:

> gnuplot> plot "messdat.dat" using 1:2

The kind of lines drawn here is what you selected with set data style lp. As befits a proper measurement log, the curves should also be labeled, which is done with a simple t (for title) followed by a string:

> gnuplot> plot "messdat.dat" using 1:2 t "1999"

If you also want to give the plot a heading, you can set it with

> set title "a title"

Here is the complete example again, which evaluates and plots the file messdat.dat:

> gnuplot> set xrange [0:6]
> gnuplot> set yrange [-20:40]
> gnuplot> set xlabel "Monat"
> gnuplot> set ylabel "Grad/Celcius"
> gnuplot> set data style lp
> gnuplot> set title "Temperatur-Daten 1999-2002"
> gnuplot> plot "messdat.dat" using 1:2 t "1999" ,\
> >        "messdat.dat" using 1:3 t "2000" ,\
> >        "messdat.dat" using 1:4 t "2001" ,\
> >        "messdat.dat" using 1:5 t "2002"

As this example also shows, several plot instructions are separated by a comma, and line breaks are entered with a backslash.
### 5.9.8 Please draw everything again (or better: save and load)

The effort for the measurement-data example is still manageable, but as soon as you want to change this or that or render the output again, the plot command is already rather long. There is a command history, of course, but the replot command is even quicker.

> gnuplot> replot

This repeats the previous plot. replot is typically used when you have plotted to a window and now want to save the same plot to an output file as well. In the following example the previous plot is written to a PostScript file. Nothing easier than that:

> gnuplot> set terminal postscript
> Terminal type set to 'postscript'
> gnuplot> set output "messdat.ps"
> gnuplot> replot
> gnuplot> ! ls *.ps
> messdat.ps

Before you quit gnuplot you can also save the complete plot (more precisely: all commands, functions and variables) to a file.

> gnuplot> save "messdat.plt"
> gnuplot> quit

The next time you start gnuplot, you can use load to render your saved plot data again, on the screen or wherever you direct it.

> gnuplot> load "messdat.plt"

Tip: If you want to know which line gets which color and how everything else looks by default on the current terminal, a simple test command is enough. Typing test in gnuplot shows the current terminal settings in a window.
### 5.9.9 Starting gnuplot from a shell script (batch mode)

There are two cases to distinguish here. Either a batch file already exists that was saved with save "file.dat" and just needs to be run, or you want to drive the whole gnuplot session from a shell script.

# Using a batch file

One possibility is to pass the batch file as an argument to gnuplot:

> you@host > gnuplot messdat.plt

gnuplot then executes the given file up to its last line on the command line. However, gnuplot terminates again right after reading the last line. You can counter this by adding pause -1 as the last line of the batch file (here messdat.plt); the output then stays on screen until you press a key.

This method is unnecessary, though, because gnuplot offers something similar with the -persist option. The -persist option is used so that the window also remains visible in script mode.

> you@host > gnuplot -persist messdat.plt

You can also have the batch file interpreted like a shell script. A look at its first line shows why:

> you@host > head -1 messdat.plt
> #!/usr/bin/gnuplot -persist

So just make the batch file executable and start it like an ordinary script:

> you@host > chmod u+x messdat.plt
> you@host > ./messdat.plt

# Starting gnuplot from a shell script

To start gnuplot from within a shell script you again need the -persist option (unless you write pause -1 as the last line). Two approaches are available: echo with a pipe, or a here document. First the echo method:

> you@host > echo 'plot "messdat.dat" using 1:2 t "1999" with lp'\
> > | gnuplot -persist

You can see right away that the echo method is better suited for short, quick plots. The style had to be given explicitly with with lp, because otherwise only points would be drawn. If you want to set more options, such as axis labels or ranges, the echo approach quickly becomes unwieldy. It could still be done like this:

> you@host > var='plot "messdat.dat" using 1:2 t "1999" with lp'
> you@host > echo $var | gnuplot -persist

but in my opinion the here document is the simpler solution. Here is a shell script using the here-document method:

> # Demonstrates a plot with gnuplot and a here document
> # Name : aplot1
> # file to plot
> FILE=messdat.dat
> echo "Demonstriert einen Plot mit gnuplot"
> gnuplot -persist <<PLOT
> set xrange [0:6]
> set yrange [-20:40]
> set xlabel "Monat"
> set ylabel "Grad/Celcius"
> set data style lp
> set title "Temperatur-Daten 1999-2002"
> # if you want to create a PostScript file ...
> # set terminal postscript
> # set output "messdat.ps"
> plot "$FILE" using 1:2 t "1999" ,\
>      "$FILE" using 1:3 t "2000" ,\
>      "$FILE" using 1:4 t "2001" ,\
>      "$FILE" using 1:5 t "2002"
> quit
> PLOT
> echo "Done ..."
### 5.9.10 Setting plot styles and other output options

If you do not want gnuplot to decide which plot style is used, you can choose it yourself. So far the style was set with

> set data style lp

lp is an abbreviation for linepoints. You can also specify the style by appending the keyword with, followed by the style of your choice, to a plot command.

> plot "file" using 1:2 with steps
> # e.g.
> you@host > echo 'plot "messdat.dat" using 1:2 \
> > t "1999" with steps' | gnuplot -persist

Here the style is a kind of staircase. Describing every single style in detail would go too far and is not really necessary; the best approach is to try them out yourself. Which styles are available can be found out by calling

> gnuplot> set data style

The following script demonstrates some of these styles, and it is immediately obvious which styles are usable for this statistic and which are not.

> # Demonstrates a plot with gnuplot and a here document
> # Name : aplotstyles1
> # file to plot
> FILE=messdat.dat
> # various styles to try
> STYLES="lines points linespoints dots impulses \
>   steps fsteps histeps boxes"
> for var in $STYLES
> do
> gnuplot -persist <<PLOT
> set xrange [0:6]
> set yrange [-20:40]
> set xlabel "Monat"
> set ylabel "Grad/Celcius"
> set data style $var
> set title "Temperatur-Daten 1999-2002"
> # if you want to create a PostScript file ...
> # set terminal postscript
> # set output "messdat.ps"
> plot "$FILE" using 1:2 t "1999" ,\
>      "$FILE" using 1:3 t "2000" ,\
>      "$FILE" using 1:4 t "2001" ,\
>      "$FILE" using 1:5 t "2002"
> quit
> PLOT
> done

Other styles such as

> xerrorbars xyerrorbars boxerrorbars yerrorbars boxxyerrorbars vector financebars candlesticks

in turn require additional columns of information for plotting; see gnuplot's help pages for details.
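As a small illustration (not from the book): yerrorbars, for example, expects a third column holding the error value, so with a hypothetical file messdat_err.dat that carries the measurement error in column 3, a call might look like this:

> gnuplot> plot "messdat_err.dat" using 1:2:3 t "1999" with yerrorbars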
# Labels

Besides the title, the legends and the x/y axis labels, which you have already used, you can also place a label at an arbitrary position.

> set label "string" at x-position,y-position

What matters here, of course, is that the given coordinates lie within xrange and yrange.

The most frequently used labeling commands are summarized in Table 5.11:

Table 5.11: Frequently used labels in gnuplot

| Variable | Meaning |
| --- | --- |
| xlabel | Label of the x axis |
| ylabel | Label of the y axis |
| label | Label at an arbitrary x/y position within xrange and yrange |
| title | With set title "abcd" the text is used as the heading; inside a plot command, after the row/column specification, it sets the legend entry. Can also be abbreviated to a simple t. |

A small labeling example in gnuplot:

> gnuplot> set xlabel "X-ACHSE"
> gnuplot> set ylabel "Y-ACHSE"
> gnuplot> set label "Ich bin ein LABEL" at 2,20
> gnuplot> set title "Ich bin der TITEL"
> gnuplot> plot "messdat.dat" using 1:2 title "LEGENDE" with impuls

The result looks as shown in Figure 5.18.
# Lines and points

If you do not like how gnuplot chooses lines and points by default, you can define them yourself with the following command:

> set linestyle [indexnumber] {linetype} {linewidth} {pointtype} \
>     {pointsize}

For example:

> set linestyle 1 linetype 3 linewidth 4

This defines a line style with index 1. It gets linetype 3 (a blue line in my case) and a width (linewidth) of 4. You get an overview of the line types with gnuplot's test command:

> gnuplot> test

If you want to use line style number 1, which you just defined, when plotting, you append it after the style:

> plot "messdat.dat" using 1:2 t "1999" with lp linestyle 1

Table 5.12 gives a brief overview of the values with which you can control the appearance of lines and points:

Table 5.12: Values for lines and points in gnuplot

| Value | Meaning |
| --- | --- |
| linetype (short form lt) | Selects the line style, usually the color and, if used, the matching point type. What each line type looks like can be displayed with the test command in gnuplot. |
| linewidth (short form lw) | The thickness of the line; the higher the value, the thicker the stroke. |
| pointtype (short form pt) | Like linetype, but selects the style of a point. What the point types look like on your system can again be shown with test. |
| pointsize (short form ps) | Like linewidth, but sets the size of the point; the higher the value, the larger the point style (pointtype). |

Here again is the shell script that evaluates the temperature data of the first six months of the last four years, this time with modified lines and points.

> # Demonstrates a plot with gnuplot and a here document
> # Name : aplot2
> FILE=messdat.dat
> gnuplot -persist <<PLOT
> set linestyle 1 linetype 1 linewidth 4
> set linestyle 2 linetype 2 linewidth 3 pointtype 6 pointsize 3
> set linestyle 3 linetype 0 linewidth 2 pointsize 2
> set linestyle 4 linetype 7 linewidth 1 pointsize 2
> set xlabel "Monat"
> set ylabel "Grad/Celcius"
> set yrange [-10:40]
> set xrange [0:7]
> set label "6 Monate/1999-2002" at 1,30
> set title "Temperaturdaten"
> plot "$FILE" using 1:2 t "1999" with lp linestyle 1 ,\
>      "$FILE" using 1:3 t "2000" with lp linestyle 2,\
>      "$FILE" using 1:4 t "2001" with lp linestyle 3,\
>      "$FILE" using 1:5 t "2002" with lp linestyle 4
> PLOT
# Changing the size and spacing of the output

The default output size is often too large and occasionally (more rarely) too small. Especially when the output goes into a PostScript file for a LaTeX document, some adjustment is usually needed. You change this (in technical terms: scale it) with

> set size Xval,Yval

The default

> set size 1.0,1.0

returns your output unchanged. To shrink the factor (and with it the output), reduce the value:

> set size 0.8,0.8

Further commands you can use to adjust various distances are listed in Table 5.13:

Table 5.13: Adjusting distances in gnuplot

| Command | Meaning |
| --- | --- |
| set offset left,right,top,bottom | Sets the distance between the data and the axes. The unit for left, right, top and bottom is the unit you use for the respective axis (see xrange and yrange). |
| set lmargin [value] | Adjusts the distance of the graphic from the left window border |
| set rmargin [value] | Adjusts the distance of the graphic from the right window border |
| set bmargin [value] | Adjusts the distance of the graphic from the lower window border |
| set tmargin [value] | Adjusts the distance of the graphic from the upper window border |
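A brief sketch (my addition, not from the book) of how scaling and a margin setting could be combined; the values are arbitrary:

> gnuplot> set size 0.8,0.8     # shrink the whole plot
> gnuplot> set lmargin 12       # widen the left margin
> gnuplot> replot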
# Changing the "resolution" (sampling rate)

gnuplot computes a set of sample points for a function and connects them with splines or line segments. With 3-D plots in particular the rendering is often rather coarse, so it can pay off to refine the output quality with set sample value (a value of, say, 1000 is quite fine). Strictly speaking you are not changing the resolution itself, but you increase the number of sample points, which automatically makes the output smoother (and may also cost more computing time).
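For example (a hedged sketch; in the gnuplot versions I am familiar with the commands are spelled set samples and, for the 3-D grid, set isosamples):

> gnuplot> set samples 1000       # more sample points for curves
> gnuplot> set isosamples 50,50   # finer grid for 3-D surfaces
> gnuplot> replot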
### 5.9.11 Tricks for the axes

# An offset for the x axis

Sometimes you need an offset along an axis, for instance when you have chosen impulses as the drawing style. For the temperature-data example, using impulses as the style would look like Figure 5.20.

The impulses of all series overlap on the x axis. Impulses may be a poor choice in this particular example, but if you use points instead, many points frequently end up on the very same spot. Whatever the application, style or axis concerned, the axes (more precisely: the offset along an axis) can be shifted with a simple trick. Take, for example, the following specification, which produces the figure just mentioned:

> plot "$FILE" using 1:2 t "1999" with impuls,\
>      "$FILE" using 1:3 t "2000" with impuls,\
>      "$FILE" using 1:4 t "2001" with impuls,\
>      "$FILE" using 1:5 t "2002" with impuls

The interesting part is the value after using:

> using 1:2

If you now want the impulses to be placed slightly to the left of their actual position, change this to:

> using (column(1)-.15):2

The first impulse is now shifted by -0.15 to the left of its actual position on the x axis. To shift the impulses by 0.1 to the right, write:

> using (column(1)+.1):2

Applied to the complete script this looks as follows:

> # Demonstrates a plot with gnuplot and a here document
> # Name : aplot3
> FILE=messdat.dat
> gnuplot -persist <<PLOT
> set xlabel "Monat"
> set ylabel "Grad/Celcius"
> set yrange [-10:40]
> set xrange [0:7]
> set label "6 Monate/1999-2002" at 2,20
> set title "Temperaturdaten"
> plot "$FILE" using (column(1)-.15):2 t "1999" with impuls ,\
>      "$FILE" using (column(1)-.05):3 t "2000" with impuls ,\
>      "$FILE" using (column(1)+.05):4 t "2001" with impuls ,\
>      "$FILE" using (column(1)+.15):5 t "2002" with impuls
> PLOT
# Time data

If the time course of some measurement is to be displayed, gnuplot has to recognize the x values as date/hour/minute and so on. As a (budding) system administrator or webmaster you deal with time-based data all the time. If you want to show a customer or a colleague at what time something remarkable happened, you can again count on gnuplot and present the event graphically instead of scaring people off with columns of raw numbers. It also helps here if you have already worked with the date command, because the format and the format specifiers are the same. Suppose the following data exists:

> you@host > cat besucher.dat
> 10.03.05  655  408
> 11.03.05  838  612
> 12.03.05  435  345
> 13.03.05  695  509
> 14.03.05  412  333
> 15.03.05  905  765
> 16.03.05  355  208

Each of these lines has the following meaning:

> [date] [visitors] [visitors with distinct IP addresses]

So this is the visitor statistic of a website. To tell gnuplot that you are dealing with time values, you declare this with

> set xdata time

So that gnuplot also knows the time format, you pass it with set timefmt ' ... '. The format specification corresponds to that of date. For besucher.dat this means:

> set timefmt '%d.%m.%y'

Finally you have to specify which parts of the time information should be used as the axis labels. This is done with a simple

> set format x ' ... '

If, for example, the format day/month is to appear on the x axis, you write:

> set format x "%d/%m"

Or the format year/month/day:

> set format x "%y/%m/%d"

To summarize: set xdata time declares the x values to be time specifications, set timefmt tells gnuplot how to interpret the data, and set format x defines which parts of the time information appear as the axis labels.

Here is the shell script that evaluates and plots the visitor data:

> # Demonstrates a plot with gnuplot and a here document
> # Name : aplot4
> FILE=besucher.dat
> gnuplot -persist <<PLOT
> set xdata time
> set timefmt '%d.%m.%y'
> set format x "%d/%m"
> set ylabel "Besucher"
> set title " --- Besucherstatistik vom 10.3. bis 16.3.2005 ---"
> set data style lp
> plot "$FILE" using (timecolumn(1)):2 t "Besucher" pointsize 2 , \
>      "$FILE" using (timecolumn(1)):3 t "Besucher mit untersch. IP" linewidth 2 pointsize 2
> PLOT
### 5.9.12 The third dimension

As an administrator you will rarely have to deal with three-dimensional plots, but the topic should not go unmentioned. The step to the third dimension is not difficult: a z axis is added, for which you may want to set the corresponding values (zrange, zlabel etc.). You can also set the viewing angle and height (with set view). The command for plotting three-dimensional graphs is splot. Here is a file from which a three-dimensional plot is now to be produced.

> you@host > cat data.dat
> 1 4 4609
> 2 4 3534
> 3 4 4321
> 4 4 6345
> 1 5 6765
> 2 5 3343
> 3 5 5431
> 4 5 3467
> 1 6 4321
> 2 6 5333
> 3 6 4342
> 4 6 5878
> 1 7 4351
> 2 7 4333
> 3 7 5342
> 4 7 4878

The meaning of the individual columns is:

> [week] [month] [visitors]

So this is again a visitor statistic: the first column holds the week, the second the month, and the third the number of visitors who accessed the website in week n of month m. Here is the script that evaluates this data and renders it as a three-dimensional plot:

> # Demonstrates a plot with gnuplot and a here document
> # Name : aplot5
> FILE=data.dat
> gnuplot -persist <<PLOT
> set view ,75,1,1
> set xlabel "Woche"
> set ylabel "Monat"
> set zlabel "Besucher"
> set ymtics
> set title "Besucherstatistik pro Woche eines Monats"
> splot "$FILE" using (column(1)+.3):2:3 t "Besucher pro Woche" with impuls linewidth 3
> PLOT

The script when executed (see Figure 5.23):

First you set the viewing angle and height with set view. New here are the zlabel and the ymtics setting, which converts the numbers on the month axis (the second column of data.dat) into month names (you could do the same with ydtics for weekdays on the y axis, or xdtics for weekdays on the x axis). This of course always assumes that the data passed in contains sensible values for it.
### 5.9.13 Summary

For everyday use (system administration) you are now well equipped. If you want to work scientifically with gnuplot, however, you will have to type help quite a bit more often; the web also offers plenty of further starting points for digging deeper into the world of plotting.
## 6.2 Functions that call functions

A function can of course also call another function; this is known as nesting functions. The order in which you call the individual functions does not really matter, because all functions are ultimately invoked from the main program. What matters (as before) is that all functions are defined before the main part of the script uses them.

> # Demonstrates nested function calls
> # Name: afunc4
> # the function func1
> func1() {
>    echo "Ich bin func1 ..."
> }
> # the function func2
> func2() {
>    echo "Ich bin func2 ..."
> }
> # the function func3
> func3() {
>    echo "Ich bin func 3 ..."
>    func1
>    func2
>    echo "func3 ist fertig"
> }
> # the main program
> func3

The script in action:

> you@host > ./afunc4
> Ich bin func 3 ...
> Ich bin func1 ...
> Ich bin func2 ...
> func3 ist fertig

Of course you can also write shell functions that call themselves. This is not a shell-specific concept but a general programming concept called recursion. A recursion repeats a piece of code (more precisely, a function) several times, passing the result of one function call as an argument to the next, self-invoked call. As a system administrator you will rarely encounter recursion; it is mostly used to solve mathematical problems of all kinds. The following example demonstrates the computation of the factorial in recursive form (it anticipates a few things that are explained later in this chapter).

> # Demonstrates the use of recursion
> # Name: afakul
> fakul() {
>    value=$1    # first argument of the function call goes to value
>    # if value is less than or equal to 1, print 1 and return
>    [ $((value)) -le 1 ] && { echo 1; return; }
>    # otherwise continue with a recursive call
>    echo $(($value * `fakul $value-1` ))
> }
> fakul $1

The script in action:

> you@host > ./afakul 20
> 200
> you@host > ./afakul 9
> 45
# 6.3 Passing Parameters
Since you now know that functions are executed like ordinary commands, the question arises whether the same applies to arguments. And indeed, arguments are passed to a function according to the same scheme as with ordinary commands or script invocations.
functions_name arg1 arg2 arg3 ... arg_n
And, just as with arguments from the command line, inside the function you can access the individual values with the positional parameters $1, $2 up to $9 or ${n}. The same holds for $@ and $*, in which you find all passed arguments as a list or as a single string. The number of parameters is again available inside the function in the variable $#. The positional parameter $0, however, still holds the script name and not the function name.

```sh
# Demonstrates the use of parameters
# Name: afunc5

# Function readarg
readarg() {
   i=1
   echo "Number of parameters passed : $#"
   for var in $*
   do
      echo "$i. parameter : $var"
      i=`expr $i + 1`
   done
   # Or via the positional parameters; the first three
   echo $1:$2:$3
}

# Main program
printf "A few arguments please : "
read
readarg $REPLY
```

The script in action:

```sh
you@host > ./afunc5
A few arguments please : one two three four
Number of parameters passed : 4
1. parameter : one
2. parameter : two
3. parameter : three
4. parameter : four
one:two:three
```

To avoid any misunderstanding: the parameters you pass to a function have nothing to do with the command-line parameters of the script, even though a function works according to the same scheme. If you simply tried to use the script's command-line parameters inside a function, the function would not see them, because they are shadowed by the function's own parameters. Here is an example that shows what I am getting at:

```sh
# Demonstrates the use of parameters
# Name: afunc6

# Function readcmd
readcmd() {
   i=1
   echo "Number of parameters on the command line : $#"
   for var in $*
   do
      echo "$i. parameter : $var"
      i=`expr $i + 1`
   done
}

# Main program
echo "Before the function ..."
readcmd
echo "... after the function"
```

The script in action:

```sh
you@host > ./afunc6 one two three four
Before the function ...
Number of parameters on the command line : 0
... after the function
```

If you want to use the command-line parameters inside a function, you must also pass them as arguments when calling the function:

```sh
# Demonstrates the use of parameters
# Name: afunc7

# Function readcmd
readcmd() {
   i=1
   echo "Number of parameters on the command line : $#"
   for var in $*
   do
      echo "$i. parameter : $var"
      i=`expr $i + 1`
   done
}

# Main program
echo "Before the function ..."
readcmd $*
echo "... after the function"
```

The script in action:

```sh
you@host > ./afunc7 one two three four
Before the function ...
Number of parameters on the command line : 4
1. parameter : one
2. parameter : two
3. parameter : three
4. parameter : four
... after the function
```

Of course, you can also pass individual command-line parameters to a function, for example:

```sh
# Pass positional parameters 1 and 3 to the function
readcmd $1 $3
```
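One detail these examples gloss over: an unquoted $* splits arguments that contain spaces into separate words. A short sketch (the function showargs is made up for illustration) of how "$@" forwards all arguments unchanged:

```sh
# Sketch: "$@" preserves arguments that contain spaces
showargs() {
   echo "Number of parameters : $#"
   for var in "$@"
   do
      echo "parameter : $var"
   done
}

showargs "one argument" "another one"
# reports 2 parameters, not 4
```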
### 6.3.1 FUNCNAME (Bash only)
Starting with version 2.04, Bash offers a variable named FUNCNAME that contains the name of the function currently being executed.

```sh
# Demonstrates the use of parameters
# Name: afunc8

# Function myname
myname() {
   echo "I am the function: $FUNCNAME"
}

# Function andmyname
andmyname() {
   echo "And my name is $FUNCNAME"
}

# Main program
echo "Before the function ..."
myname
andmyname
echo "... after the function"
```

The script in action:

```sh
you@host > ./afunc8
Before the function ...
I am the function: myname
And my name is andmyname
... after the function
```

In the main program of the script this variable is empty (""). There, $0 is still available to you. If you want to delete the function name inside the function (for whatever reason), you can do so with unset $FUNCNAME.
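In newer Bash versions, FUNCNAME is in fact an array containing the whole call stack, with ${FUNCNAME[0]} holding the current function. A small sketch, assuming a Bash with array support:

```sh
# Sketch: FUNCNAME as a call stack in newer Bash versions
inner() {
   echo "current function : ${FUNCNAME[0]}"
   echo "called from      : ${FUNCNAME[1]}"
}

outer() {
   inner
}

outer
# current function : inner
# called from      : outer
```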
# 6.4 Return Values from a Function
You have several ways of getting a result back from a function:

### 6.4.1 Return Value with return

If the return command is executed inside a function, the function is exited and the value n is returned as an integer. The syntax:
return [n]
If the parameter n is not given, the return value of the last executed command is returned. For n you can specify a value between 0 and 255; negative values are possible as well. The return value of return can then be evaluated in the main program of the script with the variable $?.

Note: Even if you do not use return inside a function, you can still evaluate the variable $?. In that case, however, it contains the exit status of the last command executed in the function.

```sh
# Demonstrates the use of parameters
# Name: afunc9

# Function usage
usage() {
   if [ $# -lt 1 ]
   then
      echo "usage: $0 file_to_read"
      return 1   # return code 1 : error
   fi
   return 0      # return code 0 : OK
}

# Main program
usage $*
# If usage returned 1 ...
if [ $? -eq 1 ]
then
   printf "Please enter a file to read : "
   read file
else
   file=$1
fi
echo "Reading file $file"
```

The script in action:

```sh
you@host > ./afunc9
usage: ./afunc9 file_to_read
Please enter a file to read : testfile.dat
Reading file testfile.dat
you@host > ./afunc9 testfile.dat
Reading file testfile.dat
```

In this script the function usage returns the value 1 if no file to read was given as an argument. In the main program we then evaluate the variable $?. If it contains the value 1 (from the usage function), the user is prompted to enter a file to read.
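Since return sets the same exit status that ordinary commands set, you do not even have to go through $?; you can test the function directly in if. The following is only a sketch of an equivalent form, not how the example above is written:

```sh
# Sketch: testing the function's return status directly in if
usage() {
   if [ $# -lt 1 ]
   then
      echo "usage: $0 file_to_read"
      return 1
   fi
   return 0
}

if ! usage $*
then
   printf "Please enter a file to read : "
   read file
else
   file=$1
fi
echo "Reading file $file"
```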
### 6.4.2 Return Value with echo and Command Substitution

Some readers will surely be wondering how to return real values from a function, such as the results of mathematical calculations or strings. The only option here is the command substitution you already know. The principle is simple: inside the function you write the result to standard output with echo, and in the main program you redirect this data into a variable with a command substitution.
variable=`functions_name arg1 arg2 ... arg_n`
Here is an example:

```sh
# Demonstrates the use of parameters
# Name: afunc10

# Function verdoppeln (double)
verdoppeln() {
   val=`expr $1 \* 2`
   echo $val
}

# Function halbieren (halve)
halbieren() {
   val=`expr $1 / 2`
   echo $val
}

# All lowercase letters to uppercase
upper() {
   echo $* | tr 'a-z' 'A-Z'
}

# All uppercase letters to lowercase
lower() {
   echo $* | tr 'A-Z' 'a-z'
}

# Main program
val=`verdoppeln 25`
echo "verdoppeln 25 = $val"
# This works as well ...
echo "halbieren 20 = `halbieren 20`"
string="Hello World"
ustring=`upper $string`
echo "upper $string = $ustring"
string="Hello World"
echo "lower $string = `lower $string`"
```

The script in action:

```sh
you@host > ./afunc10
verdoppeln 25 = 50
halbieren 20 = 10
upper Hello World = HELLO WORLD
lower Hello World = hello world
```

Returning several values from a function

Using the same method, it is also possible to return several values from a function. To do so, simply write all variables as one string to standard output with echo, and in the main program split this string back into the individual positional parameters with set. Here is the example:

```sh
# Demonstrates the use of parameters
# Name: afunc11

# Function verdoppeln_und_halbieren (double and halve)
verdoppeln_und_halbieren() {
   val1=`expr $1 \* 2`
   val2=`expr $1 / 2`
   echo $val1 $val2
}

# Main program
val=`verdoppeln_und_halbieren 20`
# Split into the individual positional parameters
set $val
echo "verdoppeln 20 = $1"
echo "halbieren 20 = $2"
```

The script in action:

```sh
you@host > ./afunc11
verdoppeln 20 = 40
halbieren 20 = 10
```
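Note that set overwrites the script's own positional parameters. If you want to avoid that, one possible alternative is to split the function's output into named variables with read; the following is only a sketch:

```sh
# Sketch: reading the two values into named variables instead of
# overwriting the positional parameters with set
verdoppeln_und_halbieren() {
   echo `expr $1 \* 2` `expr $1 / 2`
}

out=`verdoppeln_und_halbieren 20`
read double half <<EOF
$out
EOF
echo "verdoppeln 20 = $double"
echo "halbieren 20 = $half"
```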
### 6.4.3 Return Value Without a Real Return (Local Variable)

The last method is to use a variable from a function directly in the main part of the script as well. For anyone who has gathered experience in other programming languages this may seem rather unusual, because it deviates from the usual structure of programming. But since all functions can access all variables of the main program, it is possible here. An example:

```sh
# Demonstrates the use of parameters
# Name: afunc12

# Function verdoppeln_und_halbieren (double and halve)
verdoppeln_und_halbieren() {
   val1=`expr $1 \* 2`
   val2=`expr $1 / 2`
}

# Main program
verdoppeln_und_halbieren 20
echo "verdoppeln 20 = $val1"
echo "halbieren 20 = $val2"
```

### 6.4.4 Functions and exit

Please note: if you close a function with exit, you terminate the entire script or the subshell. The exit command drags the whole script down with it into the exit status. This can be intentional, for example when a function or script does not receive the data (arguments) it would need to continue in a meaningful way. One example is the passing of arguments from the command line.

```sh
# Demonstrates the use of parameters
# Name: afunc13

# Function usage
usage() {
   if [ $# -lt 1 ]
   then
      echo "usage: $0 file_to_read"
      exit 1
   fi
}

# Main program
echo "Before the function ..."
usage $*
echo "... after the function"
```

The script in action:

```sh
you@host > ./afunc13
Before the function ...
usage: ./afunc13 file_to_read
you@host > echo $?
1
you@host > ./afunc13 test
Before the function ...
... after the function
you@host > echo $?
0
```
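If you want a function that calls exit to abort only part of the work, one possible trick, sketched here and not part of the original example, is to call it in a subshell; its exit then ends only that subshell:

```sh
# Sketch: running the function in a subshell so that its exit
# terminates only the subshell, not the whole script
usage() {
   if [ $# -lt 1 ]
   then
      echo "usage: $0 file_to_read"
      exit 1
   fi
}

echo "Before the function ..."
( usage "$@" )
echo "usage ended with status $?"
echo "... after the function"
```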
# 6.5 Local versus Global Variables
In Section 6.4.3 you already saw that you can access a function's variables from the main program without any problems. These are global variables. But how »global« global variables in a function really are is shown by the following example:

```sh
# Demonstrates the use of parameters
# Name: afunc14

# Function value1
value1() {
   val1=10
}

# Main program
echo $val1
```

In this example, the value 10 of »val1« is not printed as you might expect; instead you get an empty string. The global variable »val1« only becomes available once you call the function »value1«, because only this call assigns a value to »val1«.

```sh
# Demonstrates the use of parameters
# Name: afunc15

# Function value1
value1() {
   val1=10
}

# Main program
value1
echo $val1
```

Now the value of »val1«, 10, is printed. If, however, you want a global variable that is immediately available to the main program, you have to define it outside of any function before its first use.

```sh
# Demonstrates the use of parameters
# Name: afunc16

# Global variable
val1=11

# Function value1
value1() {
   val1=10
}

# Main program
echo $val1
value1
echo $val1
```

The script in action:

```sh
you@host > ./afunc16
11
10
```

You must always keep in mind that every change to a global variable affects exactly that variable. Moreover, global variables tend to become counterproductive as scripts grow longer, because it is easy to lose track of them. Especially with frequently used variable names such as »i«, »var«, »file«, »dir« and so on, it can easily happen that you use the same variable again in another function of the script. Such collisions used to be avoided (and still are) in the shells with special variable names. If, for example, the function was called »readcmd«, you would use variables with a prefix such as »read_var1«, »read_var2« and so on. By putting a few letters of the function name in front of the variable name, many errors can be avoided. An example:

```sh
# Function connect
connect() {
   con_file=$1
   con_pass=$2
   con_user=$3
   con_i=0
   ...
}

# Function insert
insert() {
   ins_file=$1
   ins_pass=$2
   ins_user=$3
   ins_i=0
   ...
}
...
```
### 6.5.1 Local Variables (Bash and Korn Shell only)

In Bash and the Korn shell you can define local variables inside functions (and only there). To do so, simply put the keyword local in front of the variable when defining it.

local var=value

A variable defined in this way lives only inside the function in which it was created. When the function ends, the variable loses its validity and disappears. It then no longer matters if your script uses a global variable with the same name as the local one. With local variables you can hide a variable inside a function from the rest of the script.

Here is an example that demonstrates the use of local variables:

```sh
# Demonstrates the use of parameters
# Name: afunc17

# Global variable
var="I am global"

# Function localtest
localtest() {
   local var="I am local"
   echo $var
}

# Main program
echo $var
localtest
echo $var
```

The script in action:

```sh
you@host > ./afunc17
I am global
I am local
I am global
```

Had you not used the keyword local, the output of the script would look like this:

```sh
you@host > ./afunc17
I am global
I am local
I am local
```

If you call another function from within a function, the local variable is also available to that function, even if a global variable of the same name exists.

```sh
# Demonstrates the use of parameters
# Name: afunc18

# Global variable
var="I am global"

# Function localtest
localtest() {
   local var="I am local"
   alocaltest
}

# Function alocaltest
alocaltest() {
   echo $var
}

# Main program
localtest
```

The script in action:

```sh
you@host > ./afunc18
I am local
```
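As a side note, in the Korn shell the usual way to get the same effect is typeset rather than local, and in ksh93 typeset only creates a local variable in functions defined with the function keyword. The following is a sketch under that assumption:

```sh
# Sketch (Korn shell): a local variable via typeset
var="I am global"

function localtest {
   typeset var="I am local"
   echo $var
}

localtest     # I am local
echo $var     # I am global
```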
# 6.6 alias and unalias
The shells offer a way to define an alias for any command. With an alias you tell the system that a command should also be executed under another name. Aliases are not really used directly in shell programming, but in a way they are small functions of their own. The syntax:

```sh
# Define an alias
alias name definition
# Remove an alias definition
unalias name
# List all defined aliases
alias
```

By entering »name«, the commands contained in »definition« are executed. On most systems you will already find a number of predefined aliases, which you can list with a plain alias:

```sh
you@host > alias
alias +='pushd .'
alias -='popd'
alias ..='cd ..'
alias ...='cd ../..'
...
alias which='type -p'
```

You usually create your own aliases when a command is too long or looks too cryptic. A simple example:

```sh
you@host > alias xpwd="du -sh ."
you@host > xpwd
84M     .
you@host > cd $HOME
you@host > xpwd
465M    .
```

Instead of constantly typing the command sequence du -sh . to display the size of the current working directory, you have created an alias named xpwd. Calling xpwd now runs the command sequence du -sh .. If you want to remove the alias again, just use unalias.

```sh
you@host > unalias xpwd
you@host > xpwd
bash: xpwd: command not found
```

On the one hand such aliases may be very convenient, but keep in mind that one command or another will not be available when you work on different systems. On the other hand, such aliases can be very useful precisely across system and distribution boundaries. For example, the place where various media, CD-ROMs for instance, are mounted differs widely between systems. On Debian it is the directory /cdrom, on SuSE /media/cdrom, and on other systems it may be different again. Here an alias of the following kind is very useful:

```sh
you@host > alias mcdrom="mount /media/cdrom"
you@host > alias ucdrom="umount /media/cdrom"
```

Note: Aliases are tied to a user and his or her account. Only the owner of an account can access its aliases, nobody else. This is worth mentioning for security reasons, because it means no other user can manipulate someone's aliases, for instance to carry out malicious actions. It is another reason, by the way, to use a good password and not to give it to anyone. This is also the basis of many rootkits: often an alias for ls or something similar is defined, or, even worse, the ls binary is replaced with another one. In the latter case the system has already been taken over, and the attacker only cares about staying undetected as long as possible.

Of course, an alias can also contain a sequence of commands. Several commands are written on one line, separated by semicolons. You can also use a pipe or the redirection operators:

```sh
you@host > alias sys="ps -ef | more"
```

If you are not quite sure whether you are using a command or just an alias, you can ask with type:

```sh
you@host > type dir
dir is aliased to `ls -l'
```

If you want to shed your old DOS habits:

```sh
you@host > alias dir="echo This is not DOS, use \
> ls :-\)"
you@host > dir
This is not DOS, use ls :-)
```

Alias definitions are also always interpreted first, before the built-in shell commands (builtins) and the external commands in PATH. That way you can, if necessary, even redefine an existing command of the same name. Alias definitions that you set at runtime, however, are gone after a restart of the system. The aliases that are available in the subordinate shell after you log in are usually stored in files such as .bashrc, .cshrc, .kshrc or .myalias. If you want to define an alias that is available right after logging in, you must add your entry there as well. Files in which you can find and change such settings are covered in Section 8.9.
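As a concrete illustration (only a sketch; the alias is the xpwd example from above), making an alias permanent for Bash usually means appending it to your ~/.bashrc and re-reading that file:

```sh
# Sketch: make the xpwd alias permanent for Bash
echo "alias xpwd='du -sh .'" >> ~/.bashrc
# activate it in the running shell as well
. ~/.bashrc
```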
# 6.7 Autoload (Korn Shell only)
In the Korn shell (and in the Z shell as well, by the way) you have a way of turning every function you have written into a file of its own, without this file having to be executable. What is the advantage over loading a function with the dot operator? It is mainly about speed. Up to now you could only read in function libraries as a whole, even if you needed just one function from a library. With the autoload mechanism you can load exactly this one function, no matter whether hundreds of other functions live in the file.

All you have to do is point the variable FPATH to the directory or directories that contain the function files. Then you have to export FPATH.

```sh
FPATH="$FPATH:/path/to/function_file"
export FPATH
```

As an example we simply use the following file with a few functions:

```sh
# Name: funktionen
afunction1() {
   echo "I am a function"
}
afunction2() {
   echo "I am also a function"
}
```

Now you have to make this file known to the FPATH variable and export it:

```sh
you@host > FPATH=$HOME/funktionen
you@host > export FPATH
```

Now all you have to do is announce this file name with the following call:
autoload funktionen
Once you have called the name of the functions a single time, all the functions are available to you. If you put only the function body into such a file, you have the advantage over writing complete functions that it can be executed immediately; autoload functions, after all, must first be called once so that their body becomes known.
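The common convention, sketched here under the assumption of ksh93 (where autoload is an alias for typeset -fu), is one function per file, with the file named after the function:

```sh
# Sketch (Korn shell): one function per file, file named after the function
mkdir -p "$HOME/funcs"
cat > "$HOME/funcs/hello" <<'EOF'
hello() {
   echo "hello from an autoloaded function"
}
EOF

FPATH="$HOME/funcs"
export FPATH
autoload hello     # in ksh93 an alias for: typeset -fu hello
hello              # the first call loads the body from $FPATH/hello
```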
# 7.2 Sending Signals - kill
# 7.3 A Trap for Signals - trap
The trap command is the counterpart of kill. With it you can react to signals in your shell scripts (or catch them) in order to prevent the script from being aborted.

trap 'command1; ... ; command_n' signal_number

The command name trap is followed by a command or a list of commands that are to be executed when the signal with »signal_number« arrives. The commands are written between single quotes. When a signal reaches the running script, execution is interrupted at the current position and the commands given in the trap statement are executed. Afterwards execution of the script continues at the point where it was interrupted (see Figure 7.2).

A simple example:

```sh
# Demonstrates the trap command for catching signals
# Name: trap1

# Signal SIGINT (2) (Ctrl+C)
trap 'echo SIGINT received' 2

i=0
while [ $i -lt 5 ]
do
   echo "Please do not disturb!"
   sleep 2
   i=`expr $i + 1`
done
```

The script in action:

```sh
you@host > ./trap1
Please do not disturb!
Please do not disturb!
(Ctrl)+(C) SIGINT received
Please do not disturb!
Please do not disturb!
(Ctrl)+(C) SIGINT received
Please do not disturb!
you@host >
```

With the trap statement at the beginning of the script you make sure that the echo command is executed whenever the signal SIGINT arrives, triggered either by the key combination (Ctrl)+(C) or by kill -SIGINT PID_of_trap1. Without the trap statement, the script would be terminated by the first (Ctrl)+(C) (SIGINT).

Of course, you can also catch several signals in one script:

```sh
# Demonstrates the trap command for catching signals
# Name: trap2

# Signal SIGINT (2) (Ctrl+C)
trap 'echo SIGINT received' 2
# Signal SIGTERM (15): kill -TERM PID_of_trap2
trap 'echo SIGTERM received' 15

i=0
while [ $i -lt 5 ]
do
   echo "Please do not disturb! ($$)"
   sleep 5
   i=`expr $i + 1`
done
```

The script in action:

```sh
you@host > ./trap2
Please do not disturb! (10175)
(Ctrl)+(C) SIGINT received
Please do not disturb! (10175)
Please do not disturb! (10175)
SIGTERM received
Please do not disturb! (10175)
Please do not disturb! (10175)
you@host >
```

While »trap2« was running, the signal SIGTERM was sent to the script from another console as follows:

```sh
you@host > kill -TERM 10175
```

You do not have to write a separate trap statement for every signal, though. You can append further signal numbers after the first one, separated by at least one space, and thus react to several signals with the same commands.

```sh
# Demonstrates the trap command for catching signals
# Name: trap3

# Catch the signals SIGINT and SIGTERM
trap 'echo "A signal (SIGINT/SIGTERM) received"' 2 15

i=0
while [ $i -lt 5 ]
do
   echo "Please do not disturb! ($$)"
   sleep 5
   i=`expr $i + 1`
done
```

This example also shows why it is not possible, and must never be possible, to ignore the signal SIGKILL (no. 9) or to make it non-interruptible. Just imagine an endless loop that constantly writes data to a file. If the signal SIGKILL could be switched off there, data would be written to the file forever, until the system or the disk space gives up, and not even the system administrator could interrupt the script.
### 7.3.1 Setting Up a Signal Handler (Function)

In practice, you will usually not just print a message when a certain signal occurs. Most of the time you will be busy with things like cleaning up leftover data, or you set up your own signal handler (a function) that is supposed to react to a particular signal:

```sh
# Demonstrates the trap command for catching signals
# Name: trap4

sighandler_INT() {
   printf "Received the signal SIGINT\n"
   printf "Should the script be terminated (y/n) : "
   read
   if [[ $REPLY = "y" ]]
   then
      echo "Bye!"
      exit 0;
   fi
}

# Catch the signal SIGINT
trap 'sighandler_INT' 2

i=0
while [ $i -lt 5 ]
do
   echo "Please do not disturb! ($$)"
   sleep 3
   i=`expr $i + 1`
done
```

The script in action:

```sh
you@host > ./trap4
Please do not disturb! (4843)
Please do not disturb! (4843)
(Ctrl)+(C) Received the signal SIGINT
Should the script be terminated (y/n) : n
Please do not disturb! (4843)
Please do not disturb! (4843)
Please do not disturb! (4843)
you@host > ./trap4
Please do not disturb! (4854)
(Ctrl)+(C) Received the signal SIGINT
Should the script be terminated (y/n) : n
Please do not disturb! (4854)
(Ctrl)+(C) Received the signal SIGINT
Should the script be terminated (y/n) : y
Bye!
you@host >
```

Note: If you use the trap statement inside functions in the Korn shell, the signals set up with it are valid only within that function.

A dedicated signal handler (or rather function) is frequently set up in order to re-read a configuration file. You have surely adjusted the configuration file of a daemon or server program to your needs at some point. To activate the changes, you had to make the program re-read its configuration file with

kill -HUP PID_of_daemon_or_server

You have just seen, in a similar fashion with the script »trap4«, how you can achieve this in your own script. Here is a simple example of this as well:

```sh
# Demonstrates the trap command for catching signals
# Name: trap5

# Catch the signal SIGHUP
trap 'readconfig' 1

readconfig() {
   . aconfigfile
}

a=1
b=2
c=3

# Endless loop
while [ 1 ]
do
   echo "Values (PID:$$)"
   echo "a=$a"
   echo "b=$b"
   echo "c=$c"
   sleep 5
done
```

The file aconfigfile looks like this:

```sh
# Configuration file for trap5
# Name: aconfigfile
# Note: To activate changes made here immediately, simply
#       re-read the configuration file aconfigfile with
#       kill -HUP PID_of_trap5
a=3
b=6
c=9
```

The script in action:

```sh
###--- tty1 ---###
you@host > ./trap5
Values (PID:6263)
a=1
b=2
c=3
Values (PID:6263)
a=1
b=2
c=3
###--- tty2 ---###
you@host > kill -HUP 6263
###--- tty1 ---###
Values (PID:6263)
a=3
b=6
c=9
(Ctrl)+(C)
you@host >
```
### 7.3.2 Aborting Loops with Signals

You can use signals just as easily to abort loops. To do so, simply put the break statement into the trap commands; then, when the desired signal occurs, the loop is aborted:

```sh
# Demonstrates the trap command for catching signals
# Name: trap6

# Catch the signal SIGINT
trap 'break' 2

i=1
while [ $i -lt 10 ]
do
   echo "Loop pass $i"
   sleep 1
   i=`expr $i + 1`
done
echo "Aborted after loop pass $i"
echo "--- After the loop ---"
```

The script in action:

```sh
you@host > ./trap6
Loop pass 1
Loop pass 2
Loop pass 3
Loop pass 4
Loop pass 5
(Ctrl)+(C) Aborted after loop pass 5
--- After the loop ---
you@host >
```

### 7.3.3 Terminating the Script with Signals

Please note that catching a signal does not abort the program. Once you have caught a signal with the trap statement, you may have to take care of terminating the process yourself. This method is used quite often when the user sends a signal to the process but the process is supposed to dispose of its data garbage first. To achieve this, put the exit command at the end of the command sequence given in the trap statement, for example:

trap 'rm atempfile.tmp ; exit 1' 2

Here, when the signal SIGINT occurs, the temporary file atempfile.tmp is deleted first, before the script is terminated with exit in the next step.
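A common pattern built on this idea, sketched here (mktemp and the file names are assumptions, not part of the original example), is a small cleanup function registered for several signals at once:

```sh
# Sketch: one cleanup function for several signals
tmpfile=`mktemp /tmp/mydata.XXXXXX`

cleanup() {
   rm -f "$tmpfile"
   exit 1
}

# Clean up on SIGHUP, SIGINT and SIGTERM
trap 'cleanup' 1 2 15

echo "working with $tmpfile ..."
sleep 30
rm -f "$tmpfile"
```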
### 7.3.4 Catching the End of the Shell (or of a Script)

If you want another file or script to be executed when the shell or a script is left, you can catch the signal 0 (EXIT). This signal is sent on the normal termination of a shell or a script, or via the exit command with which a script or shell is ended prematurely. What you cannot catch with signal 0 are aborts caused from the outside with kill. A simple example that catches the end of a (real) shell:

```sh
# Name: dasEnde

# Commands that are executed when a shell is left
cat <<MARKE
********************************************
* Here you could place some useful hints   *
* about terminating the shell.             *
********************************************
MARKE
echo "All done - close the shell with ENTER"
read
```

First you have to catch the signal EXIT in your shell and have it run the file »dasEnde«. Once the trap for the EXIT signal is in place, you can leave the shell:

```sh
you@host > trap '$SHELL $HOME/dasEnde' 0
you@host > exit
logout
********************************************
* Here you could place some useful hints   *
* about terminating the shell.             *
********************************************
All done - close the shell with ENTER
(ENTER)
login :
```

So that the file »dasEnde« with its important hints is permanently available after every exit from a shell in the future, you should add the line

trap '$SHELL $HOME/dasEnde' 0

to the file .profile.

The same thing can also be used in a shell script:

```sh
# Demonstrates the trap command for catching signals
# Name: trap7

# Catch the signal EXIT
trap 'exithandler' 0

exithandler() {
   echo "The script was terminated prematurely with exit!"
   # Do any remaining cleanup work here
}

# Main program
echo "In the main program" && exit 1
echo "This is no longer executed"
```

The script in action:

```sh
you@host > ./trap7
In the main program
The script was terminated prematurely with exit!
you@host >
```

In contrast to the approach in Section 7.3.3, you do not have to add an extra exit to the trap commands here. You are only holding the script up briefly at its real end.
### 7.3.5 Ignoring Signals

If you do not put anything into trap's list of commands, that is, if you use empty single quotes, the specified signals (or signal numbers) are ignored. The syntax:

trap '' signal_number

So, for instance, with

trap '' 2

you would ignore the signal SIGINT whenever it occurs. Completely ignoring signals can make sense during extremely critical data transfers. You can, for example, prevent a user from meddling with a careless SIGINT while critical system data is being written. Of course, it still holds that the signals SIGKILL and SIGSTOP cannot be ignored.

Note: According to POSIX, the further behavior of processes is undefined if the signals SIGFPE, SIGILL or SIGSEGV are ignored and were not triggered by a manual call of kill.

### 7.3.6 Resetting Signals

Once you have changed the reaction to signals with trap, you can restore the default behavior of those signals by calling trap again. All it takes is a call of trap with the signal number(s) that you want to reset to their original state. Again, several signal numbers are separated from the preceding one by at least one space.

trap signal_number

Resetting signals is useful when you want to treat the signals specially only for a certain section of code.

```sh
# Demonstrates the trap command for catching signals
# Name: trap8

# Ignore the signal SIGINT
trap '' 2

i=0
while [ $i -lt 5 ]
do
   echo "No SIGINT possible here ..."
   sleep 1
   i=`expr $i + 1`
done

# Reset the signal SIGINT again
trap 2

i=0
while [ $i -lt 5 ]
do
   echo "SIGINT possible again ..."
   sleep 1
   i=`expr $i + 1`
done
```

The script in action:

```sh
you@host > ./trap8
No SIGINT possible here ...
No SIGINT possible here ...
(Ctrl)+(C) No SIGINT possible here ...
(Ctrl)+(C) No SIGINT possible here ...
No SIGINT possible here ...
SIGINT possible again ...
SIGINT possible again ...
(Ctrl)+(C)
you@host >
```

Note: If you want to know which signals are being caught with trap for a certain routine, you can call the trap command without any arguments.
## 7.3 Eine Fallgrube für Signale â trapÂ
Das Kommando trap ist das Gegenstück zu kill. Damit können Sie in Ihren Shellscripts auf Signale reagieren (bzw. sie abfangen), um so den Programmabbruch zu verhindern.
> trap 'kommando1; ... ; kommaond_n' Signalnummer
Dem Kommandonamen trap folgt hier ein Befehl oder eine Liste von Befehlen, die ausgeführt werden sollen, wenn das Signal mit »Signalnummer« eintrifft. Die Befehle werden zwischen einfache Single Quotes geschrieben. Trifft beim ausführenden Script ein Signal ein, wird die Ausführung an der aktuellen Position unterbrochen und die angegebenen Befehle der trap-Anweisung werden ausgeführt. Danach wird mit der Ausführung des Scripts an der unterbrochenen Stelle wieder fortgefahren (siehe Abbildung 7.2).
Ein einfaches Beispiel:
> # Demonstriert die Funktion trap zum Abfangen von Signalen # Name: trap1 # Signal SIGINT (2) (Strg+C) trap 'echo SIGINT erhalten' 2 i=0 while [ $i -lt 5 ] do echo "Bitte nicht stören!" sleep 2 i=`expr $i + 1` done
Das Script bei der Ausführung:
> you@host > ./trap1 Bitte nicht stören! Bitte nicht stören! (Strg)+(C) SIGINT erhalten Bitte nicht stören! Bitte nicht stören! (Strg)+(C) SIGINT erhalten Bitte nicht stören! you@host Mit der trap-Anweisung zu Beginn des Scripts sorgen Sie dafür, dass beim Eintreffen des Signals SIGINT â entweder durch die Tastenkombination (Strg)+(C) oder mit kill âSIGINT PID_von_trap1 ausgelöst â der echo-Befehl ausgeführt wird. Würden Sie hier keine trap-Anweisung verwenden, so würde das Script beim ersten (Strg)+(C) (SIGINT) beendet werden.
Natürlich können Sie in einem Script auch mehrere Signale abfangen:
> # Demonstriert die Funktion trap zum Abfangen von Signalen # Name: trap2 # Signal SIGINT (2) (Strg+C) trap 'echo SIGINT erhalten' 2 # Signal SIGTERM (15) kill -TERM PID_of_trap2 trap 'echo SIGTERM erhalten' 15 i=0 while [ $i -lt 5 ] do echo "Bitte nicht stören! ($$)" sleep 5 i=`expr $i + 1` done
Das Script bei der Ausführung:
> you@host > ./trap2 Bitte nicht stören! (10175) (Strg)+(C) SIGINT erhalten Bitte nicht stören! (10175) Bitte nicht stören! (10175) SIGTERM erhalten Bitte nicht stören! (10175) Bitte nicht stören! (10175) you@host Bei der Ausführung von »trap2« wurde das Signal SIGTERM hier aus einer anderen Konsole wie folgt an das Script gesendet:
> you@host > kill -TERM 10175
Sie müssen allerdings nicht für jedes Signal eine extra trap-Anweisung verwenden, sondern Sie können hinter der Signalnummer weitere Signalnummern â getrennt mit mindestens einem Leerzeichen â anfügen und somit auf mehrere Signale mit denselben Befehlen reagieren.
> # Demonstriert die Funktion trap zum Abfangen von Signalen # Name: trap3 # Signale SIGINT und SIGTERM abfangen trap 'echo Ein Signal (SIGINT/SIGTERM) erhalten' 2 15 i=0 while [ $i -lt 5 ] do echo "Bitte nicht stören! ($$)" sleep 5 i=`expr $i + 1` done
In diesem Beispiel können Sie auch erkennen, warum es nicht möglich ist und auch niemals möglich sein darf, dass das Signal SIGKILL (Nr. 9) ignoriert oder als nicht unterbrechbar eingerichtet wird. Stellen Sie sich jetzt noch eine Endlosschleife vor, die ständig Daten in eine Datei schreibt. Würde hierbei das Signal SIGKILL ausgeschaltet, so würden ewig Daten in die Datei geschrieben, bis das System oder der Plattenplatz schlapp macht, und nicht mal mehr der Systemadministrator könnte das Script unterbrechen.
### 7.3.1 Einen Signalhandler (Funktion) einrichten
In der Praxis werden Sie beim Auftreten eines bestimmten Signals gewöhnlich keine einfache Ausgabe vornehmen. Meistens werden Sie mit Dingen wie dem »Saubermachen« von Datenresten beschäftigt sein – oder aber, Sie richten sich hierzu einen eigenen Signalhandler (Funktion) ein, welcher auf ein bestimmtes Signal reagieren soll:
> # Demonstriert die Funktion trap zum Abfangen von Signalen # Name: trap4 sighandler_INT() { printf "Habe das Signal SIGINT empfangen\n" printf "Soll das Script beendet werden (j/n) : " read if [[ $REPLY = "j" ]] then echo "Bye!" exit 0; fi } # Signale SIGINT abfangen trap 'sighandler_INT' 2 i=0 while [ $i -lt 5 ] do echo "Bitte nicht stören! ($$)" sleep 3 i=`expr $i + 1` done
Das Script bei der Ausführung:
> you@host > ./trap4 Bitte nicht stören! (4843) Bitte nicht stören! (4843) (Strg)+(C) Habe das Signal SIGINT empfangen Soll das Script beendet werden (j/n) : n Bitte nicht stören! (4843) Bitte nicht stören! (4843) Bitte nicht stören! (4843) you@host > ./trap4 Bitte nicht stören! (4854) (Strg)+(C) Habe das Signal SIGINT empfangen Soll das Script beendet werden (j/n) : n Bitte nicht stören! (4854) (Strg)+(C) Habe das Signal SIGINT empfangen Soll das Script beendet werden (j/n) : j Bye! you@host

Ein eigener Signalhandler (bzw. eine Funktion) wird häufig eingerichtet, um eine Konfigurationsdatei neu einzulesen. Bestimmt haben Sie schon einmal bei einem Dämon- oder Serverprogramm die Konfigurationsdatei Ihren Bedürfnissen angepasst. Damit die aktuellen Änderungen aktiv werden, mussten Sie die Konfigurationsdatei mittels
> kill -HUP PID_of_dämon_oder_server
neu einlesen. Wie Sie dies in Ihrem Script erreichen können, haben Sie eben mit dem Script »trap4« in ähnlicher Weise gesehen. Ein einfaches Beispiel auch hierzu:
> # Demonstriert die Funktion trap zum Abfangen von Signalen # Name: trap5 # Signal SIGHUP empfangen trap 'readconfig' 1 readconfig() { . aconfigfile } a=1 b=2 c=3 # Endlosschleife while [ 1 ] do echo "Werte (PID:$$)" echo "a=$a" echo "b=$b" echo "c=$c" sleep 5 done
Die Datei aconfigfile sieht wie folgt aus:
> # Konfigurationsdatei für trap5 # Name: aconfigfile # Hinweis: Um hier vorgenommene Änderungen umgehend zu # aktivieren, müssen Sie lediglich die # Konfigurationsdatei aconfigfile mittels # kill -HUP PID_of_trap5 # neu einlesen. a=3 b=6 c=9
Das Script bei der Ausführung:
> ###--- tty1 ---### you@host > ./trap5 Werte (PID:6263) a=1 b=2 c=3 Werte (PID:6263) a=1 b=2 c=3 ###--- tty2 ---### you@host > kill -HUP 6263 ###--- tty1 ---### Werte (PID:6263) a=3 b=6 c=9 (Strg)+(C) you@host

### 7.3.2 Mit Signalen Schleifendurchläufe abbrechen
Genauso einfach können Sie auch Signale verwenden, um Schleifen abzubrechen. Hierzu müssen Sie nur in den Befehlen von trap die Anweisung break eintragen, dann wird bei Auftreten eines gewünschten Signals eine Schleife abgebrochen:
> # Demonstriert die Funktion trap zum Abfangen von Signalen # Name: trap6 # Signale SIGINT und SIGTERM abfangen trap 'break' 2 i=1 while [ $i -lt 10 ] do echo "$i. Schleifendurchlauf" sleep 1 i=`expr $i + 1` done echo "Nach dem $i Schleifendurchlauf abgebrochen" echo "--- Hinter der Schleife ---"
Das Script bei der Ausführung:
> you@host > ./trap6 1. Schleifendurchlauf 2. Schleifendurchlauf 3. Schleifendurchlauf 4. Schleifendurchlauf 5. Schleifendurchlauf (Strg)+(C) Nach dem 5. Schleifendurchlauf abgebrochen --- Hinter der Schleife --- you@host

### 7.3.3 Mit Signalen das Script beenden
Bitte beachten Sie, dass Sie mit einem abgefangenen Signal keinen Programmabbruch erreichen. Haben Sie zunächst mit der trap-Anweisung ein Signal abgefangen, müssen Sie sich gegebenenfalls selbst um die Beendigung eines Prozesses kümmern. Diese Methode wird recht häufig eingesetzt, wenn der Anwender ein Signal an den Prozess sendet, dieser aber den Datenmüll vorher noch entsorgen soll. Hierzu setzen Sie den Befehl exit an das Ende der Kommandofolge, die Sie in der trap-Anweisung angegeben haben, beispielsweise:
> trap 'rm atempfile.tmp ; exit 1' 2
Hier wird beim Auftreten des Signals SIGINT zunächst die temporäre Datei atempfile.tmp gelöscht, bevor im nächsten Schritt mittels exit das Script beendet wird.
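Zur Veranschaulichung eine kleine, hypothetische Skizze (Datei- und Scriptname sind frei gewählt), wie ein solches Aufräumen in einem vollständigen Script aussehen könnte:

> #!/bin/sh
> # Name: cleanup_demo (hypothetische Skizze)
> TMPFILE=/tmp/cleanup_demo.$$
> # Bei SIGINT (2) oder SIGTERM (15): Temporärdatei löschen, dann Script beenden
> trap 'rm -f "$TMPFILE"; exit 1' 2 15
> echo "Zwischenergebnisse" > "$TMPFILE"
> sleep 30              # hier stünde die eigentliche Arbeit
> # Normales Ende: selbst aufräumen
> rm -f "$TMPFILE"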
### 7.3.4 Das Beenden der Shell (oder eines Scripts) abfangen
Wollen Sie, dass beim Verlassen der Shell oder eines Scripts noch eine andere Datei bzw. Script ausgeführt wird, können Sie das Signal 0 (EXIT) abfangen. Dieses Signal wird beim normalen Beenden einer Shell oder eines Scripts oder über den Befehl exit gesendet, mit dem ein Script oder eine Shell vorzeitig beendet wird. Nicht abfangen können Sie hingegen mit dem Signal 0 Abbrüche, die durch kill von außen herbeigeführt wurden. Ein einfaches Beispiel, welches das Ende einer (echten) Shell abfängt:
> # Name: dasEnde # Befehle, die beim Beenden einer Shell ausgeführt werden cat <<MARKE ******************************************** * Hier könnten noch einige nützliche * * Hinweise zur Beendigung der Shell stehen. * ******************************************** MARKE echo "Alles erledigt – Shell mit ENTER beenden" read
Zunächst müssen Sie in Ihrer Shell das Signal EXIT abfangen und die Datei »dasEnde« aufrufen. Steht die »Falle« für das Signal EXIT, können Sie die Shell verlassen:
> you@host > trap '$SHELL $HOME/dasEnde' 0 you@host > exit logout ******************************************** * Hier könnten noch einige nützliche * * Hinweise zur Beendigung der Shell stehen. * ******************************************** Alles erledigt – Shell mit ENTER beenden (ENTER) login :
Damit die Datei »dasEnde« mit wichtigen Hinweisen in Zukunft nach jedem Verlassen einer Shell dauerhaft zur Verfügung steht, sollten Sie die Zeile
> trap '$SHELL $HOME/dasEnde' 0
in die Datei .profile eintragen.
Gleiches lässt sich auch in einem Shellscript verwenden:
> # Demonstriert die Funktion trap zum Abfangen von Signalen # Name: trap7 # Signal EXIT abfangen trap 'exithandler' 0 exithandler() { echo "Das Script wurde vorzeitig mit exit beendet!" # Hier noch anfallende Aufräumarbeiten ausführen } # Hauptfunktion echo "In der Hauptfunktion" && exit 1 echo "Wird nicht mehr ausgeführt"
Das Script bei der Ausführung:
> you@host > ./trap7 In der Hauptfunktion Das Script wurde vorzeitig mit exit beendet! you@host

Im Gegensatz zum Vorgehen in Abschnitt 7.3.3 müssen Sie hierbei kein zusätzliches exit bei den Kommandos von trap angeben. Hier halten Sie das Script nur beim wirklichen Ende kurz auf.
### 7.3.5 Signale ignorieren
Wenn Sie in der Liste von Befehlen bei trap keine Angaben vornehmen, also leere Single Quotes verwenden, werden die angegebenen Signale (bzw. Signalnummern) ignoriert. Die Syntax:
> trap '' Signalnummer
Somit würden Sie praktisch mit der Angabe von
> trap '' 2
das Signal SIGINT beim Auftreten ignorieren. Das völlige Ignorieren von Signalen kann bei extrem kritischen Datenübertragungen sinnvoll sein. So können Sie zum Beispiel verhindern, dass beim Schreiben kritischer Systemdaten der Anwender mit einem unbedachten SIGINT reinpfuscht. Natürlich gilt hier weiterhin, dass die Signale SIGKILL und SIGSTOP nicht ignoriert werden können.
### 7.3.6 Signale zurücksetzen
Wenn Sie die Reaktion von Signalen mittels trap einmal verändert haben, können Sie durch einen erneuten Aufruf von trap den Standardzustand der Signale wiederherstellen. Hierbei reicht lediglich der Aufruf von trap und der (bzw. den) Signalnummer(n), die Sie wieder auf den ursprünglichen Zustand zurücksetzen wollen. Mehrere Signalnummern werden wieder mit mindestens einem Leerzeichen von der vorangegangenen Signalnummer getrennt.
> trap Signalnummer
Das Zurücksetzen von Signalen ist sinnvoll, wenn man die Signale nur bei einem bestimmten Codeausschnitt gesondert behandeln will.
> # Demonstriert die Funktion trap zum Abfangen von Signalen # Name: trap8 # Signal SIGINT ignorieren trap '' 2 i=0 while [ $i -lt 5 ] do echo "Hier kein SIGINT möglich ..." sleep 1 i=`expr $i + 1` done # Signal SIGINT wieder zurücksetzen trap 2 i=0 while [ $i -lt 5 ] do echo "SIGINT wieder möglich ..." sleep 1 i=`expr $i + 1` done
Das Script bei der Ausführung:
> you@host > ./trap8 Hier kein SIGINT möglich ... Hier kein SIGINT möglich ... (Strg)+(C) Hier kein SIGINT möglich ... (Strg)+(C) Hier kein SIGINT möglich ... Hier kein SIGINT möglich ... SIGINT wieder möglich ... SIGINT wieder möglich ... (Strg)+(C) you@host

Hinweis   Wollen Sie wissen, welche Signale für eine bestimmte Routine mit trap abgefangen werden, können Sie das Kommando trap ohne jegliches Argument verwenden.
## 8.2 Warten auf andere Prozesse
Wollen Sie mit Ihrem Script auf die Beendigung eines anderen Prozesses warten, können Sie die Funktion wait verwenden.
> wait PID
Bauen Sie wait in Ihr Script ein, wird die Ausführung so lange angehalten, bis der Prozess mit der Prozess-ID PID beendet wurde. Außerdem können Sie aus wait gleich den Rückgabewert des beendeten Prozesses entnehmen. Ist der Rückgabewert von wait gleich 127, so bedeutet dies, dass auf einen Prozess gewartet wurde, der nicht (mehr) existiert. Ansonsten ist der Rückgabewert gleich dem Exit-Status des Prozesses, auf den wait gewartet hat. Rufen Sie wait ohne einen Parameter auf, wartet wait auf alle aktiven Kindprozesse; der Rückgabewert ist dann immer 0.
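Eine kleine, hypothetische Skizze dazu (der Scriptname ist frei gewählt): Ein Prozess wird in den Hintergrund geschickt, anschließend wartet das Script mit wait auf dessen Ende und wertet den Rückgabewert aus.

> # Name: wait_demo (hypothetische Skizze)
> sleep 3 &            # irgendeinen Prozess im Hintergrund starten
> PID=$!               # $! enthält die PID des zuletzt gestarteten Hintergrundprozesses
> echo "Warte auf Prozess $PID ..."
> wait $PID            # hält das Script an, bis der Prozess beendet ist
> echo "Prozess $PID ist beendet, Rückgabewert: $?"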
## 8.3 Hintergrundprozess wieder hervorholen
Was ein Hintergrundprozess ist und wie Sie einen solchen starten können, wurde bereits beschrieben. Der Vorteil eines Hintergrundprozesses ist, dass der laufende Prozess die Shell nicht mehr blockiert. Allerdings ist es auch nicht mehr möglich, die Standardeingabe bei Hintergrundprozessen zu verwenden. Die Standardeingabe wird hierbei einfach ins Datengrab (/dev/null) gelenkt. Zwar können Sie den Inhalt einer Datei umlenken
> kommando < file &
aber sobald hier etwas von der Tastatur gelesen werden soll, hält der Prozess an und wartet korrekterweise auf eine Eingabe. Ein einfaches Beispiel:
> # Name bg1 printf "Eingabe machen : " read echo "Ihre Eingabe lautet $REPLY"
Führen Sie dieses Script jetzt im Hintergrund aus, passiert Folgendes:
> you@host > ./bg1 & [1] 7249 you@host > Eingabe machen : [1]+ Stopped ./bg1
Das Script wird angehalten, denn es wartet ja auf eine Eingabe von der Tastatur. Ein Blick auf die laufenden Prozesse mit ps bestätigt dies auch:
> 7249 pts/41 T 0:00 /bin/bash
Ein gestoppter Prozess hat das T (für stopped) in der Prozessliste stehen. Damit Sie diesen Prozess jetzt weiterarbeiten lassen können, müssen Sie ihn in den Vordergrund holen. Dies können Sie mit dem Kommando fg %1 (für foreground) erreichen.
> you@host > fg %1 ./bg1 hallo Ihre Eingabe lautet hallo you@host

Genaueres zu fg (und zum Gegenstück bg) erfahren Sie in Abschnitt 8.7, wenn es um die Jobverwaltung geht.
## 8.4 Hintergrundprozess schützen
Ob Hintergrundprozesse beim Ausloggen des Benutzers weiterlaufen, scheint ein wenig system- bzw. distributionsabhängig zu sein. Sofern Sie das Problem haben, dass Prozesse, die im Hintergrund laufen, beendet werden, wenn sich der Benutzer ausloggt oder ein Eltern-Prozess an alle Kind-Prozesse das Signal SIGHUP sendet, können Sie das Kommando nohup verwenden. Allerdings scheint es auch hier wieder keine Einheitlichkeit zu geben: Auf dem einen System ist nohup ein binäres Programm, auf dem anderen wiederum ein einfaches Shellscript, welches das Signal 1 (SIGHUP) mit trap auffängt. Die Syntax:
> nohup Kommando [Argument ...]
Natürlich wird mit der Verwendung von nohup der Prozess hier nicht automatisch in den Hintergrund gestellt, sondern Sie müssen auch hier wieder am Ende der Kommandozeile das & setzen. Und das passiert, wenn Sie einen Prozess mit nohup in den Hintergrund schicken: Das Signal SIGHUP wird für diesen Prozess ignoriert, und sofern Sie die Standardausgabe nicht selbst umlenken, hängt nohup die Ausgabe an die Datei nohup.out an (siehe auch das dritte Beispiel unten).
Als Rückgabewert liefert Ihnen die Funktion nohup den Fehlercode des Kommandos zurück. Sollte der Fehlercode allerdings den Wert 126 oder 127 haben, so bedeutet dies, dass nohup das Kommando gefunden hat, aber nicht starten konnte (126), oder dass nohup das Kommando gar nicht finden konnte (127).
Hier einige Beispiele:
> you@host > find / -user $USER -print > out.txt 2>&1 & [2] 4406 you@host > kill -SIGHUP 4406 you@host > [2]+ Aufgelegt find / -user $USER -print >out.txt 2>&1 you@host > nohup find / -user $USER -print > out.txt 2>&1 & [1] 4573 you@host > kill -SIGHUP 4573 you@host > ps | grep find 4573 pts/40 00:00:01 find you@host > nohup find / -user $USER -print & [1] 10540 you@host > nohup: hänge Ausgabe an nohup.out an you@host > exit ### --- Nach neuem Login ---- #### you@host > cat nohup.out ... /home/tot/HelpExplorer/uninstall.sh /home/tot/HelpExplorer/EULA.txt /home/tot/HelpExplorer/README /home/tot/HelpExplorer/starthelp /home/tot/.fonts.cache-1 ...
## 8.5 Subshells
Zwar war schon häufig die Rede von einer Subshell, aber es wurde nie richtig darauf eingegangen, wie Sie explizit eine Subshell in Ihrem Script starten können. Die Syntax:
> ( kommando1 ... kommando_n )
oder auch als Einzeiler (aber dann bitte die Leerzeichen beachten):
> ( kommando1 ; ... ; kommando_n )
Eine Subshell erstellen Sie, wenn Sie Kommandos zwischen runden Klammern gruppieren. Hierbei startet das Script einfach eine neue Shell, welche die aktuelle Umgebung mitsamt den Variablen übernimmt, führt die Befehle aus, beendet sich nach dem letzten Kommando wieder und kehrt zum Script zurück. Als Rückgabewert wird der Exit-Code des zuletzt ausgeführten Kommandos zurückgegeben. Die Variablen, die Sie in einer Subshell verändern oder hinzufügen, haben keine Auswirkungen auf die Variablen des laufenden Scripts. Sobald sich die Subshell also beendet, verlieren die dort gesetzten Werte ihre Bedeutung, und dem laufenden Script (bzw. der laufenden Shell) steht nach wie vor die Umgebung zur Verfügung, die vor dem Starten der Subshell vorlag. Diese Technik nutzt die Shell u. a. auch mit dem Here-Dokument aus.
Ein einfaches Beispiel:
> # Name: subshell a=1 b=2 c=3 echo "Im Script: a=$a; b=$b; c=$c" # Eine Subshell starten ( echo "Subshell : a=$a; b=$b; c=$c" # Werte verändern a=3 ; b=6 ; c=9 echo "Subshell : a=$a; b=$b; c=$c" ) # Nach der Subshell wieder ... echo "Im Script: a=$a; b=$b; c=$c"
Das Script bei der Ausführung:
> you@host > ./subshell Im Script: a=1; b=2; c=3 Subshell : a=1; b=2; c=3 Subshell : a=3; b=6; c=9 Im Script: a=1; b=2; c=3
Aber Achtung: Häufig wird eine Subshell, die zwischen runden Klammern steht, irrtümlicherweise mit den geschweiften Klammern gleichgestellt. Kommandos, die zwischen geschweiften Klammern stehen, werden als eine Gruppe zusammengefasst, wodurch Funktionen ja eigentlich erst ihren Sinn bekommen. Befehlsgruppen, die in geschweiften Klammern stehen, laufen in derselben Umgebung (also auch Shell) ab, in der auch das Script ausgeführt wird.
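Der Unterschied lässt sich mit einer kleinen Skizze verdeutlichen (Minimalbeispiel):

> # Name: gruppen_demo (Minimalbeispiel)
> a=1
> ( a=99 )                 # Subshell: die Änderung bleibt in der Subshell
> echo "nach ( ): a=$a"    # gibt a=1 aus
> { a=99; }                # Befehlsgruppe: läuft in derselben Shell
> echo "nach { }: a=$a"    # gibt a=99 aus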
## 8.6 Mehrere Scripts verbinden und ausführen (Kommunikation zwischen Scripts)
Im Laufe der Zeit werden Sie eine Menge Scripts schreiben und sammeln. Häufig will man hierbei gern das ein oder andere von einem anderen Script verwenden. Entweder man verwendet dann »Copy & Paste« oder man ruft das Script aus dem Haupt-Script auf. Da es nicht immer ganz einfach ist, Scripts miteinander zu verbinden und die Datenübertragung zu behandeln, soll im folgenden Abschnitt ein wenig genauer darauf eingegangen werden.
### 8.6.1 Datenübergabe zwischen Scripts
Zur Übergabe von Daten an das neue Script gibt es mehrere gängige Möglichkeiten. Eine einfache ist das Exportieren der Daten, bevor das zweite Script aufgerufen wird. Der Einfachheit halber werden wir hier von »script1« und »script2« reden und diese in der Praxis auch mehrmals verwenden. Hier die Möglichkeit, Daten an ein anderes Script mittels export zu übergeben:
> # Name script1 a=1 b=2 c=3 export a b c ./script2
Jetzt das script2:
> # Name : script2 echo "$0 : a=$a; b=$b; c=$c"
Die Datenübergabe bei der Ausführung:
> you@host > ./script1 ./script2 : a=1; b=2; c=3
Eine weitere Möglichkeit ist die Übergabe als Argument, wie Sie dies von der Kommandozeile her kennen. Dem aufgerufenen Script stehen dann die einzelnen Variablen mit den Positionsparametern $1 bis $9 bzw. ${n} zur Verfügung. Auch hierzu wieder die beiden Scripts.
> # Name script1 a=1 b=2 c=3 ./script2 $a $b $c
Und script2:
> # Name : script2 echo "$0 : a=$1; b=$2; c=$3"
Die Ausführung entspricht der im Beispiel mittels export.
Sobald allerdings der Umfang der Daten zunimmt, werden Sie mit diesen beiden Möglichkeiten recht schnell an Grenzen stoßen. Hierzu würde sich die Verwendung einer temporären Datei anbieten: Ein Prozess schreibt etwas in die Datei und ein anderer liest wieder daraus.
> # Name script1 IFS=$'\n' for var in `ls -l` do echo $var done > file.tmp ./script2
Das »script2«:
> # Name : script2 while read line do echo $line done < file.tmp
Im Beispiel liest »script1« zeilenweise von `ls -l` ein und lenkt die Standardausgabe von echo auf die temporäre Datei file.tmp um. Am Ende wird »script2« gestartet. »script2« wiederum liest zeilenweise über eine Umlenkung von der Datei file.tmp ein und gibt dies auch zeilenweise mit echo auf dem Bildschirm aus. Das Ganze könnten Sie auch ohne eine temporäre Datei erledigen, indem Sie beide Scripts mit einer Pipe starten, da hier ja ein Script die Daten auf die Standardausgabe ausgibt und ein anderes Script die Daten von der Standardeingabe erhält:
> you@host > ./script1 | ./script2
Damit dies auch funktioniert, müssen Sie in den beiden Scripts lediglich die Umlenkungen und den Scriptaufruf entfernen. Somit sieht »script1« wie folgt aus:
> # Name script1 IFS=$'\n' for var in `ls -l` do echo $var done
Und gleiches Bild bei »script2«:
> # Name : script2 while read line do echo $line done
### 8.6.2 Rückgabe von Daten an andere Scripts
Der gängigste Weg, Daten aus einem Script an ein anderes zurückzugeben, ist eigentlich die Kommando-Substitution. Ein simples Beispiel:
> # Name : script1 var=`./script2` echo "var=$var"
Und das »script2«:
> # Name : script2 echo "Hallo script1"
»script1« bei der Ausführung:
> you@host > ./script1 var=Hallo script1
Gleiches funktioniert auch, wenn das Script mehrere Werte zurückgibt. Hierzu würde sich etwa das Aufsplitten der Rückgabe mittels set anbieten:
> # Name script1 var=`./script2` set $var echo "$1; $2; $3"
Jetzt noch »script2«:
> # Name : script2 var1=wert1 var2=wert2 var3=wert3 echo $var1 $var2 $var3
»script1« bei der Ausführung:
> you@host > ./script1 wert1; wert2; wert3
Sollten Sie allerdings nicht wissen, wie viele Werte ein Script zurückgibt, können Sie das Ganze auch in einer Schleife abarbeiten:
> # Name script1 var=`./script2` i=1 for wert in $var do echo "$i: $wert" i=`expr $i + 1` done
Trotzdem sollte man auch bedenken, dass mit steigendem Umfang der anfallenden Datenmenge auch hier nicht so vorgegangen werden kann. In einer Variablen Daten von mehreren Megabytes zu speichern, ist nicht mehr sehr sinnvoll. Hier bleibt Ihnen nur noch die Alternative, eine temporäre Datei zu verwenden, wie Sie sie schon in Abschnitt 8.6.1 verwendet haben. Allerdings besteht auch hier ein Problem, wenn bspw. »script1« die Daten aus der temporären Datei lesen will, die »script2« hineinschreibt, aber »script2« die Daten nur scheibchenweise oder eben permanent in die temporäre Datei hineinschreibt. Dann wäre eine Lösung mit einer Pipe die bessere Alternative. Auch hierzu müssten Sie nur »script1« verändern:
> # Name script1 ./script2 | while read wert do for val in $wert do echo "$val" done done
### Named Pipe
Ein weiteres sehr interessantes Mittel zur Datenübertragung zwischen mehreren Scripts haben Sie in Abschnitt 5.6 mit der Named Pipe (FIFOs) kennen gelernt. Der Vor- bzw. auch Nachteil (je nach Anwendungsfall) ist hierbei, dass ein Prozess, der etwas in eine Pipe schreibt, so lange blockiert wird, bis auf der anderen Seite ein Prozess ist, der etwas daraus liest. Umgekehrt natürlich derselbe Fall. Ein typischer Anwendungsfall wäre ein so genannter Server, der Daten von beliebigen Clients, die ihm etwas durch die Pipe schicken, einliest:
> # Name: PipeServer mknod meine_pipe p while true do # Wartet auf Daten aus der Pipe read zeile < meine_pipe echo $zeile done
Statt einer Ausgabe auf dem Bildschirm können Sie mit diesem Server munter Daten von beliebig vielen anderen Scripts sammeln. Der Vorteil: Die anderen Scripts, die Daten in diese Pipe schicken, werden nicht blockiert, weil immer auf der anderen Seite des Rohrs der Server »PipeServer« darauf wartet.
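Ein passendes Gegenstück dazu, hier nur als Skizze gedacht (der Name »PipeClient« ist frei gewählt; die Pipe meine_pipe muss bereits vom Server angelegt worden sein), könnte so aussehen:

> # Name: PipeClient (Skizze)
> i=0
> while [ $i -lt 3 ]
> do
>    # Blockiert jeweils nur so lange, bis der Server die Zeile liest
>    echo "Meldung $i von Client $$" > meine_pipe
>    i=`expr $i + 1`
>    sleep 1
> done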
### 8.6.3 Scripts synchronisieren
Spätestens, wenn Ihre Scripts dauerhaft laufen sollen, benötigen Sie eine Prozess-Synchronisation. Fallen hierbei dann mehr als zwei Scripts an und wird außerdem in eine Datei geschrieben und gelesen, haben Sie schnell Datensalat. Eine recht einfache und zuverlässige Synchronisation zweier oder auch mehrerer Scripts erreichen Sie bspw. mit den Signalen. Richten Sie hierzu einfach einen Signalhandler mit trap ein, der auf ein bestimmtes Signal reagiert und ein weiteres Script aufruft. Bspw.:
> # Name: script1 trap './script2' SIGUSR1 while true do echo "Lese Daten ..." sleep 5 echo "Starte script2 ..." kill -SIGUSR1 $$ done
Und das »script2«:
> # Name: script2 trap './script1' SIGUSR2 while true do echo "Schreibe Daten ..." sleep 5 echo "Starte script1 ..." kill -SIGUSR2 $$ done
Die Scripts bei der Ausführung:
> you@host > ./script1 Lese Daten ... Starte script2 ... Schreibe Daten ... Starte script1 ... Lese Daten ... Starte script2 ... Schreibe Daten ... Starte script1 ... Lese Daten ... Starte script2 ... Schreibe Daten ...
Eine weitere Möglichkeit zur Synchronisation von Scripts besteht darin, eine Datei zu verwenden. Hierbei wird in einer Endlosschleife immer überprüft, ob eine bestimmte Bedingung erfüllt ist. Je nach Bedingung wird dann ein entsprechendes Script ausgeführt. Hierzu werden alle Scripts aus einem Haupt-Script gesteuert. Was Sie dabei alles überprüfen, bleibt Ihnen überlassen. Häufig verwendet werden die Existenz, das Alter, die Größe oder die Zugriffsrechte auf eine Datei. Eben alles, was sich mit dem Kommando test realisieren lässt. Natürlich sollten Sie in einem Haupt-Script weiterhin die Steuerung übernehmen. Ein Beispiel:
> # Name: mainscript FILE=tmpfile.tmp rm $FILE touch $FILE while true do # Ist die Datei lesbar if [ -r $FILE ] then echo "Datei wird gelesen ..." sleep 1 #./script_zum_Lesen # Freigeben zum Schreiben chmod 0200 $FILE; fi if [ -w $FILE ] then echo "Datei ist bereit zum Schreiben ..." sleep 1 #./script_zum_Schreiben # Freigeben zum Lesen chmod 0400 $FILE fi sleep 1 done
Hier wird in einer Endlosschleife immer überprüft, ob eine Datei lesbar oder schreibbar ist und dann eben entsprechende Aktionen ausgeführt. Selbiges könnten Sie übrigens auch ohne eine extra Datei mit einer globalen Variablen erledigen. Sie überprüfen in einer Endlosschleife ständig den Wert der globalen Variablen und führen entsprechende Aktionen aus. Nach der Ausführung einer Aktion verändern Sie die globale Variable so, dass eine weitere Aktion ausgeführt werden kann.
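Eine solche Steuerung über eine Variable könnte, stark vereinfacht, etwa so aussehen (die Namen sind frei gewählt; die auskommentierten Scriptaufrufe stehen stellvertretend für die eigentlichen Aktionen):

> # Name: mainscript2 (vereinfachte Skizze)
> status="lesen"
> while true
> do
>    if [ "$status" = "lesen" ]
>    then
>       echo "Daten werden gelesen ..."       # hier z. B. ./script_zum_Lesen
>       status="schreiben"
>    else
>       echo "Daten werden geschrieben ..."   # hier z. B. ./script_zum_Schreiben
>       status="lesen"
>    fi
>    sleep 1
> done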
## 8.7 Jobverwaltung
Mit dem Job-Control-System können Sie Prozesse, die im Hintergrund laufen, in den Vordergrund holen. Dadurch können Sie Prozesse, die eventuell im Hintergrund »Amok« laufen (und die kompletten Ressourcen aufbrauchen) oder etwa auf eine Eingabe von stdin warten, hervorholen und unterbrechen oder gegebenenfalls wieder in den Hintergrund schicken und weiterlaufen lassen. Sie kennen das ja mittlerweile zuhauf: Wenn Sie einen Hintergrundprozess gestartet haben, wird die PID und eine Nummer in eckigen Klammern angezeigt. Bei dieser Nummer handelte es sich stets um die Jobnummer.
> you@host > sleep 10 & [1] 5915 you@host > sleep 5 & [2] 5916
Hier wurden mit sleep also zwei Prozesse in den Hintergrund geschickt. Der Prozess mit der Nummer 5915 hat hier die Jobnummer 1 und 5916 die Nummer 2. Um sich jetzt einen Überblick zu allen sich im Hintergrund befindlichen Prozessen zu verschaffen, können Sie das Kommando jobs verwenden:
> you@host > jobs [1]- Running sleep 10 & [2]+ Running sleep 5 &
Das Kommando jobs zeigt Ihnen neben den Prozessen, die im Hintergrund laufen, auch den aktuellen Zustand an (hier mit Running). Ein Beispiel:
> you@host > du / ... ... (Strg)+(Z) [1]+ Stopped du /
Der Prozess hat Ihnen einfach zu lange gedauert, und Sie haben ihm kurzerhand mit (Strg)+(Z) das Signal SIGTSTP geschickt, womit der Prozess angehalten wurde. Jetzt haben Sie einen gestoppten Prozess in Ihrer Prozessliste:
> you@host > ps -l | grep du +0 T 1000 6235 3237 0 78 0 - 769 pts/40 00:00:00 du
Anhand des T (für stopped) können Sie den Prozess erkennen. Auch ein Blick mit jobs zeigt Ihnen den Zustand des gestoppten Prozesses an:
> you@host > jobs [1]+ Stopped du /
Hinweis   Natürlich hat man gerade bei einem Hintergrundprozess keine Möglichkeit mehr, mit der Tastatur das Signal SIGTSTP (mit (Strg)+(Z)) an den laufenden Hintergrundprozess zu schicken, weil hier ja die Standardeingabe nach /dev/null umgelenkt wird. Hierbei können Sie von der Kommandozeile das Signal mittels kill an den entsprechenden Prozess schicken, sofern Sie diesen nicht gleich ganz beenden wollen.

Tabelle 8.1   Kommandos zur Jobverwaltung

| Job-Befehl | Bedeutung |
| --- | --- |
| fg %jobnr | Holt einen Job in den Vordergrund, wo dieser weiterläuft |
| bg %jobnr | Schiebt einen angehaltenen Job in den Hintergrund, wo dieser weiterläuft |
| (Strg)+(Z) | Hält einen im Vordergrund laufenden Prozess an (suspendiert diesen) |
| kill %jobnr | Beendet einen Job |
| kill -SIGCONT %jobnr | Setzt die Ausführung eines angehaltenen Prozesses fort (egal, ob im Hinter- oder Vordergrund) |
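Den oben gestoppten du-Prozess könnten Sie etwa mit bg im Hintergrund weiterlaufen lassen; die folgende Sitzung ist nur beispielhaft angedeutet:

> you@host > bg %1
> [1]+ du / &
> you@host > jobs
> [1]+ Running                 du / &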
Das folgende Script soll Ihnen den Sinn von fg näher demonstrieren.
> # Name: ascript echo "Das Script wird gestartet ..." printf "Warte auf Eingabe: " read input echo "Script ist fertig ..."
Das Script bei einer Ausführung im Hintergrund:
> you@host > ./ascript & [1] 7338 [1]+ Stopped ./ascript you@host > fg % ./ascript Das Script wird gestartet ... Warte auf Eingabe: Hallo Script ist fertig ... you@host

Hier hat das Script von selbst angehalten, weil sich darin eine Eingabeaufforderung mittels read befand. Deshalb wurde das Script mittels fg in den Vordergrund geholt, um dieser Aufforderung nachzukommen. Anschließend wurde das Script weiter abgearbeitet.
## 8.8 Shellscripts zeitgesteuert ausführen
Als Systemadministrator oder Webmaster werden Sie zwangsläufig mit folgenden Wartungstätigkeiten konfrontiert:

  * Sichern von Daten (Backup)
  * »Rotieren« von Logfiles
Solange Sie auf einem Einzelplatz-Rechner arbeiten, können Sie hin und wieder solche Scripts, die diese Aufgaben erledigen, selbst von Hand starten (wobei dies auf Dauer auch sehr mühsam erscheint). Allerdings hat man es in der Praxis nicht nur mit einem Rechner zu tun, sondern mit einer ganzen Horde. Sie werden sich wohl kaum auf einem Dutzend Rechner einloggen, um jedes Mal ein Backup-Script von Hand zu starten.
Für solche Aufgaben wurden unter Linux/UNIX Daemons (Abkürzung für disk and execution monitors) eingeführt. Daemons (auch gern als Dämonen oder Geister bezeichnet) sind Prozesse, die zyklisch oder ständig im Hintergrund ablaufen und auf Aufträge warten. Auf Ihrem System laufen wahrscheinlich ein gutes Dutzend solcher Daemons. Die meisten werden beim Start des Systems vom Übergang in den Multi-User-Modus automatisch gestartet. In unserem Fall geht es vorwiegend um den cron-daemon.
»cron« kommt aus dem Griechischen (chronos) und bedeutet Zeit. Bei diesem cron-daemon definieren Sie eine Aufgabe und delegieren diese dann an den Daemon. Dies realisieren Sie durch einen Eintrag in crontab, einer Tabelle, in der festgelegt wird, wann welche Jobs ausgeführt werden sollen. Es ist logisch, dass der Rechner (und der cron-daemon) zu der Zeit laufen muss, zu der ein entsprechender Job ausgeführt wird. Dass dies nicht immer möglich ist, leuchtet ein (besonders bei einem Heimrechner). Damit aber trotzdem gewährleistet wird, dass die Jobs in der Tabelle crontab ausgeführt werden, bieten einige Distributionen zusätzlich den Daemon anacron an. Dieser tritt an die Stelle von cron und stellt sicher, dass die Jobs regelmäßig ausgeführt werden – auch dann, wenn der Rechner zum geplanten Zeitpunkt nicht eingeschaltet war; versäumte Jobs werden beim nächsten Start nachgeholt.
crond macht also in der Praxis nichts anderes, als in bestimmten Zeitintervallen (Standard eine Minute) die Tabelle crontab einzulesen und entsprechende Jobs auszuführen. Somit müssen Sie, um den cron-daemon zu verwenden, nichts anderes tun, als in crontab einen entsprechenden Eintrag zu hinterlegen. Bevor Sie erfahren, wie Sie dies praktisch anwenden, muss ich noch einige Anmerkungen zu Ihren Scripts machen, die mit dem cron-daemon gestartet werden sollen.
Wenn Sie ein Script mit dem cron-daemon starten lassen, sollten Sie sich immer vor Augen halten, dass Ihr Script keine Verbindung mehr zum Bildschirm und zur Tastatur hat. Damit sollte Ihnen klar sein, dass eine Benutzereingabeaufforderung mit read genauso sinnlos ist wie eine Ausgabe mittels echo. In beiden Fällen sollten Sie die Ein- bzw. Ausgabe umlenken. Die Ausgaben werden meist per Mail an den Eigentümer der crontab geschickt.
Ebenso sieht dies mit den Umgebungsvariablen aus. Sie haben in den Kapiteln zuvor recht häufig die Umgebungsvariablen verändert und verwendet, allerdings können Sie sich niemals zum Zeitpunkt der Ausführung eines Scripts, das vom cron-daemon gestartet wurde, darauf verlassen, dass die Umgebung derjenigen entspricht, wie sie vielleicht beim Testen des Shellscripts vorlag. Daher werden auch in der crontab-Datei entsprechende Umgebungsvariablen belegt. Wird ein cron-Job ausgeführt, wird die Umgebung des cron-daemons verwendet. LOGNAME (oder auch USER) wird auf den Eigentümer der crontab-Datei gesetzt und HOME auf dessen Verzeichnis, so wie dies bei /etc/passwd der Fall ist. Allerdings können Sie nicht alle Umgebungsvariablen in der crontab-Datei neu setzen. Was mit SHELL und HOME kein Problem ist, ist mit LOGNAME bzw. USER nicht möglich. Sicherstellen sollten (müssen) Sie auch, dass PATH genauso aussieht wie in der aktuellen Shell – sonst kann es passieren, dass einige Kommandos gar nicht ausgeführt werden können. Programme und Scripts sollten bei cron-Jobs möglichst mit absoluten Pfaden angegeben werden, um den etwaigen Problemen mit PATH vorzubeugen.
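In der Praxis setzt man deshalb am Anfang der crontab-Datei häufig die wichtigsten Variablen selbst und ruft Scripts mit absoluten Pfaden auf, etwa so (Pfade und Scriptname hier nur beispielhaft):

> SHELL=/bin/bash
> PATH=/usr/local/bin:/usr/bin:/bin
> # Backup-Script jede Nacht um 3 Uhr mit absolutem Pfad starten
> 0 3 * * * /home/you/bin/backup.sh >> /home/you/backup.log 2>&1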
If you want to know whether your script is currently running in the background or not (because of input and output), you can check this inside the script with tty. If the script is not running in the background, tty prints /dev/tty or something similar. If the script does run in the background, there is no screen to print to and tty returns a non-zero exit status such as 1. Of course you should perform this check silently, so that no disturbing output appears on the screen in the worst case; you can do this with the option -s (silent):
> you@host > tty /dev/pts/39 you@host > tty -s you@host > echo $? 0
You can likewise check the shell variable PS1, which is normally only set in an interactive shell.
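As a small illustration (a sketch, not taken from the book), a script can use tty -s to decide whether it may prompt the user at all:

```bash
# Sketch: only prompt when a terminal is attached; under cron, fall
# back to a default answer instead of blocking on read.
if tty -s
then
    read -p "Continue? (y/n) " answer   # interactive run
else
    answer=y                            # cron run: no terminal, no prompt
fi
echo "answer=$answer"
```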
The cron daemon stores its entries in a crontab file. The system-wide crontab is usually located in the /etc directory; however, these are root's cron jobs. Every other user on the system has their own crontab file as well, which is normally kept under the directory /var/spool/cron/.
To modify your crontab file, a simple crontab with the option -e (edit) is enough. Normally the crontab file is opened in the default editor (usually vi). If you want to use another editor (nano, pico, joe, ...), you have to change and export the variable VISUAL or EDITOR (export VISUAL='editor_of_your_choice'). The file that opens in vi is usually empty the first time a normal user runs crontab.
So what do you write into such a crontab file? The jobs, of course, using the following simple syntax:
Minutes | Hours | Days | Months | Weekdays | [User] | Command |
| --- | --- | --- | --- | --- | --- | --- |
0-59 | 0-23 | 1-31 | 1-12 | 0-7 | Name | command_or_script |
In practice, each line consists of 6 (or 7) columns that define one job. The columns are separated by spaces or tabs. The first column holds the minutes (0-59), the second the hours (0-23), the third the days (1-31), the fourth the months (1-12), the fifth the weekdays (0-7, where both 0 and 7 stand for Sunday), and the sixth column holds the command or script that is to be executed at the given time. The syntax above also shows a »User« column, which is omitted for a normal user: it is reserved for root, who can use it to set up cron jobs for specific users. A line (job) in a crontab file therefore consists of six columns for a normal user and of seven columns for root.
You can of course add comments to the crontab file; as usual in shell scripts, they are introduced with #. For the time fields you can also use an asterisk (*) instead of a number, which stands for »first-last«, in other words »always«. The following entry would therefore run the script »meinscript« every minute:
> # run the script every minute * * * * * $HOME/meinscript
Ranges of numbers are also allowed: a range is written as two numbers separated by a hyphen and includes both endpoints. The following entry therefore runs the script »meinscript« every day at 10, 11, 12, 13 and 14 o'clock:
> 0 10-14 * * * $HOME/meinscript
Lists can be used as well, with numbers or ranges of numbers separated by commas:
> 0 10-12,16,20-23 * * * $HOME/meinscript
Here the script »meinscript« would run every day at 10, 11, 12, 16, 20, 21, 22 and 23 o'clock. No spaces may appear inside the list, because the space is the separator for the next field.
Jobs in fixed steps are possible, too. Instead of spelling out »every 4 hours« as
> 0 0,4,8,12,16,20 * * * $HOME/meinscript
you can shorten this to 0-23/4 or */4:
> 0 */4 * * * $HOME/meinscript
Here are a few more examples for better understanding:
> # run the script meinscript every day at 11 o'clock 0 11 * * * $HOME/meinscript # run the script meinscript every Tuesday and Friday at 23 o'clock 0 23 * * 2,5 $HOME/meinscript # run the script meinscript every second day at 23 o'clock 0 23 * * 0-6/2 $HOME/meinscript # careful: this script runs on the 15th of every month AND every Saturday at 23 o'clock 0 23 15 * 6 $HOME/meinscript
Over time your crontab file will usually grow. If you use crontab for remote maintenance, you can have the output of your scripts sent to you by e-mail. To do so, set the variable MAILTO (which usually defaults to user@host) in the crontab to your e-mail address. An empty MAILTO ("") switches this off again. It can also make sense to change SHELL, since the Bourne shell is often used by default. Because the Korn shell and Bash are frequently superior when it comes to builtins and a few other features, you should enter the appropriate shell here, otherwise your scripts might not run correctly. Under Linux this is, as mentioned several times before, a secondary concern, because there is no real Bourne shell there and everything is a link to Bash. As an example, let's create a simple crontab file:
> you@host > crontab -e ### ---vi starts--- ### SHELL=/bin/ksh MAILTO=tot # Alle zwei Minuten "Hallo Welt" an die Testdatei hängen */2 * * * * echo "Hallo Welt" >> $HOME/testdatei ###--- save and quit (:wq)---### crontab: installing new crontab you@host
If you want an overview of all your cron jobs, simply call crontab with the option -l (list). You can delete the complete table with the option -r (remove).
> you@host > crontab -l # DO NOT EDIT THIS FILE - edit the master and reinstall. # (/tmp/crontab.8999 installed on Mon Mar 28 19:07:50 2005) # (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $) SHELL=/bin/bash MAILTO=tot # Alle zwei Minuten "Hallo Welt" an die Testdatei hängen */2 * * * * echo "Hallo Welt" >> $HOME/testdatei you@host > crontab -r you@host > crontab -l no crontab for you
Invocation | Meaning |
| --- | --- |
crontab -e | Edit the crontab file |
crontab -l | List all cron jobs |
crontab -r | Delete all cron jobs |
# 8.9 Start-Up and Profile Files of the Shell
### 8.9.1 Types of Initialization Files
### 8.9.2 Executing Profile Files When Starting a Login Shell
When executing profile files, the shells partly go their own ways (apart from the system-wide initialization file /etc/profile), so the individual shells are covered separately here.
# System-wide settings (Bourne shell, Korn shell and Bash)
When an interactive login shell starts, the system-wide profile file /etc/profile is executed. These system-wide settings cannot be edited by a normal user. Here the system administrator can, for example, define additional shell variables or override environment variables. Usually /etc/profile itself calls further initialization files.
# User-specific settings
Once the system-wide configuration files have been processed, you have the opportunity to adapt the environment to your own needs (or for every user on the system). This is where the shells part ways, so each of them has to be covered separately in the following.
# Bourne shell
In the Bourne shell, the local per-user configuration file .profile (in the user's home directory, $HOME/.profile) is read and interpreted for an interactive login shell. For the Bourne shell the login process is then complete (see figure 8.1).
# Bash
In the Bash, the user's home directory is first searched for the file .bash_profile; .bash_profile is the local per-user configuration file for an interactive login shell in the Bash. If .bash_profile does not exist, the file .bash_login (also in the home directory) is looked for. If neither a .bash_profile nor a .bash_login file can be found, the Bash falls back to .profile, just like the Bourne shell, and executes it (see figure 8.2). As an alternative to .bash_profile, many systems also use the file .bashrc (more on .bashrc shortly).
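A common arrangement, shown here only as a sketch and not prescribed by the text, is to have the login shell source .bashrc, so that login and non-login shells end up with the same aliases and functions:

```bash
# Sketch of a typical ~/.bash_profile: pull in the interactive settings
# from ~/.bashrc and keep only login-specific settings here.
if [ -f "$HOME/.bashrc" ]
then
    . "$HOME/.bashrc"
fi

# Login-only settings, for example extending PATH:
PATH="$HOME/bin:$PATH"
export PATH
```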
# Korn shell
### 8.9.3 Executing Profile Files When Starting a Non-Login Shell (Bash and Korn Shell)
Especially Linux users who have read the section above on the profile files of their beloved Bash will wonder where those files actually are. On a typical home installation the user often only finds the file .profile. An entry in this file quickly shows that nothing happens in an »xterm«. As a test you can add a simple export statement to the profile files and print the variable with echo. Only when a real login shell is opened (e.g. with (Ctrl)+(Alt)+(F1)) and the user logs in there do the entries in .profile take effect. This is no surprise, because everything described above applies to a real login shell (the topic was already covered in section 1.9.2).
As soon as you open a pseudo terminal (pts or ttyp), however, you no longer have a login shell but a new subshell (or rather an interactive shell). You start a subshell ...
So the Bash, too, needs a start-up file, just as the Korn shell has .kshrc by default. The reason is the same as with the Korn shell: aliases, shell options and functions cannot be exported, so the profile files have no effect when a subshell is started. Put simply: if there were no additional start-up file, the subshell would have none of the aliases, shell options and functions that you may otherwise use all the time.
In the Korn shell, as you already know, the start-up file is .kshrc, or whatever file is stored in the variable ENV. The Bash goes the same way and usually executes the file .bashrc in the user's home directory. In the Bash, however, there are two cases. If a new subshell is started in interactive mode (simply by calling bash from your working shell), the file .bashrc is executed. If, on the other hand, a new script is started, the file stored in the environment variable BASH_ENV is executed. Unless you set BASH_ENV yourself to run a different file, it is usually simply set to .bashrc as well.
# Exceptions
There are, however, two exceptions in which a subshell is created but the start-up file .bashrc or .kshrc is not executed: when a subshell is started between parentheses ( ... ) (see section 8.5) and when a command substitution is used. In both cases the subshell receives an exact copy of the parent shell with all its variables, and of course no start-up file is run either. No start-up file (and no subshell) is involved either when you run a script in the current shell with the dot command.
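A tiny sketch (not from the book) makes the difference visible: the ( ... ) subshell and the command substitution see even unexported variables, while a separately started shell does not.

```bash
# Sketch: ( ... ) subshells and command substitutions get an exact copy
# of the parent shell and read no start-up file.
var="not exported"
( echo "in the ( ... ) subshell: $var" )            # value is visible
echo "in the command substitution: $(echo "$var")"  # value is visible

# A separately started bash does not see the unexported variable:
bash -c 'echo "in a new bash: ${var:-<empty>}"'
```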
### 8.9.4 Summary of All Profile and Start-Up Files
Table 8.4 finally gives a short overview of the files (and variables) that matter for the initialization of shells and shell scripts.
# 8.10 A Shell Script at Execution Time
## 8.10 A Shell Script at Execution Time
This section briefly summarizes how a shell script works. Normally you start a shell script in such a way that the current shell opens a new subshell, and this subshell is responsible for executing the script. If the script is not started in the background, the current shell waits for the subshell to finish; with a script started in the background, the current shell is available again immediately. If, on the other hand, the script is started with the dot command, no subshell is created and the running shell itself is responsible for executing the script. Roughly speaking, the execution of a script happens in three steps:
### 8.10.1 Syntax Check
During the syntax check the shell script is read line by line, and every single line is checked for correct syntax. A syntax error occurs when a keyword, or a sequence of keywords and further tokens, does not follow the rules.
### 8.10.2 Expansions
Next comes a series of different expansions:
* alias and tilde expansion
* variable interpolation
* arithmetic evaluation (Bash and Korn shell only)
### 8.10.3 Commands
Finally the commands are executed. Here, too, a particular order is observed:
1. shell builtin commands (shell functions)
2. functions
3. an external command (found via PATH) or a binary program
And of course further shell scripts can be started from within a script (see figure 8.4).
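If you want to see for yourself what a given name resolves to before it is executed, the type builtin is handy. This is only a small sketch; the alias and the function are made-up examples:

```bash
# Sketch: inspect what a name resolves to (alias, function, builtin,
# or external file in PATH). 'type -a' lists every match it finds.
lsall() { ls -al "$@"; }     # example function
alias ll='ls -l'             # example alias

type -a lsall
type -a ll
type -a echo                 # usually a builtin and /bin/echo
type -a find                 # an external command found via PATH
```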
# 8.11 Optimizing Shell Scripts
## 8.11 Optimizing Shell Scripts
This heading is a little misleading. Unlike with other programming languages, it is hard to optimize a shell script, i.e. an interpreted language, at run time. Still, there are a few pieces of advice you can follow so that your scripts run at least twice and sometimes up to ten times as fast.
Whenever possible, use the shell's builtin commands (see appendix A for an overview) instead of external commands, because builtins can always be interpreted considerably faster than external commands (they are built into the shell). This leads directly to the recommendation to prefer the Korn shell or Bash over the Bourne shell where possible, since the »more modern« shells offer far more builtin commands than the Bourne shell does.
Furthermore, when choosing a command you should always go for the simpler variant. Reading data with cat, for example, is almost ten times as fast as reading it with awk and a hundred times (!) faster than reading it with read. It pays to run a comparison now and then and test the performance of the individual commands.
As a rule, reading a large amount of data with read should be avoided, especially in a loop. read is excellent for user input and perhaps for a small file, but otherwise commands such as cat, grep, sed or awk are preferable. You should also ask yourself whether you really need all the data contained in a very large file. If not, filter out the lines you actually need with grep, sed or awk and store them in a temporary file for later processing. Likewise, when you find yourself calling awk inside a loop, consider writing a complete awk script instead, so that awk does not have to be started anew on every iteration. A small comparison sketch follows the summary below.
All the goodies summarized:
* prefer builtin commands over external commands
* prefer the Korn shell and Bash over the Bourne shell
* avoid read for reading larger amounts of data
* filter only the data you actually need out of a file
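To illustrate the read-versus-filter advice, here is a small comparison sketch (the log file name is just an example); wrapping each variant in time quickly shows the difference on your own system.

```bash
# Sketch: count lines containing "error" in a large file.
FILE=/var/log/messages      # example file, adjust as needed

# Variant 1: a pure shell read loop (slow for big files).
count=0
while read -r line
do
    case $line in
        *error*) count=$((count+1)) ;;
    esac
done < "$FILE"
echo "read loop: $count"

# Variant 2: one external grep call does the same work much faster.
echo "grep -c  : $(grep -c error "$FILE")"
```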
# 9.2 xargs
## 9.2 xargs
Under Linux/UNIX almost every command can be applied to a single file or to a whole list of files. When this is not possible, or when the file list cannot be built with a wildcard, you can use the command xargs. xargs expects a command as its parameter, which is then applied to all files of a list read from standard input. The syntax:
> kommando1 | xargs kommando2
In effect you call »kommando2« with the arguments that »kommando1« writes to standard output. The principle is quite simple. The following script uses find in a loop to look for all files ending in ».tmp« in the current working directory (including subdirectories).
> # Name: aremover1 for var in `find . -name "*.tmp"` do rm $var done
The shell guru will immediately object and point out that this can be done even shorter:
> find . -name "*.tmp" -exec rm {} \;
No objection here; both solutions are probably equally bad as far as run time is concerned. So what is wrong with this example? Imagine your script finds 1000 files ending in ».tmp« in the current working directory. Then the rm command is started once for every single match, so 1000 processes are started and terminated one after the other. On a multi-user machine this can be quite a load. It would clearly be better to pass all these files to rm as arguments and thus call rm only once. And that is exactly what xargs is for. Using xargs, the call looks like this:
> you@host > find . -name "*.tmp" -print | xargs rm
Now all the listed files are passed to xargs through the pipe at once; xargs collects these arguments in a list and uses them for a single rm call. Unfortunately, problems arise as soon as a file name contains spaces, which is frequently the case with MP3 music files and files coming from MS Windows, because xargs splits its input at spaces.
For this you can call xargs with the switch -0; the input is then no longer split at spaces but at a binary zero. That still leaves the corresponding problem on the find side, which you avoid with the switch -print0. With the following command even files with a space in their name are removed:
> you@host > find . -name "*.tmp" -print0 | xargs -0 rm
But the lists read from standard input do not always belong at the end of the Linux/UNIX command, as they do with rm. The syntax for moving files with mv, for example, is:
> mv Dateiname Zielverzeichnis
For such cases you can use the string »{}« as a placeholder for the file names, provided you tell xargs about it with the option -I. If, as in the example above, you do not want to delete the files ending in ».tmp« but first move them into a separate directory to be safe, the placeholder lets you do it like this:
> you@host > find . -name "*.tmp" -print0 | \ > xargs -0 -I{} mv {} $HOME/backups
Of course a lot more can be done with xargs and its options, so reading the manual page is highly recommended.
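Two more typical uses, shown only as a sketch (GNU find and xargs options assumed):

```bash
# Sketch: run grep over many files with as few grep processes as possible.
find . -name "*.sh" -print0 | xargs -0 grep -l 'TODO'

# Sketch: limit the number of arguments per call, e.g. at most 50 files
# per rm invocation.
find . -name "*.tmp" -print0 | xargs -0 -n 50 rm
```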
# 9.3 dirname and basename
## 9.3 dirname and basename
With the command dirname you can strip the file name from a path and keep only the directory part, and with basename, its counterpart, you get the pure file name without the path. The syntax:
> basename Dateiname [Suffix] dirname Dateiname
If you also specify a »suffix« with basename, an existing file extension can be removed as well. A simple example:
> # Name: abasedir echo "Scriptname : $0" echo "basename : `basename $0`" echo "dirname : `dirname $0`" # ... or strip the extension basename $HOME/Kap005graf.zip .zip basename $HOME/meinText.txt .txt
The script in action:
> you@host > $HOME/abasedir Scriptname : /home/tot/abasedir basename : abasedir dirname : /home/tot Kap005graf meinText
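As an aside (a sketch that goes beyond the book's example), Bash and the Korn shell can produce the same results with parameter expansion alone, without calling basename or dirname at all:

```bash
# Sketch: dirname/basename via parameter expansion (bash/ksh).
file=/home/tot/meinText.txt
echo "dirname : ${file%/*}"      # /home/tot
echo "basename: ${file##*/}"     # meinText.txt
name=${file##*/}
echo "no ext  : ${name%.txt}"    # meinText
```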
# 9.4 umask
## 9.4 umask
The command umask is used to restrict the permissions (read (r), write (w) and execute (x)) for the owner (the first three rwx), the group (the next three rwx) and all others (the last three rwx), because by default files would be created with the permissions rw-rw-rw- (octal 666) and directories with rwxrwxrwx (octal 777).
> umask MASKE
With umask you restrict these permissions by having a corresponding umask value subtracted from the default. Note that no digit of the umask may exceed 7. If, for example, a umask of 026 is set, creating a file works out as follows:
> File : 666 umask : 026 --------------- Permissions : 640
The file is effectively created with the permissions 640 (rw-r-----). Please also note that each octal digit is only ever masked within its own position. A umask of 027 therefore does not mean that the remainder of 6 minus 7 is carried over from the »others« digit into the group digit: with a umask of 027 the file is created with exactly the same permissions as with 026, because the mask is applied bitwise and the extra bit only concerns the execute permission, which ordinary files do not get anyway.
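A tiny sketch (Bash arithmetic) makes this bitwise behaviour visible:

```bash
# Sketch: the umask is applied bitwise (mode AND NOT umask), which is
# why 027 yields the same file mode as 026 for ordinary files.
for mask in 022 026 027 077
do
    mode=$(( 0666 & ~0$mask ))
    printf "umask %s -> file mode %03o\n" "$mask" "$mode"
done
```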
By default, umask is set to 0022 on many distributions; meanwhile there are even distributions that ask the user for a umask value during installation. The value can of course also be changed at run time on a per-user basis:
> you@host > umask 0022 you@host > touch afil1 you@host > ls -l afil1 -rw-r--r-- 1 tot users 0 2005-03-31 07:51 afil1 you@host > umask 0177 you@host > umask 0177 you@host > touch afil2 you@host > ls -l afil2 -rw------- 1 tot users 0 2005-03-31 07:52 afil2
Note: if you change the umask before creating a directory, make sure the directory still receives the execute permission (x), otherwise you will no longer be able to enter it.
Changing the umask, however, only applies to the session of the running shell (or of the running pseudo terminal). If you want to change the mask permanently, you have to edit the appropriate profile or start-up file. If you do not find an entry anywhere else, the entry in /etc/profile applies (on FreeBSD /etc/login.conf):
> you@host > grep umask /etc/profile umask 022
Apart from that it depends on whether you use a real login shell (.profile) or an »xterm« with the Bash (e.g. .bashrc) or the Korn shell (.kshrc).
# 9.5 ulimit (Builtin)
## 9.5 ulimit (Builtin)
With ulimit you can display the value of a resource limit or set it anew. The syntax:
> ulimit [Optionen] [n]
If n is given, the resource limit is set to n. You can set either hard (-H) or soft (-S) limits. By default ulimit sets both limits or displays the soft limit. With »options« you specify which resource is to be handled. Table 9.1 lists the options that can be used here.
Option | Meaning |
| --- | --- |
-H | Hard limit. Every user may lower a hard limit, but only privileged users may raise it. |
-S | Soft limit. It must lie below the hard limit. |
-a | Prints all current limits |
-c | Maximum size of core dumps (core file) |
-d | Maximum size of a data segment or heap in kilobytes |
-f | Maximum size of files that may be created (default option) |
-m | Maximum size of physical memory in kilobytes (Bash and Korn shell only) |
-n | Maximum number of file descriptors (plus 1) |
-p | Size of the pipe buffer (Bash and Korn shell only, usually 512-byte units) |
-s | Maximum size of a stack segment in kilobytes |
-t | Maximum CPU time in seconds |
-u | Maximum number of user processes |
-v | Maximum size of virtual memory in kilobytes |
You can get an overview of the limits set on your system with ulimit and the option -a:
> you@host > ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 stack size (kbytes, -s) unlimited cpu time (seconds, -t) unlimited max user processes (-u) 2046 virtual memory (kbytes, -v) unlimited
The following ulimit values, for example, are popular for making primitive DoS (denial of service) attacks harder (the values naturally also depend on what the machine is used for). The best place for such values is /etc/profile (the other profile and start-up files are also worth considering). So here is an entry for /etc/profile to raise the bar for a DoS attacker:
> # prevent core dumps ulimit -c 0 # do not allow files larger than 512 MB ulimit -f 512000 # soft limit of at most 250 file descriptors ulimit -S -n 250 # soft maximum of 100 processes ulimit -S -u 100 # memory usage at most 50 MB ulimit -H -v 50000 # soft limit for memory usage: 20 MB ulimit -S -v 20000
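The interplay of soft and hard limits can be tried out directly in a shell; this is only a sketch with made-up values, and whether the hard limit can be set depends on the limits already in place on your system:

```bash
# Sketch: soft limits may be raised again as long as they stay below
# the hard limit; a hard limit can only be lowered by a normal user.
ulimit -S -n 256      # set the soft limit for open files
ulimit -H -n 1024     # set the hard limit for open files
ulimit -S -n 512      # allowed: still below the hard limit
ulimit -S -n          # show the current soft value
ulimit -H -n          # show the current hard value
# ulimit -H -n 2048   # would fail for a non-privileged user
```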
# 9.6 time
## 9.6 time
With the command time you can measure how long a script or a command takes to execute.
> time Kommando
An example:
> you@host > time sleep 3 real 0m3.029s user 0m0.002s sys 0m0.002s you@host > time find $HOME -user tot ... real 0m1.328s user 0m0.046s sys 0m0.113s
The first time (real) shows the complete wall-clock time from the start to the end of the script or command. The second time (user) is the time the script or command spent in user mode, and the third (sys) is the CPU time the command or script needed in kernel mode (disk access, system calls and so on). user and sys together give the total CPU time consumed; the real time additionally includes all waiting, which is why, as the sleep example shows, it can be much larger than user plus sys.
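As a sketch, time is also handy for the kind of comparison suggested in section 8.11, for example the builtin echo against the external /bin/echo:

```bash
# Sketch: comparing a builtin with an external command using time.
time for i in $(seq 1 1000); do echo "$i" > /dev/null; done       # builtin echo
time for i in $(seq 1 1000); do /bin/echo "$i" > /dev/null; done  # external echo
```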
# 9.7 typeset
## 9.7 typeset
The command typeset has already been used several times. Nevertheless it should be mentioned here once more together with its options. You know from chapter 2, Variables, that a variable is initially always of type string, i.e. a sequence of characters, no matter whether you store numbers in it or not.
> typeset [option] [variable] [=value]
typeset defines the property »option« for the variable »variable« and, if a value was given, assigns it right away. The Bash also offers the command declare, which serves the same purpose as typeset. declare does not exist in the Korn shell, so for compatibility reasons alone typeset should be preferred. To set a property for a variable you use the minus sign. For example, with
> typeset -i var=1
you define the variable »var« as an integer variable. You can switch this off again with the plus sign:
> typeset +i var
After this command »var« is treated like any other normal variable again and is no longer an integer. Table 9.2 lists the options available here.
Option | Bash | ksh | Meaning |
| --- | --- | --- | --- |
a | X |  | Array |
i | X | x | Integer variable |
r | X | x | Constant (read-only variable) |
x | X | x | Export the variable |
f | X | x | Displays functions together with their definition |
fx | X | x | Exports a function |
+f |  | x | Displays functions without their definition |
F | X |  | Displays functions without their definition |
fu |  | x | Declares functions for the autoload mechanism |
l |  | x | Converts the variable's content to lowercase |
u |  | x | Converts the variable's content to uppercase |
Ln |  | x | Left-justified variable of length n |
Rn |  | x | Right-justified variable of length n |
Zn |  | x | Right-justified variable of length n; empty space is padded with zeros |
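A few of these properties in action, shown as a sketch (Bash syntax; the Korn shell behaves the same for these options):

```bash
# Sketch: typeset properties in practice.
typeset -i num=10            # integer: assignments are evaluated arithmetically
num=num+5
echo "$num"                  # 15

typeset -r const=42          # read-only: a later assignment produces an error

typeset -x exported="visible in child processes"
bash -c 'echo "$exported"'
```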
# 10.2 Types of Errors
# 10.3 Troubleshooting
## 10.3 Troubleshooting
Once an error has occurred, the shell offers you several options that help with tracking it down. But whatever the error is, the first thing you should do is read the error message and make sure you understand it. That sounds obvious, yet questions keep coming up about why this or that goes wrong although the answer is already spelled out in the error message. Whoever has worked through this book, typed in the scripts and tried them out clearly has an advantage here: through trial and error they have produced plenty of mistakes, but they have also learned which error message is printed when.
### 10.3.1 Tracing with set -x
The trace mode (trace = follow, examine), which is switched on with set -x, has been used several times in this book already, for instance to see how a script or its commands actually run. The option set -x was covered in detail in section 1.8.9. It still needs to be mentioned that the trace option is only active in the current shell and its subshells. Suppose, for example, you want to know what happens in the following script:
# Name: areweroot if [ $UID = 0 ] then echo "Wir sind root!" renice -5 $$ else echo "Wir sind nicht root!" su -c 'renice -5 $$' fi
If you want to trace this script, it is not enough to simply switch the option on before calling it:
you@host > ./areweroot + ./areweroot Wir sind nicht root! Password:******** 8967: Alte Priorität: 0, neue Priorität: -5
This is a common misunderstanding. You have to set the option at the appropriate place (or at the beginning of the script) itself:
# Name: areweroot2 # switch on trace mode set -x if [ $UID = 0 ] then echo "Wir sind root!" renice -5 $$ else echo "Wir sind nicht root!" su -c 'renice -5 $$' fi
The script in action:
you@host > ./areweroot2 + ./areweroot ++ '[' 1000 = 0 ']' ++ echo 'Wir sind nicht root!' Wir sind nicht root! ++ su -c 'renice -5 $$' Password:******* 9050: Alte Priorität: 0, neue Priorität: -5 you@host > su Password:******** # ./areweroot2 ++ '[' 0 = 0 ']' ++ echo 'Wir sind root!' Wir sind root! ++ renice -5 9070 9070: Alte Priorität: 0, neue Priorität: -5
Note: alternatively, the option -x can be passed to the script via an explicit shell call, e.g. bash -x ./script or ksh -x ./script, which avoids modifying the script at all. Or you use it in the she-bang line: #!/usr/bin/bash -x.
You will often see several plus signs at the beginning of a line. They indicate how deep the nesting level is in which the line is executed; every additional sign means one level deeper. The following script, for example, uses three nesting levels:
# Name: datum # switch on trace mode set -x datum=`date` echo "Heute ist $datum"
The script in action:
you@host > ./datum +./script1 +++ date ++ datum=Fr Apr 1 10:26:25 CEST 2005 ++ echo 'Heute ist Fr Apr 1 10:26:25 CEST 2005' Heute ist Fr Apr 1 10:26:25 CEST 2005
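In the Bash the trace prefix itself can be customized: PS4 is the string printed in front of every traced line, so a PS4 containing LINENO makes the set -x output even easier to read. A small sketch, which goes beyond the book's example:

```bash
# Sketch (Bash): put the script name and line number into the trace prefix.
export PS4='+ ${BASH_SOURCE##*/}:${LINENO}: '
set -x
datum=$(date)
echo "Heute ist $datum"
set +x
```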
### 10.3.2 The DEBUG and ERR Signals
For the Bash and the Korn shell there is a real debugging alternative in the DEBUG signal; the ERR signal, on the other hand, is reserved for the Korn shell. These signals are used exactly the way you already know from ordinary signals: you install a handler with trap.
trap 'Kommando(s)' DEBUG # Korn shell only trap 'Kommando(s)' ERR
In the following example we have a script that keeps running in an endless loop, but we are too blind to spot the error:
# Name: debug1 val=1 while [ "$val" -le 10 ] do echo "Der ${val}. Schleifendurchlauf" i=`expr $val + 1` done
Now we add a »debugger« to the script with trap:
# Name: debug1 trap 'printf "$LINENO :-> " ; read line ; eval $line' DEBUG val=1 while [ "$val" -le 10 ] do echo "Der ${val}. Schleifendurchlauf" i=`expr $val + 1` done
For the debugging the command eval is normally used, which lets you execute commands inside the script as if they were part of the script itself (see section 9.1).
trap 'printf "$LINENO :-> " ; read line ; eval $line' DEBUG
With DEBUG a DEBUG signal is sent after every statement, and you can trap it to perform a certain action. In the example, the line number in the script is printed first, followed by a prompt. You can then type in a command that is executed with eval (e.g. increase or decrease variables; create, delete, modify, list or check files, and so on). The script in action:
you@host > ./debug1 5 :->(ENTER) 7 :->(ENTER) 9 :->(ENTER) Der 1. Schleifendurchlauf 10 :-> echo $val 1 7 :->(ENTER) 9 :->(ENTER) Der 1. Schleifendurchlauf 10 :-> echo $val 1 7 :->(ENTER) 9 :->(ENTER) Der 1. Schleifendurchlauf 10 :-> val=`expr $val + 1` 7 :-> echo $val 2 9 :->(ENTER) Der 2. Schleifendurchlauf 10 :->(ENTER) 7 :->(ENTER) 9 :->(ENTER) Der 2. Schleifendurchlauf 10 :-> exit
The error is found: the variable »val« was never incremented, and a look at line 10 shows the following:
i=`expr $val + 1`
Here we used »i« instead of »val«. It is a little annoying, though, that only the line number is printed. With longer scripts it is hard or at least cumbersome to keep track of the line numbers while debugging, so it makes sense to also print the line that is being (or has just been) executed. That is no problem at all: since the DEBUG signal only exists in the Bash and the Korn shell anyway, the complete script can simply be read into an array, with the variable LINENO as the index. The eval line for executing commands goes into a separate function that accepts any number of commands until (ENTER) is pressed.
# Name: debug2 # ------- DEBUG start --------- # # the usual eval function debugging() { printf "STOP > " while true do read line [ "$line" = "" ] && break eval $line printf " > " done } typeset -i index=1 # read the complete script into an array while read zeile[$index] do index=index+1 done<$0 trap 'echo "${zeile[$LINENO]}" ; debugging' DEBUG # ------- DEBUG end --------- # typeset -i val=1 while (( $val <= 10 )) do echo "Der $val Schleifendurchlauf" val=val+1 done
The script in action:
you@host > ./debug2 typeset -i val=1 STOP >(ENTER) while (( $val <= 10 )) STOP > echo $val 1 > val=7 > echo $val 7 >(ENTER) echo "Der $val Schleifendurchlauf" STOP >(ENTER) Der 7 Schleifendurchlauf val=val+1 STOP >(ENTER) while (( $val <= 10 )) STOP >(ENTER) echo "Der $val Schleifendurchlauf" STOP >(ENTER) Der 8 Schleifendurchlauf val=val+1 STOP >(ENTER) while (( $val <= 10 )) STOP >(ENTER) echo "Der $val Schleifendurchlauf" STOP >(ENTER) Der 9 Schleifendurchlauf val=val+1 STOP >exit you@host
Note: the debugging() function can also be added in the Bourne shell or in other shells that do not support the DEBUG signal. In that case you have to insert the function call several times after and/or before the lines where you suspect the script is misbehaving. And of course you should not forget to remove all this from the finished script again.
The Korn shell additionally provides the signal ERR, which you can also catch with trap. Strictly speaking it is more of a pseudo signal: it always refers to the return value of a command, and whenever that value is non-zero the ERR signal is sent. It cannot be used, however, where the exit code of a command is already being tested (e.g. in if, while ...). Here is a simple script that catches the ERR signal:
# Name: debugERR error_handling() { echo "Fehler: $ERRNO Zeile: $LINENO" printf "Beenden (j/n) : " ; read [ "$REPLY" = "j" ] && exit 1 } trap 'error_handling' ERR echo "Testen des ERR-Signals" # this should be forbidden for a normal user cat > /etc/profile echo "Nach dem Testen des ERR-Signals"
The script in action:
you@host > ksh ./debugERR Testen des ERR-Signals Fehler: Permission denied Zeile: 4 Beenden (j/n) : j
Note: the Bash documentation does not mention the ERR signal, but in testing it also works in the Bash; the only difference is that the Bash does not provide the ERRNO variable.
### 10.3.3 Checking Variables and Syntax
To be really sure that you never access an unset variable, you can use set with the option -u. If an undefined variable is then accessed, a corresponding error message is printed. With +u you switch the option off again.
# Name: aunboundvar # do not allow undefined variables set -u var1=100 echo $var1 $var2
The script in action:
you@host > ./aunboundvar ./aunboundvar: line 7: var2: unbound variable
If you do not want to execute a script at all but only have its syntax checked, you can use the option -n. With +n you switch this option off again.
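Both checks in one small sketch (the script name in the last line is just an example):

```bash
# Sketch: set -u combined with a default value, plus a pure syntax check.
set -u
: "${var2:=default}"        # give var2 a value so set -u does not abort
echo "$var2"

# Only check the syntax, execute nothing:
bash -n ./myscript          # example script name
```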
### 10.3.4 Adding Debug Output
The most primitive of all debugging techniques is probably also the most widely used one, and not only in shell programming: wherever you suspect an error, you add an echo statement that reports the state of your data (for example the value of a variable). If the script prints a lot to the screen, you can also sprinkle in a simple read here and there, so that execution pauses until (ENTER) is pressed, or you can temporarily send one or the other distracting output into the data grave /dev/null. A simple example:
# Name: maskieren nobody() { echo "We do not want this output!!!" } echo "Output1" exec 1>/dev/null nobody exec 1>`tty` echo "Output2"
Here we are not interested in the output of the function »nobody« and send standard output into the data grave. Of course you have to undo this again, which is done here by redirecting back to the current terminal (the command tty takes care of that).
### 10.3.5 Debugging Tools
If you are looking for a ready-made debugging tool, the debugger bashdb (http://bashdb.sourceforge.net/) is recommended for the Bash, and kshdb for the Korn shell. The Bash debugger is nothing other than a patched version of the Bash that supports better debugging and improved error reporting.
## 10.3 FehlersucheÂ
Wenn der Fehler aufgetreten ist, dann bietet Ihnen die Shell einige Optionen, die Ihnen bei der Fehlersuche helfen. Aber egal, welche Fehler denn nun aufgetreten sind, als Erstes sollten Sie die Fehlermeldung lesen und auch verstehen können. Plausibel, aber leider werden immer wieder Fragen gestellt, warum dies oder jenes falsch läuft, obwohl die Antwort zum Teil schon eindeutig der Fehlermeldung zu entnehmen ist. Ganz klar im Vorteil ist hier derjenige, der das Buch durchgearbeitet, die Scripts abgetippt und ausprobiert hat. Durch »Trial and error« hat derjenige eine Menge Fehler produziert, aber auch gelernt, wann welche Fehlermeldung ausgegeben wird.
### 10.3.1 Tracen mit set -xÂ
Der Trace-Modus (trace = verfolgen, untersuchen), den man mit set âx setzt, haben Sie schon des Öfteren in diesem Buch eingesetzt, etwa als es darum ging, zu sehen, wie das Script bzw. die Befehle ablaufen. Die Option set âx wurde bereits in Abschnitt 1.8.9 ausführlich behandelt. Trotzdem muss noch erwähnt werden, dass die Verwendung der Trace-Option nur in der aktuellen Shell und den Subshells aktiv ist. Zum Beispiel wollen Sie wissen, was beim folgenden Script passiert:
> # Name: areweroot if [ $UID = 0 ] then echo "Wir sind root!" renice â5 $$ else echo "Wir sind nicht root!" su -c 'renice â5 $$' fi
Wenn Sie das Script jetzt tracen wollen, genügt es nicht, einfach vor seiner Verwendung die Option âx zu setzen:
> you@host > ./areweroot + ./areweroot Wir sind nicht root! Password:******** 8967: Alte Priorität: 0, neue Priorität: â5
Das ist ein häufiges Missverständnis. Sie müssen die Option selbstverständlich an der entsprechenden Stelle (oder am Anfang des Scripts) setzen:
> # Name: areweroot2 # Trace-Modus einschalten set -x if [ $UID = 0 ] then echo "Wir sind root!" renice â5 $$ else echo "Wir sind nicht root!" su -c 'renice â5 $$' fi
Das Script bei der Ausführung:
> you@host > ./areweroot2 + ./areweroot ++ '[' 1000 = 0 ']' ++ echo 'Wir sind nicht root!' Wir sind nicht root! ++ su -c 'renice â5 $$' Password:******* 9050: Alte Priorität: 0, neue Priorität: â5 you@host > su Password:******** # ./areweroot2 ++ '[' 0 = 0 ']' ++ echo 'Wir sind root!' Wir sind root! ++ renice â5 9070 9070: Alte Priorität: 0, neue Priorität: â5
Häufig finden Sie mehrere Pluszeichen am Anfang einer Zeile. Dies zeigt an, wie tief die Verschachtelungsebene ist, in der die entsprechende Zeile ausgeführt wird. Jedes weitere Zeichen bedeutet eine Ebene tiefer. So verwendet beispielsweise folgendes Script drei Schachtelungsebenen:
> # Name: datum # Trace-Modus einschalten set -x datum=`date` echo "Heute ist $datum
Das Script bei der Ausführung:
> you@host > ./datum +./script1 +++ date ++ datum=Fr Apr 1 10:26:25 CEST 2005 ++ echo 'Heute ist Fr Apr 1 10:26:25 CEST 2005' Heute ist Fr Apr 1 10:26:25 CEST 2005
### 10.3.2 DEBUG und ERR-SignalÂ
Für Bash und Korn-Shell gibt es eine echte Debugging-Alternative mit dem DEBUG-Signal. Das ERR-Signal hingegen ist nur der Korn-Shell vorbehalten. Angewendet werden diese Signale genauso, wie Sie dies von den Signalen her kennen. Sie richten sich hierbei einen Handler mit trap ein.
> trap 'Kommando(s)' DEBUG # nur für die Korn-Shell trap 'Kommando(s)' ERR
Im folgenden Beispiel haben wir ein Script, welches ständig in einer Endlosschleife läuft, aber wir sind zu blind, den Fehler zu erkennen:
> # Name: debug1 val=1 while [ "$val" -le 10 ] do echo "Der ${val}. Schleifendurchlauf" i=`expr $val + 1` done
Jetzt wollen wir dem Script einen »Entwanzer« mit trap einbauen:
> # Name: debug1 trap 'printf "$LINENO :-> " ; read line ; eval $line' DEBUG val=1 while [ "$val" -le 10 ] do echo "Der ${val}. Schleifendurchlauf" i=`expr $val + 1` done
Beim Entwanzen wird gewöhnlich das Kommando eval verwendet, mit dem Sie im Script Kommandos so ausführen können, als wären diese Teil des Scripts (siehe Abschnitt 9.1).
> trap 'printf "$LINENO :-> " ; read line ; eval $line' DEBUG
Mit DEBUG wird nach jedem Ausdruck ein DEBUG-Signal gesendet, welches Sie »trap(pen)« können, um eine bestimmte Aktion auszuführen. Im Beispiel wird zunächst die Zeilennummer des Scripts gefolgt von einem Prompt ausgegeben. Anschließend können Sie einen Befehl einlesen und mit eval ausführen lassen (bspw. Variablen erhöhen oder reduzieren, Datei(en) anlegen, löschen, verändern, auflisten, überprüfen etc). Das Script bei der Ausführung:
> you@host > ./debug1 5 :->(ENTER) 7 :->(ENTER) 9 :->(ENTER) Der 1. Schleifendurchlauf 10 :-> echo $val 1 7 :->(ENTER) 9 :->(ENTER) Der 1. Schleifendurchlauf 10 :-> echo $val 1 7 :->(ENTER) 9 :->(ENTER) Der 1. Schleifendurchlauf 10 :-> val=`expr $val + 1` 7 :-> echo $val 2 9 :->(ENTER) Der 2. Schleifendurchlauf 10 :->(ENTER) 7 :->(ENTER) 9 :->(ENTER) Der 2. Schleifendurchlauf 10 :-> exit
Der Fehler ist gefunden, die Variable »val« wurde nicht hochgezählt, und ein Blick auf die Zeile 10 zeigt Folgendes:
> i=`expr $val + 1`
Hier haben wir »i« statt »val« verwendet. Etwas störend ist allerdings, dass nur die Zeilennummer ausgegeben wird. Bei längeren Scripts ist es schwer bzw. umständlich, die Zeilennummer parallel zum Debuggen zu behandeln. Daher macht es Sinn, wenn auch hier die entsprechende Zeile mit ausgegeben wird, die ausgeführt wird bzw. wurde. Dies ist im Grunde kein Problem, da hier die DEBUG-Signale nur in der Bash bzw. der Korn-Shell vorhanden sind, weshalb auch gleich das komplette Script in ein Array eingelesen werden kann. Für den Index verwenden Sie einfach wieder die Variable LINENO. Die Zeile eval zum Ausführen von Kommandos packen Sie einfach in eine separate Funktion, die beliebig viele Befehle aufnehmen kann, bis eben (ENTER) gedrückt wird.
> # Name: debug2 # ------- DEBUG Anfang --------- # # Die übliche eval-Funktion debugging() { printf "STOP > " while true do read line [ "$line" = "" ] && break eval $line printf " > " done } typeset -i index=1 # Das komplette Script in ein Array einlesen while read zeile[$index] do index=index+1 done<$0 trap 'echo "${zeile[$LINENO]}" ; debugging' DEBUG # ------- DEBUG Ende --------- # typeset -i val=1 while (( $val <= 10 )) do echo "Der $val Schleifendurchlauf" val=val+1 done
Das Script bei der Ausführung:
> you@host > ./debug2 typeset -i val=1 STOP >(ENTER) while (( $val <= 10 )) STOP > echo $val 1 > val=7 > echo $val 7 >(ENTER) echo "Der $val Schleifendurchlauf" STOP >(ENTER) Der 7 Schleifendurchlauf val=val+1 STOP >(ENTER) while (( $val <= 10 )) STOP >(ENTER) echo "Der $val Schleifendurchlauf" STOP >(ENTER) Der 8 Schleifendurchlauf val=val+1 STOP >(ENTER) while (( $val <= 10 )) STOP >(ENTER) echo "Der $val Schleifendurchlauf" STOP >(ENTER) Der 9 Schleifendurchlauf val=val+1 STOP > exit you@host >
In der Korn-Shell finden Sie des Weiteren das Signal ERR, welches Sie ebenfalls mit trap einfangen können. Allerdings handelt es sich hierbei eher um ein Pseudo-Signal, denn das Signal bezieht sich immer auf den Rückgabewert eines Kommandos. Ist dieser ungleich 0, wird das Signal ERR gesendet. Allerdings lässt sich dies nicht dort verwenden, wo bereits der Exit-Code eines Kommandos abgefragt wird (bspw. if, while ...). Auch hierzu ein simples Script, welches das Signal ERR abfängt:
> # Name: debugERR error_handling() { echo "Fehler: $ERRNO Zeile: $LINENO" printf "Beenden (j/n) : " ; read [ "$REPLY" = "j" ] && exit 1 } trap 'error_handling' ERR echo "Testen des ERR-Signals" # Sollte dem normalen Benutzer untersagt sein cat > /etc/profile echo "Nach dem Testen des ERR-Signals"
Das Script bei der Ausführung:
> you@host > ksh ./debugERR Testen des ERR-Signals Fehler: Permission denied Zeile: 4 Beenden (j/n) : j
### 10.3.3 Variablen und Syntax überprüfen
Um wirklich sicherzugehen, dass Sie nicht auf eine nicht gesetzte Variable zugreifen, können Sie set mit der Option -u verwenden. Wird hierbei auf eine nicht definierte Variable zugegriffen, wird eine entsprechende Fehlermeldung ausgegeben. Mit +u schalten Sie diese Option wieder ab.
> # Name: aunboundvar # Keine undefinierten Variablen zulassen set -u var1=100 echo $var1 $var2
Das Script bei der Ausführung:
> you@host > ./aunboundvar ./aunboundvar: line 7: var2: unbound variable
Wollen Sie ein Script nicht ausführen, sondern nur dessen Syntax überprüfen lassen, können Sie die Option -n verwenden. Mit +n schalten Sie diese Option wieder aus.
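Zur Veranschaulichung eine kleine Skizze (mit dem weiter oben gezeigten Script areweroot2 als Beispiel): Statt set -n im Script zu setzen, können Sie die Shell auch direkt mit der Option -n aufrufen und so nur die Syntax prüfen lassen, ohne das Script auszuführen.
> you@host > bash -n ./areweroot2      # prüft nur die Syntax, führt nichts aus
> you@host > echo $?
> 0
Liefert die Syntaxprüfung einen Fehler, wird stattdessen eine Fehlermeldung samt Zeilennummer ausgegeben, und der Exit-Code ist ungleich 0.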
### 10.3.4 Eine Debug-Ausgabe hinzufügen
Die primitivste Form aller Debugging-Techniken ist zugleich wohl die am häufigsten eingesetzte Variante, und das nicht nur in der Shell-Programmierung: Sie setzen überall dort, wo Sie einen Fehler vermuten, eine echo-Ausgabe über den Zustand der Daten (bspw. welchen Wert eine Variable gerade hat). Gibt das Script sehr viel auf dem Bildschirm aus, kann man auch hie und da ein einfaches read einbauen, wodurch auf einen (ENTER)-Tastendruck gewartet wird, ehe die Ausführung des Scripts fortfährt, oder man schickt die eine oder andere störende Ausgabe kurzzeitig ins Datengrab /dev/null. Ein einfaches Beispiel:
> # Name: maskieren nobody() { echo "Die Ausgabe wollen wir nicht!!!" } echo "Ausgabe1" exec 1>/dev/null nobody exec 1>`tty` echo "Ausgabe2"
Hier interessieren wir uns nicht für die Ausgabe der Funktion »nobody« und schicken die Standardausgabe ins Datengrab. Natürlich müssen Sie das Ganze wieder rückgängig machen. Hier erreichen wir dies mit einer Umleitung auf das aktuelle Terminal (das Kommando tty übernimmt das für uns).
### 10.3.5 Debugging-Tools
Wer ein fertiges Debugging-Tool sucht, dem seien für die Bash der Debugger bashdb (http://bashdb.sourceforge.net/) und für die Korn-Shell kshdb ans Herz gelegt. Der Bash-Debugger ist nichts anderes als eine gepatchte Version der Bash, welche ein besseres Debugging, ebenso wie eine verbesserte Fehlerausgabe unterstützt.
## 11.2 grep
### 11.2.1 Wie arbeitet grep?
Das grep-Kommando sucht nach einem Muster von Zeichen in einer oder mehreren Datei(en). Enthält das Muster ein Whitespace, muss es entsprechend gequotet werden. Das Muster ist also entweder eine gequotete Zeichenkette oder ein einfaches Wort. Alle anderen Worte hinter dem Muster werden von grep dann als Datei(en) verwendet, in denen nach dem Muster gesucht wird. Die Ausgabe sendet grep an die Standardausgabe (meistens der Bildschirm) und nimmt auch keinerlei Änderung an der Eingabedatei vor. Die Syntax:
> grep wort datei1 [datei2] ... [dateiN]
Ein viel zitiertes Beispiel:
> you@host > grep john /etc/passwd john:x:1002:100:<NAME>:/home/john:/bin/csh
grep sucht hier nach dem Muster »john« in der Datei /etc/passwd. Bei Erfolg wird die entsprechende Zeile auf dem Bildschirm ausgegeben. Wird das Muster nicht gefunden, gibt es keine Ausgabe und auch keine Fehlermeldung. Existiert die Datei nicht, wird eine Fehlermeldung auf dem Bildschirm ausgegeben.
Als Rückgabewert von grep erhalten Sie bei einer erfolgreichen Suche den Wert 0. Wird ein Muster nicht gefunden, gibt grep 1 als Exit-Code zurück, und wird die Datei nicht gefunden, wird 2 zurückgegeben. Der Rückgabewert von grep ist in den Shellscripts häufig von Bedeutung, da Sie relativ selten die Ausgaben auf dem Bildschirm machen werden. Und vom Exit-Code hängt es häufig auch ab, wie Ihr Script weiterlaufen soll.
grep gibt den Exit-Code 0 zurück, also wurde ein Muster gefunden:
> you@host > grep you /etc/passwd > /dev/null you@host > echo $? 0
grep gibt den Exit-Code 1 zurück, somit wurde kein übereinstimmendes Muster gefunden:
> you@host > grep gibtsnicht /etc/passwd > /dev/null you@host > echo $? 1
grep gibt den Exit-Code 2 zurück, die Datei scheint nicht zu existieren (oder wurde, wie hier, falsch geschrieben):
> you@host > grep you /etc/PASSwd > /dev/null 2>&1 you@host > echo $? 2
grep kann seine Eingabe neben Dateien auch von der Standardeingabe oder einer Pipe erhalten.
> you@host > grep echo < script1 echo "Ausgabe1" echo "Ausgabe2" you@host > cat script1 | grep echo echo "Ausgabe1" echo "Ausgabe2"
Auch grep kennt eine ganze Menge regulärer Ausdrücke. Metazeichen helfen Ihnen dabei, sich mit grep ein Suchmuster zu erstellen. Und natürlich unterstützt auch grep viele Optionen, die Sie dem Kommando mitgeben können. Auf einige dieser Features wird in den folgenden Abschnitten eingegangen.
### 11.2.2 grep mit regulären Ausdrücken
Dass grep eines der ältesten Programme ist, die reguläre Ausdrücke kennen, wurde bereits beschrieben. Welche regulären Ausdrücke grep so alles kennt, wird in Tabelle 11.4 aufgelistet. Allerdings kennt grep nicht alle regulären Ausdrücke, weshalb es außerdem das Kommando egrep gibt, das noch einiges mehr versteht (siehe Tabelle 11.5).
Tabelle 11.4  Reguläre Ausdrücke von grep

Zeichen | Funktion | Beispiel | Bedeutung |
| --- | --- | --- | --- |
^ | Anfang der Zeile | '^wort' | Gibt alle Zeilen aus, die mit »wort« beginnen. |
$ | Ende der Zeile | 'wort$' | Gibt alle Zeilen aus, die mit »wort« enden. |
^$ | komplette Zeile | '^wort$' | Gibt alle Zeilen aus, die vollständig nur aus dem Muster »wort« bestehen. |
. | beliebiges Zeichen | 'w.rt' | Gibt alle Zeilen aus, die ein »w«, ein beliebiges Zeichen und »rt« enthalten (bspw. »wort«, »wert«, »wirt«, »wart«). |
* | beliebig oft | 'wort*' | Gibt alle Zeilen aus, in denen das vorangegangene Zeichen (hier das »t«) beliebig oft oder auch gar nicht vorkommt. |
.* | beliebig viele | 'wort.*wort' | Die Kombination .* steht für beliebig viele Zeichen. |
[] | ein Zeichen aus dem Bereich | '[Ww]ort' | Gibt alle Zeilen aus, welche Zeichen in dem angegebenen Bereich (im Beispiel nach »Wort« oder »wort«) enthalten. |
[^] | kein Zeichen aus dem Bereich | '[^A-VX-Za-z]ort' | Die Zeichen, die im angegebenen Bereich stehen, werden nicht beachtet (im Beispiel kann »Wort« gefunden werden, nicht aber »Tort« oder »Sort« und auch nicht »wort«). |
\< | Anfang eines Wortes | '\<wort' | Findet hier alles, was mit »wort« beginnt (bspw. »wort«, »wortreich« aber nicht »Vorwort« oder »Nachwort«). |
\> | Ende eines Wortes | 'wort\>' | Findet alle Zeilen, welche mit »wort« enden (bspw. »Vorwort« oder »Nachwort«, nicht aber »wort« oder »wortreich«). |
\<\> | ein Wort | '\<wort\>' | Findet exakt »wort« und nicht »Nachwort« oder »wortreich«. |
\(...\) | Backreferenz | '\(wort\)' | Merkt sich die eingeschlossenen Muster vor, um darauf später über \1 zuzugreifen. Bis zu neun Muster können auf diese Weise gespeichert werden. |
x\{m\} | exakte Wiederholung des Zeichens | x\{3\} | Exakt 3-maliges Auftreten des Zeichens »x«. |
x\{m,\} | mindestens m-fache Wiederholung des Zeichens | x\{3,\} | Mindestens 3-maliges Auftreten des Zeichens »x«. |
x\{m,n\} | mindestens m-fache bis maximal n-fache Wiederholung des Zeichens | x\{3,6\} | Mindestens 3-maliges, höchstens 6-maliges Auftreten des Zeichens »x«. |

Tabelle 11.5  Weitere reguläre Ausdrücke von egrep

Zeichen | Funktion | Beispiel | Bedeutung |
| --- | --- | --- | --- |
+ | mindestens ein Mal | 'wort[0-9]+' | Es muss mindestens eine Ziffer aus dem Bereich vorkommen. |
? | null oder ein Mal | 'wort[0-9]?' | Eine Ziffer aus dem Bereich darf, muss aber nicht vorkommen. |
\| | Alternativen | 'worta\|wortb' | Das Wort »worta« oder »wortb«. |
Hier jetzt einige Beispiele zu den regulären Ausdrücken mit grep. Als Grundlage sei hierfür folgende Datei gegeben:
> you@host > cat mrolymia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 Ar<NAME>zenegger Österreich 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Einfachstes Beispiel:
> you@host > grep Libanon mrolymia.dat <NAME> Libanon 1983
Hierbei werden alle Zeilen ausgegeben, die den regulären Ausdruck »Libanon« in der Datei mrolympia.dat enthalten. Nächstes Beispiel:
> you@host > grep '^S' mrolymia.dat <NAME> USA 1967 1968 1969 <NAME> Libanon 1983
Hiermit werden alle Zeilen ausgegeben, die mit dem Zeichen »S« beginnen. Das Caret-Zeichen (^) steht immer für den Anfang einer Zeile. Nächstes Beispiel:
> you@host > grep '1$' mrolymia.dat <NAME> Argentinien 1976 1981 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991
Hiermit werden alle Zeilen ausgegeben, die mit dem Zeichen »1« enden. Das Dollarzeichen steht hierbei für das Ende einer Zeile. Nächstes Beispiel:
> you@host > grep <NAME> mrolymia.dat grep: Yates: Datei oder Verzeichnis nicht gefunden mrolymia.dat:Sergio Oliva USA 1967 1968 1969
Hier wurde ein Fehler gemacht, da grep das dritte Argument bereits als eine Dateiangabe behandelt, in der nach einem Muster gesucht wird. Einen Namen wie »<NAME>« gibt es nämlich nicht in dieser Datei. Damit das Muster auch komplett zum Vergleich für grep verwendet wird, müssen Sie es zwischen Single Quotes stellen.
> you@host > grep '<NAME>' mrolymia.dat you@host > echo $? 1
Das nächste Beispiel:
> you@host > grep '197.' mrolymia.dat Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981
Wer war hier in den 70ern am besten? Damit geben Sie alle Zeilen aus, in denen sich das Muster »197«, gefolgt von einem weiteren beliebigen einzelnen Zeichen, befindet. Sofern Sie wirklich nach einem Punkt suchen, müssen Sie einen Backslash davor setzen. Dies gilt übrigens für alle Metazeichen. Nächstes Beispiel:
> you@host > grep '^[AS]' mrolymia.dat <NAME> USA 1967 1968 1969 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 <NAME>anon 1983
Hier wird jede Zeile ausgegeben, die mit dem Zeichen »A« oder »S« beginnt. Nächstes Beispiel:
> you@host > grep '^[^AS]' mrolymia.dat <NAME> USA 1965 1966 Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Jetzt werden alle Zeilen ausgegeben, die nicht ([^AS]) mit dem Zeichen »A« oder »S« beginnen. Nächstes Beispiel:
> you@host > grep '^S.*Libanon' mrolymia.dat <NAME> Libanon 1983
Hier liefert grep die Zeile zurück, die mit einem Zeichen »S« beginnt, gefolgt von beliebig vielen Zeichen und die Zeichenfolge »Libanon« enthält. Nächstes Beispiel:
> you@host > grep '^S.*196.' mrolymia.dat <NAME> USA 1967 1968 1969
Ähnlich wie im Beispiel zuvor werden die Zeilen ausgegeben, die mit dem Zeichen »S« beginnen und die Textfolge »196« mit einem beliebigen weiteren Zeichen enthalten. Nächstes Beispiel:
> you@host > grep '[a-z]\{14\}' mrolymia.dat <NAME>gger Österreich 1970 1971 1972 1973 1974 1975 <NAME> 1992 1993 1994 1995 1996 1997
Gibt alle Zeilen aus, in denen 14 Buchstaben hintereinander Kleinbuchstaben sind. Nächstes Beispiel:
> you@host > grep '\<Col' mrolymia.dat Franco Columbu Argentinien 1976 1981 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
Hier werden alle Zeilen ausgegeben, in denen sich ein Wort befindet, das mit »Col« beginnt. Nächstes Beispiel:
> you@host > grep 'A\>' mrolymia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> USA 1982 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Das Gegenteil vom Beispiel zuvor: Hier werden alle Zeilen ausgegeben, in denen sich ein Wort befindet, welches mit »A« endet. Nächstes Beispiel:
> you@host > grep '\<Coleman\>' mrolymia.dat Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
Hierbei wird nach einem vollständigen Wort »Coleman« gesucht. Also kein »AColeman« und auch kein »Colemann«. Nächstes Beispiel:
> you@host > grep '\<.*ien.*\>' mrolymia.dat Franco Columbu Argentinien 1976 1981 <NAME>ien 1992 1993 1994 1995 1996 1997
Hier werden alle Zeilen ausgegeben, die ein Wort mit der Zeichenfolge »ien« enthalten. Davor und danach können sich beliebig viele Zeichen befinden. Nächstes Beispiel:
> you@host > grep '\<[G].*ien\>' mrolymia.dat <NAME>britannien 1992 1993 1994 1995 1996 1997
Hier wird nach einem Wort gesucht, das mit dem Großbuchstaben »G« beginnt und mit der Zeichenfolge »ien« endet. Dazwischen können sich beliebig viele Zeichen befinden.
Natürlich können Sie grep auch dazu verwenden, in einem ganzen Verzeichnis die Dateien nach einem bestimmten Muster abzusuchen. Hier können Sie wieder das Metazeichen * als Platzhalter für alle Dateien im Verzeichnis verwenden:
> you@host > grep 'echo' * ...
Hier werden im aktuellen Arbeitsverzeichnis alle Dateien nach dem Muster »echo« durchsucht.
### 11.2.3 grep mit Pipes
Häufig wird grep in Verbindung mit einer Pipe verwendet. Hierbei erhält grep seine Eingabe dann nicht aus einer Datei, sondern durch eine Pipe von der Standardausgabe eines anderen Kommandos. Bspw.:
> you@host > ps -ef | grep $USER
Hiermit bekommen Sie alle Prozesse aufgelistet, deren Eigentümer der aktuelle User ist bzw. die dieser gestartet hat. Dabei gibt ps seine Ausgabe durch die Pipe an die Standardeingabe von grep. grep wiederum sucht dann in der entsprechenden Ausgabe nach dem entsprechenden Muster und gibt gegebenenfalls die Zeile(n) aus, die mit dem Muster übereinstimmen. Natürlich können Sie hierbei auch, wie schon gehabt, die regulären Ausdrücke verwenden. Die Syntax:
> kommando | grep muster
Ebenfalls relativ häufig wird grep mit ls zur Suche bestimmter Dateien verwendet:
> you@host > ls | grep '^scr.*' script1 script1~ script2 script2~
Im Abschnitt zuvor haben Sie mit
> you@host > grep 'echo' *
alle Dateien nach dem Muster »echo« durchsucht. Meistens (mir ging es zumindest so) liegen neben einfachen Scripts auch noch eine Menge Dokumentationen im Verzeichnis herum. Wollen Sie jetzt auch noch die Dateinamen mithilfe regulärer Ausdrücke eingrenzen, können Sie die Ausgabe von grep an die Eingabe eines weiteren grep-Aufrufs hängen:
> you@host > grep 'echo' * | grep 'scrip.*' ...
Jetzt wird in allen Dateien des aktuellen Verzeichnisses nach dem Muster »echo« gesucht (erstes grep) und anschließend (zweites grep) werden nur die Dateien berücksichtigt, die mit der Zeichenfolge »scrip« beginnen, gefolgt von beliebig vielen weiteren Zeichen.
### 11.2.4 grep mit Optionen
Natürlich bietet Ihnen grep neben den regulären Ausdrücken auch noch eine Menge weiterer Optionen an, mit denen Sie das Verhalten, insbesondere die Standardausgabe, steuern können. Im folgenden Abschnitt finden Sie eine Liste mit den interessantesten und gängigsten Optionen (was bedeutet, dass dies längst nicht alle sind). Reichen Ihnen diese Optionen nicht aus, müssen Sie in der Manual-Page blättern. Als Beispiel dient wieder unsere Datei mrolympia.dat:
> you@host > cat mrolymia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> Österreich 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981 <NAME> USA 1982 <NAME>anon 1983 <NAME>ey USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME>ien 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
-anzahl – damit wird die Zeile zurückgegeben, in der das Suchmuster gefunden wurde, und zwar mit zusätzlich »anzahl« Zeilen vor und nach dieser Zeile. In der Praxis:
> you@host > grep -1 Sergio mrolymia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> 1970 1971 1972 1973 1974 1975
Hierbei wurde vor und nach der gefundenen Zeile jeweils eine zusätzliche Zeile ausgegeben. Dies ist in der Praxis bei einem etwas längeren Text mit mehreren Abschnitten sehr sinnvoll, da man häufig mit einem ausgegebenen Teilsatz aus der Zeile 343 nicht viel anfangen kann.
-A anzahl – ähnlich wie -anzahl, nur dass hier »anzahl« Zeilen mit ausgegeben werden, die auf die gefundene Zeile folgen.
-B anzahl – wie -A anzahl, nur dass »anzahl« Zeilen ausgegeben werden, die vor der Zeile stehen, in welcher der reguläre Ausdruck gefunden wurde.
-c – (für count) hiermit wird nur die Anzahl von Zeilen ausgegeben, die durch den regulären Ausdruck abgedeckt werden.
> you@host > grep -c '.*' mrolymia.dat 9 you@host > grep -c 'USA' mrolymia.dat 5
Hier wurde zum Beispiel zunächst die Anzahl aller Zeilen (also aller Sportler) ermittelt und beim nächsten Ausdruck nur noch die Anzahl der Teilnehmer aus den »USA«. 5 von den 9 ehemaligen Titelträgern kamen also aus den USA.
-h – bei dieser Option wird der Dateiname, in dem der Suchstring gefunden wurde, nicht vor den Zeilen mit ausgegeben:
> you@host > grep 'Ausgabe1' * script1:echo "Ausgabe1" script1~:echo "Ausgabe1" you@host > grep -h 'Ausgabe1' * echo "Ausgabe1" echo "Ausgabe1"
-i – es wird nicht zwischen Groß- und Kleinschreibung unterschieden. Ein Beispiel:
> you@host > grep 'uSa' mrolymia.dat you@host > grep -i 'uSa' mrolymia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> USA 1982 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
-l – bei dieser Option werden nur die Dateinamen aufgelistet, in denen eine Zeile mit dem entsprechenden Suchmuster gefunden wurde:
> you@host > grep -l echo * Kap 1 bis 9.doc ksh.2005-02-02.linux.i386 script1 script1~ script2 script2~
-n – hiermit wird vor jeder gefundenen Zeile in der Datei die laufende Zeilennummer mit ausgegeben. Bspw.:
> you@host > grep -n echo * script1:4: echo "Die Ausgabe wollen wir nicht!!!" script1:7:echo "Ausgabe1" script1:11:echo "Ausgabe2" ... script2:9: echo "Starte script1 ..." script2~:7: echo "Warte ein wenig ..." script2~:9: echo "Starte script1 ..."
-q – bei Verwendung dieser Option erfolgt keine Ausgabe, sondern es wird 0 zurückgegeben, wenn ein Suchtext gefunden wurde, oder 1, wenn die Suche erfolglos war. Diese Option wird gewöhnlich in Shellscripts verwendet, in denen man sich meistens nur dafür interessiert, ob eine Datei einen Suchtext enthält oder nicht.
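Eine kleine Skizze dazu (Suchmuster und Datei sind hier frei gewählt): In einem Script lässt sich der Exit-Code von grep -q direkt in einer if-Abfrage auswerten.
> if grep -q '^you:' /etc/passwd
> then
>    echo "Benutzer you ist eingetragen"
> else
>    echo "Benutzer you fehlt"
> fi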
-s – mit dieser Option werden keine Fehlermeldungen ausgegeben, wenn eine Datei nicht existiert.
> you@host > grep test /etc/gibtsnicht grep: /etc/gibtsnicht: Datei oder Verzeichnis nicht gefunden you@host > grep -s test /etc/gibtsnicht you@host >
-v – damit werden alle Zeilen ausgegeben, die nicht durch den angegebenen regulären Ausdruck abgedeckt werden:
> you@host > grep -v 'USA' mrolymia.dat <NAME> Österreich 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981 <NAME> 1983 <NAME> 1992 1993 1994 1995 1996 1997
-w – eine Abkürzung für \<wort\>. Damit wird nach ganzen Wörtern im Suchtext gesucht:
> you@host > grep 'Lib' mrolymia.dat <NAME> Libanon 1983 you@host > grep -w 'Lib' mrolymia.dat you@host >
### 11.2.5 egrep (extended grep)
Wie Sie bereits erfahren haben, lassen sich mit egrep erweiterte und weitaus komplexere reguläre Ausdrücke bilden. Allerdings wird Otto Normalverbraucher (wie auch die meisten Systemadministratoren) mit grep mehr als zufrieden sein. Komplexere reguläre Ausdrücke gehen natürlich enorm auf Kosten der Ausführungsgeschwindigkeit. Übrigens können Sie einen egrep-Aufruf auch mit grep und der Option -E realisieren:
> grep -E regex Datei
Einige Beispiele mit egrep:
> you@host > egrep 'Colombo|Columbu' mrolymia.dat Franco Columbu Argentinien 1976 1981 you@host > egrep 'Colombo|Columbu|Col' mrolymia.dat <NAME> Argentinien 1976 1981 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Wissen Sie nicht genau, wie man einen bestimmten Namen schreibt, können Sie mit | mehrere Suchmuster miteinander verknüpfen. Im Beispiel wird nach »Colombo« oder »Columbu« und im zweiten Beispiel noch zusätzlich nach einer Zeichenfolge »Col« gesucht. Kennen Sie mehrere Personen mit dem Namen »Olivia« und wollen nach einem »<NAME>« und »<NAME>« suchen, können Sie das Ganze folgendermaßen definieren:
> egrep '(Sergio|Gregor) Olivia' mrolympia.dat
Nächstes Beispiel:
> you@host > egrep 'y+' mrolymia.dat Larry Scott USA 1965 1966 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991
Mit dem + wird nach mindestens einem »y«-Zeichen gesucht. Wollen Sie bspw. alle Zeilen einer Datei ausgeben, die mit mindestens einem oder mehreren Leerzeichen beginnen, können Sie dies folgendermaßen definieren:
> you@host > egrep '^ +' mrolymia.dat
Nächstes Beispiel:
> you@host > egrep 'US?' mrolymia.dat Larry Scott USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> USA 1982 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
Wissen Sie jetzt nicht, ob die Zeichenfolge für die USA »USA« oder »US« lautet, können Sie das Fragezeichen verwenden. Damit legen Sie fest, dass hier ein Zeichen sein darf, aber nicht muss. Somit wird sowohl die Zeichenfolge »US« als auch »USA« gefunden.
### 11.2.6 fgrep (fixed oder fast grep)
fgrep wird vorwiegend für »schnelle greps« verwendet (fgrep = fast grep). Allerdings ist nur eine Suche nach einfachen Zeichenketten möglich. Reguläre Ausdrücke gibt es hierbei nicht – ein Vorteil, wenn Sie nach Zeichenfolgen suchen, die Metazeichen enthalten.
### 11.2.7 rgrep
Weil grep nicht rekursiv in Unterverzeichnissen sucht, wurde rgrep entwickelt. rgrep sucht rekursiv in Unterverzeichnissen nach entsprechenden Mustern. Bei einer großen Verzeichnistiefe kann dies allerdings problematisch werden. Wenn Sie bspw. nur das aktuelle Verzeichnis und die direkten Unterverzeichnisse durchsuchen wollen, so würde auch ein grep wie
> grep regex */*
ausreichen. Mehr zu rgrep entnehmen Sie bitte der Manual-Page.
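Nur als Randnotiz (dies bezieht sich auf neuere GNU-Versionen von grep und nicht auf rgrep selbst): Dort lässt sich eine rekursive Suche auch direkt mit der Option -r bzw. -R von grep durchführen, etwa so:
> you@host > grep -r 'echo' .
Damit wird das aktuelle Verzeichnis samt aller Unterverzeichnisse nach dem Muster »echo« durchsucht.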
## 12.2 Der sed-Befehl
Ein sed-Befehl sieht auf den ersten Blick etwas undurchsichtig aus, ist aber einfacher, als man vermuten würde. Hier ein solcher typischer sed-Befehl im Rohformat:
> sed '[adresse1[,adresse2]]kommando' [datei(en)]
Alle Argumente, bis auf »kommando«, sind erst mal optional. Jeder sed-Aufruf benötigt also mindestens einen Kommandoaufruf. Die Operation »kommando« bezieht sich auf einen bestimmten Bereich bzw. eine bestimmte Adresse einer Datei (solche Adressen werden im nächsten Abschnitt behandelt). Den Bereich können Sie mit »adresse1« bestimmen. Geben Sie hierbei noch »adresse2« an, so bezieht sich der Bereich von der Zeile »adresse1« bis zur Zeile »adresse2«. Sie können diesen Bereich auch negieren, indem Sie hinter »adresse1« bzw. »adresse2« ein !-Zeichen setzen. Dann bezieht sich »kommando« nur auf den Bereich, der nicht von »adresse1« bis »adresse2« abgedeckt wird. Ein einfaches Beispiel:
> you@host > sed -n 'p' file.dat
Hiermit geben Sie praktisch die komplette Datei file.dat auf dem Bildschirm aus. Das Kommando p steht für print, also Ausgabe (auf dem Bildschirm). Damit sed die Zeile(n) nicht doppelt ausgibt, wurde die Option -n verwendet. Näheres dazu später.
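Zur eben erwähnten Negation mit dem !-Zeichen noch eine kleine Skizze (file.dat dient wieder nur als Beispieldatei): Das Kommando p wird hier auf alle Zeilen angewendet, die nicht im angegebenen Bereich liegen.
> you@host > sed -n '2,5!p' file.dat
Ausgegeben wird also alles außer den Zeilen 2 bis 5.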
## 12.3 Adressen
Adressen sind, wie eben erwähnt, entweder fixe Zeilen, ganze Bereiche oder aber Zeilen, die auf einen bestimmten regulären Ausdruck passen. Hier einige Beispiele, wie Sie bestimmte Adressen definieren und welche Zeilen Sie damit selektieren. Zur Demonstration wird immer die Option -n verwendet, mit der Sie den Default-Output von sed abschalten, und das Kommando p, mit dem Sie die selektierte(n) Zeile(n) auf dem Bildschirm ausgeben lassen können.
Nur die vierte Zeile der Datei file.dat selektieren und ausgeben:
> you@host > sed -n '4p' file.dat
Die Zeilen 4, 5, 6 und 7 aus der Datei file.dat selektieren und ausgeben. Geben Sie für die »bis«-Adresse einen niedrigeren Wert an als für die »von«-Adresse, so wird dieser Wert ignoriert:
> you@host > sed -n '4,7p' file.dat
Hiermit werden alle Zeilen von Zeile 4 bis zum Ende der Datei ausgegeben – das Dollarzeichen steht für die letzte Zeile:
> you@host > sed -n '4,$p' file.dat
Damit werden alle Zeilen selektiert und ausgegeben, in denen sich das Wort »wort« befindet:
> you@host > sed -n '/wort/p' file.dat
Hier werden alle Zeilen ausgegeben, welche die Zeichenfolge »197« und eine beliebige Zahl von 0 bis 9 enthalten (1970, 1971 ... 1979):
> you@host > sed -n '/197[0-9]/p' file.dat
Neben den Möglichkeiten
> [Adresse1, Adresse2]Kommando
und
> [Adresse]Kommando
gibt es auch noch eine dritte Möglichkeit, wie Sie Adressen angeben können. Man kann nämlich durch die Verwendung von geschweiften Klammern mehrere Kommandos zusammenfassen:
> Adresse{ Kommando1 Kommando2 }
Oder auch als Einzeiler:
> Adresse{ Kommando1 ; Kommando2 ; ... }
Bspw.:
> you@host > sed -n '1{p ; s/USA/ASU/g ; p }' mrolympia.dat <NAME> USA 1965 1966 <NAME> ASU 1965 1966
Hiermit bearbeiten Sie die erste Zeile der Datei mrolympia.dat. Zuerst geben Sie diese Zeile aus, anschließend führen Sie mit s/.../.../g eine globale (g) Ersetzung mittels s (substitute) der Zeichenfolge »USA« durch »ASU« durch und geben daraufhin diese Zeile (ggf. verändert) nochmals aus. Natürlich können Sie eine solche Ersetzung auch ohne Angabe einer Adresse auf die ganze Datei machen:
> you@host > sed -n '{/USA/p ; s/USA/ASU/g ; /ASU/p }' mrolympia.dat
Da hierbei keine direkte Adressierung verwendet wird, können Sie das Ganze auch gleich ohne geschweifte Klammern machen:
> you@host > sed -n '/USA/p ; s/USA/ASU/g ; /ASU/p' mrolympia.dat
Natürlich können Sie auch mit den geschweiften Klammern einen Adressbereich verwenden:
> you@host > sed -n '1,5{/USA/p ; s/USA/ASU/g ; /ASU/p }' \ > mrolympia.dat
Hierbei wurden die Zeilen 1 bis 5 zusammengefasst, um alle Kommandos in den geschweiften Klammern darauf auszuführen.
## 12.4 Kommandos, Substitutionsflags und Optionen von sed
Tabelle 12.1  Gängige Basiskommandos von sed

Kommando | Bedeutung |
| --- | --- |
a | (für append) Fügt eine oder mehrere Zeilen an die selektierte Zeile an. |
c | (für change) Ersetzt die selektierte Zeile durch eine oder mehrere neue. |
d | (für delete) Löscht Zeile(n). |
g | (für get »buffer«) Kopiert den Inhalt des temporären Puffers (Holdspace) in den Arbeitspuffer (Patternspace). |
G | (für GetNewline) Fügt den Inhalt des temporären Puffers (Holdspace) an den Arbeitspuffer (Patternspace) an. |
h | (für hold »buffer«) Gegenstück zu g; kopiert den Inhalt des Arbeitspuffers (Patternspace) in den temporären Puffer (Holdspace). |
H | (für HoldNewline) Gegenstück zu G; fügt den Inhalt des Arbeitspuffers (Patternspace) an den temporären Puffer (Holdspace) an. |
i | (für insert) Fügt eine neue Zeile vor der selektierten Zeile ein. |
l | (für listing) Zeigt nicht druckbare Zeichen an. |
n | (für next) Wendet das nächste Kommando statt des aktuellen auf die nächste Zeile an. |
p | (für print) Gibt die Zeilen aus. |
q | (für quit) Beendet sed. |
r | (für read) Datei integrieren; liest Zeilen aus einer Datei ein. |
s | (für substitute) Ersetzen einer Zeichenfolge durch eine andere. |
x | (für Xchange) Vertauschen des temporären Puffers (Holdspace) mit dem Arbeitspuffer (Patternspace). |
y | (für yank) Zeichen aus einer Liste ersetzen; Ersetzen eines Zeichens durch ein anderes. |
w | (für write) Schreibt Zeilen in eine Datei. |
! | Negation; wendet die Kommandos auf die Zeilen an, auf die die Adresse nicht zutrifft. |
Tabelle 12.2  Einige Substitutionsflags

Flag | Bedeutung |
| --- | --- |
g | Globale Ersetzung (aller Vorkommen eines Musters in der Zeile). |
p | Ausgabe der Zeile. |
w datei | Schreibt die veränderte Zeile zusätzlich in die angegebene Datei. |
Tabelle 12.3  Gängige Schalter-Optionen für sed

Option | Bedeutung |
| --- | --- |
-n | Schaltet »Default Output« aus; mit dem Default Output ist die Ausgabe des Puffers (Patternspace) gemeint. |
-e | Mehrere Befehle nacheinander ausführen; man gibt praktisch ein Script bzw. die Kommandos direkt in der Kommandozeile ein. |
-f | Die sed-Befehle in einem Script (sed-Script) zusammenfassen und dieses Script mittels -f übergeben; praktisch wie die Option -e, nur dass hier anstatt des Scripts bzw. der Kommandos in der Kommandozeile der Name eines Scriptfiles angegeben wird. |
Hinweis   Die meisten sed-Versionen (GNU-sed, BSD-sed, FreeBSD-sed und ssed) bieten neben diesen Optionen noch einige weitere an, deren Bedeutung Sie bei Bedarf der Manual-Page von sed entnehmen können.
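Eine kleine Skizze zu den Optionen -e und -f (der Dateiname kommandos.sed ist hier frei gewählt): Mehrere Kommandos lassen sich entweder direkt auf der Kommandozeile aneinanderreihen oder in einer Scriptdatei sammeln und mit -f übergeben.
you@host > sed -n -e '/USA/p' -e 's/USA/ASU/p' mrolympia.dat
you@host > cat kommandos.sed
/USA/p
s/USA/ASU/p
you@host > sed -n -f kommandos.sed mrolympia.dat
Beide Aufrufe führen dieselben Kommandos aus.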
Tabelle 12.4  Grundlegende reguläre Ausdrücke, die sed versteht

Ausdruck | Bedeutung | Beispiel | Erklärung |
| --- | --- | --- | --- |
^ | Anfang einer Zeile | /^wort/ | Behandelt alle Zeilen, die mit der Zeichenfolge wort beginnen. |
$ | Ende einer Zeile | /wort$/ | Behandelt alle Zeilen, die mit der Zeichenfolge wort enden. |
. | ein Zeichen | /w.rt/ | Behandelt alle Zeilen mit den Zeichenfolgen wort, wert, wirt, wart etc. (Ausnahme ist das Newline-Zeichen). |
* | keine, eine oder mehrere Wiederholungen des vorhergehenden Zeichens (oder einer Gruppe) | /*wort/ | Behandelt alle Zeilen, in denen vor wort kein, ein oder mehrere Zeichen stehen. |
[] | ein Zeichen aus der Menge | /[Ww]ort/ | Behandelt alle Zeilen mit der Zeichenfolge wort oder Wort. |
[^] | kein Zeichen aus der Menge | /[^Ww]ort/ | Behandelt alle Zeilen, die nicht die Zeichenfolge wort oder Wort enthalten. |
\(...\) | Speichern eines enthaltenen Musters | s/\(wort\)a/\1b/ | Die Zeichenfolge wort wird in \1 gespeichert. Diese Referenz auf das Muster wort verwenden Sie hier, um die Zeichenfolge worta durch wortb zu ersetzen. Bis zu neun solcher Referenzen lassen sich definieren, auf die Sie mit \1, \2 ... \9 zugreifen können. |
& | enthält das Suchmuster | s/wort/Ant&en/ | Das Ampersand-Zeichen repräsentiert den Suchstring. Im Beispiel wird jeder Suchstring wort durch das Wort Antworten ersetzt. |
\< | Wortanfang | /\<wort/ | Findet alle Zeilen mit einem Wort, das mit wort beginnt, also wortreich, wortarm, nicht aber Vorwort oder Nachwort. |
\> | Wortende | /wort\>/ | Findet alle Zeilen mit einem Wort, das mit wort endet, also Vorwort, Nachwort, nicht aber wortreich oder wortarm. |
x\{m\} | m-fache Wiederholung des Zeichens x | x\{3\} | Exakt 3-maliges Auftreten des Zeichens x. |
x\{m,\} | mindestens m-fache Wiederholung des Zeichens x | x\{3,\} | Mindestens 3-maliges Auftreten des Zeichens x. |
x\{m,n\} | mindestens m-, maximal n-fache Wiederholung des Zeichens x | x\{3,6\} | Mindestens 3-maliges, höchstens 6-maliges Auftreten des Zeichens x. |
Nachdem Sie jetzt eine Menge Theorie und viele Tabellen gesehen haben, folgt nun ein Praxisteil zu der Verwendung der einzelnen Kommandos von sed. Als Beispiel soll wieder die Datei mrolympia.dat verwendet werden:
you@host > cat mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981 <NAME> USA 1982 <NAME>anon 1983 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME>ien 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Des Weiteren muss erwähnt werden, dass alle Beispiele auf dem Patternspace-Puffer ausgeführt werden. Somit bezieht sich die Änderung nur auf die Ausgabe auf dem Bildschirm (Standardausgabe). Sofern Sie hier gern eine bleibende Änderung vornehmen wollen (was in der Regel der Fall sein sollte), können Sie sich auf Abschnitt 12.1.1 beziehen. Die einfachste Lösung dürfte hier wohl die Verwendung des Umlenkungszeichens am Ende des sed-Befehls sein.
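Eine kleine Skizze dazu (die Dateinamen sind nur beispielhaft gewählt): Die Ausgabe von sed wird zunächst in eine temporäre Datei umgelenkt und diese anschließend über die Originaldatei geschoben.
you@host > sed 's/USA/ASU/g' mrolympia.dat > mrolympia.tmp
you@host > mv mrolympia.tmp mrolympia.dat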
### 12.4.1 Das a-Kommando – Zeile(n) anfügen
Die Syntax:
a\ neue Zeile # oder in neueren sed-Versionen auch a neue Zeile
Mit dem a-Kommando (append) können Sie eine neue Zeile hinter einer gefundenen Zeile einfügen. Bspw.:
you@host > sed '/2004/ a <NAME> 2005' mrolympia.dat ... <NAME> 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004 <NAME> 2005
Sollte die Zeile länger werden, können Sie der Übersichtlichkeit zuliebe einen Backslash verwenden:
you@host > sed '/2004/ a\ > Prognose für 2005: J.Cutler; R.Coleman; M.Rühl' mrolympia.dat ... <NAME> 1992 1993 1994 1995 1996 1997 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004 Prognose für 2005: J.Cutler; R.Coleman; M.Rühl
Hinweis   Dass das Kommando a und die neue Zeile in derselben Zeile stehen, ist eine Erweiterung neuerer sed-Versionen. Ältere sed-Programme kennen häufig nur das a-Kommando gefolgt von einem Backslash und die neue Zeile dann in der nächsten Zeile. Dies sollten Sie wissen, falls Sie auf einem etwas betagteren Rechner mit sed arbeiten müssen.
Beachten Sie außerdem, dass jedes Whitespace nach dem a-Kommando hinter dem Backslash ebenfalls als ein Zeichen interpretiert und verwendet wird.
12.4.2 Das c-Kommando â Zeilen ersetzenÂ
Die Syntax:
c\ neuer Text # oder in neueren sed-Versionen c neuer Text
Wollen Sie eine gefundene Zeile komplett durch eine neue Zeile ersetzen, können Sie das c-Kommando (change) verwenden. Die Funktionsweise entspricht der des a-Kommandos.
you@host > sed '/\<Oliva\>/ c \ > <NAME>uba 1967 1968 1969' mrolympia.dat Larry Scott USA 1965 1966 <NAME> Cuba 1967 1968 1969 ...
Im Beispiel wurden alle Zeilen mit dem Vorkommen des Wortes »Oliva« durch die in der darauf folgenden Zeile vorgenommene Eingabe ersetzt. Hier wurde bspw. die Nationalität verändert. Dem c-Kommando muss wie beim a-Kommando ein Backslash und ein Newline-Zeichen folgen â obgleich auch hier die neueren sed-Versionen ohne Backslash und Newline-Zeichen auskommen.
12.4.3 Das d-Kommando â Zeilen löschenÂ
Zum Löschen von Zeilen wird das d-Kommando (delete) verwendet. Damit wird die entsprechende Adresse im Puffer gelöscht. Ein paar Beispiele.
Löscht die fünfte Zeile:
you@host > sed '5d' mrolympia.dat
Löscht ab der fünften bis zur neunten Zeile:
you@host > sed '5,9d' mrolympia.dat
Löscht alle Zeilen ab der fünften Zeile bis zum Ende der Datei:
you@host > sed '5,$d' mrolympia.dat
Löscht alle Zeilen, welche das Muster »USA« enthalten:
you@host > sed '/USA/d' mrolympia.dat
Löscht alle Zeilen, die nicht das Muster »USA« enthalten:
you@host > sed '/!USA/d' mrolympia.dat
12.4.4 Die Kommandos h, H, g, G und x â Arbeiten mit den PuffernÂ
Mit den Kommandos g (get), G (GetNewline), h (hold), H (HoldNewline) und x (Xchange) können Sie mit den Puffern (Patternspace und Holdspace) arbeiten.
Mit dem Kommando h können Sie den aktuellen Inhalt des Zwischenpuffers (Patternspace) in einen anderen Puffer (Holdspace) sichern. Mit H hängen Sie ein Newline-Zeichen an das Ende des Puffers, gefolgt vom Inhalt des Zwischenpuffers.
Mit g, dem Gegenstück zu h, ersetzen Sie die aktuelle Zeile des Zwischenpuffers durch den Inhalt des Puffers, den Sie zuvor mit h gesichert haben. Mit G hängen Sie ein Newline-Zeichen an das Ende des Zwischenpuffers, gefolgt vom Inhalt des Puffers.
Mit dem Kommando x hingegen tauschen Sie den Inhalt des Zwischenpuffers mit dem Inhalt des anderen Puffers aus.
Ein Beispiel:
you@host > sed -e '/Sergio/{h;d}' -e '$G' mrolympia.dat
Hierbei führen Sie zunächst eine Suche nach dem Muster »Sergio« in der Datei mrolympia.dat durch. Wird eine entsprechende Zeile gefunden, legen Sie diese in den Holdspace zum Zwischenspeichern (Kommando h). Im nächsten Schritt löschen Sie diese Zeile (Kommando d). Anschließend führen Sie (Option âe) das nächste Kommando aus. Hier hängen Sie praktisch die Zeile im Holdspace (G) mit einem beginnenden Newline-Zeichen an das Ende des Patternspace. Diese Zeile wird an das Ende angehängt ($-Zeichen). Wollen Sie diese Zeile bspw. in die fünfte Zeile platzieren, gehen Sie wie folgt vor:
you@host > sed -e '/Sergio/{h;d}' -e '5G' mrolympia.dat <NAME> USA 1965 1966 <NAME> Österreich 1970 1971 1972 1973 1974 1975 <NAME>inien 1976 1981 <NAME> USA 1982 <NAME> USA 1967 1968 1969 <NAME> Libanon 1983 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Bitte beachten Sie, dass hierbei die gelöschte Zeile beim Einfügen mit berechnet wird. Was passiert nun, wenn Sie die Zeile im Holdspace ohne Newline-Zeichen, wie dies mit g der Fall ist, im Patternspace einfügen? Folgendes:
you@host > sed -e '/Sergio/{h;d}' -e '5g' mrolympia.dat Larry Scott USA 1965 1966 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 Franco Columbu Argentinien 1976 1981 Sergio Oliva USA 1967 1968 1969 <NAME> Libanon 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Hier wird praktisch gnadenlos eine Zeile (hier »<NAME>«) überschrieben. Beachten Sie dies bitte bei der Verwendung von g und G bzw. h und H.
Noch ein Beispiel zum Kommando x:
you@host > sed -e '/Sergio/{h;d}' -e '/Dorian/x' \ > -e '$G' mrolympia.dat Larry Scott USA 1965 1966 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> USA 1967 1968 1969 <NAME> USA 1998 1999 2000 2001 2002 2003 2004 <NAME> 2005 <NAME> 1992 1993 1994 1995 1996 1997
Hier suchen Sie zunächst nach dem Muster »Sergio«, legen dies in den Holdspace (Kommando h) und löschen daraufhin die Zeile im Patternspace (Kommando d). Jetzt suchen Sie nach dem Muster »Dorian« und tauschen den Inhalt des Patternspace (Zeile mit »Dorian«) mit dem Inhalt des Holdspace (Zeile »Sergio«). Jetzt befindet sich im Holdspace die Zeile mit »Dorian«, diese hängen Sie mit einem weiteren Befehl (Option âe) an das Ende des Patternspace.
12.4.5 Das Kommando i â Einfügen von ZeilenÂ
Die Syntax:
i\ text zum Einfügen # oder bei neueren sed-Versionen i text zum Einfügen
Bei diesem Kommando gilt all das, was schon zum Kommando a gesagt wurde. Damit können Sie eine neue Zeile vor der gefundenen Zeile einfügen.
you@host > sed '/Franco/ i \ > ---Zeile---' mrolympia.dat Larry Scott USA 1965 1966 <NAME> USA 1967 1968 1969 Ar<NAME>gger Österreich 1970 1971 1972 1973 1974 1975 ---Zeile--- Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 ...
12.4.6 Das p-Kommando â Patternspace ausgebenÂ
Dieses Kommando wurde schon zur Genüge verwendet. Mit p geben Sie den Patternspace aus. Wollen Sie bspw. einfach eine komplette Datei ausgeben lassen, können Sie folgendermaßen vorgehen:
you@host > sed -n 'p' mrolympia.dat
Die Option ân ist hierbei nötig, da sonst neben dem Patternspace-Puffer auch noch der Default Output mit ausgegeben würde und Sie somit eine doppelte Ausgabe hätten. Mit ân schalten Sie den Default Output aus. Dies wird gern vergessen und führt zu Fehlern. Wollen Sie bspw. die letzte Zeile in einer Datei ausgeben lassen und vergessen dabei, den Default Output abzuschalten, bekommen Sie folgende Ausgabe zurück:
you@host > sed '$p' mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME>ien 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Hier wird trotzdem die komplette Datei ausgegeben, da der Default Output weiterhin über den Bildschirm rauscht. Die letzte Zeile hingegen ist doppelt vorhanden, weil hier zum einen der Default Output seine Arbeit gemacht hat und zum anderen das p-Kommando auch noch einmal die letzte Zeile mit ausgibt. Richtig wäre hier also:
you@host > sed -n '$p' mrolympia.dat <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Noch einige typische Beispiele mit dem p-Kommando:
you@host > sed -n '4p' mrolympia.dat Franco Columbu Argentinien 1976 1981
Hier wurde die vierte Zeile ausgegeben.
you@host > sed -n '4,6p' mrolympia.dat Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983
Hiermit werden die Zeilen 4 bis 6 ausgegeben.
you@host > sed -n '/USA/p' mrolympia.dat Larry Scott USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> USA 1982 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
Hierbei werden alle Zeilen mit der Textfolge »USA« ausgegeben.
12.4.7 Das Kommando q â BeendenÂ
Mit dem Kommando q können Sie das laufende sed-Script durch eine Verzweigung zum Scriptende beenden. Vorher wird aber noch der Patternspace ausgegeben. Ein einfaches Beispiel:
you@host > sed -n '/USA/{p;q}' mrolympia.dat Larry Scott USA 1965 1966
Hier wird der sed-Befehl nach dem ersten gefundenen Muster »USA« beendet. Zuvor wird noch der Inhalt des Patternspace ausgegeben.
12.4.8 Die Kommandos r und wÂ
Mit dem Kommando r können Sie aus einer Datei einen Text einschieben. In den seltensten Fällen werden Sie einen Text in nur einer Datei einfügen wollen. Hierzu sei folgende einfache Textdatei gegeben:
you@host > cat header.txt --------------------------------- Vorname Name Nationalität Jahr(e) ---------------------------------
Der Inhalt dieser Datei soll jetzt mit dem Kommando r in die letzte Zeile eingefügt werden:
you@host > sed '$r header.txt' mrolympia.dat Larry Scott USA 1965 1966 <NAME> USA 1967 1968 1969 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 ... <NAME>itannien 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004 Jay Cutler 2005 --------------------------------- Vorname Name Nationalität Jahr(e) ---------------------------------
Natürlich können Sie auch die Zeilennummer oder ein Muster als Adresse für das Einfügen verwenden:
you@host > sed -e '/Arnold/r header.txt' mrolympia.dat Larry Scott USA 1965 1966 Sergio Oliva USA 1967 1968 1969 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 --------------------------------- Vorname Name Nationalität Jahr(e) --------------------------------- Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983 ...
Ebenso können Sie auch mehr als nur eine Datei mithilfe der âe-Option aus einer Datei einlesen lassen und in die entsprechenden Zeilen einfügen.
you@host > sed -e '3r file1.dat' -e '7r file2.dat' mrolympia.dat Larry Scott USA 1965 1966 Sergio Oliva USA 1967 1968 1969 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 ----------Ich bin file1------------ Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 ----------Ich bin file2------------ <NAME>ien 1992 1993 1994 1995 1996 1997 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
Das entsprechende Gegenstück zum Kommando r erhalten Sie mit dem Kommando w, mit dem Sie das Ergebnis von sed in einer Datei speichern können:
you@host > sed -n '/USA/w USA.dat' mrolympia.dat you@host > cat USA.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> USA 1982 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
12.4.9 Das Kommando s â substituteÂ
Bei dem s-Kommando handelt sich wohl um das meistverwendete Kommando in sed und es ist auch einer der Hauptgründe, sed überhaupt zu verwenden. Die Syntax:
sed -e 's/altes_Muster/neues_Muster/flag' datei sed -e 's?altes_Muster?neues_Muster?flag' datei
Damit können Sie das Muster »altes_Muster« (im Patternspace) von datei durch »neues_Muster« ersetzen. Als Trennzeichen (hier einmal mit / und ?) können Sie praktisch jedes beliebige Zeichen verwenden (außer Backslash und Newline-Zeichen), es darf nur nicht Bestandteil des Musters sein.
In der Grundform, ohne Angabe von »flag«, ersetzt das Kommando s nur das erste Vorkommen eines Musters pro Zeile. Somit können Sie mit der Angabe von »flag« auch das Verhalten von s verändern. Am häufigsten verwendet wird wohl das Flag g, mit dem alle Vorkommen eines Musters in einer Zeile nacheinander ersetzt werden â also eine globale Ersetzung. Mit dem Flag q wird nach der letzten Ersetzung der Patternspace ausgegeben, mit dem Flag w können Sie den Patternspace in die angegebene Datei schreiben. Allerdings muss diese Option die letzte im Flag sein, da der Dateiname bis zum Ende der Zeile geht. Selbstverständlich werden gerade zu diesem Kommando jetzt massenhaft Beispiele folgen.
you@host > sed 's/USA/Amerika/g' mrolympia.dat <NAME> Amerika 1965 1966 <NAME> 1967 1968 1969 <NAME> 1970 1971 1972 1973 1974 1975 <NAME> 1976 1981 <NAME> 1982 <NAME> 1983 <NAME> 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 <NAME> 1998 1999 2000 2001 2002 2003 2004
Hier wurden global alle Zeichenfolgen »USA« durch »Amerika« ersetzt. Natürlich können Sie hierbei auch einzelnen Zeilen mithilfe einer Adresse verändern:
you@host > sed '/Oliva/ s/USA/Cuba/' mrolympia.dat L<NAME>cott USA 1965 1966 <NAME> Cuba 1967 1968 1969 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 Franco Columbu Argentinien 1976 1981 ...
Hier wurde nur die Zeile ersetzt (»USA« durch »Cuba«), in der sich die Textfolge »Olivia« befindet. Wollen Sie nicht, dass hier die komplette Datei ausgegeben wird, sondern nur die Zeile(n), die ersetzt wurde(n), müssen Sie das Flag p (mit der Option ân) verwenden:
you@host > sed -n '/Oliva/ s/USA/Cuba/p' mrolympia.dat <NAME> Cuba 1967 1968 1969
Das nächste Beispiel:
you@host > sed 's/[12][90]/-/' mrolympia.dat <NAME> USA â65 1966 Sergio Oliva USA â67 1968 1969 Ar<NAME> Österreich â70 1971 1972 1973 1974 1975 1980 Franco Columbu Argentinien â76 1981 <NAME> USA â82 <NAME> Libanon â83 Lee Haney USA â84 1985 1986 1987 1988 1989 1990 1991 <NAME>ien â92 1993 1994 1995 1996 1997 <NAME> USA â98 1999 2000 2001 2002 2003 2004
Hier konnten Sie zum ersten Mal den Einfluss des g-Flags erkennen. Es sollten wohl alle Zeichenfolgen »19« und »20« durch ein Minuszeichen ersetzt werden. Dasselbe nochmals mit dem Flag g:
you@host > sed 's/[12][90]/-/g' mrolympia.dat <NAME> USA â65 â66 <NAME> USA â67 â68 â69 <NAME> â70 â71 â72 â73 â74 â75 â80 <NAME>umbu Argentinien â76 â81 <NAME> USA â82 <NAME> â83 <NAME> USA â84 â85 â86 â87 â88 â89 â90 â91 <NAME> â92 â93 â94 â95 â96 â97 <NAME> USA â98 â99 â00 â01 â02 â03 â04
Hier werden auch die Zeichenfolgen »10«, »20« und »29« beachtet. Wollen Sie statt einer Ersetzung eine Ergänzung machen, können Sie das Ampersand-Zeichen (&) beim Ersetzungsstring verwenden:
you@host > sed -n 's/Franco Columbu/Dr. &/p' mrolympia.dat Dr. <NAME> Argentinien 1976 1981
Hier wurde der Suchstring »Franco Columbu« exakt an der Position des »Ampersand-Zeichens« beim Ersetzungsstring verwendet und ein Titel davor gesetzt. Sofern Sie etwas Derartiges global durchführen müssen und nur die ersetzten Pattern ausgeben wollen, können Sie auch die Flags g und p gleichzeitig verwenden:
you@host > sed -n 's/USA/& (Amerika)/gp' mrolympia.dat Larry Scott USA (Amerika) 1965 1966 Sergio Oliva USA (Amerika) 1967 1968 1969 <NAME> USA (Amerika) 1982 Lee Haney USA (Amerika) 1984 1985 1986 1987 1988 1989 1990 1991 Ronnie Coleman USA (Amerika) 1998 1999 2000 2001 2002 2003 2004
Mit der Option âe können Sie auch mehrere Ersetzungen auf einmal durchführen lassen:
you@host > sed -e 's/USA/& (Amerika)/g' \ > -e 's/[12][90]/-/g' mrolympia.dat Larry Scott USA (Amerika) â65 â66 Sergio Oliva USA (Amerika) â67 â68 â69 <NAME> â70 â71 â72 â73 â74 â75 â80 <NAME>entinien â76 â81 <NAME> USA (Amerika) â82 <NAME> â83 Lee Haney USA (Amerika) â84 â85 â86 â87 â88 â89 â90 â91 <NAME> â92 â93 â94 â95 â96 â97 Ronnie Coleman USA (Amerika) â98 â99 â00 â01 â02 â03 â04
Gleiches können Sie übrigens auch mit einer Referenz wie folgt machen:
you@host > sed -n 's/\(USA\)/\1 (Amerika)/gp' mrolympia.dat <NAME> USA (Amerika) 1965 1966 <NAME> USA (Amerika) 1967 1968 1969 <NAME> USA (Amerika) 1982 <NAME> USA (Amerika) 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> USA (Amerika) 1998 1999 2000 2001 2002 2003 2004
Hier wird \1 als Referenz auf »USA« beim Ersetzungsstring verwendet.
Betrachten Sie mal folgendes Beispiel:
you@host > sed -n 's/\([12][90]\)\(..\)/\/\1\2\//gp' \ > mrolympia.dat
Beim Anblick dieser Zeile fällt es schon schwer, den Überblick zu behalten. Hier sollen alle Jahreszeiten zwischen zwei Schrägstrichen gesetzt werden (/2000/, /2001/ etc). Bei der Verwendung solcher Zeichen-Orgien empfiehlt es sich, häufiger mal ein anderes Trennzeichen zu verwenden:
you@host > sed -n 's#\([12][90]\)\(..\)#\/\1\2\/#gp' \ > mrolympia.dat
Jetzt kann man wenigstens leichter zwischen dem Such- und Ersetzungsstring unterscheiden.
Man könnte noch extremere Auswüchse zum Suchen und Ersetzen mit sed schreiben, doch erscheint es mir an dieser Stelle wichtiger, ein Problem mit den regulären Ausdrücken zu erwähnen, das mich zur Weißglut beim Schreiben eines Artikels zur Verwendung von sed in Verbindung mit HTML-Seiten gebracht hat. Erst ein Artikel im Internet von <NAME> hat dann für klare Verhältnisse gesorgt. Das Problem: Ein regulärer Ausdruck sucht sich immer den längsten passenden String. Bspw. folgende Textfolge in einem HTML-Dokument:
Dies ist ein <strong>verflixte</strong> Stelle in einem <strong>verflixten</strong> HTML-Dokument.
Ich wollte alle HTML-Tags aus dem Dokument entfernen, um so eine normale Textdatei daraus zu machen. Mit folgendem Konstrukt wollte ich dies realisieren:
sed -e 's/<.*>//g' index.html
Mit folgendem Ergebnis:
Das ist ein HTML-Dokument.
Leider war mir nicht (mehr) klar, dass .* so gefräßig ist (eigentlich war es mir aus Perl bekannt) und die längste Zeichenfolge sucht, also von
<strong>verflixte</strong> Stelle in einem <strong>verflixten </strong>
haben will. Um aber nur die Zeichen bis zum ersten Auftreten des Zeichens > zu löschen, muss man mit folgender Bereichsangabe arbeiten:
sed -e 's/<[^>]*>//g' index.html
Hieran können Sie erkennen, dass die Verwendung regulärer Ausdrücke recht kompliziert werden kann und man häufig um eine intensivere Beschäftigung mit ihnen nicht herumkommt. Gerade, wenn Sie reguläre Ausdrücke beim Suchen und Ersetzen mehrerer Dateien verwenden, sollten Sie sich sicher sein, was Sie tun und gegebenenfalls einen Probelauf vornehmen bzw. ein Backup zur Hand haben. Zum Glück â und vielleicht auch deswegen â wurde sed so konzipiert, dass die Manipulation erst mal nicht an der Originaldatei gemacht wird.
Manch einer mag jetzt fragen, wie man Dateien vom DOS-Format ins UNIX-Format und umgekehrt konvertieren kann. Mit sed ist dies leichter als man denkt. Man gehe davon aus, dass eine DOS-Zeile mit CR und LF endet. Ich habe das Beispiel der sed-FAQ von <NAME> entnommen:
# 3. Under UNIX: convert DOS newlines (CR/LF) to Unix format sed 's/.$//' file # assumes that all lines end with CR/LF sed 's/^M$// file # in bash/tcsh, press Ctrl-V then Ctrl-M
Allerdings stellt sich die Frage, ob man hierbei nicht auf die Tools dos2unix bzw. unix2dos zurückgreifen will. Beim vim können Sie auch mittels set (set fileformat=dos oder set fileformat=unix) das Dateiformat angeben.
12.4.10 Das Kommando yÂ
Ein weiteres Kommando, das neben dem Kommando s gern verwendet wird, ist y (yank), mit dem Sie eine Ersetzung von Zeichen aus einer Liste vornehmen können. Die Syntax:
y/Quellzeichen/Zielzeichen/
Hiermit werden alle Zeichen in »Quellzeichen« in die entsprechenden Zeichen in »Zielzeichen« umgewandelt. Ist eine der Listen leer oder unterschiedlich lang, wird sed mit einem Fehler beendet. Natürlich können Sie auch (wie schon beim Kommando s) als Trenner ein anderes Zeichen als / verwenden. Bspw. können Sie mit folgendem sed-Befehl eine Datei nach der »rot-13«-Methode verschlüsseln (hier nur auf die Kleinbuchstaben beschränkt):
you@host > cp mrolympia.dat mrolympia.bak you@host > sed âe \ > 'y/abcdefghijklmnopqrstuvwxyz/nopqrstuvwxyzabcdefghijklm/' \ > mrolympia.bak > mrolympia.dat you@host > cat mrolympia.dat Lneel Spbgg USA 1965 1966 Sretvb Oyvin USA 1967 1968 1969 Aeabyq Spujnemrarttre Öfgrervpu 1970 1971 1972 1973 1974 1975 Fenapb Cbyhzoh Aetragvavra 1976 1981 ...
Hiermit werden alle (Klein-)Buchstaben um 13 Zeichen verschoben, aus »a« wird »n«, aus »b« wird »o«, aus »c« wird »p« usw. Wollen Sie das Ganze wieder rückgängig machen, brauchen Sie »Quellzeichen« und »Zielzeichen« aus dem Beispiel nur auszutauschen:
you@host > cp mrolympia.dat mrolympia.bak you@host > sed âe \ > 'y/nopqrstuvwxyzabcdefghijklm/abcdefghijklmnopqrstuvwxyz/' \ > mrolympia.bak > mrolympia.dat you@host > cat mrolympia.dat Larry Scott USA 1965 1966 Sergio Oliva USA 1967 1968 1969 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 Franco Columbu Argentinien 1976 1981 ...
Ihre Meinung
Wie hat Ihnen das Openbook gefallen? Wir freuen uns immer über Ihre Rückmeldung. Schreiben Sie uns gerne Ihr Feedback als E-Mail an <EMAIL>.
## 12.4 sed Commands, Substitution Flags, and Options
First, the most important sed commands:
| Command | Meaning |
| --- | --- |
| a | (append) Appends one or more lines after the selected line. |
| c | (change) Replaces the selected line with one or more new lines. |
| d | (delete) Deletes line(s). |
| g | (get "buffer") Copies the contents of the temporary buffer (hold space) into the working buffer (pattern space). |
| G | (GetNewline) Appends the contents of the temporary buffer (hold space) to the working buffer (pattern space). |
| h | (hold "buffer") Counterpart to g; copies the contents of the working buffer (pattern space) into the temporary buffer (hold space). |
| H | (HoldNewline) Counterpart to G; appends the contents of the working buffer (pattern space) to the temporary buffer (hold space). |
| i | (insert) Inserts a new line before the selected line. |
| l | (listing) Displays non-printable characters. |
| n | (next) Reads the next input line into the pattern space so that the following commands operate on it instead of the current line. |
| p | (print) Prints the lines. |
| q | (quit) Terminates sed. |
| r | (read) Includes a file; reads lines in from a file. |
| s | (substitute) Replaces one string with another. |
| x | (Xchange) Swaps the temporary buffer (hold space) with the working buffer (pattern space). |
| y | (yank) Replaces characters according to a list; transliterates one character into another. |
| w | (write) Writes lines to a file. |
| ! | Negation; applies the commands to the lines that do not match. |
| Flag | Meaning |
| --- | --- |
| g | Global substitution (all occurrences of the pattern in the line). |
| p | Prints the line. |
| w | Writes the line to the file named after the flag. |
| Option | Meaning |
| --- | --- |
| -n | Switches off the default output, i.e. the automatic printing of the buffer (pattern space). |
| -e | Executes several commands one after the other; you effectively type the script or the commands directly on the command line. |
| -f | Collects the sed commands in a script file (sed script) and passes that script with -f; works like -e, except that the name of a script file is given instead of the commands on the command line. |
| Expression | Meaning | Example | Explanation |
| --- | --- | --- | --- |
| ^ | Beginning of a line | /^wort/ | Matches all lines that begin with the string wort. |
| $ | End of a line | /wort$/ | Matches all lines that end with the string wort. |
| . | Any single character | /w.rt/ | Matches all lines containing wort, wert, wirt, wart, and so on (the newline character is the exception). |
| * | Zero, one, or more repetitions of the preceding character (or group) | /.*wort/ | Matches all lines in which wort is preceded by any number of characters, including none. |
| [] | One character from the set | /[Ww]ort/ | Matches all lines containing the string wort or Wort. |
| [^] | No character from the set | /[^Ww]ort/ | Matches all lines in which ort is preceded by a character other than w or W. |
| \(...\) | Saves the enclosed pattern | s/\(wort\)a/\1b/ | The string wort is stored in \1. This reference to the pattern wort is then used to turn the string worta into wortb. Up to 9 such references can be defined and accessed with \1, \2 ... \9. |
| & | Contains the search pattern | s/wort/Ant&en/ | The ampersand represents the matched search string. In the example every occurrence of wort is replaced by the word Antworten. |
| \< | Beginning of a word | /\<wort/ | Finds all lines with a word that begins with wort, such as wortreich or wortarm, but not Vorwort or Nachwort. |
| \> | End of a word | /wort\>/ | Finds all lines with a word that ends in wort, such as Vorwort or Nachwort, but not wortreich or wortarm. |
| x\{m\} x\{m,\} x\{m,n\} | Exactly m repetitions of the character x; at least m repetitions of the character x; at least m and at most n repetitions of the character x | | |
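The interval notation from the last table row is not used in the examples that follow, so here is a tiny sketch of my own (not from the book) showing the idea of "two or three repetitions of a":

> you@host > echo "aab aaab aaaab" | sed 's/a\{2,3\}b/X/g' X X aX

Only groups of two or three a's followed by b are replaced; in the last word the extra leading a is left untouched.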
Now that you have seen a lot of theory and quite a few tables, here is a practical part on using the individual sed commands. The file mrolympia.dat will again serve as the example:
> you@host > cat mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME>gger Österreich 1970 1971 1972 1973 1974 1975 <NAME>inien 1976 1981 <NAME> USA 1982 <NAME>anon 1983 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
It should also be mentioned that all of the examples operate on the pattern space buffer. The changes therefore affect only the output on the screen (standard output). If you want to make a permanent change (which is usually what you want), refer to section 12.1.1; the simplest solution is probably to add a redirection at the end of the sed command.
### 12.4.1 The a Command - Appending Line(s)
The syntax:
> a\ new line        # or, in newer sed versions, also: a new line
With the a command (append) you can insert a new line after a matched line. For example:
> you@host > sed '/2004/ a Jay Cutler 2005' mrolympia.dat ... <NAME> 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004 <NAME> 2005
If the line gets longer, you can use a backslash for readability:
> you@host > sed '/2004/ a\ > Prognosse für 2005: J.Cutler; R.Coleman; M.Rühl' mrolympia.dat ... <NAME> 1992 1993 1994 1995 1996 1997 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004 Prognosse für 2005: J.Cutler; R.Coleman; M.Rühl'
Note   The fact that the a command and the new line may appear on the same line is an extension of newer sed versions. Older sed implementations often only understand the a command followed by a backslash, with the new line then given on the next line. Keep this in mind if you ever have to work with sed on a somewhat older machine.
Also be aware that any whitespace after the a command and the backslash is interpreted and used as part of the new text.
### 12.4.2 The c Command - Replacing Lines
The syntax:
> c\ new text        # or in newer sed versions: c new text
If you want to replace a matched line entirely with a new one, you can use the c command (change). It works in the same way as the a command.
> you@host > sed '/\<Oliva\>/ c \ > Sergio Oliva Cuba 1967 1968 1969' mrolympia.dat L<NAME> USA 1965 1966 Sergio Oliva Cuba 1967 1968 1969 ...
In the example, every line containing the word »Oliva« was replaced by the input given on the following line; here, for instance, the nationality was changed. As with the a command, the c command must be followed by a backslash and a newline, although the newer sed versions again manage without backslash and newline.
### 12.4.3 The d Command - Deleting Lines
The d command (delete) is used to delete lines; it removes the addressed line(s) from the buffer. A few examples.
Deletes the fifth line:
> you@host > sed '5d' mrolympia.dat
Deletes lines five through nine:
> you@host > sed '5,9d' mrolympia.dat
Deletes all lines from the fifth line to the end of the file:
> you@host > sed '5,$d' mrolympia.dat
Deletes all lines that contain the pattern »USA«:
> you@host > sed '/USA/d' mrolympia.dat
Deletes all lines that do not contain the pattern »USA«:
> you@host > sed '/USA/!d' mrolympia.dat
### 12.4.4 The h, H, g, G, and x Commands - Working with the Buffers
The commands g (get), G (GetNewline), h (hold), H (HoldNewline), and x (Xchange) let you work with the two buffers (pattern space and hold space).
With the h command you save the current contents of the working buffer (pattern space) into the other buffer (hold space). With H you append a newline to the end of the hold space, followed by the contents of the pattern space.
With g, the counterpart to h, you replace the current line in the pattern space with the contents of the buffer you previously saved with h. With G you append a newline to the end of the pattern space, followed by the contents of the hold space.
The x command, in contrast, exchanges the contents of the pattern space with the contents of the other buffer.
An example:
> you@host > sed -e '/Sergio/{h;d}' -e '$G' mrolympia.dat
Here you first search the file mrolympia.dat for the pattern »Sergio«. When a matching line is found, you put it into the hold space for safekeeping (command h). In the next step you delete this line (command d). Then the next command (option -e) is executed: it appends the line stored in the hold space (G), preceded by a newline, to the end of the pattern space. The line is attached at the very end ($ address). If you want to place this line at, say, the fifth line instead, proceed as follows:
> you@host > sed -e '/Sergio/{h;d}' -e '5G' mrolympia.dat Larry Scott USA 1965 1966 Ar<NAME>enegger Österreich 1970 1971 1972 1973 1974 1975 Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 Sergio Oliva USA 1967 1968 1969 <NAME> Libanon 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Please note that the deleted line is counted when the insertion position is calculated. Now what happens if you put the line from the hold space into the pattern space without a newline, as g does? This:
> you@host > sed -e '/Sergio/{h;d}' -e '5g' mrolympia.dat <NAME> USA 1965 1966 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 Franco Columbu Argentinien 1976 1981 Sergio Oliva USA 1967 1968 1969 Samir Bannout Libanon 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
Here a line (in this case »<NAME>«) is simply overwritten without mercy. Keep this in mind when using g and G or h and H.
Another example, this time with the x command:
> you@host > sed -e '/Sergio/{h;d}' -e '/Dorian/x' \ > -e '$G' mrolympia.dat <NAME> USA 1965 1966 <NAME> Österreich 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> USA 1967 1968 1969 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004 <NAME> 2005 <NAME>ien 1992 1993 1994 1995 1996 1997
Here you first search for the pattern »Sergio«, put it into the hold space (command h), and then delete the line from the pattern space (command d). Next you search for the pattern »Dorian« and exchange the contents of the pattern space (the »Dorian« line) with the contents of the hold space (the »Sergio« line). The hold space now contains the »Dorian« line, which you append to the end of the pattern space with a further command (option -e).
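A compact illustration of how h and G play together is the well-known one-liner that prints a file in reverse line order. This is not one of the book's examples, just a common sed idiom shown here as an extra sketch:

> you@host > sed -n '1!G;h;$p' mrolympia.dat

For every line except the first, G appends the hold space (all lines seen so far) to the pattern space; h then copies this growing block back into the hold space; only on the last line ($) is the result printed, so the lines come out in reverse order.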
### 12.4.5 The i Command - Inserting Lines
The syntax:
> i\ text to insert        # or in newer sed versions: i text to insert
Everything that was said about the a command applies to this command as well. With it you insert a new line before the matched line.
> you@host > sed '/Franco/ i \ > ---Zeile---' mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> Österreich 1970 1971 1972 1973 1974 1975 ---Zeile--- Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 ...
### 12.4.6 The p Command - Printing the Pattern Space
This command has already been used plenty of times: with p you print the pattern space. If, for example, you simply want to output a complete file, you can proceed as follows:
> you@host > sed -n 'p' mrolympia.dat
The -n option is necessary here, because otherwise the default output would be printed in addition to the pattern space buffer and you would get every line twice. With -n you switch the default output off. This is easily forgotten and leads to errors. If, say, you want to print the last line of a file and forget to switch off the default output, you get the following:
> you@host > sed '$p' mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME>gger Österreich 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981 <NAME> USA 1982 <NAME>anon 1983 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
The complete file is still printed, because the default output keeps rushing across the screen. The last line, however, appears twice: once from the default output and once more from the p command printing the last line. The correct version is:
> you@host > sed -n '$p' mrolympia.dat Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
A few more typical examples with the p command:
> you@host > sed -n '4p' mrolympia.dat Franco Columbu Argentinien 1976 1981
Here the fourth line was printed.
> you@host > sed -n '4,6p' mrolympia.dat Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 <NAME> 1983
This prints lines 4 through 6.
> you@host > sed -n '/USA/p' mrolympia.dat Larry Scott USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> USA 1982 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
Here all lines containing the string »USA« are printed.
### 12.4.7 The q Command - Quitting
With the q command you terminate the running sed script by branching to the end of the script; before that, the pattern space is printed once more. A simple example:
> you@host > sed -n '/USA/{p;q}' mrolympia.dat Larry Scott USA 1965 1966
Here the sed command terminates after the first match of the pattern »USA«; the contents of the pattern space are printed before it exits.
### 12.4.8 The r and w Commands
With the r command you can insert text taken from a file. Only rarely will you want to type such a text into just one file by hand, which is exactly why keeping it in a separate file pays off. Consider the following simple text file:
> you@host > cat header.txt --------------------------------- Vorname Name Nationalität Jahr(e) ---------------------------------
The contents of this file are now inserted after the last line with the r command:
> you@host > sed '$r header.txt' mrolympia.dat Larry Scott USA 1965 1966 <NAME> USA 1967 1968 1969 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 ... <NAME>ien 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004 <NAME> 2005 --------------------------------- Vorname Name Nationalität Jahr(e) ---------------------------------
Of course you can also use a line number or a pattern as the address for the insertion:
> you@host > sed -e '/Arnold/r header.txt' mrolympia.dat Larry Scott USA 1965 1966 Sergio Oliva USA 1967 1968 1969 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 --------------------------------- Vorname Name Nationalität Jahr(e) --------------------------------- Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983 ...
Likewise, with the -e option you can read in more than one file and insert each one at the appropriate lines.
> you@host > sed -e '3r file1.dat' -e '7r file2.dat' mrolympia.dat Larry Scott USA 1965 1966 Sergio Oliva USA 1967 1968 1969 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 ----------Ich bin file1------------ Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 ----------Ich bin file2------------ <NAME>ien 1992 1993 1994 1995 1996 1997 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
The counterpart to the r command is the w command, with which you can save the result of sed to a file:
> you@host > sed -n '/USA/w USA.dat' mrolympia.dat you@host > cat USA.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> USA 1982 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
### 12.4.9 The s Command - substitute
The s command is probably the most frequently used command in sed and one of the main reasons to use sed at all. The syntax:
> sed -e 's/altes_Muster/neues_Muster/flag' datei sed -e 's?altes_Muster?neues_Muster?flag' datei
This replaces the pattern »altes_Muster« (the old pattern, in the pattern space) of datei with »neues_Muster« (the new pattern). As the delimiter (shown here once with / and once with ?) you can use practically any character (except backslash and newline), as long as it is not part of the pattern.
In its basic form, without »flag«, the s command replaces only the first occurrence of a pattern per line. By specifying »flag« you can change this behaviour. The most frequently used flag is g, which replaces all occurrences of a pattern in a line one after the other, i.e. a global substitution. With the p flag the pattern space is printed after the substitution, and with the w flag you can write the pattern space to the named file; this must be the last flag, because the file name extends to the end of the line. Naturally, plenty of examples of this command follow now.
> you@host > sed 's/USA/Amerika/g' mrolympia.dat <NAME> 1965 1966 <NAME> 1967 1968 1969 <NAME> 1970 1971 1972 1973 1974 1975 <NAME> 1976 1981 <NAME> 1982 <NAME> 1983 <NAME> 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 Ronnie Coleman Amerika 1998 1999 2000 2001 2002 2003 2004
Here all occurrences of »USA« were globally replaced with »Amerika«. Of course you can also restrict the change to individual lines by using an address:
> you@host > sed '/Oliva/ s/USA/Cuba/' mrolympia.dat Larry Scott USA 1965 1966 Sergio Oliva Cuba 1967 1968 1969 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 Franco Columbu Argentinien 1976 1981 ...
Here the substitution (»USA« with »Cuba«) was made only in the line containing the string »Oliva«. If you do not want the whole file to be printed, but only the line(s) in which a substitution took place, you have to use the p flag (together with the -n option):
> you@host > sed -n '/Oliva/ s/USA/Cuba/p' mrolympia.dat Sergio Oliva Cuba 1967 1968 1969
The next example:
> you@host > sed 's/[12][90]/-/' mrolympia.dat Larry Scott USA -65 1966 Sergio Oliva USA -67 1968 1969 Arnold Schwarzenegger Österreich -70 1971 1972 1973 1974 1975 1980 Franco Columbu Argentinien -76 1981 <NAME> USA -82 Samir Bannout Libanon -83 Lee Haney USA -84 1985 1986 1987 1988 1989 1990 1991 <NAME>ien -92 1993 1994 1995 1996 1997 Ronnie Coleman USA -98 1999 2000 2001 2002 2003 2004
Here you can see for the first time why the g flag matters. The intention was presumably to replace every occurrence of »19« and »20« with a minus sign, but without g only the first match in each line was changed. The same again with the g flag:
> you@host > sed 's/[12][90]/-/g' mrolympia.dat <NAME> USA -65 -66 Sergio Oliva USA -67 -68 -69 Arnold Schwarzenegger Österreich -70 -71 -72 -73 -74 -75 -80 Franco Columbu Argentinien -76 -81 <NAME> USA -82 <NAME> Libanon -83 Lee Haney USA -84 -85 -86 -87 -88 -89 -90 -91 <NAME>ien -92 -93 -94 -95 -96 -97 Ronnie Coleman USA -98 -99 -00 -01 -02 -03 -04
Now the strings »10«, »20«, and »29« are covered as well. If you want to add to the match rather than replace it, you can use the ampersand character (&) in the replacement string:
> you@host > sed -n 's/<NAME>/Dr. &/p' mrolympia.dat Dr. <NAME> Argentinien 1976 1981
Here the search string »Franco Columbu« was inserted exactly at the position of the ampersand in the replacement string, with a title placed in front of it. If you need to do something like this globally and only want to output the substituted lines, you can use the g and p flags at the same time:
> you@host > sed -n 's/USA/& (Amerika)/gp' mrolympia.dat Larry Scott USA (Amerika) 1965 1966 Sergio Oliva USA (Amerika) 1967 1968 1969 <NAME> USA (Amerika) 1982 <NAME> USA (Amerika) 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> USA (Amerika) 1998 1999 2000 2001 2002 2003 2004
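The w flag mentioned above can be demonstrated in the same way. A small sketch (the file name amerika.dat is just a placeholder): with -n nothing appears on the screen, and the substituted lines are written to the file instead:

> you@host > sed -n 's/USA/& (Amerika)/gw amerika.dat' mrolympia.dat you@host > cat amerika.dat Larry Scott USA (Amerika) 1965 1966 ...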
With the -e option you can also have several substitutions carried out in one go:
> you@host > sed -e 's/USA/& (Amerika)/g' \ > -e 's/[12][90]/-/g' mrolympia.dat Larry Scott USA (Amerika) -65 -66 Sergio Oliva USA (Amerika) -67 -68 -69 <NAME> Österreich -70 -71 -72 -73 -74 -75 -80 <NAME>entinien -76 -81 <NAME> USA (Amerika) -82 <NAME> -83 Lee Haney USA (Amerika) -84 -85 -86 -87 -88 -89 -90 -91 <NAME> -92 -93 -94 -95 -96 -97 Ronnie Coleman USA (Amerika) -98 -99 -00 -01 -02 -03 -04
Incidentally, you can achieve the same thing with a back-reference, like this:
> you@host > sed -n 's/\(USA\)/\1 (Amerika)/gp' mrolympia.dat Larry Scott USA (Amerika) 1965 1966 <NAME> USA (Amerika) 1967 1968 1969 <NAME> USA (Amerika) 1982 Lee Haney USA (Amerika) 1984 1985 1986 1987 1988 1989 1990 1991 Ronnie Coleman USA (Amerika) 1998 1999 2000 2001 2002 2003 2004
Here \1 is used in the replacement string as a reference to »USA«.
Now take a look at the following example:
> you@host > sed -n 's/\([12][90]\)\(..\)/\/\1\2\//gp' \ > mrolympia.dat
Just looking at this line it is hard to keep track of what is going on. The goal is to put every year between two slashes (/2000/, /2001/, and so on). When using such orgies of special characters it is a good idea to switch to a different delimiter once in a while:
> you@host > sed -n 's#\([12][90]\)\(..\)#\/\1\2\/#gp' \ > mrolympia.dat
Now it is at least a little easier to tell the search string and the replacement string apart.
One could come up with even more extreme search-and-replace constructs for sed, but at this point it seems more important to mention a problem with regular expressions that drove me to distraction while writing an article about using sed on HTML pages. Only an article on the Internet by <NAME> finally cleared things up. The problem: a regular expression always matches the longest possible string. Take, for example, the following text in an HTML document:
> Dies ist ein <strong>verflixte</strong> Stelle in einem <strong>verflixten</strong> HTML-Dokument.
I wanted to remove all HTML tags from the document in order to turn it into a plain text file. I tried to achieve this with the following construct:
> sed -e 's/<.*>//g' index.html
With the following result:
> Das ist ein HTML-Dokument.
Unfortunately it was not (or no longer) clear to me how greedy .* is (I actually knew this from Perl): it matches the longest possible string, so it swallows the whole stretch
> <strong>verflixte</strong> Stelle in einem <strong>verflixten </strong>
in one go. To delete only the characters up to the first occurrence of the > character, you have to work with the following bracket expression instead:
> sed -e 's/<[^>]*>//g' index.html
This shows that the use of regular expressions can become quite involved and that you often cannot avoid studying them more intensively. Especially when you use regular expressions to search and replace across several files, you should be sure of what you are doing and, if in doubt, do a trial run or keep a backup at hand. Fortunately, and perhaps for exactly this reason, sed was designed so that the manipulation is not applied to the original file in the first place.
Some readers may now ask how to convert files from DOS format to UNIX format and vice versa. With sed this is easier than you might think. Assume that a DOS line ends with CR and LF. I have taken the example from the sed FAQ by <NAME>:
> # 3. Under UNIX: convert DOS newlines (CR/LF) to Unix format sed 's/.$//' file # assumes that all lines end with CR/LF sed 's/^M$//' file # in bash/tcsh, press Ctrl-V then Ctrl-M
That said, the question is whether you would not rather fall back on the tools dos2unix and unix2dos for this. In vim you can also specify the file format via set (set fileformat=dos or set fileformat=unix).
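For the opposite direction (UNIX to DOS), a minimal sketch that assumes GNU sed, which understands \r in the replacement text; the file names here are only placeholders:

> you@host > sed 's/$/\r/' unixdatei > dosdatei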
### 12.4.10 The y Command
Another command that is frequently used alongside the s command is y (yank), which lets you replace characters according to a list. The syntax:
> y/Quellzeichen/Zielzeichen/
This converts every character in »Quellzeichen« (the source characters) into the corresponding character in »Zielzeichen« (the target characters). If one of the lists is empty or the lists differ in length, sed aborts with an error. As with the s command, you can of course use a delimiter other than /. For example, the following sed command encrypts a file with the »rot-13« method (restricted here to lowercase letters):
> you@host > cp mrolympia.dat mrolympia.bak you@host > sed -e \ > 'y/abcdefghijklmnopqrstuvwxyz/nopqrstuvwxyzabcdefghijklm/' \ > mrolympia.bak > mrolympia.dat you@host > cat mrolympia.dat Lneel Spbgg USA 1965 1966 Sretvb Oyvin USA 1967 1968 1969 Aeabyq Spujnemrarttre Öfgrervpu 1970 1971 1972 1973 1974 1975 Fenapb Cbyhzoh Aetragvavra 1976 1981 ...
This shifts every (lowercase) letter by 13 positions: »a« becomes »n«, »b« becomes »o«, »c« becomes »p«, and so on. To undo it, simply swap the source and target characters from the example:
> you@host > cp mrolympia.dat mrolympia.bak you@host > sed -e \ > 'y/nopqrstuvwxyzabcdefghijklm/abcdefghijklmnopqrstuvwxyz/' \ > mrolympia.bak > mrolympia.dat you@host > cat mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> Österreich 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981 ...
## 12.5 sed Scripts
Besides using the sed command inside shell scripts like any other command, or using sed on the command line, there is a third possibility: writing real sed scripts. The advantage is that frequently used sed commands are always at hand and that, especially with longer sed command sequences, you keep the overview. You can of course still use the sed script from within a shell script; you just need the sed script in addition to the shell script.
So that sed knows it receives its commands from a file, you must use the -f (file) option. The syntax:
sed -f sed_script.sed file
When using sed scripts, you must observe the following:
  * No spaces, tabs, or similar before or after the commands.
  * A line that begins with # is treated as a comment.
Here is a simple sed script that embeds a text file in an HTML document:
# Name: text2html.sed
# convert the special characters '<', '>' and '&' into HTML entities
s/&/\&/g
s/</\</g
s/>/\>/g
# select line 1 with insert for the HTML header
1 i\
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"\
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">\
<html>\
<head>\
<title>\
Converted with txt2html.sed\
</title>\
</head>\
<body>\
<pre>
# the text of the file goes here
# append the footer at the end
$ a\
</pre>\
</body>\
</html>
You can now apply this script as follows:
you@host > sed -f text2html.sed mrolympia.dat > mrolympia.html you@host > cat mrolympia.html <head> <title> Converted with txt2html.sed </title> </head> <body> <pre> <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> Österreich 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME>ien 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004 </pre> </body> </html>
You can now view this HTML document in any web browser of your choice. To process many files in one go from a shell script, you can do it like this:
# all files from the command line; alternatively the metacharacter *
# could be used here so that all files in the current
# directory are processed
for file in "$@"
do
   sed -f text2html.sed $file > temp
   # careful, this replaces the original file!!!
   mv temp $file
done
Note   At this point I must stress once more that a redirection such as sed -f script file > file does not work, because the output would overwrite the input.
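If you are using GNU sed (an assumption, not something the examples here rely on), the -i option edits the file in place and can keep a backup copy, which avoids the redirection problem altogether:

you@host > sed -i.bak -f text2html.sed mrolympia.dat

The original content is then preserved in mrolympia.dat.bak.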
There is yet another way to use sed: as a standalone program. All you have to do is set the execute permission accordingly and put the following in the first line:
#!/usr/bin/sed -f
Now you can run the sed script (here again the text2html script) like an ordinary shell script:
you@host > chmod u+x text2html.sed you@host > ./text2html.sed mrolympia.dat > mrolympia.html
Note   You can find an interesting collection of sed scripts (along with further documentation, links, etc.) at http://sed.sourceforge.net/grabbag/.
Note   By the way, sed also supports jump targets (labels) together with an unconditional branch (the jump is always taken). I mention this in case you want to dig deeper into sed scripts.
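As a small taste of such labels, here is a widely known one-liner (not from this chapter) that joins all lines of a file into a single line: :a defines a label, N appends the next input line to the pattern space, and b a branches back to the label on every line except the last ($!). The semicolon-separated form works with GNU sed; other sed versions may require each command in its own -e expression:

you@host > sed ':a;N;$!ba;s/\n/ /g' mrolympia.dat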
## 13.2 Invoking awk Programs
Here, too, the question is how awk can be invoked. First of all, it is advisable to put the awk program part in single quotes in order to avoid conflicts with the shell. The syntax:
awk 'program' file [file] ...
awk 'program
     program
     program' file [file] ...
awk -f program_file file [file]
Again you have two options: either write all awk commands directly between the single quotes, or use the -f option (for file, as with sed) to put all awk commands into a program file (awk script) and apply that program file to the file(s) or to the input stream when awk is started.
### 13.2.1 Basic Structure of an awk Command
The previous section talked about awk commands. So what do these commands actually look like? A regular awk command consists of at least one line:
pattern { action }
The action part may contain several commands and must be enclosed in curly braces; only then can awk tell the pattern part and the action part apart. If you forget the curly braces you will get an error message anyway.
Once a file has been loaded, it is read line by line and each line is compared against the pattern. If the pattern matches, the action part is executed. Note that you do not necessarily have to supply both a pattern and an action part. A simple example (I hope you still have the file mrolympia.dat):
you@host > awk '$1 ~ "Samir"' mrolympia.dat <NAME> Libanon 1983
Here the name »Samir« was used as the pattern, and it has to appear in the first field of a line ($1). When awk finds a matching pattern, the line is printed. If you only want to output the nationality, you definitely need an action part:
you@host > awk ' $1 ~ "Samir" { print $3 }' mrolympia.dat Libanon
Again the pattern »Samir« is searched for, and it has to be in the first field of the line ($1). But instead of printing the whole line, only the third field (or word) is printed ($3).
On the other hand, you do not necessarily need a pattern to get awk to do something for you. You can also simply write the following:
you@host > awk '{ print }' mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975 ...
This makes awk print the complete contents of the file. If you only want to output the nationality (third field), all you have to write is:
you@host > awk '{ print $3 }' mrolympia.dat USA USA Österreich Argentinien USA Libanon USA Grossbritannien USA
In practice you could now count the individual nationalities and print them ranked or alphabetically. But there is still a bit of a learning curve before we get there.
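Just as a preview of where this is heading (associative arrays and the END block are introduced later), such a count might look like the following sketch: count[$3]++ increments a counter per nationality, and the END block prints the totals once all lines have been read:

you@host > awk '{ count[$3]++ } END { for (n in count) print count[n], n }' mrolympia.dat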
Of course it is also possible for an awk command to consist of several pattern-action pairs. Each of these parts is executed once per input line whenever its pattern matches, provided a pattern was given at all.
pattern { action }
pattern { action }
pattern { action }
Once all pattern-action pairs have been processed, awk reads the next line and starts over from the top, running through all pattern-action pairs again.
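A small sketch of my own with two pattern-action pairs illustrates this; for every input line awk tests both patterns and runs the action of each one that matches, so lines whose third field is neither USA nor Libanon produce no output at all:

you@host > awk '$3 == "USA" { print "US winner: " $1, $2 } $3 == "Libanon" { print "Lebanese winner: " $1, $2 }' mrolympia.dat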
### 13.2.2 The Command-Line Options of awk
awk does not offer very many command-line options. If you want to know which ones there are, simply run awk without any arguments on the command line:
you@host > awk Anwendung: awk [POSIX- oder GNU-Optionen] -f PROGRAM [--] Datei Anwendung: awk [POSIX- oder GNU-Optionen] -- 'PROGRAM' Datei ... POSIX-Optionen GNU-Optionen (lang): -f PROGRAM --file=PROGRAM -F Feldtrenner --field-separator=Feldtrenner -v var=Wert --assign=var=Wert -W compat --compat -W copyleft --copyleft ...
awk itself really only has the three options -F, -f, and -v. All other options starting with -W are GNU-specific options that are only available in gawk. Among them, however, are two very interesting switches (provided you use gawk) that turn on a compatibility mode:
Table 13.1   GNU-specific options for gawk (not nawk)
| Option | Meaning |
| --- | --- |
| -W compat | gawk behaves like a classic UNIX awk; all GNU extensions are switched off. |
| -W posix | gawk adheres to the POSIX standard. |
Table 13.2   Standard options of awk
| Option | Meaning |
| --- | --- |
| -F | Specifies the field separator(s) awk uses to split a line into individual fields; this changes the special variable FS. |
| -f | Specifies a file (awk script) containing awk statements. |
| -v | Creates a variable with a preset value that is available to the awk program right from program start. |
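The -v option does not appear in the examples of this section, so here is a minimal sketch of it; the variable name uid is arbitrary, and the output assumes the same /etc/passwd as in the following sections:

you@host > awk -F: -v uid=1001 '$3 == uid { print $1 }' /etc/passwd
you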
### 13.2.3 Invoking awk from the Command Line
The simplest way to use awk is at the same time the least practical one: directly from the command line. It is impractical whenever you repeat or reuse various commands regularly; even so-called one-liners are worth saving in a program file.
awk has already been used from the command line:
you@host > awk -F: '{print $3}' /etc/passwd
Here you first set the colon as the field separator with -F. Then awk extracts all user IDs from the file /etc/passwd and prints them to standard output.
You probably cannot do much with this in practice. So why not filter out the user ID of one specific user? With grep and a pipe this is not hard:
you@host > grep you /etc/passwd | awk -F: '{print $3}' 1001
First you searched /etc/passwd with grep for a line containing the text »you« and passed the output through the pipe to the input of awk. awk then extracts only the user ID (field $3) from that line.
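As a side note, the same result can usually be obtained without grep, because awk can do the matching itself; a minimal sketch:
you@host > awk -F: '$1 == "you" { print $3 }' /etc/passwd
The comparison $1 == "you" restricts the action to the line whose first field is exactly the user name.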
13.2.4 Calling awk in shell scripts
For you as a shell programmer, the most common approach is probably to use awk inside shell scripts. You will either write the individual awk commands directly into the script or run a separate awk script via the -f switch. A simple example:
# Name : who_uid
# Any arguments present?
usage() {
   if [ $# -lt 1 ]
   then
      echo "usage: $0 user"
      exit 1
   fi
}
usage $*
uid=`grep $1 /etc/passwd | awk -F: '{ print $3 }'`
echo "Der User $1 hat die ID $uid"
The script in action:
you@host > ./who_uid tot Der User tot hat die ID 1000 you@host > ./who_uid you Der User you hat die ID 1001 you@host > ./who_uid root Der User root hat die ID 0
Here a simple command substitution was used to determine the user ID, as in the previous section, and to assign it to a variable. Of course, shell variables can also be used in awk without any problems:
# Name : readfield
printf "Welche Datei wollen Sie verwenden : "
read datei
printf "Welches Feld wollen Sie hierbei ermitteln : "
read feld
awk '{ print $'$feld' }' $datei
The script in action:
you@host > ./readfield Welche Datei wollen Sie verwenden : mrolympia.dat Welches Feld wollen Sie hierbei ermitteln : 2 <NAME> ...
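Splicing the shell variable into the quoted awk program, as in '{ print $'$feld' }', works, but the quoting is easy to get wrong. A more robust variant (a sketch, not part of the original script) passes the value in with -v:
awk -v feld="$feld" '{ print $feld }' "$datei"
Because feld is now a real awk variable, awk itself resolves $feld to the requested field.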
13.2.5 Running awk as a stand-alone script
Like many other programs, awk scripts can be run on their own by means of the she-bang line (#!). You only have to insert the following line at the beginning of your awk script:
#!/usr/bin/gawk -f
The option -f is important here, because only then is the script passed to awk as the program file to execute. Here is a simple awk script:
#!/usr/bin/awk -f
# Name : awkargs
BEGIN {
   print "Anzahl Argumente: ", ARGC;
   for (i=0; i < ARGC; i++)
      print i, ". Argument: ", ARGV[i];
}
The awk script (let us call it »awkargs«) in action:
you@host > chmod u+x awkargs you@host > ./awkargs Hallo awk wie gehts Anzahl Argumente: 5 0 . Argument: awk 1 . Argument: Hallo 2 . Argument: awk 3 . Argument: wie 4 . Argument: gehts
After you have made the awk script executable, it counts all arguments on the command line and prints the individual arguments on the screen. Of course you can also start awk scripts from within your shell scripts. Simply proceed as you would with any ordinary command.
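For example, a small wrapper shell script could call the awkargs script like any other command (a minimal sketch; the log file name is made up):
#!/bin/sh
# run the awk script and save its report in a log file
./awkargs "$@" > args.log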
# 13.3 Basic awk programs and elements
If you have never planted a tree, you are not going to lay out whole forests. So let us start with the basic elements and the simple uses of awk. What you need first of all is the output of lines and everything related to them, the use of fields (or words), and formatted output as well as output to a file.
13.3.1 Printing lines and line numbers
It is best to start with the simplest case:
you@host > awk '{print}' Ein Test Ein Test Noch einer Noch einer
This one-liner does nothing but echo every input you finish with (ENTER) back to the screen, and it keeps doing so until you press the key combination (Ctrl)+(D) (for EOF). In principle it works like a plain cat. If you now put a file name after the awk program
you@host > awk '{print}' mrolympia.dat ...
awk no longer takes its input from the keyboard but reads it line by line from the file. Still, awk is not quite as simple as cat. If you want awk to filter out certain words typed at the keyboard, you can use a pattern:
you@host > awk '!/mist/ {print}' Die ist kein Mist Die ist kein Mist Aber hier ist ein mist! Ende Ende
With this, every line that contains the text »mist« is not printed.
If you need the current line number, the awk-internal variable NR is available; it always holds the number of the current input line.
you@host > awk '{print NR " : " $0}' mrolympia.dat 1 : <NAME> USA 1965 1966 2 : <NAME> USA 1967 1968 1969 ... ... 8 : <NAME>ien 1992 1993 1994 1995 1996 1997 9 : <NAME> USA 1998 1999 2000 2001 2002 2003 2004
This prints the complete file mrolympia.dat together with the line numbers. Another special feature here is the variable $0, which is always automatically set to the entire line that was read. If you want awk to search for a specific pattern and print the matching line along with its number, this can be done as follows:
you@host > awk '/Lee/ {print NR " : " $0}' mrolympia.dat 7 : <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991
awk searches the file mrolympia.dat for the pattern »Lee« and prints the matching line together with its line number.
Note: You can go one step further with awk if you do not want to use the variable NR, by counting the lines yourself. You only need an extra variable that you increment: awk '{print ++i, " : " $0}' mrolympia.dat
13.3.2 Fields
The next smaller unit after lines are the individual fields or words (depending on FS) into which each input line is split. Strictly speaking, it is actually wrong to talk about words in awk; they are fields. By default the lines are split at spaces and tab characters, and each field is stored in a variable of its own. The variable names correspond to the positional parameters of the shell: $1, $2, $3 ...
This makes it possible to extract individual columns from a file without any detour, which is one of the main applications of awk. For example, you can easily print the contents of columns 3 and 2 of a file:
you@host > awk '{print $3, $2}' mrolympia.dat USA Scott USA Oliva Österreich Schwarzenegger Argentinien Columbu USA Dickerson Libanon Bannout USA Haney Grossbritannien Yates USA Coleman
If you also want to know how many titles one of the competitors has won, you can use the variable NF (Number of Fields):
you@host > awk '{print $3, $2, NF-3}' mrolympia.dat USA Scott 2 USA Oliva 3 Österreich Schwarzenegger 7 Argentinien Columbu 2 USA Dickerson 1 Libanon Bannout 1 USA Haney 8 Grossbritannien Yates 6 USA Coleman 7
Here the value 3 was subtracted from NF, because the first three columns have a different meaning. I can already guess the next question: how can this be sorted by the number of titles? There are always several ways; I can offer the following:
you@host > awk '{print NF-3, $3, $2}' mrolympia.dat | sort -r 8 USA Haney 7 USA Coleman 7 Österreich Schwarzenegger 6 Grossbritannien Yates 3 USA Oliva 2 USA Scott 2 Argentinien Columbu 1 USA Dickerson 1 Libanon Bannout
If you use NF with a dollar sign in front of it ($NF), it always contains the last word of the line. You get the second-to-last one with $(NF-1):
you@host > awk '{print $NF, $(NF-1)}' mrolympia.dat 1966 1965 1969 1968 1980 1975 1981 1976 1982 USA 1983 Libanon 1991 1990 1997 1996 2004 2003 you@host > awk '{print "Zeile " NR, "enhält " NF " Worte \ > (letztes Wort: " $NF "; vorletztes: " $(NF-1) ")"}' \ > mrolympia.dat Zeile 1 enhält 5 Worte (letztes Wort: 1966 ; vorletztes: 1965) Zeile 2 enhält 6 Worte (letztes Wort: 1969 ; vorletztes:1968 Zeile 3 enhält 10 Worte (letztes Wort: 1980 ; vorletztes: 1975) Zeile 4 enhält 5 Worte (letztes Wort: 1981 ; vorletztes: 1976) Zeile 5 enhält 4 Worte (letztes Wort: 1982 ; vorletztes: USA) Zeile 6 enhält 4 Worte (letztes Wort: 1983 ; vorletztes:Libanon) Zeile 7 enhält 11 Worte (letztes Wort: 1991 ; vorletztes: 1990) Zeile 8 enhält 9 Worte (letztes Wort: 1997 ; vorletztes: 1996) Zeile 9 enhält 10 Worte (letztes Wort: 2004 ; vorletztes: 2003)
And of course one of the basics of working with fields is changing the field separator. This can be done with the -F switch or, inside an awk script, via the variable FS.
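A minimal sketch of both variants, using /etc/passwd with its colon-separated fields:
you@host > awk -F: '{ print $1 }' /etc/passwd
you@host > awk 'BEGIN { FS=":" } { print $1 }' /etc/passwd
Both commands print the first field (the user name) of every line; the second one sets FS in a BEGIN block before the first input line is read.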
Formatted output and output to files
So far print has always been used for output; it works much like echo in the shell. Since the individual fields are separated by spaces and tab characters, it should come as no surprise that in the following output the words are not separated from each other:
you@host > awk '{print $1 $2 }' mrolympia.dat <NAME> ... ... DorianYates RonnieColeman
You can either insert the space (or any other character or string) enclosed in double quotes:
you@host > awk '{print $1 " " $2 }' mrolympia.dat
Or you separate the individual fields with a comma:
you@host > awk '{print $1, $2 }' mrolympia.dat
Usually the output of print is sufficient. But here, too, printf lets you format your output in more detail. printf works just as described for the shell, so I can skip the tables with the type characters (s for string, d for decimal, f for float and so on). If needed, take a look at section 5.2.3. The same goes for the escape sequences (control characters, see table 5.2). The only difference compared to the shell's printf: the arguments at the end must be separated by commas. A simple example of formatted output with printf:
you@host > awk ' \ > {printf "%-2d Titel\tLand: %-15s\tName: %-15s\n",NF-3,$3,$2}'\ > mrolympia.dat | sort -r 8 Titel Land: USA Name: Haney 7 Titel Land: USA Name: Coleman 7 Titel Land: Österreich Name: Schwarzenegger 6 Titel Land: Grossbritannien Name: Yates 3 Titel Land: USA Name: Oliva 2 Titel Land: USA Name: Scott 2 Titel Land: Argentinien Name: Columbu 1 Titel Land: USA Name: Dickerson 1 Titel Land: Libanon Name: Bannout
To write the output to a file, a redirection is normally used:
you@host > awk \ > '{printf "%-2d Titel\tLand: %-15s\tName: %-15s\n",NF-3,$3,$2}'\ > mrolympia.dat > olymp.dat you@host > cat olymp.dat 2 Titel Land: USA Name: Scott 3 Titel Land: USA Name: Oliva 7 Titel Land: Österreich Name: Schwarzenegger 2 Titel Land: Argentinien Name: Columbu 1 Titel Land: USA Name: Dickerson 1 Titel Land: Libanon Name: Bannout 8 Titel Land: USA Name: Haney 6 Titel Land: Grossbritannien Name: Yates 7 Titel Land: USA Name: Coleman
It is also possible to write to a file from within awk, in the action part:
you@host > awk '/USA/{ print > "usa.dat"}' mrolympia.dat you@host > cat usa.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> USA 1982 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
In the example, all lines containing the string »USA« were written to the file usa.dat. This is useful, for example, when you work with several pattern-action parts and do not want every output to go straight into one file.
To top it all off, consider the following example:
you@host > awk '$3 {print >> $3}' mrolympia.dat
Here, for each country, the corresponding entry is appended to the end of a file named after that country. More precisely: a separate file is created for each country, and the individual competitors are appended to the end of that file. Try doing that in one line in another language.
you@host > ls -l
-rw------- 1 tot users  37 2005-04-11 15:34 Argentinien
-rw------- 1 tot users  58 2005-04-11 15:34 Grossbritannien
-rw------- 1 tot users  27 2005-04-11 15:34 Libanon
-rw------- 1 tot users 381 2005-04-08 07:32 mrolympia.dat
-rw------- 1 tot users  68 2005-04-11 15:34 Österreich
-rw------- 1 tot users 191 2005-04-11 15:34 USA
you@host > cat Libanon
Sam<NAME>annout Libanon 1983
you@host > cat Österreich
Arnold Schwarzenegger Österreich 1970 1971 1972 1973 1974 1975
Of course you can achieve this in yet another way:
you@host > awk '$3 {print $2, $1, $3 >> $3}' mrolympia.dat
Here you do the same thing, but append only the contents of the fields $2, $1 and $3 to the corresponding country file. Combined with a large data collection this is really powerful. And in principle this is nothing yet; used properly, awk is capable of much more.
Of course you can also use awk in a cat-like way to write everything typed at the keyboard into a file:
you@host > awk '{print > "atextfile"}' Hallo Textdatei Das steht drin (Strg)+(D) you@host > cat atextfile Hallo Textdatei Das steht drin
# 13.4 Patterns (or addresses) in awk scripts
As with sed, awk lets you specify addresses or patterns that serve as search criteria. Here, too, a pattern is used to control the program flow of awk. If the line currently being processed matches the given pattern, the corresponding action part is executed. There are several ways to express such patterns in awk. The following subsections show what they are and how to use them.
13.4.1 String comparisons
The simplest and at the same time most common use of patterns is in string comparisons. It looks like this:
you@host > awk '/<NAME>/' Hallo Jürgen Hal<NAME> Mein Name ist <NAME> Mein Name ist <NAME> (Strg)+(D)
Here the keyboard input is only echoed back if $0 contains the text »Jürgen Wolf«. A string comparison on a file works in a similar way:
you@host > awk '/Samir/' mrolympia.dat <NAME> 1983
Here awk reads its input lines from the file mrolympia.dat, searches for the pattern and, if found, prints the complete matching line to the screen. You will also notice that awk prints the complete line even without an action part and without the print command. Of course, partial matches are found as well; the pattern does not have to be a whole word:
you@host > awk '/ie/' mrolympia.dat <NAME> 1976 1981 <NAME> 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
13.4.2 Comparison expressions
awk also accepts comparison expressions as patterns. The comparison operators familiar from many programming languages are used for this. If a comparison expression (the pattern) is true, the corresponding action part is executed. The following table lists all comparison operators.
Table 13.3  Comparison operators in awk

| Operator | Meaning | Example |
| --- | --- | --- |
| < | less than | x < y |
| <= | less than or equal | x <= y |
| == | equal | x == y |
| != | not equal | x != y |
| >= | greater than or equal | x >= y |
| > | greater than | x > y |
| ~ | pattern match | x ~ /y/ |
| !~ | negated pattern match | x !~ /y/ |
A comparison expression can be applied to numbers as well as to strings. An example:
you@host > awk '$4 > 1990' mrolympia.dat <NAME> 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
This prints all competitors who won their first competition after 1990. In other words, all lines are printed in which the value of the fourth field is greater than 1990. Another example:
you@host > awk '$2 < "H"' mrolympia.dat Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
This prints all values (names) of the second column whose first letter is less than »H«. What is meant here is the ASCII value! »C«, for example, is smaller than »H« in the ASCII table, and so on. Of course this can also be applied to a whole string:
you@host > awk '$1 > "Dorian" ' mrolympia.dat Larry Scott USA 1965 1966 <NAME> USA 1967 1968 1969 Franco Columbu Argentinien 1976 1981 <NAME> Libanon 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
This prints all lines whose first field compares greater than »Dorian« (greater in the lexicographic, character-by-character sense, not longer), even though this particular example is of little practical use. The comparison operator for pattern matches is also interesting, for example:
you@host > awk '$3 ~ /USA/ ' mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> USA 1982 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Here all lines are selected in which the third column contains the pattern »USA«. This lets you require that a pattern occurs in a specific column. Of course you can also negate this:
you@host > awk '$3 !~ /USA/ ' mrolympia.dat <NAME>ger Österreich 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981 <NAME> Libanon 1983 <NAME>britannien 1992 1993 1994 1995 1996 1997
Now all lines are printed in which the third column does not contain the pattern »USA«.
If you know, for example, that an American won the competition in 1988, but not exactly which one, you can express this with awk as follows:
you@host > awk '$3 ~ /USA/ && /1988/' mrolympia.dat <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991
This selects the line(s) in which the third column contains the pattern »USA« and the year 1988 appears somewhere in the line. The && performs a logical AND here, which is covered in more detail in a separate section below.
Do you only want to print the first five lines of a file? Nothing could be easier:
you@host > awk 'NR <=5' mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> Österreich 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981 <NAME> USA 1982
If you want to print every record that contains fewer than 5 fields, you do it with awk like this:
you@host > awk 'NF < 5' mrolympia.dat <NAME> USA 1982 <NAME> Libanon 1983
13.4.3 Regular expressions
Of course, regular expressions are again at your disposal in awk for formulating patterns. Here is a brief overview of the metacharacters and their meanings that you can use in regular expressions with awk. Since regular expressions have already been covered several times, this is only a short summary; most characters have the same function here as before.
Table 13.4  Metacharacters for regular expressions in awk

| Character | Meaning |
| --- | --- |
| ^ | beginning of a line or string |
| $ | end of a line or string |
| . | any character except a newline |
| * | zero, one or more occurrences |
| [] | one character from the set |
| [^] | no character from the set |
| re1 \| re2 | OR; either pattern re1 or pattern re2 |
| re1&re2 | AND; pattern re1 and pattern re2 |
| + | one or more occurrences |
| (ab)+ | at least one occurrence of the sequence »ab« |
| ? | zero or one occurrence |
| & | contains the search pattern of the replacement string |
Not much more needs to be said about their use, since you have already seen plenty of examples with sed. Nevertheless, here are a few examples with regular expressions.
you@host > awk '/[0-9]+/ { print $0 ": eine Zahl" } \
> /[A-Za-z]+/ { print $0 ": ein Wort" }'
Hallo
Hallo: ein Wort
1234
1234: eine Zahl
This lets you determine whether the keyboard input is a word or a number. However, the script only works as long as it is used in an environment with the same character set as the developer's. To be truly portable you should use predefined character classes in such a case (see section 1.10.6, table 1.5). In practice the line should therefore look like this:
you@host > awk '/[[:digit:]]+/ { print $0 ": eine Zahl" } \ > /[[:alpha:]]+/ { print $0 ": ein Wort" }'
In connection with regular expressions, the match operator (~) is frequently used to determine whether a specific field of a line matches a given pattern (regular expression).
you@host > awk '$1 ~ /^[A-D]/' mrolympia.dat <NAME> Österreich 1970 1971 1972 1973 1974 1975 <NAME> USA 1982 <NAME>ien 1992 1993 1994 1995 1996 1997
Here, for example, all lines are printed in which the first character of the first column is an »A«, »B«, »C« or »D«. You get the opposite with:
you@host > awk '$1 ~ /^[^A-D]/' mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> Argentinien 1976 1981 ...
This also shows that regular expressions in combination with awk are very powerful on the one hand, but can become quite »cryptic« on the other. For recipe purposes, here are a few more awk examples with regular expressions.
awk '$0 !~ /^$/' datei.dat
This »deletes« all empty lines in a file (datei.dat). (»Delete« is not quite the right word: the lines are not removed from the file; rather, everything that is not empty is printed to the screen.) $0 stands for the complete line, and !~ excludes every line that matches the empty-line pattern. ^ stands for the beginning and $ for the end of a line.
Next example:
you@host > awk '$2 ~ /^[CD]/ { print $3 }' mrolympia.dat Argentinien USA USA
Here you search for all lines whose second column contains a word beginning with the letter »C« or »D« and, on a match, print the third field of the line.
One more example:
you@host > awk '$3 ~ /Argentinien|Libanon/' mrolympia.dat <NAME> Argentinien 1976 1981 <NAME> Libanon 1983
This prints all lines whose third column contains the text »Argentinien« or »Libanon«.
13.4.4 Compound expressions
It is also possible to combine several expressions into a single one. The usual AND (&&) and OR (||) operators are placed between the expressions. For an AND combination of two expressions, the action part is executed (the expression is true) only if both expressions match, for example:
you@host > awk '$3 ~ /USA/ && $2 ~ /Dickerson/' mrolympia.dat <NAME> USA 1982
Here only the line is printed that contains the text »USA« in the third column AND the text »Dickerson« in the second column. If no line matches both expressions, nothing is printed.
With an OR combination, on the other hand, only one of the expressions has to match:
you@host > awk '$3 ~ /USA/ || $2 ~ /Yates/' mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> USA 1982 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME>annien 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
This prints all lines that contain the pattern »USA« in the third column OR the pattern »Yates« in the second column.
Of course you can combine more than two expressions and also mix OR and AND combinations, as sketched below. Just do not overdo it, or you will lose track.
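A small sketch of such a mixed condition (parentheses make the intended grouping explicit; the file layout is the one used so far):
you@host > awk '$3 ~ /USA/ && ($4 < 1970 || $4 > 1990)' mrolympia.dat
This would print the US competitors whose first title was won before 1970 or after 1990.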
13.4.5 BEGIN and END
With BEGIN and END you have two patterns that are completely independent of the input line. You can picture it like a loop: everything that should happen before the actual awk script runs can be written into a BEGIN block:
BEGIN { action part }
Here you can, for example, set up some preprocessing for the main part of awk. This does not mean preprocessing of the input line, though, because at the time the BEGIN block is executed no input line has been read yet. The action part can be used exactly as with the »normal« patterns.
you@host > awk 'BEGIN { print "Vorname Name Land" } > /USA/ { printf "%-10s %-10s %-5s\n", $1, $2, $3 }' \ > mrolympia.dat Vorname Name Land <NAME> USA <NAME> USA <NAME> USA <NAME> USA <NAME> USA
Since the BEGIN block is executed before the input lines are processed, you can probably guess that the END block refers to execution after the input lines have been processed. Once all lines have been handled in the main block, an END block can be appended for post-processing. The END block is executed after the last line of a file has been processed or, for keyboard input, after (Ctrl)+(D) has been pressed. Here is the BEGIN-block example again, extended by an END block:
you@host > awk 'BEGIN { print "\nVorname Name Land" } \ > /USA/ { printf "%-10s %-10s %-5s\n", $1, $2, $3 } \ > END { print "---------Ende------------" } ' mrolympia.dat Vorname Name Land <NAME> USA <NAME> USA <NAME> USA <NAME> USA <NAME> USA ---------Ende------------
First the BEGIN block is executed; in this example it is nothing more than a simple screen output. Then the individual lines of the file mrolympia.dat are read, and all lines matching the pattern »USA« are printed in formatted form with printf. Finally the END block is executed, which here again is just a simple text output. By the way, an END block does not require a BEGIN block in any way and can always be used after a main part, or entirely on its own:
you@host > awk '{ print } END { print "Tschüssss..." }' Hallo Hallo Welt Welt (Strg)+(D) Tschüssss...
Here awk reads line after line from the command line and echoes each one back with print in the main block. This continues until you press (Ctrl)+(D). After you quit with (Ctrl)+(D), the END block is executed.
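BEGIN and END are also the natural place for counting and summing. A minimal sketch that counts how many times each country appears in mrolympia.dat (again assuming the layout used throughout this chapter):
you@host > awk '{ count[$3]++ } END { for (land in count) print land, count[land] }' mrolympia.dat
The main block only fills the array count; all output happens once, in the END block.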
Ihre Meinung
Wie hat Ihnen das Openbook gefallen? Wir freuen uns immer über Ihre Rückmeldung. Schreiben Sie uns gerne Ihr Feedback als E-Mail an <EMAIL>.
## 13.4 Muster (bzw. Adressen) von awk-ScriptsÂ
Wie schon bei sed können Sie mit awk Adressen bzw. Muster benennen, die als Suchkriterium angegeben werden. Ein Muster dient auch hier dazu, den Programmfluss von awk zu steuern. Stimmt der Inhalt mit der zu bearbeitenden Zeile mit dem angegebenen Muster überein, wird der entsprechende Aktionsteil ausgeführt. Um solche Muster in awk darzustellen, haben Sie mehrere Möglichkeiten. Welche dies sind und wie Sie diese verwenden können, erfahren Sie in den folgenden Unterabschnitten.
### 13.4.1 ZeichenkettenvergleicheÂ
Am einfachsten und gleichzeitig auch häufigsten werden Muster bei Zeichenvergleichen eingesetzt. Diese Verwendung sieht folgendermaßen aus:
> you@host > awk '/<NAME>/' Hallo Jürgen Hal<NAME> Mein Name ist <NAME> Mein Name ist <NAME> (Strg)+(D)
Hier wird nur die Eingabe von der Tastatur wiederholt, wenn in $0 die Textfolge »Jürgen Wolf« enthalten ist. Auf eine Datei wird der Zeichenkettenvergleich ähnlich ausgeführt:
> you@host > awk '/Samir/' mrolympia.dat <NAME> 1983
Hier erhält awk die Eingabezeile aus der Datei mrolympia.dat, sucht nach einem Muster und gibt gegebenenfalls die komplette Zeile der Fundstelle auf den Bildschirm aus. Hier fällt außerdem auf, dass awk auch ohne den Aktionsteil und den Befehl print die komplette Zeile auf dem Bildschirm ausgibt. Natürlich werden auch Teil-Textfolgen gefunden, sprich es müssen nicht zwangsläufig ganze Wörter sein:
> you@host > awk '/ie/' mrolympia.dat <NAME> Argentinien 1976 1981 <NAME> 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
### 13.4.2 VergleichsausdrückeÂ
awk bietet auch Vergleichsausdrücke in den Mustern an. Hierzu werden die in vielen Programmiersprachen üblichen Vergleichsoperatoren verwendet. Ist ein Vergleichsausdruck (also das Muster) wahr, wird der entsprechende Aktionsteil ausgeführt. In der folgenden Tabelle finden Sie alle Vergleichsoperatoren aufgelistet.
Operator | Bedeutung | Beispiel |
| --- | --- | --- |
< | Kleiner als | x < y |
<= | Kleiner als oder gleich | x <= y |
== | Gleichheit | x == y |
!= | Ungleichheit | x != y |
>= | Größer als oder gleich | x >= y |
> | Größer als | x > y |
~ | Mustervergleich | x ~ /y/ |
!~ | Negierter Mustervergleich | x !~ /y/ |
Ein Vergleichsausdruck lässt sich dabei sowohl auf Zahlen als auch auf Zeichenketten anwenden. Ein Beispiel:
> you@host > awk '$4 > 1990' mrolympia.dat <NAME> 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Hier geben Sie alle Teilnehmer aus, die Ihren ersten Wettkampf nach 1990 gewonnen haben. Es werden also sämtliche Zeilen ausgegeben, bei denen der Wert des vierten Feldes größer als 1990 ist. Ein weiteres Beispiel:
> you@host > awk '$2 < "H"' mrolympia.dat Franco Columbu Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
Hiermit werden alle Werte (Namen) der zweiten Spalte ausgegeben, deren Anfangsbuchstabe kleiner als »H« ist. Gemeint ist hier der ASCII-Wert! »C« ist zum Beispiel kleiner als »H« in der ASCII-Tabelle usw. Natürlich lässt sich dies auch auf eine ganze Zeichenkette anwenden:
> you@host > awk '$1 > "Dorian" ' mrolympia.dat Larry Scott USA 1965 1966 Sergio Oliva USA 1967 1968 1969 Franco Columbu Argentinien 1976 1981 <NAME> Libanon 1983 Lee Haney USA 1984 1985 1986 1987 1988 1989 1990 1991 Ronnie Coleman USA 1998 1999 2000 2001 2002 2003 2004
Hier werden alle Vornamen ausgegeben, deren Name im ersten Feld größer als (heißt: nicht länger) der von »Dorian« ist â auch wenn es in diesem Beispiel wenig Sinn macht. Interessant ist auch der Vergleichsoperator für Mustervergleiche, beispielsweise:
> you@host > awk '$3 ~ /USA/ ' mrolympia.dat Larry Scott USA 1965 1966 Sergio Oliva USA 1967 1968 1969 <NAME> USA 1982 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Hier werden alle Zeilen selektiert, in denen sich in der dritten Spalte das Muster »USA« befindet. Damit können Sie das exakte Vorkommen eines Musters in der entsprechenden Spalte bestimmen. Natürlich können Sie dies auch negieren (verneinen):
> you@host > awk '$3 !~ /USA/ ' mrolympia.dat <NAME> Österreich 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981 <NAME> 1983 <NAME> 1992 1993 1994 1995 1996 1997
Wissen Sie jetzt zum Beispiel, dass 1988 ein US-Amerikaner den Wettkampf gewonnen hat, aber nicht genau welcher, formulieren Sie dies mit awk folgendermaßen:
> you@host > awk '$3 ~ /USA/ && /1988/' mrolympia.dat <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991
Damit wählen Sie die Zeile(n) aus, bei denen sich in der dritten Spalte das Muster »USA« befindet und irgendwo in der Zeile das Jahr 1988. Hier wurde mit dem && eine logische UND-Verknüpfung vorgenommen, worauf noch in einem extra Abschnitt eingegangen wird.
Wollen Sie nur die ersten fünf Zeilen einer Datei ausgeben lassen? Nichts ist leichter als das:
> you@host > awk 'NR <=5' mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> Österreich 1970 1971 1972 1973 1974 1975 <NAME> Argentinien 1976 1981 <NAME> USA 1982
Wenn Sie jeden Datensatz ausgeben lassen wollen, der weniger als 5 Felder enthält, dann machen Sie dies mit awk so:
> you@host > awk 'NF < 5' mrolympia.dat <NAME> USA 1982 <NAME> Libanon 1983
### 13.4.3 Reguläre AusdrückeÂ
Natürlich stehen Ihnen auch mit awk wieder die regulären Ausdrücke zur Verfügung, mit denen Sie Ihre Muster formulieren können. Hierzu ein kurzer Überblick zu den Metazeichen und ihren Bedeutungen, die Sie für reguläre Ausdrücke mit awk heranziehen können. Da die regulären Ausdrücke ja bereits einige Mal behandelt wurden, finden Sie hier allerdings nur einen kurzen Überblick, da die meisten Zeichen auch hier ihre bereits bekannte Funktion erfüllen.
Zeichen | Bedeutung |
| --- | --- |
^ | Anfang einer Zeile oder Zeichenkette |
$ | Ende einer Zeile oder Zeichenkette |
. | Jedes Zeichen außer einem Zeilenumbruch |
* | Null, eines oder mehrere Vorkommen |
[] | Ein Zeichen aus der Menge enthalten |
[^] | Kein Zeichen aus der Menge enthalten |
re1|re2 | ODER; entweder Muster re1 oder re2 enthalten |
re1&re2 | UND; Muster re1 und Muster re2 enthalten |
+ | Eines oder mehrere Vorkommen |
(ab)+ | Mindestens ein Auftreten der Menge »ab« |
? | Null oder einmaliges Vorkommen |
& | Enthält das Suchmuster des Ersetzungsstrings |
Auch zur Verwendung muss wohl nicht mehr allzu viel geschrieben werden, da Sie ja bereits zuhauf Beispiele in sed gesehen haben. Trotzdem hierzu einige Beispiele mit regulären Ausdrücken.
> you@host > awk '/[0â9]+/ { print $0 ": eine Zahl" } \ > /[A-Za-z]+/ { print $0 ": ein Wort" }' Hallo Hallo: ein Wort 1234 1234: eine Zahl
Hier können Sie ermitteln, ob es sich bei der Eingabe von der Tastatur um ein Wort oder um eine Zahl handelt. Allerdings funktioniert dieses Script nur so lange, wie es in einer Umgebung eingesetzt wird, in der sich derselbe Zeichensatz befindet wie beim Entwickler. Um hier wirklich Kompatibilität zu erreichen, sollten Sie in einem solchen Fall vordefinierte Zeichenklassen verwenden (siehe Abschnitt 1.10.6, Tabelle 1.5). In der Praxis sollte diese Zeile daher folgendermaßen aussehen:
> you@host > awk '/[[:digit:]]+/ { print $0 ": eine Zahl" } \ > /[[:alpha:]]+/ { print $0 ": ein Wort" }'
Im Zusammenhang mit den regulären Ausdrücken wird zudem häufig der Match-Operator (~) eingesetzt, womit Sie ermitteln, ob ein bestimmtes Feld in einer Zeile einem bestimmten Muster (regulären Ausdruck) entspricht.
> you@host > awk '$1 ~ /^[A-D]/' mrolympia.dat <NAME> Österreich 1970 1971 1972 1973 1974 1975 <NAME> USA 1982 <NAME>ien 1992 1993 1994 1995 1996 1997
Hier werden zum Beispiel alle Zeilen ausgegeben, bei denen in der ersten Spalte der erste Buchstabe in der Zeile ein »A«, »B«, »C« oder »D« ist. Das Gegenteil erreichen Sie mit:
> you@host > awk '$1 ~ /^[^A-D]/' mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 Franco Columbu Argentinien 1976 1981 ...
Hierbei lässt sich allerdings auch erkennen, dass reguläre Ausdrücke in Verbindung mit awk zum einen sehr leistungsstark sind, aber auch sehr »kryptisch« werden können. Hier zu Rezept-Zwecken einige weitere awk-Beispiele mit regulären Ausdrücken.
> awk '$0 !~ /^$/' datei.dat
Damit »löschen« Sie alle leeren Zeilen in einer Datei (datei.dat) (»Löschen« trifft die Sache eigentlich nicht genau, vielmehr löscht man die Zeilen nicht in einer Datei, sondern gibt alles, was nicht leer ist, auf dem Bildschirm aus). $0 steht für die komplette Zeile und schließt alle Muster aus (!~), die eine leere Zeile enthalten. ^ steht für den Anfang und $ für das Ende einer Zeile.
Nächstes Beispiel:
> you@host > awk '$2 ~ /^[CD]/ { print $3 }' mrolympia.dat Argentinien USA USA
Hier suchen Sie nach allen Zeilen, bei denen sich in der zweiten Spalte ein Wort befindet, das mit den Buchstaben »C« oder »D« beginnt, und geben bei Erfolg das dritte Feld der Zeile aus.
Noch ein Beispiel:
> you@host > awk '$3 ~ /Argentinien|Libanon/' mrolympia.dat <NAME> Argentinien 1976 1981 <NAME> Libanon 1983
Hier werden alle Zeilen ausgegeben, die in der dritten Spalte die Textfolge »Argentinien« oder »Libanon« enthalten.
### 13.4.4 Zusammengesetzte AusdrückeÂ
Es ist ebenso möglich, mehrere Ausdrücke zu einem Ausdruck zusammenzusetzen. Hierzu werden die üblichen Verknüpfungen UND (&&) und ODER (||) zwischen den Ausdrücken verwendet. So gilt bei einer UND-Verknüpfung von zwei Ausdrücken, dass der Aktionsteil bzw. der Ausdruck wahr ist, wenn beide Ausdrücke zutreffen, zum Beispiel:
> you@host > awk '$3 ~ /USA/ && $2 ~ /Dickerson/' mrolympia.dat <NAME> USA 1982
Hier wird nur die Zeile ausgegeben, welche die Textfolge »USA« in der dritten Spalte als Ausdruck UND die Textfolge »Dickerson« in der zweiten Spalte enthält. Wird keine Zeile gefunden, die mit beiden Ausdrücken übereinstimmt, wird nichts ausgegeben.
Auf der anderen Seite können Sie mit einer ODER-Verknüpfung dafür sorgen, dass nur einer der Ausdrücke zutreffen muss:
> you@host > awk '$3 ~ /USA/ || $2 ~ /Yates/' mrolympia.dat <NAME> USA 1965 1966 <NAME> USA 1967 1968 1969 <NAME> USA 1982 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991 <NAME> 1992 1993 1994 1995 1996 1997 <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Natürlich lassen sich mehr als nur zwei Ausdrücke verknüpfen und auch die ODER- bzw. UND-Verknüpfungen mischen. Doch es gilt, nicht zu übertreiben, um den Überblick zu wahren.
### 13.4.5 BEGIN und ENDÂ
Mit BEGIN und END haben Sie zwei Muster, die vollkommend unabhängig von der Eingabezeile sind. Man kann sich dies wie bei einer Schleife vorstellen. Alles, was sich vor dem eigentlichen awk-Script noch abspielen soll, können Sie in einen BEGIN-Block schreiben:
> BEGIN { Aktionsteil }
Hier können Sie bspw. eine Vorverarbeitung für den eigentlichen Hauptteil von awk festlegen. Allerdings ist damit nicht die Vorverarbeitung der Eingabezeile gemeint, da diese zu dem Zeitpunkt, wenn der BEGIN-Block ausgeführt wird, noch gar nicht eingelesen wurde. Der Aktionsteil kann genauso wie schon bei den »normalen« Mustern verwendet werden.
> you@host > awk 'BEGIN { print "Vorname Name Land" } > /USA/ { printf "%-10s %-10s %-5s\n", $1, $2, $3 }' \ > mrolympia.dat Vorname Name Land Larry Scott USA <NAME> USA <NAME> USA <NAME> USA Ronnie Coleman USA
Wenn der BEGIN-Block vor der Verarbeitung der Eingabezeile ausgeführt wird, werden Sie sich sicherlich wohl denken können, dass sich der END-Block auf die Ausführung nach der Verarbeitung einer Eingabezeile bezieht. Wenn also alle Zeilen im Hauptblock ausgeführt wurden, kann am Ende noch zur Nachbearbeitung ein END-Block angehängt werden. Der END-Block wird ausgeführt, nachdem die letzte Zeile einer Datei abgearbeitet wurde oder bei Eingabe von der Tastatur (Strg)+(D) gedrückt wurde. Hier das Beispiel mit dem BEGIN-Block, erweitert um einen END-Block:
> you@host > awk 'BEGIN { print "\nVorname Name Land" } \ > /USA/ { printf "%-10s %-10s %-5s\n", $1, $2, $3 } \ > END { print "---------Ende------------" } ' mrolympia.dat Vorname Name Land Larry Scott USA <NAME> USA <NAME> USA <NAME> USA Ronnie Coleman USA ---------Ende------------
First the BEGIN block is executed; in this example it is nothing more than a simple message on the screen. Then the individual lines of the file mrolympia.dat are read, and every line matching the pattern "USA" is printed with printf in a formatted way. Finally the END block is executed, which here again is just a simple text output. An END block does not require a BEGIN block in any way; it can always follow a main part on its own, or stand entirely alone:
> you@host > awk '{ print } END { print "Tschüssss..." }' Hallo Hallo Welt Welt (Strg)+(D) Tschüssss...
Here awk reads line by line from the command line and echoes each line in the main block with print. This continues until you press (Ctrl)+(D). After terminating with (Ctrl)+(D), the END block is executed.
## 13.5 The Components of awk Scripts
So far awk has mostly been demonstrated with one-liners, which is often enough for everyday use in shell scripts, and you have already seen how powerful awk is. As mentioned before, however, awk is more a programming language than a tool, so this section looks more closely at the characteristics of awk as a programming language. Of course you can also use awk scripts inside your shell scripts, which gives you considerable know-how; feel free to call yourself a guru.
When writing your own awk scripts, you usually use the she-bang line (#!) in the first line:
> #!/usr/bin/awk -f
Comments are marked with #, just as in shell scripts.
> #!/usr/bin/awk -f # # Programmname: programmname.awk # Erstellt : J.Wolf # Datum : ... # ...
A file extension is just as unnecessary as it is for shell scripts. Nevertheless, the extension ".awk" is used fairly often so that users know what kind of script they are dealing with. You can write the actions of an awk script on a single line, in which case each action must be separated by a semicolon:
> # start of the main action part of an awk script { action ; action ; action } # end of the main action part of an awk script
Or, which is preferable, you write each action on a separate line:
> # start of the main action part of an awk script { action action action } # end of the main action part of an awk script
### 13.5.1 Variables
In addition to the dynamically created field variables, awk also provides user-defined variables. They are defined and handled with the same syntax as in the shell. When using a variable, however, no dollar sign ($) is placed in front of it, because that character is reserved for the individual fields (words). Numbers can be assigned without any further precautions:
> # val1 mit dem Wert 1234 definieren val1=1234 # val2 mit dem Wert 1234.1234 definieren val2=1234.1234
Strings, however, must be enclosed in double quotes:
> # string1 mit einer Zeichenkette belegen string1="Ich bin ein String"
If you do not do this, as in the following example,
> string2=teststring
awk does not assign the string "teststring" to the variable string2, but rather the variable teststring. This does not raise an error here, because, as in the shell, undefined values are automatically preset with 0 or "". A simple example that counts the number of lines of the files you pass as command-line arguments:
> #!/usr/bin/awk -f # # Programmname: countline.awk BEGIN { count=0 } # Haupt-Aktionsteil { count++ } END { printf "Anzahl Zeilen : %d\n", count }
The script in action:
> you@host > chmod u+x countline.awk you@host > ./countline.awk countline.awk Anzahl Zeilen : 15
Admittedly, this example could be done more simply with the variable NR, but the point here is to demonstrate variables. First the BEGIN block is executed, in which the variable count is defined with the value 0. You could omit initializing variables with 0, since they are automatically defined as 0 (or "", depending on the context) on first use. Still, it is helpful to define such a variable in the BEGIN block anyway, for the sake of clarity.
In the main action part, the value of count is simply incremented by 1, using the increment operator (++) in postfix notation. Once all lines of the file given as the first argument have been processed (with the main action part running once per line), the END block is executed and prints the number of lines. By the way, more than one file can be given on the command line:
> you@host > ./countline.awk countline.awk mrolympia.dat Anzahl Zeilen : 25
By the way, you can also use this awk script like wc -l. Example:
> you@host > ls -l | wc -l 26 you@host > ls -l | ./counterline.awk Anzahl Zeilen : 26
But as already mentioned, you can shorten this script considerably with the variable NR, down to nothing but the END block:
> #!/usr/bin/awk -f # # Programmname: countline2.awk END { printf "Anzahl Zeilen : %d\n", NR }
# Command-Line Arguments
For the command-line arguments, the two variables ARGC and ARGV are available, much as in C. ARGC (ARGument Counter) always contains the number of arguments on the command line, including the program name awk (!), and ARGV (ARGument Vector) is an array holding the individual command-line arguments (but not awk's own options). Here is a script that prints the number of arguments and each individual one (the for loop, described in more detail shortly, is used here ahead of time).
> #!/usr/bin/awk -f # # Programmname: countarg.awk BEGIN { print "Anzahl Argumente in ARGC : " , ARGC # einzelne Argumente durchlaufen for(i=0; i < ARGC; i++) printf "ARGV[%d] = %s\n", i, ARGV[i] }
The script in action:
> you@host > ./countarg.awk eine Zeile mit vielen Argumenten Anzahl Argumente in ARGC : 6 ARGV[0] = awk ARGV[1] = eine ARGV[2] = Zeile ARGV[3] = mit ARGV[4] = vielen ARGV[5] = Argumenten
# Predefined Variables
awk provides an interesting selection of predefined variables, some of which you have already met: NR, NF, ARGC, ARGV, FS, $0, $1 and so on. The following table gives a short overview of the predefined variables together with a brief description of their function.
Variable | Meaning |
| --- | --- |
ARGC | Number of command-line arguments (+1) |
ARGV | Array containing the command-line arguments |
ENVIRON | Array containing all current environment variables |
FILENAME | Name of the current input file. For keyboard input, this holds the value '-'. |
FNR | Line number of the input from the current file |
FS | The separator(s) used to split the input line into fields. By default, this is whitespace. |
NF | Number of fields in the current line. Fields are separated according to FS. |
NR | Number of lines read so far |
OFMT | Output format for floating-point numbers |
OFS | Output separator between individual fields; by default, a space |
ORS | The output separator between records (lines). By default, the newline character is used. |
RLENGTH | In combination with the match function, the length of the matching substring |
RS | The input separator between records (lines); by default, the newline character |
RSTART | In combination with the string function match, the index of the beginning of the matching substring |
SUBSEP | The character separating the components of an array subscript (\034) |
$0 | Always contains the complete record (input line) |
$1, $2, ... | The individual fields (words) of the input line, split according to the separator in FS |
Here is a simple example demonstrating some of the predefined variables:
> #!/usr/bin/awk -f # # Programmname: vars.awk BEGIN { count=0 # Ausgabetrennzeichen: Minuszeichen OFS="-" } /USA/ { print $1, $2, $3 count++ } END { printf "%d Ergebnisse (von %d Zeilen) gefunden in %s\n", count, NR, FILENAME printf "Datei %s befindet sich in %s\n", FILENAME, ENVIRON["PWD"] }
The script in action:
> you@host > ./vars.awk mrolympia.dat Larry-Scott-USA Sergio-Oliva-USA Chris-Dickerson-USA Lee-Haney-USA Ronnie-Coleman-USA 5 Ergebnisse (von 9 Zeilen) gefunden in mrolympia.dat Datei mrolympia.dat befindet sich in /home/you
### 13.5.2 Arrays
awk offers arrays in two flavors: "ordinary" arrays and associative arrays. The "normal" arrays are used just as you saw in section 2.5 on shell programming. You can use them like ordinary variables, except that you need an index value (an integer) to access the individual elements. A simple script as a demonstration:
> #!/usr/bin/awk -f # # Programmname: playarray.awk { # komplette Zeile in das Array mit dem Index NR ablegen line[NR]=$0 # Anzahl der Felder (Wörter) in das Array # fields mit dem Index NR ablegen fields[NR]=NF } END { for(i=1; i <=NR; i++) { printf "Zeile %2d hat %2d Felder:\n", i, fields[i] print line[i] } }
The script in action:
> you@host > ./playarray.awk mrolympia.dat Zeile 1 hat 5 Felder: <NAME> USA 1965 1966 Zeile 2 hat 6 Felder: <NAME> USA 1967 1968 1969 ... Zeile 9 hat 10 Felder: <NAME> USA 1998 1999 2000 2001 2002 2003 2004
Here the main action part stores each line in the array line, using the predefined variable NR as the index. The number of fields (words) of each line is stored in a separate array (fields) as well.
Besides the "ordinary" arrays, awk also knows associative arrays. These are arrays that allow not only numbers but also strings as index values. The following assignments are therefore possible:
> array["Wolf"]=10 array["ION"]=1234 array["WION"] = array["Wolf"] array["GESAMT"] = array["Wolf"] + array["ION"]
Using this kind of array is really quite unusual. You will probably first ask yourself how to get at the individual values again, in other words, which index to use. You can of course access an individual value if you know the ["word"] used as the index. But if you cannot predict which words will occur, a special variant of the for loop, designed for working with associative arrays, helps you out:
> for (indexword in array)
This construct iterates over the entire associative array. And what better way to demonstrate associative arrays than counting the words of a file or of the input?
> #!/usr/bin/awk -f # # Programmname: countwords.awk { # Durchläuft alle Felder einer Zeile for(f=1; f <= NF; ++f) field[$f]++ } END { for(word in field) print word, field[word] }
The script in action:
> you@host > ./countwords.awk mrolympia.dat Dorian 1 2000 1 2001 1 USA 5 ...
In this example a for loop walks through the whole line field by field (NF). Each word is used as the string index of an array element, which is then incremented (increased by one). If such an index does not exist yet, it is created and incremented to one. If the "word index" already exists, only the counter belonging to that word is increased by one. In the END block, the special for construct for associative arrays is used for the output, printing first the index (the word itself) and then its frequency.
Assigning values to an array is somewhat tedious, because the values usually have to be assigned one at a time. This is easier with the function split. split automatically breaks a string into its individual components (according to the variable FS) and stores them in an array. An example:
> #!/usr/bin/awk -f # # Programmname: splitting.awk /Coleman/ { # Durchläuft alle Felder einer Zeile split($0, field) } END { for(word in field) print field[word] }
The script in action:
> you@host > ./splitting.awk mrolympia.dat 1998 1999 2000 2001 2002 2003 2004 <NAME> USA
Here the line containing the string "Coleman" is broken into its individual parts and stored in the associative array field. If you want to use a different separator, you have to adjust the predefined variable FS accordingly; alternatively, see the sketch below.
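If you would rather not change FS globally, split also accepts the separator as an optional third argument and returns the number of fields produced. A small sketch (the values are invented for illustration):
> n = split("1992-1993-1994", years, "-") # n is 3; years[1]="1992", years[2]="1993", years[3]="1994"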
### 13.5.3 Operators
By and large, this section will not tell you much that is new, since operators have mostly the same meaning in all programming languages. The first question, though, is how awk types the value of a variable or constant. As a rule, awk tries to adapt the types to each other in a calculation, so the type of an expression depends on the context in which it is used. The string "100" can be used as a string in one place and as the number 100 in another. For example,
> print "100"
prints a string, whereas
> print "100"+0
is treated as a number, even though in both cases "only" 100 is printed. This is called forcing a numeric or string context.
You force a numeric context whenever the type of the result is a number. If a string takes part in the calculation, it is converted into a number. An example:
> print "100Wert"+10
Here the output is 110; the suffix "Wert" is ignored. If the value were completely invalid, it would be converted to 0:
> print "Wert"+10
The result here would be the value 10.
You force a string context whenever the type of the result is a string. You could, for instance, convert a number into a string when performing a concatenation (joining several strings together) or when comparison operators expect a string.
Unfortunately, it is not always obvious whether a string is treated as a plain string or as a number. The following comparison, for example, would fail:
> if( 100=="00100" ) ...
The reason is the rule that as soon as one of the operands is a string, the other operand is treated as a string as well. So you would effectively be performing this comparison:
> if ( "100"=="00100" ) ...
It looks different, however, if you compare a field variable (for example $1) holding the value "00100":
> if (100==$1) ...
Here the comparison would return true. Either way, you can see that this can lead to some inconsistencies. I could give you a list of the cases in which a string is treated as numeric and when it is not, but it is tedious to memorize, so when mixing strings and numbers it is safer to perform the conversion to a number or to a string yourself. You can force the conversion of a string like this:
> variable = variable ""
Now you can be sure that variable is a string. If, on the other hand, you want variable to be a number, you can force that conversion as follows:
> variable = variable + 0
This way variable always holds a number. If the variable contains a nonsensical value, the forced conversion simply turns it into 0. Admittedly a lot of theory, but it is necessary to get the results you want. The sketch below shows both conversions side by side.
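A short sketch of both forced conversions (the variable name x is arbitrary); only the first message is printed, because as strings "007" and "7" are different:
> you@host > awk 'BEGIN { x="007"; if (x+0 == 7) print "numerically equal"; if ((x "") == "7") print "equal as strings" }'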
# Arithmetic Operators
Here, too, you find the usual arithmetic operators known from other programming languages. The compound assignments also exist, letting you abbreviate calculations such as var=var+5 to var+=5. Here is a table of all arithmetic operators in awk:
Operator | Meaning |
| --- | --- |
+ | Addition |
- | Subtraction |
* | Multiplication |
/ | Division |
% | Modulo (remainder of a division) |
^ | Exponentiation, e.g. x^y is x to the power of y |
+= | Short form: var+=x is equivalent to var=var+x |
-= | Short form: var-=x is equivalent to var=var-x |
*= | Short form: var*=x is equivalent to var=var*x |
/= | Short form: var/=x is equivalent to var=var/x |
%= | Short form: var%=x is equivalent to var=var%x |
^= | Short form: var^=x is equivalent to var=var^x |
A simple example:
> #!/usr/bin/awk -f # # Programmname: countfields.awk { field[FILENAME]+=NF } END { for(word in field) print "Anzahl Felder in " word " : " field[word] }
The script in action:
> you@ghost > ./countfields.awk mrolympia.dat USA.dat Argentinien Anzahl Felder in Argentinien : 5 Anzahl Felder in USA.dat : 15 Anzahl Felder in mrolympia.dat : 64
In this example all fields of each file are counted. Since we cannot know how many files the user passes on the command line, we use an associative array right away, with the respective file name as the index.
# Logical Operators
The logical operators have already been used. You can combine expressions with a logical AND and a logical OR. The principle of both operators was already explained for shell programming, and it applies here as well: if you combine two expressions with a logical AND, true is returned only if both expressions match. With a logical OR it is enough for one of the expressions to be true. And of course awk also offers the negation operator (!), with which you can turn anything true into false and anything false into true.
Operator | Meaning |
| --- | --- |
|| | Logical OR |
&& | Logical AND |
! | Logical negation |
The following script prints all winners of the 1980s from the file mrolympia.dat:
> #!/usr/bin/awk -f # # Programmname: winner80er.awk { for(i=4; i<=NF; i++) { if( $i >= 1980 && $i < 1990 ) { print $0 # gefunden -> Abbruchbedingung für for i=NF; } } }
The script in action:
> you@host > ./winner80er.awk mrolympia.dat <NAME> Österreich 1970 1971 1972 1973 1974 1980 <NAME> Argentinien 1976 1981 <NAME> USA 1982 <NAME> Libanon 1983 <NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991
# The Conditional Operator
awk also supports C's ternary conditional operator ?:. This operator is a shortened if statement. Instead of
> if ( condition ) statement1 else statement2
you can use the ternary operator ?: as follows:
> condition ? statement1 : statement2
As with the if statement: if the condition is true, statement1 is executed, otherwise statement2. The ternary operator may shorten the code, but for readability I personally still prefer the if-else construct (in C as well); in the end it is a matter of taste. A small sketch follows below.
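A hedged one-liner as a sketch (the label strings are made up; the third column holds the country, as elsewhere in this chapter):
> you@host > awk '{ print $2, ($3 == "USA" ? "USA" : "other country") }' mrolympia.dat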
# Increment and Decrement Operators
You have already used the increment operator ++. It is a counting operator that increases the content of a variable by 1; var++ is short for var=var+1. The decrement operator -- does the same in the other direction, reducing the value of a variable by 1. It matters, however, whether you write the operator before or after the variable. In combination with an assignment, the following (side) effect can occur:
> var=1 wert=var++ print wert var # wert=1, var=2
Here the postfix notation var++ was used. In this example, the content of var is first assigned to wert and only then incremented, so wert receives the value 1. If you want wert to receive 2 here, you must increment var in prefix notation:
> var=1 wert=++var print wert var # wert=2, var=2
Now the content of var is incremented first and then assigned to wert. The same applies to the decrement operator. These operators are used quite frequently in counting loops.
# Space and Field Operators
You would not normally call the space character an operator, but between two strings it serves to join them, that is, to concatenate them into a new string. Take the following script as an example:
> #!/usr/bin/awk -f # # Programmname: names.awk { string = string $2 " " } END { print string }
The script in action:
> you@host > ./names.awk mrolympia.dat <NAME>
In this script all surnames (second column, $2) were joined into one string; without the space this would not be possible.
In addition, the field access operator $ is available. Unlike the positional parameters of the shell, you can use it quite flexibly:
> #!/usr/bin/awk -f # # Programmname: names2.awk BEGIN { i=2 } { string = string $i " " } END { print string }
The script does the same as the previous one: it takes the field of the second column from every line and joins them into one string. Here, however, the field access operator was used as $i, with i set to the value 2 in the BEGIN block. This makes access to the individual fields considerably more flexible, especially in combination with a loop:
> #!/usr/bin/awk -f # # Programmname: winner.awk { for(i=4; i<=NF; i++) string[$2]= string[$2] $i "-" } END { for(word in string) print "Titel von " word " in den Jahren : " string[word] }
The script in action:
> you@host > ./winner.awk mrolympia.dat Titel von Bannout in den Jahren : 1983- Titel von Columbu in den Jahren : 1976-1981- ... Titel von Yates in den Jahren : 1992-1993-1994-1995-1996-1997- Titel von Dickerson in den Jahren : 1982-
Here all years in which a competitor won were stored in an associative array. The individual years were assigned to the array using the field access operator ($).
### 13.5.4 Control Structures
The control structures will hardly teach you anything new, since you will meet all the old acquaintances such as if, for, while and so on, which you already know from Chapter 4, Control structures, on shell programming. Unfortunately, the topic cannot be skipped entirely, because the syntax differs a bit from shell programming. If you are already experienced in C (or have worked with the C shells), you are in luck, because the syntax of awk's control structures is exactly the same as in C.
# if Branches
With the if branch you can (as usual) test a certain condition. If the condition is met, true (TRUE) is returned, otherwise false (FALSE). Optionally, an if branch can be followed by an else construct. The syntax:
> if( condition ) # condition-is-true branch else # condition-is-false branch (optional)
If a branch consists of more than one statement, you must group the statements in a block between curly braces:
> if( condition ) { # condition-is-true branch statement1 statement2 } else { # condition-is-false branch (optional) statement1 statement2 }
And of course awk also has the else if construct (the counterpart of elif in shell programming):
> if( condition1 ) { # condition1-is-true branch statement1 statement2 } else if( condition2 ) { # condition2-is-true branch (optional) statement1 statement2 } else { # condition1-and-condition2-are-false branch (optional) statement1 statement2 }
A simple example:
> #!/usr/bin/awk -f # # Programmname: findusa.awk { if( $0 ~ /USA/ ) print "USA gefunden in Zeile " NR else print "USA nicht gefunden in Zeile " NR }
The script in action:
> you@host > ./findusa.awk mrolympia.dat USA gefunden in Zeile 1 USA gefunden in Zeile 2 USA nicht gefunden in Zeile 3 ...
# for Loops
for loops have been used quite frequently on the preceding pages. The syntax of the classic for loop is as follows:
> for( initialization; condition; counter ) statement
If several statements follow, they must again be grouped in a block between curly braces:
> for( initialization; condition; counter ) { statement1 statement2 }
The first expression (initialization) of the for loop is executed only once, when the loop starts. A variable is usually initialized with a start value here. Then comes a semicolon, followed by the check of the condition. If it is true, the statements are executed. If the condition is not true, the loop ends and the script continues after the loop. If the condition was true and the statements were executed, the third expression of the for loop, the counter, becomes active. Here a value that is decisive for the termination condition in the second expression is typically increased or decreased. Then the condition is checked again, and so on.
You have also used the second form of the for loop several times. It is used with associative arrays (see section 13.5.2). The syntax:
> for(indexname in array) statement
or, with several statements:
> for(indexname in array) { statement1 statement2 }
This for loop iterates over the associative array element by element. The index word is available in indexname, so it can be used further inside the loop body.
# while and do-while Loops
You also know the while loop from shell programming. It executes the statements in a block as long as the condition between the parentheses is met. The syntax:
> while( condition ) statement
Or, if several statements follow:
> while( condition ) { statement1 statement2 }
A simple example of the while loop:
> #!/usr/bin/awk -f # # Programmname: countfl.awk { while( i <=NF ) { fields++ i++ } i=0 lines++ } END { print "Anzahl Felder in " FILENAME ": " fields print "Anzahl Zeilen in " FILENAME ": " lines }
The script in action:
> you@host > ./countfl.awk mrolympia.dat Anzahl Felder in mrolympia.dat: 73 Anzahl Zeilen in mrolympia.dat: 9
The script does nothing more than count the number of fields and lines. The while loop walks through every single field of a line. Inside the loop, the variables fields and i (which serves as the termination condition of the while loop) are incremented. At the end of a line the termination condition is met and the while loop finishes. Therefore, after the while loop, i is reset to 0 and the line counter is incremented. The while loop is then ready for the next line, because i is 0 again.
awk also offers a second form of the while loop, do-while. The difference from the ordinary while loop is that the do-while loop checks the condition only after all statements have been executed. The syntax:
> do { statement1 statement2 } while( condition )
Rewritten from the while-loop example, the same program execution would look like this:
> #!/usr/bin/awk -f # # Programmname: countfl2.awk { do { fields++ i++ } while (i<=NF) i=0 lines++ } END { print "Anzahl Felder in " FILENAME ": " fields print "Anzahl Zeilen in " FILENAME ": " lines }
# Jump Statements: break, continue, exit, next
To control loops, awk also gives you the commands break and continue (see sections 4.13.1 and 4.13.2), just as in shell programming. With break you can jump out of a loop, and with continue you can abort the current loop iteration and continue with the next one.
exit is not strictly a jump statement within the script, but it fits here. With exit you terminate the entire awk script. exit is usually used in awk scripts when continuing makes no sense or an unexpected error has occurred. exit can be used with or without a return value. The return value can then be evaluated in the shell to handle the error that occurred.
> while ( condition ) { ... if( condition ) break # terminate the while loop if( condition ) continue # jump to the next loop iteration if( condition ) exit 1 # serious error, terminate the script ... }
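A small sketch of how such a return value might be evaluated in the shell (the condition NF < 3 is only an example):
> you@host > awk 'NF < 3 { exit 1 }' mrolympia.dat || echo "line with fewer than 3 fields found"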
Ein für Sie wahrscheinlich etwas unbekannterer Befehl ist next. Mit ihm können Sie die Verarbeitung der aktuellen Zeile unterbrechen und mit der nächsten Zeile fortfahren. Dies kann zum Beispiel sinnvoll sein, wenn Sie eine Zeile bearbeiten, die zu wenig Felder enthält:
...
# weniger als 10 Felder -> überspringen
if( NF < 10 ) next
# 13.6 Funktionen
awk ist ohnehin ein mächtiges Werkzeug. Doch mit den Funktionen, die awk Ihnen jetzt auch noch anbietet, können Sie awk noch mehr erweitern. Und sollten Ihnen die Funktionen, die awk mitliefert, nicht ausreichen, können Sie immer noch eigene benutzerdefinierte Funktionen hinzufügen.
13.6.1 Mathematische Funktionen
Hier zunächst ein Überblick aller arithmetischen Funktionen, die Ihnen awk zur Verfügung stellt.
Tabelle 13.8  Mathematische Builtin-Funktionen von awk
Funktion | Bedeutung |
| --- | --- |
atan2(x,y) | Arcustangens von x/y in Radian |
cos(x) | Liefert den Cosinus in Radian |
exp(x) | Exponentialfunktion |
int(x) | Abschneiden einer Zahl zu einer Ganzzahl |
log(x) | Natürlicher Logarithmus zur Basis e |
rand() | (Pseudo-)Zufallszahl zwischen 0 und 1 |
sin(x) | Liefert den Sinus in Radian |
sqrt(x) | Quadratwurzel |
srand(x) | Setzt den Startwert für Zufallszahlen. Bei keiner Angabe wird die aktuelle Zeit verwendet. |
Zwar werden Sie als Systemadministrator seltener mit komplizierten arithmetischen Berechnungen konfrontiert sein, dennoch soll hier auf einige Funktionen eingegangen werden, mit denen Sie es als Nicht-Mathematiker häufiger zu tun bekommen. So zum Beispiel die Funktion int, womit Sie aus einer Gleitpunktzahl eine Ganzzahl machen, indem die Nachkommastellen abgeschnitten werden. Ein Beispiel:
you@host > awk 'END { print 30/4 }' datei
7,5
you@host > awk 'END { print int(30/4) }' datei
7
Neben dem Abschneiden von Gleitpunktzahlen benötigt man manchmal auch Zufallszahlen. Hier bieten Ihnen die Funktionen rand und srand eine Möglichkeit an, solche zu erzeugen.
you@host > awk 'END { print rand() }' datei 0,237788
Hier haben Sie mit der Verwendung von rand eine Zufallszahl zwischen 0 und 1 erzeugt. Wenn Sie allerdings rand erneut aufrufen, ergibt sich dasselbe Bild:
you@host > awk 'END { print rand() }' datei 0,237788
Die Zufallsfunktion rand bezieht sich auf einen Startwert, mit dem sie eine (Pseudo-)Zufallszahl generiert. Diesen Startwert können Sie mit der Funktion srand verändern:
you@host > awk 'BEGIN { srand() }; { print rand() }' datei
0,152827
you@host > awk 'BEGIN { srand() }; { print rand() }' datei
0,828926
Mit der Funktion srand ohne Angabe eines Parameters wird jedes Mal zum Setzen eines neuen Startwerts für die Funktion rand die aktuelle Zeit verwendet. Dadurch ist die Verwendung von rand schon wesentlich effektiver und zufälliger (wenn auch nicht perfekt).
13.6.2 Funktionen für Zeichenketten
Die Funktionen für Zeichenketten dürften die am häufigsten eingesetzten Funktionen für Sie als Shell-Programmierer sein. Oft werden Sie hierbei kein extra awk-Script schreiben, sondern die allseits beliebten Einzeiler verwenden. Trotzdem sollten Sie immer bedenken, dass, wenn Sie awk-Einzeiler in Ihrem Shellscript in einer Schleife mehrmals aufrufen, dies jedes Mal den Start eines neuen (awk-)Prozesses bedeutet. Die Performance könnte darunter erheblich leiden. Bei häufigen awk-Aufrufen in Schleifen sollten Sie daher in Erwägung ziehen, ein awk-Script zu schreiben. Wie dies geht, haben Sie ja in diesem Kapitel erfahren. Die Syntax eines solchen Einzeilers in einem Shellscript sieht häufig wie folgt aus:
echo string | awk '{ print string_funktion($0) }'
oder
cat datei | awk '{ print string_funktion($0) }'
(Globale) Ersetzung mit sub und gsub
Die Syntax:
sub(regulärer_Ausdruck, Ersetzungs_String)
sub(regulärer_Ausdruck, Ersetzungs_String, Ziel_String)
gsub(regulärer_Ausdruck, Ersetzungs_String)
gsub(regulärer_Ausdruck, Ersetzungs_String, Ziel_String)
Mit beiden Funktionen wird das Auftreten von »regulärer_Ausdruck« durch »Ersetzungs_String« ersetzt. Wird kein »Ziel_String« mit angegeben, so wird $0 verwendet. Der Rückgabewert ist die Anzahl erfolgreicher Ersetzungen. Der Unterschied zwischen gsub und sub liegt darin, dass mit gsub (global substitution) eine globale Ersetzung und mit sub eine Ersetzung des ersten Vorkommens durchgeführt wird.
you@host > awk '{ gsub(/USA/, "Amerika"); print }' mrolympia.dat <NAME> Amerika 1965 1966 <NAME> 1967 1968 1969 ...
Hier werden alle Textfolgen »USA« der Datei mrolympia.dat durch »Amerika« ersetzt. Wollen Sie nur das erste Vorkommen ersetzen, so müssen Sie sub verwenden. Es kann aber nun sein, dass sich in einer Zeile mehrmals die Textfolge »USA« befindet, Sie aber nur eine bestimmte Spalte ersetzen wollen. Dann können Sie das dritte Argument von gsub bzw. sub verwenden:
you@host > awk '{ gsub(/USA/, "Amerika", $3); print }' mrolympia.dat <NAME> 1965 1966 <NAME> 1967 1968 1969 ...
Hier weisen Sie explizit an, dass nur wenn die dritte Spalte die Textfolge »USA« enthält, diese durch die Textfolge »Amerika« zu ersetzen ist.
Hinweis   Zu Demonstrationszwecken werden keine komplizierten regulären Ausdrücke verwendet, sondern immer einfache Textfolgen. Hier geht es lediglich um eine Funktionsbeschreibung der Zeichenketten-Funktionen von awk.
Position einer Zeichenkette ermitteln – index
Die Syntax:
index(string, substring)
Diese Funktion gibt die erste Position der Zeichenkette »substring« in »string« zurück. 0 wird zurückgegeben, wenn keine Übereinstimmung gefunden wurde. Folgendes Beispiel gibt alle Zeilen mit der Textfolge »USA« mitsamt ihren Positionen zurück:
you@host > awk '{ i=index($0, "USA"); if(i) print NR ":" i }' \
> mrolympia.dat
1:13
2:14
5:17
7:11
9:16
Länge einer Zeichenkette ermitteln – length
Die Syntax:
length(string)
Mit dieser Funktion können Sie die Länge der Zeichenkette »string« ermitteln. Nehmen Sie für »string« keine Angabe vor, wird $0 verwendet.
Folgendes Beispiel gibt die Länge der jeweils ersten Spalte einer jeden Zeile aus, das darauf folgende Beispiel die Länge der kompletten Zeile ($0):
you@host > awk '{ print NR ":" length($1) }' mrolympia.dat
1:5
2:6
3:6
...
you@host > awk '{ print NR ":" length }' mrolympia.dat
1:25
2:31
3:67
...
Suchen nach Muster – match
Die Syntax:
match(string, regulärer_Ausdruck)
Mit match suchen Sie nach dem Muster »regulärer_Ausdruck« in »string«. Wird ein entsprechender Ausdruck gefunden, wird dessen Position zurückgegeben; bei erfolgloser Suche lautet der Rückgabewert 0. Die Startposition des gefundenen (Teil-)Strings finden Sie in RSTART und die Länge des Teilstücks in RLENGTH. Ein Beispiel:
you@host > awk '{ i=match($0, "Yates"); \
> if(i) print NR, RSTART, RLENGTH }' mrolympia.dat
8 8 5
Hier wird nach der Zeichenfolge »Yates« gematcht und bei Erfolg werden die Zeile, die Position in der Zeile und die Länge zurückgegeben.
Zeichenkette zerlegen – split
Die Syntax:
split(string, array, feld_trenner)
split(string, array)
Mit dieser Funktion zerlegen Sie die Zeichenkette »string« und teilen die einzelnen Stücke in das Array »array« auf. Standardmäßig werden die einzelnen Zeichenketten anhand von FS (standardmäßig ein Leerzeichen) »zersplittet«. Allerdings können Sie dieses Verhalten optional über den dritten Parameter mit »feld_trenner« verändern. Als Rückgabewert erhalten Sie die höchste Indexnummer des erzeugten Arrays. Ein Beispiel zu dieser Funktion wurde bereits in Abschnitt 13.5.2 gegeben.
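Zur Veranschaulichung dennoch ein kleines, frei gewähltes Beispiel, das eine Datumsangabe am Bindestrich zerlegt:

you@host > awk 'BEGIN { n=split("2005-04-15", datum, "-"); print n, datum[1], datum[2], datum[3] }'
3 2005 04 15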
Eine Zeichenkette erzeugen – sprintf
Die Syntax:
string=sprintf(fmt, exprlist)
Mit sprintf erzeugen Sie eine Zeichenkette und liefern diese zurück. Der Formatstring »fmt« und die Argumente »exprlist« werden genauso verwendet wie bei printf für die Ausgabe. Ein einfaches Beispiel:
you@host > awk '{ \ > line=sprintf("%-10s\t%-15s\t%-15s", $1, $2, $3); print line }'\ > mrolympia.dat <NAME> USA <NAME> USA <NAME>ger Österreich Franco Columbu Argentinien ...
Teilstück einer Zeichenkette zurückgeben – substr
Die Syntax:
substr(string, start_position)
substr(string, start_position, länge)
Die Funktion gibt einen Teil der Zeichenkette »string« ab der Position »start_position« zurück, entweder mit der Länge »länge« oder bis zum Ende der Zeichenkette.
you@host > awk '{ print substr($0, 5, 10)}' mrolympia.dat
y Scott US
io Oliva U
ld Schwarz
co Columbu
...
In diesem Beispiel werden (auch wenn es hier wenig Sinn ergibt) aus jeder Zeile der Datei mrolympia.dat 10 Zeichen ab der Position 5 ausgegeben (oder auch ausgeschnitten).
Groß- und Kleinbuchstaben – toupper und tolower
Die Syntax:
toupper(string)
tolower(string)
Mit der Funktion toupper werden alle Kleinbuchstaben in »string« in Großbuchstaben und mit tolower alle Großbuchstaben in Kleinbuchstaben umgewandelt. Ein Beispiel:
you@host > awk '{ print toupper($0)}' mrolympia.dat
LARRY SCOTT USA 1965 1966
SERGIO OLIVA USA 1967 1968 1969
ARNOLD SCHWARZENEGGER ÖSTERREICH 1970 1971 1972 1973 1974 1975
...
you@host > awk '{ print tolower($0)}' mrolympia.dat
larry scott usa 1965 1966
sergio oliva usa 1967 1968 1969
arnold schwarzenegger österreich 1970 1971 1972 1973 1974 1975
...
13.6.3 Funktionen für die Zeit
In awk existieren auch zwei Funktionen für eine Zeitangabe: zum einen die Funktion systime (der UNIX-Timestamp), welche die aktuelle Tageszeit als Anzahl Sekunden zurückgibt, die seit dem 1.1.1970 vergangen sind. Zum anderen existiert noch die Funktion strftime, welche einen Zeitwert nach Maßangaben einer Formatanweisung (ähnlich wie bei dem Kommando date) formatiert. Diese Funktion ist außerdem der Funktion strftime() aus C nachgebildet und auch die Formatanweisungen haben dieselbe Bedeutung. Die Syntax zu strftime lautet:
strftime( format, timestamp );
Die möglichen Formatanweisungen von »format« finden Sie in der folgenden Tabelle. Den »timestamp« erhalten Sie aus dem Rückgabewert der Funktion systime. Hier zunächst die Formatanweisungen für strftime:
Tabelle 13.9  (Zeit-)Formatanweisungen für strftime
Format | … wird ersetzt durch … | Beispiel |
| --- | --- | --- |
%a | Wochenname (gekürzt) | Sat |
%A | Wochenname (ausgeschrieben) | Saturday |
%b | Monatsname (gekürzt) | Jan |
%B | Monatsname (ausgeschrieben) | January |
%c | Entsprechende lokale Zeit- und Datumsdarstellung | Sat Jan 22 22:22:22 MET 2003 |
%d | Monatstag (1–31) | 22 |
%H | Stunde im 24-Stunden-Format (0–23) | 23 |
%I | Stunde im 12-Stunden-Format (1–12) | 5 |
%j | Tag des Jahres (1–366) | 133 |
%m | Monat (1–12) | 5 |
%M | Minute (0–59) | 40 |
%p | AM- oder PM-Zeitangabe; Indikator für das 12-Stunden-Format (USA) | PM |
%S | Sekunden (0–60) | 55 |
%U | Wochennummer (0–53) (Sonntag als erster Tag der Woche) | 33 |
%w | Wochentag (0–6, Sonntag = 0) | 3 |
%W | Wochennummer (0–53) (Montag als erster Tag der Woche) | 4 |
%x | Lokale Datumsdarstellung | 02/20/02 |
%X | Lokale Zeitdarstellung | 20:15:00 |
%y | Jahreszahl (ohne Jahrhundertzahl 0–99) | 01 (2001) |
%Y | Jahreszahl (mit Jahrhundertzahl YYYY) | 2001 |
%Z, %z | Zeitzone (gibt nichts aus, wenn Zeitzone unbekannt) | MET |
%% | Prozentzeichen | % |
Hierzu ein simples Anwendungsbeispiel, das alle Zeilen einer Datei auf dem Bildschirm ausgibt und am Ende einen Zeitstempel anhängt, eine einfache Demonstration der Funktionen systime und strftime:
#!/usr/bin/awk -f
#
# Programmname: timestamp.awk
BEGIN {
   now = systime()
   # erzeugt eine Ausgabe à la date
   timestamp = strftime("%a %b %d %H:%M:%S %Z %Y", now)
}
{ print }
END { print timestamp }
Das Script bei der Ausführung:
you@host > ./timestamp.awk mrolympia.dat
<NAME> USA 1965 1966
<NAME> USA 1967 1968 1969
...
<NAME> USA 1998 1999 2000 2001 2002 2003 2004
Fr Apr 15 07:48:35 CEST 2005
13.6.4 Systemfunktionen
Sofern Sie reine awk-Scripts schreiben und awk nicht in ein Shellscript einbauen, aber einen UNIX-Befehl ausführen wollen, gibt es hierfür die Funktion system:
system("Befehl")
Damit können Sie jeden »Befehl« wie in der Kommandozeile ausführen. Sofern Sie die Ausgabe des Befehls abfangen wollen, müssen Sie getline (siehe Abschnitt 13.6.6) dazu verwenden:
"Befehl" | getline
13.6.5 Ausgabefunktionen
Die eigentlichen Ausgabefunktionen print und printf haben Sie bereits kennen gelernt. Trotzdem soll hier noch eine Tabelle erstellt werden, worin Sie aufgelistet finden, wie Sie die Ausgabe anwenden können.
Tabelle 13.10  Mögliche print-Ausgaben
Verwendung | Bedeutung |
| --- | --- |
print | Ausgabe der aktuellen Zeile (Datensatz); gleichwertig mit print $0. |
print /regulärer Ausdruck/ | Hier können Sie bspw. testen, ob der Ausdruck in der entsprechenden Zeile zutrifft. Trifft der Ausdruck nicht zu, wird 0, ansonsten 1 zurückgegeben. Bspw.: awk '{ print /USA/ }' mrolympia.dat gibt in jeder Zeile, in der die Textfolge »USA« enthalten ist, 1 und sonst 0 zurück. |
print Ausdrucksliste | Ausgabe aller in »Ausdrucksliste« angegebenen Werte. Hierbei können Konstanten, berechnete Werte, Variablen- oder Feldwerte enthalten sein. |
print Ausdrucksliste > datei | Die Ausgabe aller in »Ausdrucksliste« angegebenen Werte wird in datei geschrieben. Dabei kann es sich durchaus um einen berechneten Wert handeln. |
print Ausdrucksliste >> datei | Wie eben, nur dass hierbei die Ausgabe aller in »Ausdrucksliste« enthaltenen Werte ans Ende von datei gehängt wird. |
Für printf gilt das Gleiche, nur dass die Ausgabe formatiert erfolgt.
13.6.6 Eingabefunktion
Der eine oder andere mag jetzt ganz verwundert sein, aber awk unterstützt auch eine Eingabefunktion mit getline. getline ist ein sehr vielseitiges Kommando. Diese Funktion liefert im Fall eines Fehlers -1, bei Dateiende oder (Strg)+(D) 0 und bei erfolgreichem Lesen 1 zurück. Sie liest eine Zeile von der Eingabe, was stdin oder eine Datei sein kann, und speichert diese entweder in $0 oder in einer separat angegebenen Variablen.
Tabelle 13.11  Die getline-Eingabefunktion
Verwendung | Bedeutung |
| --- | --- |
getline | Verwenden Sie getline ohne Argumente, wird vom aktuellen Eingabekanal der nächste Datensatz (Zeile) eingelesen und in $0 gespeichert. |
getline var | Liest die nächste Zeile vom aktuellen Eingabekanal und speichert diese Zeile in »var«. Wichtig: NR und FNR werden hochgezählt, aber NF wird nicht belegt, weil hier keine Auftrennung in Worte (Felder) erfolgt! |
getline < datei | Liest die nächste Zeile aus einer Datei, die dabei geöffnet wird und am Ende mittels close() selbst geschlossen werden muss! Die Zeile befindet sich in $0. |
getline var < datei | Liest die nächste Zeile aus einer Datei, die dabei geöffnet wird und am Ende mittels close() selbst geschlossen werden muss, in die Variable »var« ein. Hierbei werden allerdings NF, NR und FNR nicht verändert! |
string \| getline | Liest die nächste Zeile von der Pipe (hier dem String »string«, der als Kommando ausgeführt wird) ein. Die eingelesene Zeile wird in $0 gespeichert. |
"kommando" \| getline var | Hier wird die nächste Zeile von einem Kommando eingelesen; die eingelesene Zeile befindet sich in »var«. |
Bei der Verwendung von getline war auch die Rede von der Funktion close, mit der Sie eine Datei oder eine geöffnete Pipe wieder schließen müssen.
close(Dateinamen)
Der Dateiname kann hierbei als eine Stringkonstante vorliegen oder eine Variable sein.
Hinweis   Möglicherweise ist dies etwas verwirrend, weil in awk die Datei zwar automatisch geöffnet wird, man sie aber, sobald man mit getline etwas daraus gelesen hat, wieder von Hand mit close schließen soll. Gerade wenn man andere Programmiersprachen kennt, ist man eher die Logik gewohnt, eine Datei manuell zu öffnen und sie auch wieder manuell zu schließen.
Hinweis   In der Praxis sollten Sie natürlich überprüfen, ob die Datei zum Öffnen überhaupt existiert oder/und lesbar ist und gegebenenfalls abbrechen.
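Das zugehörige Script search.awk ist an dieser Stelle nicht mit abgedruckt. Eine mögliche Umsetzung, die zu der folgenden Ausgabe passt, könnte etwa so aussehen (Suchwort und Dateiname werden hier mit getline von der Standardeingabe gelesen; der Dateiname »-« steht bei gawk für die Standardeingabe):

#!/usr/bin/awk -f
#
# Programmname: search.awk (mögliche Umsetzung)
BEGIN {
   printf "Wonach suchen Sie : "
   getline suchwort < "-"
   printf "In welcher Datei : "
   getline datei < "-"
   # Datei zeilenweise lesen und passende Zeilen ausgeben
   while( (getline zeile < datei) > 0 ) {
      if( zeile ~ suchwort )
         print zeile
   }
   close(datei)
}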
Das Script bei der Ausführung:
you@host > ./search.awk
Wonach suchen Sie : USA
In welcher Datei : mrolympia.dat
<NAME> USA 1965 1966
<NAME> USA 1967 1968 1969
<NAME> USA 1982
<NAME> USA 1984 1985 1986 1987 1988 1989 1990 1991
<NAME> USA 1998 1999 2000 2001 2002 2003 2004
you@host > ./search.awk
Wonach suchen Sie : ien
In welcher Datei : mrolympia.dat
<NAME> Argentinien 1976 1981
<NAME> Großbritannien 1992 1993 1994 1995 1996 1997
you@host > ./search.awk
Wonach suchen Sie : Samir
In welcher Datei : mrolympia.dat
<NAME> 1983
Hier soll auch noch ein weiteres gängiges und mächtiges Feature von getline vorgestellt werden, und zwar die Möglichkeit, ein (Linux-/UNIX-)Kommando nach getline zu »pipen«. Das folgende Script gibt die größte Datei aus. Die Größe einer Datei finden Sie mit ls -l in der fünften Spalte.
#!/usr/bin/awk -f
#
# Programmname: bigfile.awk
{
   my_ls="/bin/ls -ld '" quoting($0) "' 2>/dev/null"
   if( my_ls | getline ) {
      if( $5 > filesize ) {
         filename=$8
         filesize=$5
      }
   }
   close(my_ls)
}
END {
   if( filename )
      print "Größte Datei ist " filename " mit " filesize " Bytes"
   else
      print "Konnte keine größte Datei ermitteln?!?"
}
function quoting(s, n) {
   n=s
   gsub(/'/, "'\"'\"'", n)
   return n
}
Das Script bei der Ausführung:
you@host > find $HOME -print | ./bigfile.awk
Größte Datei ist ~/Desktop/Trash/Trailer.mpg mit 91887620 Byte
Im Script wurde auch eine Funktion zum Schutz für Dateien mit einfachen Anführungszeichen (Single Quotes) verwendet. Mehr zu den selbst definierten Funktionen erfahren Sie gleich.
13.6.7 Benutzerdefinierte Funktionen
Als Programmierer von Shellscripts dürften Sie mit dem awk-Wissen, das Sie jetzt haben, mehr als zufrieden sein. Aber neben der Verwendung der Builtin-Funktionen von awk haben Sie außerdem noch die Möglichkeit, eigene Funktionen zu schreiben. Daher hier noch eine kurze Beschreibung, wie Sie auch dies realisieren können. Der Ort einer Funktionsdefinition ist nicht so wichtig, obgleich es sich eingebürgert hat, diese am Ende des Scripts zu definieren.
function functions_name( parameter ) {
   # Anweisungen
}
Dem Schlüsselwort function, das Sie immer verwenden müssen, folgt der Funktionsname. Erlaubt sind hier alle Kombinationen aus Buchstaben, Ziffern und einem Unterstrich, einzig am Anfang darf keine Ziffer stehen. Variablen und Funktionen dürfen in einem Script allerdings nicht denselben Namen haben. Als Argumente können Sie einer Funktion beliebig viele oder auch gar keine Parameter angeben. Mehrere Argumente werden mit einem Komma getrennt. Die Anweisungen einer Funktion werden zwischen geschweiften Klammern (dem Anweisungsblock) zusammengefasst.
Zum Aufruf einer Funktion müssen Sie den Funktionsnamen gefolgt von runden Klammern und eventuell den einzelnen Parametern angeben. Beachten Sie außerdem, dass sich zwischen dem Funktionsnamen und der sich öffnenden Klammer kein Leerzeichen befinden darf. Werden weniger Parameter angegeben, als in der Funktionsdefinition definiert sind, so führt dies nicht zu einem Fehler. Nicht angegebene Parameter werden, abhängig vom Kontext, als 0 oder als leere Zeichenkette interpretiert.
Anders sieht dies allerdings aus, wenn Sie einer Funktion einen Parameter übergeben, obwohl in der Funktionsdefinition keinerlei Argumente enthalten sind. Dann wird Ihnen awk das Script mit einer Fehlermeldung abbrechen.
Alle Variablen einer Funktion sind global, abgesehen von den Variablen in den runden Klammern. Deren Gültigkeitsbereich ist nur auf die Funktion allein beschränkt.
Um Werte aus einer Funktion zurückzugeben, wird auch hierbei der return-Befehl gefolgt vom Wert verwendet. Beispielsweise:
function functions_name( parameter ) {
   # Anweisungen
   ...
   return Wert
}
Mehrere Werte können Sie auch hier, wie in der Shell, zu einem String zusammenfassen und im Hauptteil wieder splitten:
function functions_name( parameter ) {
   # Anweisungen
   ...
   return Wert1 " " Wert2 " " Wert3
}
...
ret=functions_name( param )
split(ret, array)
...
print array[1]
Hier konnten Sie auch gleich sehen, wie Sie im Hauptteil den Rückgabewert auffangen können.
Die Übergabe von Variablen erfolgt in der Regel »by-value«, sprich: Die Werte des Funktionsaufrufs werden in die Argumente der Funktion kopiert, sodass jede Veränderung des Werts nur in der Funktion gültig ist. Anders hingegen werden Arrays behandelt, diese werden »by-reference«, also als Speicheradresse auf den ersten Wert des Arrays übergeben. Dies erscheint sinnvoll, denn müsste ein Array mit mehreren hundert Einträgen erst kopiert werden, wäre ein Funktionsaufruf wohl eine ziemliche Bremse. Somit bezieht sich allerdings eine Veränderung des Arrays in der Funktion auch auf das Original.
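Ein kleines, frei gewähltes Beispiel soll diesen Unterschied verdeutlichen: Die Funktion verändert sowohl einen skalaren Parameter als auch ein Element des übergebenen Arrays; nur die Änderung am Array ist anschließend auch im Hauptteil sichtbar:

#!/usr/bin/awk -f
#
# Programmname: byref.awk (Beispiel)
function aendere( skalar, arr ) {
   skalar = 999          # wirkt nur lokal (by-value)
   arr[1] = "geaendert"  # wirkt auch im Aufrufer (by-reference)
}
BEGIN {
   wert = 1
   feld[1] = "original"
   aendere(wert, feld)
   print "wert    : " wert     # gibt weiterhin 1 aus
   print "feld[1] : " feld[1]  # gibt »geaendert« aus
}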
Hierzu ein einfaches Beispiel mit einer selbst definierten Funktion:
#!/usr/bin/awk -f
#
# Programmname: FILEprint.awk
{ FILEprint($0) }
function FILEprint( line ) {
   if(length(line) == 0)
      return "Leerer String"
   else
      print FILENAME "(" NR ") : " line
}
Das Script bei der Ausführung:
you@host > ./FILEprint.awk mrolympia.dat
mrolympia.dat(1) : <NAME> USA 1965 1966
mrolympia.dat(2) : <NAME> USA 1967 1968 1969
mrolympia.dat(3) : <NAME> Österreich 1970 1971 1972
...
Hier haben Sie eine einfache benutzerdefinierte print-Funktion, welche am Anfang einer Zeile jeweils den Dateinamen und die Zeilennummer mit ausgibt.
# 13.7 Empfehlung
Sie haben in diesem Kapitel eine Menge zu awk gelernt, und doch lässt sich nicht alles hier unterbringen. Gerade was die Praxis betrifft, musste ich Sie leider das eine oder andere Mal mit einem extrem kurzen Code-Beispiel abspeisen. Allerdings würde eine weitere Ausweitung des Kapitels dem eigentlichen Thema des Buches nicht gerecht.
Wenn Sie sich wirklich noch intensiver mit awk befassen wollen oder müssen, sei hierzu das Original-awk-Buch von Aho, Weinberger und Kernighan empfohlen, worin Sie einen noch tieferen und fundierteren Einblick in awk erhalten. Hierbei wird u. a. auch demonstriert, wie man mit awk eine eigene Programmiersprache realisieren kann.
# 14.2 Dateiorientierte Kommandos
bzcat – Ausgabe von bzip2-komprimierten Dateien
Mit bzcat können Sie die Inhalte von bzip2-komprimierten Dateien ausgeben, ohne dass Sie hierbei die komprimierte Datei dekomprimieren müssen. Dies ist z. B. auch ein Grund, warum Sie mit einem Dateibrowser den Inhalt einer Datei sehen und sogar lesen können, obwohl Sie diese noch gar nicht dekomprimiert haben. Ansonsten funktioniert bzcat wie cat.
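Ein kurzes Beispiel (der Dateiname ist frei gewählt):

# komprimierte Logdatei ansehen, ohne sie vorher zu dekomprimieren
you@host > bzcat messages.bz2 | less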
cat – Datei(en) nacheinander ausgeben
cat wurde bereits mehrfach in diesem Buch verwendet und auch beschrieben. Mit diesem Kommando werden gewöhnlich Dateien ausgegeben. Geben Sie cat beim Aufruf keine Dateien zum Lesen als Argument mit, liest cat so lange aus der Standardeingabe, bis (Strg)+(D) (EOF) betätigt wurde.
Tabelle 14.2  Anwendungen von cat
Verwendung | Bedeutung |
| --- | --- |
cat file | Gibt den Inhalt von file aus |
cat file \| kommando | Gibt den Inhalt von file via Pipe an die Standardeingabe von kommando weiter |
cat file1 file2 > file_all | Dateien aneinander hängen |
cat > file | Schreibt alle Zeilen, die von der Tastatur eingegeben wurden, in die Datei file, bis (Strg)+(D) betätigt wurde |
Hinweis   cat wurde bereits separat in Abschnitt 1.7.2 behandelt.
chgrp – Gruppe von Dateien oder Verzeichnissen ändern
Mit chgrp ändern Sie die Gruppenzugehörigkeit einer Datei oder eines Verzeichnisses. Dieses Kommando bleibt somit nur dem Eigentümer einer Datei bzw. eines Verzeichnisses oder dem Superuser vorbehalten. Als Eigentümer können Sie außerdem nur diejenigen Dateien oder Verzeichnisse einer bestimmten Gruppe zuordnen, der Sie selbst auch angehören. Wollen Sie die Gruppenzugehörigkeit aller Dateien in einem Verzeichnis mit allen Unterverzeichnissen ändern, dann bietet sich hierzu die Option -R (für rekursiv) an.
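Zwei kurze Beispiele (Gruppen-, Datei- und Verzeichnisnamen sind frei gewählt):

# Gruppe einer einzelnen Datei ändern
you@host > chgrp users protokoll.txt
# Gruppe rekursiv für ein komplettes Verzeichnis ändern
you@host > chgrp -R projekt dokumente/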
cksum/md5sum/sum – eine Prüfsumme für eine Datei ermitteln
Mit diesen Kommandos errechnet man die CRC-(cyclic redundancy check)-Prüfsumme und die Anzahl der Bytes (Letzteres gilt nur für cksum) für eine Datei. Wird keine Datei angegeben, liest cksum aus der Standardeingabe, bis (Strg)+(D) betätigt wurde, und berechnet hieraus die Prüfsumme.
Diese Kommandos werden häufig eingesetzt, um festzustellen, ob zwei Dateien identisch sind. So kann z. B. überprüft werden, ob eine Datei, die Sie aus dem Netz geladen haben, auch korrekt übertragen wurde. Voraussetzung hierfür ist natürlich, dass Sie die Prüfsumme der Quelle kennen. Häufig findet man dies beim Herunterladen von ISO-Distributionen. Ein anderer Anwendungsfall wäre das Überprüfen auf Virenbefall. Hiermit kann ermittelt werden, ob sich jemand an einer Datei zu schaffen gemacht hat, beispielsweise:
you@host > cksum data.conf
2935371588 51 data.conf
you@host > cksum data.conf
2935371588 51 data.conf
you@host > echo Hallo >> data.conf
you@host > cksum data.conf
966396470 57 data.conf
Hier eine Konfigurationsdatei data.conf, bei der zweimal mit cksum derselbe Wert berechnet wurde (nur zur Demonstration). Kurz darauf wurde am Ende dieser Datei ein Text angehängt und erneut cksum ausgeführt. Jetzt erhalten Sie eine andere Prüfsumme. Voraussetzung, dass dieses Prinzip funktioniert, ist natürlich auch eine Datei oder Datenbank, die solche Prüfsummen zu den entsprechenden Dateien speichert. Dabei können Sie auch zwei Dateien auf einmal eingeben, um die Prüfsummen zu vergleichen:
you@host > cksum data.conf data.conf~bak
966396470 57 data.conf
2131264154 10240 data.conf~bak
cksum ist gegenüber sum zu bevorzugen, da diese Version neuer ist und auch dem POSIX.2-Standard entspricht. Beachten Sie allerdings, dass alle drei Versionen zum Berechnen von Prüfsummen (sum, cksum und md5sum) untereinander inkompatibel sind und andere Prüfsummen als Ergebnis berechnen:
you@host > sum data.conf
20121 1
you@host > cksum data.conf
966396470 57 data.conf
you@host > md5sum data.conf
5a04a9d083bc0b0982002a2c8894e406 data.conf
Hinweis   md5sum gibt es unter FreeBSD nicht, hier heißt es md5.
Noch ein beliebter Anwendungsfall von md5sum (bzw. md5):
cd /bin; md5 `ls -R /bin` | md5
Wenn sich jetzt jemand am Verzeichnis /bin zu schaffen gemacht hat, merkt man dies relativ schnell. Am besten lässt man hierbei einen cron-Job laufen und sich gegebenenfalls täglich per E-Mail benachrichtigen.
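Ein möglicher crontab-Eintrag dafür, angelehnt an den gezeigten Einzeiler (Uhrzeit, Empfänger und der Einsatz von md5sum statt md5 sind hier frei gewählt):

# jeden Tag um 6:30 Uhr eine Gesamtprüfsumme über /bin bilden und per Mail verschicken
30 6 * * * cd /bin && md5sum `ls -R /bin` | md5sum | mail -s "Pruefsumme /bin" root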
chmod – Zugriffsrechte von Dateien oder Verzeichnissen ändern
Mit chmod setzen oder verändern Sie die Zugriffsrechte auf Dateien oder Verzeichnisse. Die Benutzung von chmod ist selbstverständlich nur dem Dateieigentümer und dem Superuser gestattet. Die Bedienung von chmod muss eigentlich jedem Systemadministrator geläufig sein, weil es ein sehr häufig verwendetes Kommando ist. chmod kann zum Glück sehr flexibel eingesetzt werden. Man kann einen numerischen Wert wie folgt verwenden:
chmod 755 file
oder
chmod 0755 file
Einfacher anzuwenden ist chmod über eine symbolische Angabe wie:
chmod u+x file
Hier bekommt der User (u; Eigentümer) der Datei file das Ausführrecht (+x) erteilt.
chmod g-x file
Damit wurde der Gruppe (g) das Ausführrecht entzogen (-x). Wollen Sie hingegen allen Teilnehmern (a) ein Ausführrecht erteilen, dann geht dies so:
chmod a+x file
Mit chmod können Sie auch die Spezialbits setzen (SUID=4000, SGID=2000 oder Sticky-Bit=1000). Wollen Sie z. B. für eine Datei das setuid-(Set-User-ID)-Bit setzen, funktioniert dies folgendermaßen:
chmod 4744 file
Das setgid-(Set Group ID)-Bit hingegen setzen Sie mit »2xxx«.
Zu erwähnen ist auch die Option -R, mit der Sie ein Verzeichnis rekursiv durchlaufen und alle Dateien, die sich darin befinden, entsprechend den neu angegebenen Rechten ändern.
chown – Eigentümer von Dateien oder Verzeichnissen ändern
Mit chown können Sie den Eigentümer von Dateien oder Verzeichnissen ändern. Als neuen Eigentümer kann man entweder den Login-Namen oder die User-ID angeben. Name oder Zahl müssen selbstverständlich in der Datei /etc/passwd vorhanden sein. Dieses Kommando kann wiederum nur vom Eigentümer selbst oder dem Superuser aufgerufen und auf Dateien bzw. Verzeichnisse angewendet werden.
chown john file1 file2
Hier wird der User »john« Eigentümer der Dateien file1 und file2. Wollen Sie auch hier ein komplettes Verzeichnis mitsamt den Unterverzeichnissen erfassen, so kann ebenfalls die Option -R verwendet werden.
Wollen Sie sowohl den Eigentümer als auch die Gruppe einer Datei ändern, nutzen Sie folgende Syntax:
chown john:user file1 file2
cmp – Dateien miteinander vergleichen
Mit dem Kommando cmp vergleichen Sie zwei Dateien Byte für Byte miteinander und erhalten die dezimale Position und Zeilennummer des ersten Bytes zurück, bei dem sich beide Dateien unterscheiden. cmp vergleicht auch Binärdateien. Sind beide Dateien identisch, erfolgt keine Ausgabe.
you@host > cmp out.txt textfile.txt
out.txt textfile.txt differieren: Byte 52, Zeile 3.
comm – zwei sortierte Textdateien miteinander vergleichen
Mit comm vergleichen Sie zwei sortierte Dateien und geben die gemeinsamen und die unterschiedlichen Zeilen jeweils in Spalten aus, indem die zweite und dritte Spalte von einem bzw. zwei Tabulatorenvorschüben angeführt werden.
comm [-123] file1 file2
Die erste Spalte enthält die Zeilen, die nur in der Datei file1 enthalten sind. Die zweite Spalte hingegen beinhaltet die Zeilen, die in der zweiten Datei file2 enthalten sind, und die dritte Spalte die Zeilen, die in beiden Dateien enthalten sind.
you@host > cat file1.txt
# wichtige Initialisierungsdatei
# noch eine Zeile
Hallo
you@host > cat file2.txt
# wichtige Initialisierungsdatei
# noch eine Zeile
Hallo
you@host > comm file1.txt file2.txt
                # wichtige Initialisierungsdatei
                # noch eine Zeile
                Hallo
you@host > echo "Neue Zeile" >> file2.txt
you@host > comm file1.txt file2.txt
                # wichtige Initialisierungsdatei
                # noch eine Zeile
                Hallo
        Neue Zeile
you@host > comm -3 file1.txt file2.txt
        Neue Zeile
In der letzten Zeile ist außerdem zu sehen, wie Sie mit dem Schalter -3 die Ausgabe der dritten Spalte ganz abschalten, um nur die Differenzen beider Dateien zu erkennen. comm arbeitet zeilenweise, weshalb hier keine Vergleiche mit binären Dateien möglich sind. Weitere Schalterstellungen und ihre Bedeutung sind:
Tabelle 14.3  Optionen für comm
Option | Bedeutung |
| --- | --- |
-23 file1 file2 | Es werden nur Zeilen ausgegeben, die in file1 vorkommen. |
-123 file1 file2 | Es wird keine Ausgabe erzeugt. |
cp – Dateien oder Verzeichnisse kopieren
Tabelle 14.4  Anwendungen von cp
Verwendung | Bedeutung |
| --- | --- |
cp file newfile | Es wird mit newfile eine Kopie von file erzeugt. |
cp -p file newfile | newfile erhält dieselben Zugriffsrechte, Eigentümer und Zeitstempel. |
cp -r dir newdir | Es wird ein komplettes Verzeichnis rekursiv (-r) kopiert. |
cp file1 file2 file3 dir | Es werden mehrere Dateien in ein Verzeichnis kopiert. |
Hinweis   cp wurde bereits separat in Abschnitt 1.7.2 beschrieben.
csplit – Zerteilen von Dateien (kontextabhängig)
Mit csplit können Sie eine Datei in mehrere Teile aufteilen. Als Trennstelle kann hierbei ein Suchmuster, also auch ein regulärer Ausdruck angegeben werden. Dabei werden aus einer Eingabedatei mehrere Ausgabedateien erzeugt, deren Inhalt vom Suchmuster abhängig gemacht werden kann. Ein Beispiel:
csplit Kapitel20.txt /Abschnitt 1/ /Abschnitt 2/ /Abschnitt 3/
Hier wird das Kapitel20.txt in vier Teile aufgeteilt: zunächst vom Anfang bis zu »Abschnitt 1«, als Nächstes von »Abschnitt 1« bis »Abschnitt 2«, dann von »Abschnitt 2« bis »Abschnitt 3« und zu guter Letzt von »Abschnitt 3« bis zum Ende der Datei. Sie können allerdings auch einzelne Zeilennummern angeben, an denen Sie eine Datei teilen wollen:
csplit -f Abschnitt Kapitel20.txt 20 40
Hier haben Sie mit der Option -f veranlasst, dass statt der Standardnamen »xx01«, »xx02« usw. Dateien mit dem angegebenen Präfix erzeugt werden, also »Abschnitt01«, »Abschnitt02« usw. Sie zerteilen die Datei Kapitel20.txt damit in drei Dateien: »Abschnitt01« (Zeile 1–20), »Abschnitt02« (Zeile 21–40) und »Abschnitt03« (Zeile 41 bis zum Ende). Sie können mit {n} am Ende auch angeben, dass ein bestimmter Ausdruck n-mal angewendet werden soll. Beispielsweise:
csplit -k /var/spool/mail/$LOGNAME '/^From /' '{100}'
Hier zerteilen Sie Ihre Mailbox in die einzelnen E-Mails und damit in die Dateien »xx01«, »xx02« ... »xx99«. Jeder Brief im mbox-Format beginnt mit »From«, weshalb dies als Trennstelle für die einzelnen Dateien dient. Weil Sie wahrscheinlich nicht genau wissen, wie viele Mails in Ihrer Mailbox liegen, können Sie durch die Angabe einer relativ hohen Zahl zusammen mit der Option -k erreichen, dass alle Mails getrennt werden und dass nach einem eventuell vorzeitigen Scheitern die bereits erzeugten Dateien nicht wieder gelöscht werden.
cut – Zeichen oder Felder aus Dateien herausschneiden
Mit cut schneiden Sie bestimmte Teile aus einer Datei heraus. Dabei liest cut von der angegebenen Datei und gibt die Teile auf dem Bildschirm aus, die Sie als gewählte Option und per Wahl des Bereichs verwendet haben. Ein Bereich ist eine durch ein Komma getrennte Liste von einzelnen Zahlen bzw. Zahlenbereichen. Diese Zahlenbereiche werden in der Form »a-z« angegeben. Wird a oder z weggelassen, so wird hierzu der Anfang bzw. das Ende einer Zeile verwendet.
Hinweis   cut wurde bereits in Abschnitt 2.3.1 ausführlich beschrieben und demonstriert.
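Ein kurzes Beispiel zur Erinnerung; die Ausgabe hängt natürlich vom jeweiligen System ab:

# Loginname (Feld 1) und Shell (Feld 7) aus /etc/passwd ausschneiden
you@host > cut -d: -f1,7 /etc/passwd
root:/bin/bash
you:/bin/bash
...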
diff – Vergleichen zweier Dateien
diff vergleicht den Inhalt von zwei Dateien. Da diff zeilenweise vergleicht, sind keine binären Dateien erlaubt. Ein Beispiel:
you@host > diff file1.txt file2.txt
2a3
> neueZeile
Hier wurden die Dateien file1.txt und file2.txt miteinander verglichen. Die Ausgabe »2a3« besagt lediglich, dass Sie in der Datei file1.txt zwischen der Zeile 2 und 3 die Zeile »neueZeile« einfügen (a = append) müssten, damit die Datei exakt mit der Datei file2.txt übereinstimmt. Noch ein Beispiel:
you@host > diff file1.txt file2.txt
2c2
< zeile2
---
> zeile2 wurde verändert
Hier bekommen Sie mit »2c2« die Meldung, dass die zweite Zeile unterschiedlich (c = change) ist. Die darauf folgende Ausgabe zeigt auch den Unterschied dieser Zeile an. Eine sich öffnende spitze Klammer (<) zeigt file1.txt und die sich schließende spitze Klammer bezieht sich auf file2.txt. Und eine dritte Möglichkeit, die Ihnen diff meldet, wäre:
you@host > diff file1.txt file2.txt
2d1
< zeile2
Hier will Ihnen diff sagen, dass die zweite Zeile von file1.txt in file2.txt fehlt (d = delete) bzw. gelöscht wurde. Daraufhin wird die entsprechende Zeile auch ausgegeben. Natürlich beschränkt sich die Verwendung von diff nicht ausschließlich auf Dateien. Mit der Option -r können Sie ganze Verzeichnisse miteinander vergleichen:
diff -r dir1 dir2
diff3 – Vergleich von drei Dateien
Die Funktion entspricht etwa der von diff, nur dass Sie hierbei drei Dateien Zeile für Zeile miteinander vergleichen können. Folgendes besagt die Ausgabe von diff3:
diff3 file1 file2 file3
Tabelle 14.5  Bedeutung der Ausgabe von diff3
Ausgabe | Bedeutung |
| --- | --- |
==== | Alle drei Dateien sind unterschiedlich. |
====1 | file1 ist unterschiedlich. |
====2 | file2 ist unterschiedlich. |
====3 | file3 ist unterschiedlich. |
dos2unix – Dateien vom DOS- in UNIX-Format umwandeln
Mit dos2unix können Sie Textdateien vom DOS- in das UNIX-Format umwandeln. Alternativ gibt es außerdem noch den Befehl mac2unix, mit dem Sie Textdateien vom MAC- in das UNIX-Format konvertieren können.
you@host > dos2unix file1.txt file2.txt
dos2unix: converting file file1.txt to UNIX format ...
dos2unix: converting file file2.txt to UNIX format ...
expand – Tabulatoren in Leerzeichen umwandeln
expand ersetzt alle Tabulatoren einer Datei durch eine Folge von Leerzeichen. Standardmäßig sind dies acht Leerzeichen, allerdings kann dieser Wert explizit mit einem Schalter verändert werden. Wollen Sie z. B., dass alle Tabulatorzeichen durch nur drei Leerzeichen ersetzt werden, erreichen Sie dies folgendermaßen:
you@host > expand -3 file
Allerdings erlaubt expand nicht das vollständige Entfernen von Tabulatorzeichen, sprich ein Schalter -0 gibt eine Fehlermeldung zurück. Hierzu können Sie alternativ z. B. das Kommando tr verwenden.
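Ein möglicher Weg mit tr, der alle Tabulatoren ersatzlos entfernt (der Dateiname ist frei gewählt):

you@host > tr -d '\t' < file > file_ohne_tabs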
file – den Inhalt von Dateien analysieren
Das Kommando file versucht, die Art oder den Typ einer von Ihnen angegebenen Datei zu ermitteln. Hierzu führt file einen Dateisystemtest, einen Kennzahlentest und einen Sprachtest durch. Je nach Erfolg wird eine entsprechende Ausgabe des Tests vorgenommen. Der Dateisystemtest wird mithilfe des Systemaufrufs stat(2) ausgeführt. Dieser Aufruf erkennt viele Arten von Dateien. Der Kennzahlentest wird anhand von festgelegten Kennzahlen (aus der Datei /etc/magic oder /usr/share/magic) durchgeführt. In dieser Datei steht beispielsweise geschrieben, welche Bytes einer Datei zu untersuchen sind und auf welches Muster man dann den Inhalt dieser Datei zurückführen kann. Am Ende erfolgt noch ein Sprachtest. Hier versucht file, eine Programmiersprache anhand von Schlüsselwörtern zu erkennen.
you@host > cat > hallo.c
#include <stdio.h>
int main(void) {
   printf("Hallo Welt\n");
   return 0;
}
(Strg)+(D)
you@host > file hallo.c
hallo.c: ASCII C program text
you@host > gcc -o hallo hallo.c
you@host > ./hallo
Hallo Welt
you@host > file hallo
hallo: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped
you@host > file file1.txt
file1.txt: ASCII text
you@host > mkfifo abc
you@host > file abc
abc: fifo (named pipe)
...
find – Suchen nach Dateien
Zum Suchen nach Dateien wird häufig auf das Kommando find zurückgegriffen. find durchsucht eine oder mehrere Verzeichnisebenen nach Dateien mit bestimmten vorgegebenen Eigenschaften. Die Syntax zu find:
find [Verzeichnis] [-Option ...] [-Test ...] [-Aktion ...]
Die Optionen, Tests und Aktionen können Sie mit Operatoren zusammenfassen. Dabei wertet find jede Datei in den Verzeichnissen hinsichtlich der Optionen, Tests und Aktionen von links nach rechts aus, bis ein Wert unwahr ist oder die Kommandozeilenargumente zu Ende sind. Wenn kein Verzeichnis angegeben wird, wird das aktuelle Verzeichnis verwendet; allerdings gilt dies nur bei GNU-find. Von daher sollte man aus Kompatibilitätsgründen möglichst das Verzeichnis angeben. Wenn keine Aktion angegeben ist, wird meistens -print (abhängig von einer eventuell angegebenen Option) für die Ausgabe auf dem Bildschirm verwendet. Hierzu einige Beispiele.
Alle Verzeichnisse und Unterverzeichnisse ab dem Heimverzeichnis ausgeben:
find $HOME -print
Gibt alle Dateien mit dem Namen »kapitel« aus dem Verzeichnis (und dessen Unterverzeichnisse) /dokus aus:
find /dokus -name kapitel -print
Gibt alle Dateien aus dem Verzeichnis (und dessen Unterverzeichnisse) dokus mit dem Namen »kap...«, bei denen »you« der Eigentümer ist, aus:
find /dokus /usr -name 'kap*' -user you -print
Damit durchsuchen Sie ab dem Wurzelverzeichnis nach einem Verzeichnis (âtype d = directory) mit dem Namen »dok...« und geben dies auf dem Bildschirm aus:
find / -type d -name 'dok*' -print
Sucht leere Dateien (size = 0) und löscht diese nach einer Rückfrage (âok):
find / -size 0 -ok rm {} \;
Gibt alle Dateien ab dem Wurzelverzeichnis aus, die in den letzten sieben Tagen verändert wurden:
find / -mtime â7 -print
fold â einfaches Formatieren von DateienÂ
Mit fold können Sie Textdateien ab einer bestimmten Zeilenlänge umbrechen. Standardmäßig sind hierbei 80 Zeichen pro Zeile eingestellt. Da fold die Bildschirmspalten und nicht die Zeichen zählt, werden auch Tabulatorzeichen korrekt behandelt. Wollen Sie etwa eine Textdatei nach 50 Zeichen umbrechen, gehen Sie folgendermaßen vor:
you@host > fold â50 Kap003.txt ... Sicherlich erscheint Ihnen das Ganze nicht sonderl ich elegant oder sinnvoll, aber bspw. in Schleifen eingesetzt, können Sie hierbei hervorragend alle A rgumente der Kommandozeile zur Verarbeitung von Op tionen heranziehen. Als Beispiel ein kurzer theoreti scher Code-Ausschnitt, wie so etwas in der Praxis realisiert werden kann.
Allerdings kann man an der Ausgabe erkennen, dass einfach die Wörter abgeschnitten und in der nächsten Zeile fortgeführt werden. Wollen Sie dies unterbinden, können Sie die Option âs verwenden. Damit findet der Zeilenumbruch beim letzten Leerzeichen der Zeile statt, wenn in der Zeile ein Leerzeichen vorhanden ist.
you@host > fold -s â50 Kap003.txt ... Sicherlich erscheint Ihnen das Ganze nicht sonderlich elegant oder sinnvoll, aber bspw. in Schleifen eingesetzt, können Sie hierbei hervorragend alle Argumente der Kommandozeile zur Verarbeitung von Optionen heranziehen. Als Beispiel ein kurzer theoretischer Code-Ausschnitt, wie so etwas in der Praxis realisiert werden kann.
Ein recht typischer Anwendungsfall ist es, Text für eine E-Mail zu formatieren:
you@host > fold -s â72 text.txt | mail -s "Betreff" <EMAIL>
head â Anfang einer Datei ausgebenÂ
Mit der Funktion head geben Sie immer die ersten Zeilen einer Datei auf dem Bildschirm aus. Standardmäßig werden dabei die ersten zehn Zeilen ausgegeben. Wollen Sie selbst bestimmen, wie viele Zeilen vom Anfang der Datei ausgegeben werden sollen, können Sie dies explizit mit ân angeben:
you@host > head â5 file
Hier werden die ersten fünf Zeilen von file auf dem Bildschirm ausgegeben.
less â Datei(en) seitenweise ausgebenÂ
Mit less geben Sie eine Datei seitenweise auf dem Bildschirm aus. Der Vorteil von less gegenüber more ist, dass Sie mit less auch zurückblättern können. Da less von der Standardeingabe liest, ist so auch eine Umleitung eines anderen Kommandos mit einer Pipe möglich. Mit der (Leertaste) blättern Sie eine Seite weiter und mit (B) können Sie jeweils eine Seite zurückblättern. Die meisten less-Versionen bieten außerdem das Scrollen nach unten bzw. oben mit den Pfeiltasten an. Mit (Q) wird less beendet. less bietet außerdem eine Unmenge von Optionen und weiterer Features an, über die Sie sich durch Drücken von (H) informieren können.
ln â Links auf eine Datei erzeugenÂ
Wenn eine Datei erzeugt wird, werden im Verzeichnis der Name, ein Verweis auf eine Inode, die Zugriffsrechte, der Dateityp und gegebenenfalls die Anzahl der belegten Blöcke eingetragen. Mit ln wiederum wird ein neuer Eintrag im Verzeichnis abgelegt, der auf die Inode einer existierenden Datei zeigt. Man spricht dabei von einem Hardlink. Er wird standardmäßig ohne weitere Angaben angelegt. Es ist allerdings nicht möglich, diese Hardlinks über Dateisystemgrenzen hinweg anzulegen. Hierzu müssen Sie einen symbolischen Link mit der Option âs erzeugen.
ln -s filea fileb
Damit haben Sie einen symbolischen Link auf die bestehende Datei filea mit dem Namen fileb angelegt.
Wollen Sie hingegen einen Hardlink auf die bestehende Datei filea mit dem Namen fileb anlegen, so gehen Sie wie folgt vor:
ln filea fileb
Smalltalk   Mit den Hardlinks kann man unter BSD und Jails nette Sachen machen. Unter BSD gibt es ja das »imutable flag« für Dateien, das das Schreiben auch für root verbietet. Nur root kann das Flag verändern. Man kann aber mit »make world« ein Betriebssystem in einem Verzeichnis für eine Jail bauen. Das Verzeichnis kann man dann rekursiv mit dem »imutable flag« versehen und per Hardlink in die Jail verlinken. Das »imutable flag« kann nur root vom Hostsystem aus verändern, Nicht-root aus der Jail. Somit kann man das root-Passwort im Internet bekannt geben (wurde auch schon oft gemacht) und es hat bisher noch nie jemand geschafft, solch eine Jail zu knacken.
ls â Verzeichnisinhalt auflistenÂ
Mit ls wird der Inhalt eines Verzeichnisses auf dem Dateisystem angezeigt. Da ls bereits in Abschnitt 1.7.2 behandelt wurde, wird hier nicht mehr näher darauf eingegangen.
more â Datei(en) seitenweise ausgebenÂ
more wird genauso eingesetzt wie less, und zwar zum seitenweisen Lesen von Dateien. Allerdings bietet less gegenüber more erheblich mehr Features und Funktionalitäten an.
mv â Datei(en) und Verzeichnisse verschieben oder umbenennenÂ
Mit mv können Sie eine oder mehrere Dateien bzw. Verzeichnisse verschieben oder umbenennen.
Tabelle 14.6 Â Anwendungen von mv
mv file filenew
Eine Datei umbenennen
mv file dir
Eine Datei in ein Verzeichnis verschieben
mv dir dirnew
Ein Verzeichnis in ein anderes Verzeichnis verschieben
Hinweis   mv wurde bereits in Abschnitt 1.7.2 behandelt.
nl â Datei mit Zeilennummer ausgebenÂ
Mit nl geben Sie die Zeilen einer Datei mit deren Nummer auf dem Bildschirm aus. Dabei ist nl nicht nur ein »dummer« Zeilenzähler, sondern kann die Zeilen einer Seite auch in einen Header, Body und einen Footer unterteilen und in unterschiedlichen Stilen nummerieren, zum Beispiel:
you@host > ls | nl -w3 -s') ' 1) abc 2) bin 3) cxoffice 4) Desktop 5) Documents 6) file1.txt ...
Wenn Sie mehrere Dateien verwenden, beginnt die Nummerierung allerdings nicht mehr neu, dann werden mehrere Dateien wie eine behandelt. Die Zeilennummer wird nicht zurückgesetzt. Ein weiteres Beispiel:
you@host > nl hallo.c -s' : ' > hallo_line you@host > cat hallo_line 1 : #include <stdio.h> 2 : int main(void) { 3 : printf("Hallo Welt\n"); 4 : return 0; 5 : }
Mit der Option âs (optional) geben Sie das Zeichen an, das zwischen der Zeilennummer und der eigentlichen Zeile stehen soll.
od â Datei(en) hexadezimal bzw. oktal ausgebenÂ
od liest von der Standardeingabe eine Datei ein und gibt diese â Byte für Byte â formatiert und kodiert auf dem Bildschirm aus. Standardmäßig wird dabei die siebenstellige Oktalzahl in je acht Spalten zu zwei Bytes verwendet:
you@host > od file1.txt 0000000 064546 062554 035061 062572 066151 030545 063012 066151 0000020 030545 075072 064545 062554 005062 064546 062554 035062 0000040 062572 066151 031545 000012 0000047
Jede Zeile enthält in der ersten Spalte die Positionsnummer in Bytes vom Dateianfang an. Mit der Option âh erfolgt die Ausgabe in hexadezimaler und mit âc in ASCII-Form.
paste â Dateien spaltenweise verknüpfenÂ
Mit paste führen Sie Zeilen von mehreren Dateien zusammen. Das Kommando wurde bereits in Abschnitt 2.3.1 behandelt, weshalb Sie gegebenenfalls hierhin zurückblättern sollten.
pcat â Ausgabe von pack-komprimierten DateienÂ
Mit pcat kann man den Inhalt von pack-komprimierten Dateien ausgeben, ohne dass man die komprimierte Datei dekomprimieren muss. Ansonsten funktioniert pcat wie cat.
rm â Dateien und Verzeichnisse löschenÂ
Mit dem Kommando rm können Sie Dateien und Verzeichnisse löschen.
Tabelle 14.7 Â Anwendungen von rm
rm datei
Löscht eine Datei
rm dir
Löscht ein leeres Verzeichnis
rm âr dir
Löscht ein Verzeichnis rekursiv
rm ârf dir
Erzwingt rekursives Löschen, ohne eine Warnung auszugeben
Hinweis   Das Kommando rm wurde bereits in den Abschnitten 1.7.2 und 1.7.3 (rmdir) behandelt.
Tabelle 14.8 Â Optionen für das Kommando sort
ân
Sortiert eine Datei numerisch
âf
Unterscheidet nicht zwischen Klein- und Großbuchstaben
âr
Sortiert nach Alphabet in umgekehrter Reihenfolge
ân âr
Sortiert numerisch in umgekehrter Reihenfolge
âc
Überprüft, ob die Dateien bereits sortiert sind. Wenn nicht, wird mit einer Fehlermeldung und dem Rückgabewert 1 abgebrochen.
âu
Gibt keine doppelt vorkommenden Zeilen aus
Alternativ gibt es hierzu noch das Kommando tsort, welches Dateien topologisch sortiert.
split â Dateien in mehrere Teile zerlegenÂ
Mit split teilen Sie eine Datei in mehrere Teile auf. Ohne Angabe einer Option wird eine Datei in je 1000 Zeilen aufgeteilt. Die Ausgabe erfolgt in Dateien mit »x...« oder einem entsprechenden Präfix, wenn eines angegeben wurde:
you@host > split â50 kommandos.txt you@host > ls x* xaa xab xac xad xae
Die Datei können Sie folgendermaßen wieder zusammensetzen:
for file in `ls x* | sort`; do cat $file >> new.txt; done
Hier wurde z. B. die Textdatei kommandos.txt in je 50-zeilige Häppchen aufgeteilt. Wollen Sie den Namen der neu erzeugten Datei verändern, gehen Sie wie folgt vor:
you@host > split â50 kommandos.txt kommandos you@host > ls komm* kommandosaa kommandosab kommandosac kommandosad kommandosae kommandos.txt
Das Kommando split wird häufig eingesetzt, um große Dateien zu splitten, die nicht auf ein einzelnes Speichermedium passen.
tac â Dateien rückwärts ausgebenÂ
Vereinfacht ausgedrückt ist tac wie cat (daher auch der rückwärts geschriebene Kommandoname), nur dass tac die einzelnen Zeilen rückwärts ausgibt. Es wird somit zuerst die letzte Zeile ausgegeben, dann die vorletzte usw. bis zur ersten Zeile.
you@host > cat file1.txt file1:zeile1 file1:zeile2 file2:zeile3 you@host > tac file1.txt file2:zeile3 file1:zeile2 file1:zeile1
tail â Ende einer Datei ausgebenÂ
tail gibt die letzten Zeilen (standardmäßig, ohne spezielle Angaben die letzten zehn) einer Datei aus.
you@host > tail â5 kommandos.txt write â Nachrichten an andere Benutzer verschicken zcat â Ausgabe von gunzip-komprimierten Dateien zip/unzip â (De-) Komprimieren von Dateien zless â gunzip-komprimierte Dateien seitenweise ausgeben zmore â gunzip-komprimierte Dateien seitenweise ausgeben
Hier gibt tail die letzten fünf Zeilen der Datei kommandos.txt aus. Wollen Sie eine Datei ab einer bestimmten Zeile ausgeben lassen, gehen Sie wie folgt vor:
you@host > tail +100 kommandos.txt
Hier werden alle Zeilen ab Zeile 100 ausgegeben. Wollen Sie tail wie tac verwenden, können Sie die Option âr verwenden:
you@host > tail -r kommandos.txt
Hiermit wird die komplette Datei zeilenweise rückwärts, von der letzten zur ersten Zeile ausgegeben. Häufig verwendet wird auch die Option âf (follow), die immer wieder das Dateiende ausgibt. Dadurch kann man eine Datei beim Wachsen beobachten, da jede neu hinzugekommene Zeile angezeigt wird. Natürlich lässt sich diese Option nur auf eine Datei gleichzeitig anwenden.
tee â Ausgabe duplizierenÂ
Mit tee lesen Sie von der Standardeingabe und verzweigen die Ausgabe auf die Standardausgabe und Datei. Da tee ein eigener Abschnitt (1.10.5) im Buch gewidmet ist, sei hier auf diesen verwiesen.
touch â Anlegen von Dateien oder Zeitstempel verändernÂ
Mit touch verändern Sie die Zugriffs- und Änderungszeit einer Datei auf die aktuelle Zeit. Existiert eine solche Datei nicht, wird diese angelegt. touch wurde bereits in Abschnitt 1.7.2 behandelt, aber dennoch sollen hier noch einige Optionen zu touch und ihre jeweilige Bedeutung erwähnt werden:
Tabelle 14.9 Â Optionen für das Kommando touch
âa
Damit ändern Sie nur die Zugriffszeit.
âc
Falls eine Datei nicht existiert, wird diese trotzdem nicht erzeugt.
âm
Ändert nur die Änderungszeit
tr â Zeichen ersetzen bzw. Umformen von DateienÂ
Mit tr können Zeichen durch andere Zeichen ersetzt werden. Dies gilt auch für nicht druckbare Zeichen.
tr str1 str2 file
Wird in der Datei file ein Zeichen aus »str1« gefunden, wird es durch das entsprechende Zeichen in »str2« ersetzt.
Hinweis   tr wurde bereits in Abschnitt 2.3.1 behandelt.
Hinweis   type selbst ist ein Builtin und daher nicht in jeder Shell verfügbar.
umask â Dateierstellungsmaske ändern bzw. ausgebenÂ
Mit der Shell-Funktion umask setzen Sie eine Maske, mit der die Zugriffsrechte auf eine Datei bzw. auf Verzeichnisse direkt nach der Erzeugung durch einen von der Shell kontrollierten Prozess bestimmt wird. Die in der Maske gesetzten Bits werden bei den Zugriffsrechten für die neue Datei bzw. das Verzeichnis gelöscht (man spricht auch von: Sie werden maskiert). Mehr zu diesem Kommando entnehmen Sie bitte dem Abschnitt 9.4, wo es näher behandelt wurde.
uniq â doppelte Zeilen nur einmal ausgebenÂ
Mit uniq können Sie doppelt vorhandene Zeilen löschen. Voraussetzung ist allerdings, dass die Datei sortiert ist und die doppelten Zeilen direkt hintereinander folgen. Beispielsweise:
you@host > cat file1.txt file1:zeile1 file1:zeile2 file1:zeile2 file2:zeile3 you@host > uniq file1.txt file1:zeile1 file1:zeile2 file2:zeile3
unix2dos â Dateien vom UNIX- in DOS-Format umwandelnÂ
Das Gegenstück von dos2unix. Damit wandeln Sie eine Textdatei vom Unix-Format wieder zurück in das DOS-Format um.
unix2dos fileunix filedos
wc â Zeilen, Wörter und Zeichen einer Datei zählenÂ
Mit wc können Sie die Zeichen, Wörter und/oder Zeilen einer Datei zählen. Ohne spezielle Optionen wird eine Zeile mit den folgenden Zahlen ausgegeben:
you@host > wc file1.txt 4 4 52 file1.txt
Die erste Spalte enthält die Anzahl der Zeilen, gefolgt von der Anzahl der Worte und am Ende die Anzahl der Zeichen. Einzeln können Sie dies mit der Option âl (lines = Zeilen), âw (words = Wörter) und âc (characters = Zeichen) ermitteln.
Hinweis   wc wurde bereits in Abschnitt 1.7.2 behandelt.
whereis â Suche nach DateienÂ
Mit dem Kommando whereis wird vorwiegend in wichtigen Pfaden (meistens allen Einträge in PATH) nach Binärdateien oder man-Dateien gesucht. whereis ist nicht so flexibel wie find, aber dafür erheblich schneller.
you@host > whereis ls /bin/ls /usr/share/man/man1/ls.1.gz /usr/share/man/man1p/ls.1p.gz you@host > whereis -b ls /bin/ls you@host > whereis -m ls /usr/share/man/man1/ls.1.gz /usr/share/man/man1p/ls.1p.gz
Zuerst wurde der Pfad zum Programm ls ermittelt. Hierbei werden allerdings auch gleich die Pfade zu den man-Seiten mit ausgegeben. Wollen Sie nur den Pfad zum Binärprogramm erhalten, müssen Sie die Option âb verwenden. Wünschen Sie nur den Pfad zu den man-Seiten, so verwenden Sie die Option âm, wie im Beispiel gesehen.
zcat, zless, zmore â (seitenweise) Ausgabe von gunzip-komprimierten DateienÂ
Alle drei Funktionen haben dieselbe Funktionsweise wie Ihre Gegenstücke ohne »z«, nur dass hiermit gzip- bzw. gunzip-komprimierte Dateien gelesen und ausgegeben werden können, ohne dass diese dekomprimiert werden müssen. Auf manchen Systemen gibt es mit zgrep auch noch eine entsprechende grep-Version.
Ihre Meinung
Wie hat Ihnen das Openbook gefallen? Wir freuen uns immer über Ihre Rückmeldung. Schreiben Sie uns gerne Ihr Feedback als E-Mail an <EMAIL>.
## 14.2 File-Oriented Commands

### bzcat – output bzip2-compressed files

With bzcat you can print the contents of bzip2-compressed files without having to decompress them first. This is also one reason why a file browser can show and even let you read the contents of such a file although you have not decompressed it yet. Apart from that, bzcat works like cat.

### cat – output file(s) one after the other

cat has already been used and described several times in this book. The command is normally used to print files. If you do not pass cat any files to read as arguments, it reads from standard input until (Ctrl)+(D) (EOF) is pressed.

| Usage | Meaning |
| --- | --- |
| cat file | Prints the contents of file |
| cat file \| command | Passes the contents of file via a pipe to the standard input of command |
| cat file1 file2 > file_all | Concatenates files |
| cat > file | Writes all lines typed at the keyboard into the file file until (Ctrl)+(D) is pressed |

### chgrp – change the group of files or directories

With chgrp you change the group ownership of a file or directory. The command is therefore reserved for the owner of the file or directory and for the superuser. As the owner you can, moreover, only assign files or directories to groups you belong to yourself. If you want to change the group ownership of all files in a directory including all of its subdirectories, the -R option (recursive) is the tool of choice.
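A brief, hedged example (the group name and paths are made up and assume you are a member of the group projekt):

> you@host > chgrp projekt report.txt

> you@host > chgrp -R projekt /home/you/projektdaten

The first call changes the group of a single file, the second changes an entire directory tree recursively.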
### cksum/md5sum/sum – compute a checksum for a file

These commands compute the CRC (cyclic redundancy check) checksum and the number of bytes (the byte count applies only to cksum) for a file. If no file is given, cksum reads from standard input until (Ctrl)+(D) is pressed and computes the checksum from that input.

These commands are frequently used to determine whether two sets of data are identical. For example, you can check whether a file you downloaded from the net was transferred correctly, provided of course that you know the checksum of the source; you will often find such checksums offered with ISO images of distributions. Another use case is checking for tampering or virus infection, i.e. finding out whether somebody has manipulated a file, for example:
> you@host > cksum data.conf 2935371588 51 data.conf you@host > cksum data.conf 2935371588 51 data.conf you@host > echo Hallo >> data.conf you@host > cksum data.conf 966396470 57 data.conf
Here cksum computed the same value twice for the configuration file data.conf (just for demonstration). Then some text was appended to the end of the file and cksum was run again; now you get a different checksum. For this principle to work you of course also need a file or database that stores the checksums for the corresponding files. You can also pass two files at once to compare their checksums:
> you@host > cksum data.conf data.conf~bak 966396470 57 data.conf 2131264154 10240 data.conf~bak
cksum is preferable to sum because it is the newer implementation and conforms to the POSIX.2 standard. Note, however, that the three checksum tools (sum, cksum and md5sum) are mutually incompatible and compute different checksums:
> you@host > sum data.conf 20121 1 you@host > cksum data.conf 966396470 57 data.conf you@host > md5sum data.conf 5a04a9d083bc0b0982002a2c8894e406 data.conf
Another popular use of md5sum (or md5):
> cd /bin; md5 `ls -R /bin` | md5
If somebody has since tampered with the /bin directory, you will notice it fairly quickly. Ideally you run this as a cron job and, if required, have the result mailed to you daily.
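A small, hedged sketch (assuming the GNU coreutils md5sum, which offers a -c check mode; the file list is arbitrary): store the checksums once, then verify them later:

> you@host > md5sum data.conf file1.txt > checksums.md5

> you@host > md5sum -c checksums.md5

> data.conf: OK

> file1.txt: OK

If a file was modified in the meantime, md5sum -c reports FAILED for it instead of OK.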
### chmod – change access permissions of files or directories

With chmod you set or change the access permissions of files or directories. Using chmod is, of course, only permitted for the file owner and the superuser. Every system administrator should be familiar with chmod, because it is a very frequently used command. Fortunately, chmod is quite flexible. You can use a numeric mode like this:

> chmod 755 file

or

> chmod 0755 file

Easier to use is the symbolic notation, for example:

> chmod u+x file

Here the user (u, the owner) of the file file is granted execute permission (+x).

> chmod g-x file

This removes execute permission (-x) from the group (g). If, on the other hand, you want to grant execute permission to everybody (a), you do it like this:

> chmod a+x file

With chmod you can also set the special bits (SUID=4000, SGID=2000, sticky=1000). If, for example, you want to set the setuid (Set User ID) bit on a file, it works like this:

> chmod 4744 file

The setgid (Set Group ID) bit, on the other hand, is set with "2xxx".

Also worth mentioning is the -R option, which walks a directory recursively and applies the newly specified permissions to all files it contains.

### chown – change the owner of files or directories

With chown you can change the owner of files or directories. The new owner can be given either as a login name or as a user ID; the name or number must of course exist in /etc/passwd. Again, this command may only be used by the owner or the superuser and applied to files or directories.

> chown john file1 file2

Here the user "john" becomes the owner of the files file1 and file2. If you want to cover a complete directory including its subdirectories, the -R option can be used here as well.

If you want to change both the owner and the group of a file, use the following syntax:

> chown john:user file1 file2

### cmp – compare files with each other

The cmp command compares two files byte by byte and reports the decimal offset and line number of the first byte at which the two files differ. cmp also works on binary files. If both files are identical, there is no output.
> you@host > cmp out.txt textfile.txt out.txt textfile.txt differieren: Byte 52, Zeile 3.
### comm – compare two sorted text files

With comm you compare two sorted files and print the lines unique to each file and the common lines in separate columns, where the second and third columns are indented by one and two tab stops respectively.

> comm [-123] file1 file2

The first column contains the lines that appear only in file1. The second column contains the lines that appear only in file2, and the third column contains the lines that appear in both files.

> you@host > cat file1.txt # wichtige Initialisierungsdatei # noch eine Zeile Hallo you@host > cat file2.txt # wichtige Initialisierungsdatei # noch eine Zeile Hallo you@host > comm file1.txt file2.txt # wichtige Initialisierungsdatei # noch eine Zeile Hallo you@host > echo "Neue Zeile" >> file2.txt you@host > comm file1.txt file2.txt # wichtige Initialisierungsdatei # noch eine Zeile Hallo Neue Zeile you@host > comm -3 file1.txt file2.txt Neue Zeile

The last call also shows how the -3 switch suppresses the third column entirely, so that only the differences between the two files remain visible. comm works line by line, so binary files cannot be compared. Further switch combinations and their meaning:

| Usage | Meaning |
| --- | --- |
| -23 file1 file2 | Prints only the lines that appear only in file1. |
| -123 file1 file2 | Produces no output at all. |
### cp – copy files

| Usage | Meaning |
| --- | --- |
| cp file newfile | Creates newfile as a copy of file. |
| cp -p file newfile | newfile gets the same permissions, owner and timestamps. |
| cp -r dir newdir | Copies a complete directory recursively (-r). |
| cp file1 file2 file3 dir | Copies several files into a directory. |

### csplit – split files (context-dependent)

With csplit you can split a file into several parts. The split points can be given as search patterns, including regular expressions. From one input file several output files are created, whose contents depend on the patterns. An example:

> csplit Kapitel20.txt /Abschnitt 1/ /Abschnitt 2/ /Abschnitt 3/

Here Kapitel20.txt is split into four parts: from the beginning up to "Abschnitt 1", then from "Abschnitt 1" to "Abschnitt 2", then from "Abschnitt 2" to "Abschnitt 3", and finally from "Abschnitt 3" to the end of the file. You can also give individual line numbers at which to split a file:

> csplit -f Abschnitt Kapitel20.txt 20 40

With the -f option you arrange that, instead of file names such as "xx01", "xx02" and so on, files named with the given prefix are created, i.e. "Abschnitt01", "Abschnitt02" etc. Here Kapitel20.txt is split into three files: "Abschnitt01" (lines 1 to 20), "Abschnitt02" (lines 21 to 40) and "Abschnitt03" (line 41 to the end). With {n} at the end you can also request that a pattern be applied n times. For example:

> csplit -k /var/spool/mail/$LOGNAME /^From / {100}

This splits the individual e-mails in your mailbox into the separate files "xx01", "xx02" ... "xx99". Every message in an mbox-format mailbox starts with "From", which is why it serves as the separator here. Since you probably do not know exactly how many mails are in your mailbox, specifying a relatively high count together with the -k option ensures that all mails are split and that the files created so far are not deleted again if csplit stops early.

### cut – cut characters or fields out of files

With cut you cut specific parts out of a file. cut reads from the given file and prints those parts that you selected with an option and a range. A range is a comma-separated list of single numbers or number ranges. Number ranges are written in the form "a-z"; if a or z is omitted, the beginning or the end of the line is used instead.
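A short, hedged illustration (standard cut options; /etc/passwd simply serves as a handy colon-separated test file, and kommandos.txt is reused from the examples below): -d sets the field delimiter, -f selects fields and -c selects character positions:

> you@host > cut -d: -f1,7 /etc/passwd

> you@host > cut -c1-10 kommandos.txt

The first call prints the login name and login shell of every account, the second prints the first ten characters of every line.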
### diff – compare two files

diff compares the contents of two files. Since diff compares line by line, binary files are not allowed. An example:
> you@host > diff file1.txt file2.txt 2a3 > neueZeile
Here the files file1.txt and file2.txt were compared. The output "2a3" simply says that you would have to append (a = append) the line "neueZeile" after line 2 of file1.txt for it to match file2.txt exactly. Another example:
> you@host > diff file1.txt file2.txt 2c2 < zeile2 --- > zeile2 wurde verändert
Here the message "2c2" tells you that the second line differs (c = change); the output that follows shows the difference in this line. An opening angle bracket (<) refers to file1.txt, the closing angle bracket (>) refers to file2.txt. A third possibility that diff may report is:
> you@host > diff file1.txt file2.txt 2d1 < zeile2
Here diff tells you that the second line is missing from file2.txt (d = delete), i.e. it was removed; the line in question is printed as well. Of course, diff is not limited to single files. With the -r option you can compare whole directories:
> diff -r dir1 dir2
### diff3 – compare three files

This works much like diff, except that three files are compared line by line. The output of diff3 means the following:

> diff3 file1 file2 file3

| Output | Meaning |
| --- | --- |
| ==== | All three files differ. |
| ====1 | file1 differs. |
| ====2 | file2 differs. |
| ====3 | file3 differs. |

### dos2unix – convert files from DOS to UNIX format

With dos2unix you can convert text files from DOS format to UNIX format. There is also the mac2unix command, which converts text files from MAC format to UNIX format.
> you@host > dos2unix file1.txt file2.txt dos2unix: converting file file1.txt to UNIX format ... dos2unix: converting file file2.txt to UNIX format ...
### expand – convert tabs to spaces

expand replaces every tab in a file with a sequence of spaces. By default this is eight spaces, but the value can be set explicitly with a switch. If you want all tab characters to be replaced with only three spaces, for example, you do it like this:

> you@host > expand -3 file

However, expand does not allow tabs to be removed completely: a switch of -0 produces an error message. For that you can use the tr command instead, for example.

### file – analyze the contents of files

The file command tries to determine the kind or type of the file you specify. To do so, file performs a file system test, a magic number test and a language test, and prints the result of whichever test succeeds. The file system test is carried out with the stat(2) system call, which recognizes many kinds of files. The magic number test is based on fixed signatures (in the file /etc/magic or /usr/share/magic); this file describes, for example, which bytes of a file have to be examined and which pattern identifies which kind of content. Finally a language test is performed, in which file tries to recognize a programming language by its keywords.
> you@host > cat > hallo.c #include <stdio.h> int main(void) { printf("Hallo Welt\n"); return 0; } (Strg)+(D) you@host > file hallo.c hallo.c: ASCII C program text you@host > gcc -o hallo hallo.c you@host > ./hallo Hallo Welt you@host > file hallo hallo: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped you@host > file file1.txt file1.txt: ASCII text you@host > mkfifo abc you@host > file abc abc: fifo (named pipe) ...
### find – search for files

The find command is the usual tool for searching for files. find searches one or more directory levels for files with certain given properties. The syntax of find:

> find [directory] [-option ...] [-test ...] [-action ...]

Options, tests and actions can be combined with operators. find evaluates every file in the directories against the options, tests and actions from left to right until a value is false or the command-line arguments are exhausted. If no directory is given, the current directory is used; this, however, applies only to GNU find, so for compatibility reasons you should always specify the directory. If no action is given, -print (depending on any option specified) is usually assumed, which prints to the screen. Some examples follow.

Print all directories and subdirectories starting at the home directory:

> find $HOME -print

Print all files named "kapitel" from the directory /dokus (and its subdirectories):

> find /dokus -name kapitel -print

Print all files from the directories /dokus and /usr (and their subdirectories) whose names start with "kap" and whose owner is "you":

> find /dokus /usr -name 'kap*' -user you -print

Search from the root directory for directories (-type d = directory) whose names start with "dok" and print them:

> find / -type d -name 'dok*' -print

Search for empty files (size = 0) and delete them after confirmation (-ok):

> find / -size 0 -ok rm {} \;

Print all files below the root directory that were modified within the last seven days:

> find / -mtime -7 -print
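A further, hedged sketch (the path and file pattern are made up; -exec is a standard find action): instead of asking with -ok, -exec runs the command directly, for example to remove log files older than 30 days:

> you@host > find /var/tmp -name '*.log' -mtime +30 -exec rm {} \;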
### fold – simple formatting of files

With fold you can wrap text files at a given line length; the default is 80 characters per line. Because fold counts screen columns rather than characters, tab characters are handled correctly as well. To wrap a text file after 50 characters, for example, you proceed as follows:

> you@host > fold -50 Kap003.txt ... Sicherlich erscheint Ihnen das Ganze nicht sonderl ich elegant oder sinnvoll, aber bspw. in Schleifen eingesetzt, können Sie hierbei hervorragend alle A rgumente der Kommandozeile zur Verarbeitung von Op tionen heranziehen. Als Beispiel ein kurzer theoreti scher Code-Ausschnitt, wie so etwas in der Praxis realisiert werden kann.

As the output shows, words are simply cut off and continued on the next line. To prevent this you can use the -s option: the line is then broken at the last space of the line, provided the line contains a space.

> you@host > fold -s -50 Kap003.txt ... Sicherlich erscheint Ihnen das Ganze nicht sonderlich elegant oder sinnvoll, aber bspw. in Schleifen eingesetzt, können Sie hierbei hervorragend alle Argumente der Kommandozeile zur Verarbeitung von Optionen heranziehen. Als Beispiel ein kurzer theoretischer Code-Ausschnitt, wie so etwas in der Praxis realisiert werden kann.

A fairly typical use case is formatting text for an e-mail:

> you@host > fold -s -72 text.txt | mail -s "Betreff" <EMAIL>
### head – print the beginning of a file

The head command prints the first lines of a file to the screen, by default the first ten. If you want to decide yourself how many lines from the beginning of the file are printed, you can state the number explicitly:

> you@host > head -5 file

This prints the first five lines of file to the screen.

### less – display file(s) page by page

With less you display a file page by page on the screen. The advantage of less over more is that less also lets you scroll backwards. Since less reads from standard input, the output of another command can also be piped into it. With the space bar you move one page forward, with (B) one page back. Most versions of less additionally support scrolling up and down with the arrow keys. (Q) quits less. less offers a wealth of further options and features, which you can look up by pressing (H).
### ln – create links to a file

When a file is created, the directory stores its name, a reference to an inode, the access permissions, the file type and, if applicable, the number of allocated blocks. ln, in turn, places a new entry in the directory that points to the inode of an existing file. This is called a hard link, and it is what ln creates by default, without further options. It is not possible, however, to create such hard links across file system boundaries; for that you have to create a symbolic link with the -s option.
> ln -s filea fileb
This creates a symbolic link named fileb to the existing file filea.

If, on the other hand, you want to create a hard link named fileb to the existing file filea, you proceed as follows:

> ln filea fileb

Smalltalk: You can do nice things with hard links under BSD and jails. BSD has the "immutable flag" for files, which forbids writing even for root; only root can change the flag. With "make world" you can build an operating system in a directory for a jail, mark that directory recursively with the immutable flag and link it into the jail via hard links. The immutable flag can then only be changed by root on the host system, not by root inside the jail. This way the root password can be published on the Internet (which has been done quite often), and so far nobody has managed to crack such a jail.
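A brief, hedged sketch of the difference (illustrative file names; ls -i prints inode numbers):

> you@host > ln filea hardlink

> you@host > ln -s filea symlink

> you@host > ls -li filea hardlink symlink

In the ls -li output, filea and hardlink share the same inode number and show a link count of 2, while symlink has its own inode and merely points to filea.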
### ls – list directory contents

ls displays the contents of a directory in the file system. Since ls was already covered in section 1.7.2, it is not discussed further here.

### more – display file(s) page by page

more is used in the same way as less, namely for reading files page by page. However, less offers considerably more features and functionality than more.
### mv – move or rename file(s) and directories

With mv you can move or rename one or more files or directories.

| Usage | Meaning |
| --- | --- |
| mv file filenew | Rename a file |
| mv file dir | Move a file into a directory |
| mv dir dirnew | Move a directory into another directory |

Note: mv was already covered in section 1.7.2.

### nl – print a file with line numbers

With nl you print the lines of a file together with their line numbers. nl is not just a "dumb" line counter: it can also divide the lines of a page into a header, body and footer and number them in different styles, for example:
> you@host > ls | nl -w3 -s') ' 1) abc 2) bin 3) cxoffice 4) Desktop 5) Documents 6) file1.txt ...
If you pass several files, the numbering does not start over for each file; the files are treated as one and the line number is not reset. Another example:
> you@host > nl hallo.c -s' : ' > hallo_line you@host > cat hallo_line 1 : #include <stdio.h> 2 : int main(void) { 3 : printf("Hallo Welt\n"); 4 : return 0; 5 : }
With the (optional) -s option you specify the string that separates the line number from the actual line.

### od – dump file(s) in hexadecimal or octal form

od reads a file from standard input and prints it, byte by byte, formatted and encoded, to the screen. By default a seven-digit octal offset is used, followed by eight columns of two bytes each:
> you@host > od file1.txt 0000000 064546 062554 035061 062572 066151 030545 063012 066151 0000020 030545 075072 064545 062554 005062 064546 062554 035062 0000040 062572 066151 031545 000012 0000047
Each line starts with the position, in bytes, from the beginning of the file. With the -h option (on many implementations -x) the output is hexadecimal, with -c it is shown in ASCII form.

### paste – join files column by column

With paste you merge the lines of several files. The command was already covered in section 2.3.1, so you may want to turn back to that section if needed.

### pcat – output pack-compressed files

With pcat you can print the contents of pack-compressed files without having to decompress them first. Apart from that, pcat works like cat.

### rm – delete files and directories

With the rm command you delete files and directories.

| Usage | Meaning |
| --- | --- |
| rm datei | Deletes a file |
| rm dir | Deletes an empty directory |
| rm -r dir | Deletes a directory recursively |
| rm -rf dir | Forces recursive deletion without issuing any warning |

Note: rm was already covered in sections 1.7.2 and 1.7.3 (rmdir).
### sort – sort files

Frequently used options for sorting with sort:

| Option | Meaning |
| --- | --- |
| -n | Sorts a file numerically |
| -f | Ignores the difference between upper and lower case |
| -r | Sorts in reverse alphabetical order |
| -n -r | Sorts numerically in reverse order |
| -c | Checks whether the files are already sorted; if not, it aborts with an error message and return value 1 |
| -u | Does not print duplicate lines |

As an alternative there is also the tsort command, which sorts files topologically.
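Two short, hedged examples (standard sort options; /etc/passwd merely serves as a colon-separated test file and names.txt is made up): -t sets the field separator and -k selects the sort key:

> you@host > sort -t: -k3 -n /etc/passwd

> you@host > sort -u -r names.txt

The first call sorts the accounts numerically by their user ID (third field), the second sorts names.txt in reverse order and drops duplicate lines.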
### split – split files into several parts

With split you divide a file into several parts. Without any option, a file is split into pieces of 1000 lines each. The output goes to files named "x..." or, if a prefix was given, to files starting with that prefix:

> you@host > split -50 kommandos.txt you@host > ls x* xaa xab xac xad xae

You can reassemble the file as follows:

> for file in `ls x* | sort`; do cat $file >> new.txt; done

Here, for example, the text file kommandos.txt was split into chunks of 50 lines each. If you want to change the name of the newly created files, you proceed as follows:

> you@host > split -50 kommandos.txt kommandos you@host > ls komm* kommandosaa kommandosab kommandosac kommandosad kommandosae kommandos.txt

The split command is frequently used to split large files that do not fit onto a single storage medium.

### tac – print files in reverse

Put simply, tac is like cat (hence the reversed command name), except that tac prints the lines in reverse order: first the last line, then the second to last, and so on up to the first line.
> you@host > cat file1.txt file1:zeile1 file1:zeile2 file2:zeile3 you@host > tac file1.txt file2:zeile3 file1:zeile2 file1:zeile1
### tail – print the end of a file

tail prints the last lines of a file (without further options, the last ten).

> you@host > tail -5 kommandos.txt write – Nachrichten an andere Benutzer verschicken zcat – Ausgabe von gunzip-komprimierten Dateien zip/unzip – (De-) Komprimieren von Dateien zless – gunzip-komprimierte Dateien seitenweise ausgeben zmore – gunzip-komprimierte Dateien seitenweise ausgeben

Here tail prints the last five lines of the file kommandos.txt. If you want to print a file starting from a certain line, you proceed as follows:

> you@host > tail +100 kommandos.txt

This prints all lines starting from line 100. If you want to use tail like tac, you can use the -r option:

> you@host > tail -r kommandos.txt

This prints the complete file line by line in reverse, from the last line to the first. Also frequently used is the -f option (follow), which keeps printing the end of the file. This lets you watch a file grow, because every newly added line is displayed. Naturally, this option can only be applied to a single file at a time.
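A typical, hedged use of -f (the log file path is just an example; stop watching with (Ctrl)+(C)):

> you@host > tail -f /var/log/messages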
### tee – duplicate output

With tee you read from standard input and branch the output to both standard output and a file. Since a separate section of this book (1.10.5) is devoted to tee, please refer to it.

### touch – create files or change timestamps

With touch you set the access and modification time of a file to the current time; if the file does not exist, it is created. touch was already covered in section 1.7.2, but a few of its options and their meaning are worth listing here:

| Option | Meaning |
| --- | --- |
| -a | Changes only the access time. |
| -c | If a file does not exist, it is not created. |
| -m | Changes only the modification time. |

### tr – replace characters or transform files

With tr, characters can be replaced with other characters, including non-printable ones. tr reads from standard input, so the file is usually supplied via redirection:

> tr str1 str2 < file

Whenever a character from "str1" is found in the input from file, it is replaced with the corresponding character from "str2".

Note: tr was already covered in section 2.3.1.
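Two short, hedged examples (standard tr usage; the file names are made up): converting text to upper case and stripping carriage returns from a DOS text file:

> you@host > tr 'a-z' 'A-Z' < file1.txt

> you@host > tr -d '\r' < dosfile.txt > unixfile.txt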
### type – classify commands

type tells you how the shell would interpret a given name, for example as an alias, a shell builtin, a function or an external program.

Note: type itself is a builtin and is therefore not available in every shell.
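A brief, hedged example (output as produced by bash; on many distributions ls may instead be reported as an alias):

> you@host > type cd

> cd is a shell builtin

> you@host > type ls

> ls is /bin/ls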
### umask – change or display the file-creation mask

With the shell builtin umask you set a mask that determines the access permissions of files and directories immediately after they are created by a process controlled by the shell. The bits set in the mask are removed from (masked out of) the permissions of the new file or directory. For more on this command, please refer to section 9.4, where it is covered in detail.
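A tiny, hedged illustration: with a mask of 027, newly created files typically get mode 640 and new directories mode 750 (the usual creation modes 666 and 777 minus the masked bits):

> you@host > umask 027

> you@host > touch newfile

> you@host > mkdir newdir

> you@host > ls -ld newfile newdir

ls -ld should then show -rw-r----- for newfile and drwxr-x--- for newdir.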
### uniq – print duplicate lines only once

With uniq you can remove duplicate lines. The prerequisite is that the file is sorted and the duplicate lines follow one another directly. For example:
> you@host > cat file1.txt file1:zeile1 file1:zeile2 file1:zeile2 file2:zeile3 you@host > uniq file1.txt file1:zeile1 file1:zeile2 file2:zeile3
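Because uniq only collapses adjacent duplicates, it is typically combined with sort. A hedged example (the file name is made up) that counts how often each line occurs and sorts the result by frequency:

> you@host > sort words.txt | uniq -c | sort -rn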
### unix2dos – convert files from UNIX to DOS format

The counterpart of dos2unix: it converts a text file from UNIX format back to DOS format.

> unix2dos fileunix filedos

### wc – count lines, words and characters of a file

With wc you can count the characters, words and/or lines of a file. Without special options, one line with the following numbers is printed:
> you@host > wc file1.txt 4 4 52 file1.txt
The first column contains the number of lines, followed by the number of words and finally the number of characters. You can obtain each value individually with the options -l (lines), -w (words) and -c (characters).

Note: wc was already covered in section 1.7.2.
### whereis – search for files

The whereis command searches mainly the important paths (usually all entries in PATH) for binaries or man pages. whereis is not as flexible as find, but it is considerably faster.
> you@host > whereis ls /bin/ls /usr/share/man/man1/ls.1.gz /usr/share/man/man1p/ls.1p.gz you@host > whereis -b ls /bin/ls you@host > whereis -m ls /usr/share/man/man1/ls.1.gz /usr/share/man/man1p/ls.1p.gz
First the path to the ls program was determined; the paths to its man pages are printed as well. If you only want the path to the binary, you have to use the -b option; if you only want the paths to the man pages, use the -m option, as shown in the example.

### zcat, zless, zmore – (paged) output of gunzip-compressed files

All three work like their counterparts without the "z", except that they read and display gzip- or gunzip-compressed files without decompressing them first. On some systems there is also a corresponding grep variant called zgrep.
## 14.3 Directory-Oriented Commands

### basename – return the file part of a path name

basename returns the file name without the path component, which is cut off. If you also specify a suffix, the file extension is cut off as well. basename was already covered in section 9.3.

### cd – change directory

The shell command cd is used to change the current directory. If no directory is given, cd changes to the home directory. The command was already described in detail in section 1.7.3.
### dircmp – compare directories recursively

With the dircmp command you compare two directories, including all of their files and subdirectories, for equality.

> dircmp dir1 dir2

On the first page dircmp prints the names of the files that exist in only one of the directories. The second page lists the files that exist in both directories but have different contents. The third page lists all files with identical contents; the names of these identical files can be suppressed with the -s option.
### dirname – return the directory part of a path name

dirname is the counterpart of basename and returns the directory part, i.e. the file name is "cut out" of the absolute path. dirname was already covered in section 9.3.
### mkdir – create a directory

With mkdir you create an empty directory. If you want to set the access permissions right when creating it, you can do so with the -m option:

> you@host > mkdir -m 600 mydir

If you want to create a new directory including its parent directories, you can use the -p option:

> you@host > mkdir doku/neu/buch mkdir: kann Verzeichnis doku/neu/buch nicht anlegen: Datei oder Verzeichnis nicht gefunden you@host > mkdir -p doku/neu/buch

Note: mkdir was already covered in detail in section 1.7.3.
### pwd – print the current working directory

With pwd you print the working directory you are currently in.
### rmdir – delete an empty directory

With rmdir you can delete an empty directory. Non-empty directories can be deleted recursively with rm -r. One thing rm -r cannot do, however, is delete directories for which there is no execute permission, which is only logical, because rm with the -r option has to enter the directory. rmdir, on the other hand, does its job here without complaint:

> you@host > mkdir -m 600 mydir you@host > rm -r mydir rm: kann nicht aus Verzeichnis . in mydir wechseln: Keine Berechtigung you@host > rmdir mydir

Note: both commands were already covered in sections 1.7.2 (rm) and 1.7.3 (rmdir).
## 14.4 Managing Users and Groups

### exit, logout – end a session

Both commands end a shell session (a text console or a shell window). The same can be achieved with (Ctrl)+(D).
### finger – query information about other users

With finger you can query detailed information about the users currently logged in (similar to who, except that the terminals are not listed individually):

> you@host > finger Login Name Tty Idle Login Time Where john <NAME> 3 2 Thu 02:31 tot J.Wolf :0 14d Wed 22:30 console you Dr.No 2 2 Thu 02:31

Without any options, finger prints one line of information for every active user. If you specify a user name, you get a more detailed report (in long format):

> you@host > finger you Login: you Name: Directory: /home/you Shell: /bin/bash On since Thu Apr 21 02:31 (CEST) on tty2, idle 0:04 Mail last read Fri Feb 25 04:21 2005 (CET) No Plan.

Of course you can also request this long format for all other active users with the -l option. If you want to look up a user on a remote system, you have to specify the user as "benutzername@hostname".
### groupadd, groupmod, groupdel – group management (distribution-dependent)

You can create a new group with groupadd:

> groupadd [-g GID] gruppenname

You can change the ID of a group (gid) with groupmod:

> groupmod [-g neueGID] gruppenname

You can delete a group again with groupdel:

> groupdel gruppenname
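A short, hedged example (the group name and GID are made up; as noted, the exact options can vary between distributions):

> # groupadd -g 1500 projekt

> # groupmod -g 1600 projekt

> # groupdel projekt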
### groups – print group memberships

groups is used to determine all groups a user belongs to. If groups is run without naming a particular user, all groups of the current user are printed.
### id – determine your own user and group ID

With id you can determine the user and group ID of a user. If you do not name a particular user, the UID and GID of the current user are determined and printed.
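A hedged example of the typical output format (the numeric IDs and group names are purely illustrative):

> you@host > id

> uid=1000(you) gid=100(users) groups=100(users),16(dialout),33(video)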
### last – determine the login and logout times of users

You can get an overview of the most recent login and logout times of users with last:

> you@host > last john tty3 Thu Apr 21 02:31 still logged in you tty2 Thu Apr 21 02:31 still logged in tot :0 console Wed Apr 20 22:30 still logged in reboot system boot 2.6.4-52-default Wed Apr 20 22:30 (04:42) tot :0 console Wed Apr 20 06:40 - 07:42 (01:02) reboot system boot 2.6.4-52-default Wed Apr 20 06:39 (01:03) wtmp begins Wed Apr 20 05:26:11 2005

If you only want the login times of a single user, you have to give that user as an argument.
### logname – display the name of the current user

logname returns the user name that getty stores in the file /var/run/utmp. For this to work you must be logged in on a real terminal via getty; in a terminal window such as xterm you will not get a name here.
### newgrp – temporarily switch group membership (operating-system-specific)

With newgrp a user can switch to another group during a session (one of which he or she is also a member). If no group is given as an argument, the default group from /etc/passwd is used. The argument must be the group name as it appears in /etc/group, not the group ID.
### passwd – change or set passwords

With the passwd command you can change the passwords of all users listed in /etc/passwd. So that at least each user (and not only root) can change his or her own password, passwd runs SUID root: the user temporarily gains root privileges, can change the password and is allowed to write to the file. Only root may change all passwords.

> linux:/home/you # passwd john Changing password for john. New password:******** Re-enter new password:******** Password changed

If you are the almighty root on the machine, you have the following additional options for managing a user's password settings:

| Usage | Meaning |
| --- | --- |
| passwd -l benutzername | Lock the user (-l = lock) |
| passwd -f benutzername | Force the user to change the password at the next login |
| passwd -d benutzername | Delete a user's password (afterwards the user can log in without a password, suitable e.g. for restricted test accounts) |

Note: these explanations apply at least to Linux. Under FreeBSD, for example, the -l switch means that the password is changed in the local password file but kept in the Kerberos DB. The passwd options therefore appear to be operating-system- or distribution-specific; a look at the man page should provide clarity.
### useradd/adduser, userdel, usermod – user management (distribution-dependent)

You can create a new user with useradd or adduser:

> # useradd testuser # passwd testuser Changing password for testuser. New password:******** Re-enter new password:******** Password changed

You can modify the properties of a user with usermod.

> # usermod -u 1235 -c "Test User" \ > -s /bin/bash -d /home/testdir testuser

Here, for example, the user was given the ID 1235, the comment or name "Test User", the Bash as login shell and /home/testdir as home directory. There are many more options you can set with usermod (see the usermod man page).

If you want to remove a user again, you can do so with userdel:

> # userdel testuser

When deleting, the system may also check whether there is a crontab entry for this user. This makes sense, because otherwise the cron daemon would needlessly run into nothing.
who â eingeloggte Benutzer anzeigenÂ
Mit dem Kommando who werden alle angemeldeten Benutzer mitsamt dem Namen, der Login-Zeit und dem Terminal ausgegeben.
you@host > who you tty2 Apr 21 22:41 john tty3 Apr 21 22:42 tot :0 Apr 21 22:38 (console)
whoami â Name des aktuellen Benutzers anzeigenÂ
Mit whoami können Sie ermitteln, unter welchem Namen Sie gerade arbeiten. Dies wird oft verwendet, um zu überprüfen, ob man als root oder »normaler« User arbeitet.
you@host > su Password:******** linux:/home/you # whoami root linux:/home/you # exit exit you@host > whoami you
Ihre Meinung
Wie hat Ihnen das Openbook gefallen? Wir freuen uns immer über Ihre Rückmeldung. Schreiben Sie uns gerne Ihr Feedback als E-Mail an <EMAIL>inwerk-verlag.de.
## 14.4 User and Group Management
### exit, logout – Ending a session
Both commands end a shell session (a text console or a shell window). You can achieve the same with (Ctrl)+(D).
### finger – Querying information about other users
With finger you can query detailed information about currently logged-in users (similar to who, except that the terminals are not listed individually):
> you@host > finger Login Name Tty Idle Login Time Where john <NAME> 3 2 Thu 02:31 tot J.Wolf :0 14d Wed 22:30 console you Dr.No 2 2 Thu 02:31
Without any options, finger prints one line of information for each active user. If you specify a user name, you get more detailed output (the long format):
> you@host > finger you Login: you Name: Directory: /home/you Shell: /bin/bash On since Thu Apr 21 02:31 (CEST) on tty2, idle 0:04 Mail last read Fri Feb 25 04:21 2005 (CET) No Plan.
Of course, you can also get this long format for all other active users with the -l option. If you want to look up a user on a remote system, specify the user as »username@hostname«.
### groupadd, groupmod, groupdel – Group management (distribution-dependent)
You can create a new group with groupadd:
> groupadd [-g GID] groupname
You can change the ID of a group (GID) with groupmod:
> groupmod [-g newGID] groupname
You can delete a group again with groupdel:
> groupdel groupname
### groups – Displaying group membership
groups is used to determine all the groups a user belongs to. If groups is run without specifying a particular user, all groups of the current user are printed.
### id – Determining your own user and group ID
With id you can determine the user ID and group ID of a user. If you do not specify a particular user, the UID and GID of the current user are determined and printed.
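In shell scripts it is often useful to test membership of a particular group before doing group-specific work. A minimal sketch (the group name admin is just an example):
> if id -Gn | grep -qw admin; then
>     echo "$(whoami) is a member of the group admin"
> else
>     echo "$(whoami) is not in the group admin"
> fi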
### last – Determining a user's login and logout times
You can get an overview of the most recent login and logout times of users with last:
> you@host > last john tty3 Thu Apr 21 02:31 still logged in you tty2 Thu Apr 21 02:31 still logged in tot :0 console Wed Apr 20 22:30 still logged in reboot system boot 2.6.4-52-default Wed Apr 20 22:30 (04:42) tot :0 console Wed Apr 20 06:40 - 07:42 (01:02) reboot system boot 2.6.4-52-default Wed Apr 20 06:39 (01:03) wtmp begins Wed Apr 20 05:26:11 2005
If you only want to determine the login times of a single user, you must pass that user as an argument.
### logname – Displaying the name of the current user
logname returns the user name that getty stores in the file /var/run/utmp. For this you must actually be logged in on a real terminal via getty; in a console window such as xterm you will not get a name here.
### newgrp – Temporarily switching group membership (operating-system-specific)
With newgrp a user can switch to another group during a session (a group of which they are also a member). If no group is given as an argument, the default group from /etc/passwd is used. The argument must be the group name as it appears in /etc/group, not the group ID.
### passwd – Changing or assigning a password
With the passwd command you can change the passwords of all users listed in the file /etc/passwd. So that at least the users themselves (and not only root) can change their own password, passwd runs SUID root. This gives a user root privileges for a short time so that he can change his password and is allowed to write to the file. Only root may change all passwords.
> linux:/home/you # passwd john Changing password for john. New password:******** Re-enter new password:******** Password changed
If you are the almighty root on the machine, you also have the following options for managing a user's password settings:
Usage | Meaning |
| --- | --- |
passwd -l username | Locks the user account (-l = lock) |
passwd -f username | Forces the user to change the password at the next login |
passwd -d username | Deletes a user's password (after that the user can log in without a password, suitable e.g. for restricted test accounts) |
Note   These explanations apply at least to Linux. Under FreeBSD, for example, the -l switch means that the password is changed in the local password file but kept in the Kerberos DB. The options of passwd therefore appear to be operating-system or distribution specific; a look at the man page should provide clarity.
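As root you will typically combine these switches in short maintenance sequences like the following sketch (the user name john is taken from the example above; on many Linux systems -u reverses a lock again, but that flag is not listed in the table and may differ between systems):
> passwd -l john    # lock the account
> passwd -u john    # unlock it again
> passwd -f john    # force a new password at the next login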
### useradd/adduser, userdel, usermod – User management (distribution-dependent)
You can create a new user with useradd or adduser:
> # useradd testuser # passwd testuser Changing password for testuser. New password:******** Re-enter new password:******** Password changed
You can modify the properties of a user with usermod.
> # usermod -u 1235 -c "Test User" \ > -s /bin/bash -d /home/testdir testuser
Here, for example, you have assigned the user the ID 1235, the comment or name »Test User«, bash as the shell and /home/testdir as the home directory. There are many more options that you can set with usermod (see the man page for usermod).
If you want to remove a user again, you can do so with userdel:
> # userdel testuser
When deleting, the system may also check whether there is an entry for this user in crontab. This makes sense, since otherwise the cron daemon would keep running into nothing.
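Put together, creating a fully equipped account and removing it again could look like the following sketch (names and paths are made up; the exact option set of useradd is distribution-dependent):
> # useradd -m -d /home/testuser -s /bin/bash -c "Test User" testuser
> # passwd testuser
> # userdel -r testuser   # -r also removes the home directory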
### who – Displaying logged-in users
The who command prints all logged-in users together with their name, login time and terminal.
> you@host > who you tty2 Apr 21 22:41 john tty3 Apr 21 22:42 tot :0 Apr 21 22:38 (console)
### whoami – Displaying the name of the current user
With whoami you can determine under which name you are currently working. This is often used to check whether you are working as root or as a »normal« user.
> you@host > su Password:******** linux:/home/you # whoami root linux:/home/you # exit exit you@host > whoami you
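In a script this check usually appears as a small guard right at the top; a minimal sketch (id -u, which prints 0 for root, is a common alternative to parsing whoami):
> if [ "$(whoami)" != "root" ]; then
>     echo "This script must be run as root" >&2
>     exit 1
> fi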
## 14.5 Program and Process Management
### at – Running a command at a specific time
With the at command you can have a command executed at a specified time, even if the user is not logged in at that moment. For example, with
> at 2130 -f myscript
you run the script »myscript« at 21:30. Of course, several such scheduled commands can be set up; each of these at calls is appended to the at queue (atq). This also works with a date:
> at 2200 apr 21 -f myscript
This would run the script »myscript« on April 21 at 22:00. If you want to list all jobs in the atq, use the -l option:
> at -l
If you want to display the status of the job with number 33, enter the following:
> at -l 33
If this job is to be deleted, the -d option can be used:
> at -d 33
### batch – Running commands at some later time
With batch you read commands from the command line that are executed at a later point in time, as soon as the system has capacity to spare. This is popular on heavily loaded machines, when you want a command or script to run at a time when the system load is definitely low rather than merely presumed to be low. The commands are executed even if the user is not logged in. For batch to work, the at daemon, which is also responsible for the at command, must be running here as well.
> you@host > batch warning: commands will be executed using /bin/sh at> ls -l at> ./myscript at> sleep 1 at> (Ctrl)+(D) job 1 at 2005-04-21 23:30
You must indicate the end of the command input for batch with (Ctrl)+(D).
### bg – Resuming a stopped process in the background
With the bg command you can resume a process that has been stopped (e.g. with (Ctrl)+(Z)) in the background. bg was already covered in Section 8.7.
### cron/crontab – Running programs at fixed intervals
With cron you can have any number of commands executed automatically at fixed intervals. Once per minute this daemon looks into a schedule (the crontab) and, if applicable, executes the commands contained in it. cron was already covered in detail in Section 8.8.
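As a brief reminder of the crontab format: each entry consists of five time fields (minute, hour, day of month, month, day of week) followed by the command. A sketch with made-up script paths:
> # min hour dom mon dow  command
> 30  2   *   *   *      /home/you/backup.sh    # every day at 02:30
> 0   8   *   *   1-5    /home/you/report.sh    # Monday to Friday at 08:00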
### fg – Resuming a stopped process in the foreground
With the fg command you can resume a process that has been stopped (e.g. with (Ctrl)+(Z)) in the foreground. fg was already covered in Section 8.7.
### jobs – Displaying stopped and background processes
jobs returns a list of the current jobs. Next to the job number, each job shows the command name, the status and a marker. The marker »+« denotes the current job, »-« the previous one. The jobs command was already covered in Section 8.7.
### kill – Sending signals to processes by process number
With kill you send a signal to processes by specifying their process number. By default the signal SIGTERM is sent to terminate the process, but any other signal can be sent as well. The signal is passed either as a number or as a name. You get an overview of the possible signal names with the -l option. The kill command was already described in more detail in Section 7.2.
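A few typical invocations as a quick sketch (the process number 1234 is made up):
> kill 1234          # send SIGTERM (the default)
> kill -s HUP 1234   # send SIGHUP by name
> kill -9 1234       # send SIGKILL by number (cannot be caught)
> kill -l            # list all signal names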
### killall – Sending signals to processes by name
The name killall is quickly misleading. It does not »kill« all processes; rather, killall is more of a convenience wrapper around kill. Instead of terminating a process or sending it a signal via its process number, as with kill, killall lets you use the process name. With countless processes running at the same time this is a considerable relief, because you do not have to laboriously look up the process number first (e.g. with the ps command). Apart from that, killall is used much like kill, except that the signal name is given without the leading SIG. Here, too, you get a list of all signals with the -l option.
> you@host > sleep 60 & [1] 5286 you@host > killall sleep [1]+ Beendet sleep 60
### nice – Running processes with a different priority
With nice you can have a command executed with a lower priority.
> nice [-n] command [arguments]
For n you specify a number by which the priority is to be changed. The default, if nothing is given, is 10 (-20 is the highest and 19 the lowest priority). Raising the priority (negative values) is reserved for root anyway. You will often want to start the command with nice in the background.
> nice find / -name document -print > /home/tmp/find.txt &
Here find searches for a file »document« and the output is written to the file find.txt. The background process find is started by nice with a low priority (10). This is a typical use of nice.
### nohup – Letting processes continue after the session ends
With nohup you protect processes from the HANGUP signal. This makes it possible for a process to keep running in the background even when the user logs out. Without nohup, all processes of the user's login shell would otherwise be terminated by the SIGHUP signal. However, this behaviour does not occur on all systems. More on nohup can be found in Section 8.4, where this command was covered in more detail.
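A typical call also redirects the output so that nothing is lost once the terminal is gone; a sketch (the script name is made up):
> nohup ./myscript > myscript.log 2>&1 &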
### ps – Displaying process information
Note   ps is one of those commands that offers completely different options on different operating systems. The following description therefore refers to Linux.
ps is probably the most important command for system administrators (along with top) to obtain information about active processes. If you call ps without any arguments, the processes started from the current terminal are listed. For each process you get the process number (PID), the terminal name (TTY), the CPU time used (TIME) and the command name (COMMAND). Besides this information, many further details can be coaxed out via options. Frequently used are the -e switch, which yields information about all processes (not just those of the current terminal), and the -f switch, which gives you even more complete information:
> you@host > ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 Apr21 ? 00:00:04 init [5] root 2 1 0 Apr21 ? 00:00:00 [ksoftirqd/0] root 3 1 0 Apr21 ? 00:00:00 [events/0] root 23 3 0 Apr21 ? 00:00:00 [kblockd/0] ... postfix 5942 2758 0 00:17 ? 00:00:00 pickup -l -t fifo -u you 6270 1 0 00:26 ? 00:00:00 /opt/kde3/bin/kdesud you 7256 3188 0 01:50 pts/36 00:00:00 ps -ef
By now the ps command has an incredible number of options, some of which are system-dependent, so you should definitely consult the ps man page. Often you are looking for one particular process in the list. You can »pick it out« as follows:
> you@host > ps -ef | grep kamix tot 3171 1 0 Apr21 ? 00:00:00 kamix
Not content with that, you usually also want the process number (PID) right away:
> you@host > ps -ef | grep kamix | awk '{ print $2; exit }' 3171
But it does not have to be this cumbersome. You can also use pgrep (if available) for this.
### pgrep – Finding processes by name
pgrep was briefly mentioned in the previous section. Whenever you need the process number for a given process name, pgrep is the command of choice.
> you@host > pgrep kamix 3171
pgrep returns the process number for each process name, provided a corresponding process is currently active and appears in the process list (ps).
### pstree – Printing the process hierarchy as a tree
With pstree you can print the current process hierarchy in tree form. Without arguments, pstree shows all processes, starting with the first process, init (PID=1). If you specify a PID or a login name instead, only the processes of that user or of that process number are displayed hierarchically.
### renice – Changing the priority of running processes
With the renice command, in contrast to nice, you can change the priority of processes that are already running. Apart from that, everything said about the nice command applies here as well. The renice command was already described in Section 8.1.
Tip for Linux   You can change the priority of running processes more conveniently with the top command. Pressing (R) asks you for the process number and the nice value of the process whose priority you want to change.
### sleep – Suspending processes (putting them to sleep)
With sleep you put a process to sleep for n seconds. Seconds are the default, but minutes, hours or even days can be used as well.
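With GNU sleep, as found on Linux, the unit is simply appended as a suffix; a brief sketch:
> sleep 10    # 10 seconds
> sleep 5m    # 5 minutes
> sleep 2h    # 2 hours
> sleep 1d    # 1 day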
### su – Changing the user identity (without logging in again)
This command is often mistakenly expanded as SuperUser, but it actually stands for SwitchUser. So su as such has nothing to do with the »superuser« (alias root). The misunderstanding stems from the fact that most users use this command to briefly acquire root privileges (provided they have them). Called without arguments, it usually asks for the superuser's password.
> you@host > whoami you you@host > su Password:******** linux:/home/you # whoami root linux:/home/you # exit exit you@host > whoami you you@host > su john Password:******** you@linux:/home/john> whoami john you@linux:/home/john> exit exit
su always starts a new shell with the new user ID (UID) and group ID (GID). As with a new login, a password is requested. If you do not specify a user name, su tries to switch to UID 0, which is the superuser. If you are the »superuser«, you can also assume the identity of any user without knowing the corresponding password.
### sudo – Running a program as another user
sudo is a command with which a specific user can run a command for which he normally does not have the rights (e.g. for administrative tasks). For this purpose, root usually places an entry like the following in the file /etc/sudoers:
> john ALL=/usr/bin/kommando
Now the user »john« can start the command with the following call:
> you@host > sudo /usr/bin/kommando Passwort: *********
After »john« has entered his password, the corresponding command is executed, which he would normally not be able to do without the entry in /etc/sudoers. The entry is usually made with the vi editor via the visudo command.
Of course, as a normal user you can also use sudo to run programs under a different login name. You do this with the following call (here, for example, running the whoami command):
> you@host > whoami you you@host > sudo -u john whoami Password:******** john you@host >
For me, the sudo command led a rather shadowy existence until recently. Whenever I needed a command that requires root privileges, I quickly promoted myself with su. In the meantime, however, since I have been using the increasingly popular »(k)ubuntu«, which no longer has a root account at all and uses sudo for such purposes instead, the command has earned itself a permanent place. I wanted to point this out briefly before you get the idea of reinstalling »(k)ubuntu« because you suspect you have forgotten the root password somewhere. »(k)ubuntu« is simply different from other distributions in this respect.
### time – Timing processes
With time you execute the command or script given on the command line and get back the time it took. This time is broken down into the actual elapsed time (real), the CPU time spent in user mode (user) and the CPU time spent in kernel mode (sys). The time command was already covered in Section 9.6.
### top – Displaying processes by CPU usage (operating-system-specific)
top shows you a list of the currently active processes, sorted by CPU load. By default, top refreshes every five seconds; you quit top with (Q). top can do more than display the load of the individual processes, though: for example, you can send a signal to a particular process with the (K) key (kill), or change the priority of a running process with r (renice). A look at the top man page reveals a huge range of further features behind this command, which looks simple at first glance.
## 14.6 Disk Space Information
### df – Querying the disk space used by file systems
df shows you the free disk space for a file system if you specify a path to a directory. If no directory is given, the free disk space for all mounted file systems is displayed.
> you@host > df Dateisystem 1K-Blöcke Benutzt Verfügbar Ben% Eingehängt auf /dev/hda6 15528224 2450788 13077436 16 % / tmpfs 128256 16 128240 1 % /dev/shm /dev/hda1 13261624 9631512 3630112 73 % /windows/C
The first column shows the device file of the file system, followed by the number of 1K blocks, then how much space is used and how much is still available. The percentage indicates how full the disk is. At the end you find the mount point at which the file system is mounted.
### du – Determining the size of a directory tree
du shows the space (number of KB blocks) occupied by files. For directories, the usage of the directory trees they contain is reported. The following options control the unit of the output:
Option | Meaning |
| --- | --- |
-b | Output in bytes |
-k | Output in kilobytes |
-m | Output in megabytes |
-h | (human readable) sensible output in bytes, KB, MB or GB |
Often, with large directories, you do not want every single file listed but only the total number of blocks. In that case you use the -s switch (in practice you use it almost always).
> you@host > du -s /home 582585 /home you@host > du -sh /home 569M /home
If you do not want to descend to the full depth of a directory, you can also limit it with --max-depth=n:
> you@host > du --max-depth=1 -m /home 519 /home/tot 25 /home/you 26 /home/john 569 /home
### free – Displaying available memory (RAM and swap) (operating-system-dependent)
You can display the available memory (RAM and swap, i.e. main memory and the swap space on the hard disk) with free:
> you@host > free total used free shared buffers cached Mem: 256516 253820 2696 0 38444 60940 Swap: 512024 24 512000
### swap – Displaying swap space (not Linux)
This lets you determine the swap usage (swap space) on non-Linux systems.
## 14.7 File System Commands
The file system commands are frequently Linux-specific commands, most of which can only be run as root. From experience, however, I know that people are quick to play around with root privileges in order to experiment with such commands. I want to warn you against this unless you are sure of what you are doing. With many of these commands you do not just lose a bit of data; you can corrupt an entire file system, which can go so far that your operating system no longer starts.
### badblocks – Checking a medium for defective sectors
With the badblocks command you test the physical condition of a storage medium. badblocks searches a floppy disk or hard disk for defective blocks.
> # badblocks -s -o block.log /dev/fd0 1440
Here, for example, a (1.44 MB) floppy disk is searched for defective blocks. The -s option shows the progress of the check. With -o you write the result, the defective blocks, to the file block.log, which can in turn be used by other programs so that these damaged blocks are no longer used. The syntax is as follows:
> badblocks [options] devicefile [startblock]
The »device file« is the path to the corresponding storage medium (e.g. /dev/hda1 = first hard disk). Optionally, the »start block« from which testing should begin can also be specified. Running this command is, of course, reserved for the superuser.
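On a hard disk partition the call looks the same, only without the block count; a sketch (the partition name is just an example, and the partition should not be mounted read/write during the test):
> # badblocks -s -v -o /root/bad-sdb1.log /dev/sdb1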
### cfdisk – Partitioning hard disks
With cfdisk you divide a formatted hard disk into several partitions. Of course you can also delete an existing partition or change one (e.g. give it a different system ID). With cfdisk you can also work on a »raw«, whole disk (not just on individual partitions). The device file is therefore /dev/hda for the first IDE disk, /dev/hdb for the second disk, /dev/sda for the first SCSI disk, /dev/sdb for the second SCSI disk, and so on.
When you start cfdisk, all partitions found are displayed along with their sizes. With the up and down arrow keys you select a partition, and with the right and left arrow keys a command. You quit cfdisk with (Q).
Warning   Naturally, cfdisk can only be run as root. If you have such rights and play around with cfdisk without prior knowledge, at least a loss of data is guaranteed. If you really want to »carve up« your hard disk, you should back up the old partition table before partitioning. In case of a mishap, the old partition table can then be restored and the data can still be accessed.
### dd – Copying (and converting) data blocks between devices (low level)
With dd you can read a file and transfer its contents in a given block size, with various conversions, between different storage media (hard disk, floppy disk and so on). Besides ordinary files, whole hard disk partitions can be copied this way; a complete backup can be implemented with this command. An example:
> # dd if=/dev/hda bs=512 count=1
This prints the boot sector to the screen (possible only as root). If you also want to back up the boot sector to a floppy disk, proceed as follows:
> # dd if=/dev/hda of=/dev/fd0 bs=512 count=1
Before you look at further examples of the powerful dd tool, you first need to familiarize yourself with its options; the example above used if, of, bs and count.
Table 14.12   Options controlling the dd command
Option | Meaning |
| --- | --- |
if=file | (input file) The name of the input file (source); without this option, standard input is used |
of=file | (output file) The name of the output file (destination); without this option, standard output is used |
ibs=n | (input block size) The block size of the input file |
obs=n | (output block size) The block size of the output file |
bs=n | (block size) The block size for both the input and the output file |
cbs=n | (conversion block size) The block size used for conversion |
skip=blocks | The number of blocks of the input to be skipped at the beginning |
seek=blocks | The number of blocks of the output to be skipped at the beginning; suppresses writing the given number of blocks at the start of the output |
count=blocks | How many blocks are to be copied |
Table 14.13   Special conversion options for the dd command
Option | Conversion |
| --- | --- |
conv=ascii | EBCDIC to ASCII |
conv=ebcdic | ASCII to EBCDIC |
conv=ibm | ASCII to big blue special EBCDIC |
conv=block | Lines are written into fields of size cbs, and the end of line is replaced by spaces; the rest of the field is padded with spaces |
conv=unblock | Trailing spaces of a block of size cbs are replaced by a newline |
conv=swab | Swaps each pair of bytes of the input; if the number of bytes read is odd, the last byte is simply copied |
conv=noerror | Read errors are ignored |
conv=sync | Pads input blocks with zeros up to the size of ibs |
Here are a few more interesting examples of dd:
> # dd if=/vmlinuz of=/dev/fd0
This copies the kernel (here »vmlinuz«; adjust as needed) onto the beginning of the floppy disk, which can then be used as a boot disk.
> # dd if=/dev/hda of=/dev/hdc
Powerful: with this you clone, in effectively one step, the first hard disk on the master IDE controller onto the disk attached to the second master port. /dev/hdc then has the same contents as /dev/hda. Of course the output can also be written somewhere else entirely, e.g. to a DVD burner, to a hard disk on the USB port or into a file.
Warning   Although dd is an even more powerful tool than it may appear here, you should still beware of careless dd invocations. Scrambled data happens faster here than anywhere else, which is why the almighty root privileges are required in the first place. If you are copying larger amounts of data with dd, you can send the program the SIGUSR1 signal with kill from another console to make dd report its current progress.
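The progress trick from the warning above can be put into a single line, assuming exactly one dd process is running:
> # kill -USR1 $(pgrep -x dd)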
### dd_rescue – Fault-tolerant copying of file blocks
If you want to copy a defective hard disk, or a partition on one, dd quickly reaches its limits. Moreover, when rescuing data from a defective medium, speed matters, because the medium may develop further errors and thus corrupt further files. A failed attempt with dd can therefore have fatal consequences.
This is where dd_rescue comes in, which is now shipped with all common Linux distributions. Like dd, it lets you copy file blocks at a low level to another medium. A sensible destination is a file on another storage medium. From this image of the defective disk you can make a copy, so as not to alter the original image, and then try to repair the file system in one of the images with fsck. If that succeeds, you can copy the image back onto a new hard disk with dd_rescue.
An example:
> # dd_rescue -v /dev/hda1 /mnt/rescue/hda1.img
In this example the partition /dev/hda1 is copied into the image file /mnt/rescue/hda1.img.
### dumpe2fs – Displaying information about an ext2/ext3 file system
dumpe2fs prints a great deal of internal information about the superblock and the other block groups of an ext2/ext3 file system (provided, of course, that this file system is actually used), for example:
> # dumpe2fs -b /dev/hda6
With the -b option, all blocks of /dev/hda6 that have been marked as »bad« are printed to the console.
### e2fsck – Repairing an ext2/ext3 file system
e2fsck checks an ext2/ext3 file system and repairs errors. For e2fsck to be usable, fsck.ext2 must be installed, which is the actual program; e2fsck is only a »front end« for it.
> e2fsck devicefile
With the »device file« you specify the partition whose file system is to be checked (which, of course, must again be an ext2/ext3 file system). Files whose inodes are not recorded in any directory are placed by e2fsck in the lost+found directory and can be repaired there. When checking, e2fsck returns an exit code that you can query with echo $?. The following important exit codes and their meanings can be returned:
Table 14.14   Exit codes of the e2fsck command
Exit code | Meaning |
| --- | --- |
8 | Error during the execution of e2fsck |
16 | Incorrect usage of e2fsck |
Table 14.15   Options for the e2fsck command
Option | Meaning |
| --- | --- |
-p | Repair all errors automatically without asking |
-c | Searches the file system for bad blocks |
-f | Forces a check of the file system even if the kernel considers it clean (valid flag set) |
Note   Some fsck versions ask whether the command should really be executed. When answering, the initial letter »j« or »y« is not sufficient; you have to type »yes« or »ja« (depending on the question). Otherwise fsck aborts at this point without comment.
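In a script the exit code mentioned above is usually evaluated right after the call; a minimal sketch (the partition is just an example and must not be mounted while it is being checked):
> #!/bin/sh
> e2fsck -p /dev/sdb1
> status=$?
> if [ $status -ge 8 ]; then
>     echo "e2fsck failed with exit code $status" >&2
>     exit 1
> fi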
### fdformat – Formatting a floppy disk
Even though many computers now ship without a floppy drive, the floppy drive is still needed every now and then (e.g. for a rescue disk with a mini Linux). With the fdformat command you format a floppy disk. The format is created according to parameters stored by the kernel. Note, however, that the disk is only written with empty blocks, not with a file system. To create file systems, the commands mkfs, mke2fs or mkreiserfs are available on Linux systems.
> fdformat devicefile
### fdisk – Partitioning storage media
fdisk is the somewhat less comfortable alternative to cfdisk for dividing a hard disk into different partitions, deleting them or changing them if necessary. In contrast to cfdisk, you cannot navigate with the arrow keys here and have to use single-key commands. On the other hand, fdisk has the advantage of being available almost everywhere and almost always.
Another advantage is that fdisk does not have to run interactively. You can use it, for example, to partition a whole batch of hard disks automatically. That is quite practical when you have to install a system identically on a whole number of machines. You install it on only one, create an image with dd, write yourself a small script, boot the other machines e.g. from »damnsmall Linux« (a mini distribution for, among other things, USB sticks) and run the script, which then partitions with fdisk and installs the prototype's image with dd. Afterwards you only have to adjust the IP address and the host name, which you can also do from a script.
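Such a cloning script could look roughly like the following sketch. Device names and the image path are made up, and feeding keystrokes to fdisk through a here document is only one common (and rather fragile) way of driving it non-interactively; it assumes the defaults offered by fdisk are acceptable:
> #!/bin/sh
> # create one primary partition spanning the whole disk ...
> fdisk /dev/sdb <<EOF
> n
> p
> 1
>
>
> w
> EOF
> # ... and copy the prototype's partition image onto it
> dd if=/data/prototype.img of=/dev/sdb1 bs=1M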
You can get a convenient overview of all partitions on all hard disks with the -l option:
linux:/home/you # fdisk -l Platte /dev/hda: 30.0 GByte, 30005821440 Byte 16 Köpfe, 63 Sektoren/Spuren, 58140 Zylinder Einheiten = Zylinder von 1008 * 512 = 516096 Bytes Gerät Boot Start End Blocks Id System /dev/hda1 1 26313 13261626 7 HPFS/NTFS /dev/hda2 26314 58140 16040808 f W95 Ext'd (LBA) /dev/hda5 26314 27329 512032+ 82 Linux Swap /dev/hda6 27330 58140 15528712+ 83 Linux
To start partitioning, you invoke fdisk with the device file:
> # fdisk /dev/hda
The most important key commands for the partitioning itself are:
Table 14.16   Key commands for using fdisk
Key | Meaning |
| --- | --- |
(b) | Edit a »bsd disklabel« |
(d) | Delete a partition |
(l) | List the known file system types (you need the number) |
(m) | Display a menu with all commands |
(n) | Create a new partition |
(p) | Display the partition table |
(q) | Quit without saving the changes |
(s) | Create a new empty »Sun disklabel« |
(t) | Change the file system type of a partition |
(u) | Change the units for display/input |
(v) | Verify the partition table |
(w) | Write the table to the disk and exit the program |
(x) | Additional functionality (experts only) |
Apart from that, the same applies here: hands off if you are not sure exactly what you are doing, unless you are trying all of this out on a test system.
### fsck – Checking and repairing file systems
fsck is a file-system-independent front end for checking and repairing the file system structure. Depending on the file system, fsck usually calls the corresponding program: for ext2/ext3 this is fsck.ext2, for a Minix system fsck.minix, for ReiserFS reiserfsck, and so on. fsck determines the appropriate file system from the partition table or from a command line option. Most of these programs support the options -a, -A, -l, -r, -s and -v. Usually -a -A is used: with -a you request an automatic repair where possible, and with -A you specify that all file systems listed in /etc/fstab are to be checked.
When checking, fsck returns an exit code that you can query with echo $?. The following important exit codes and their meanings can be returned:
Table 14.17   Exit codes of fsck and their meaning
Exit code | Meaning |
| --- | --- |
8 | Error during the execution of fsck |
16 | Incorrect usage of fsck |
It is also very important to apply fsck only to file systems that are not mounted or are mounted read-only. Otherwise fsck may change (repair) a file system without the running system being able to take notice of it, and a system crash is then pre-programmed. Usually fsck or reiserfsck is run automatically at system startup when a partition was not cleanly unmounted, or after every n-th boot. Anyone who works with an ext2/ext3 file system knows this well enough: after a crash, fsck first checks the complete file system while the system starts (time to get a coffee).
### mkfs – Creating a file system
With mkfs you can create a file system on a previously formatted hard disk or floppy disk. Like fsck, mkfs is a file-system-independent front end that does not create the file system itself but calls the specific program for the corresponding file system. Here, too, mkfs goes by the file systems listed in the partition table or, if given, by the command line option.
Depending on the file system type, mkfs then calls the command mkfs.minix (for Minix), mke2fs (for ext2/ext3), mkreiserfs (for ReiserFS) and so on.
> mkfs [option] devicefile [blocks]
For the »device file« you must give the corresponding path (e.g. /dev/hda1). You can also specify the number of blocks the file system is to occupy. mkfs also returns an exit code about the outcome of the command, which you can evaluate with echo $?.
Table 14.18   Exit codes of mkfs and their meaning
Exit code | Meaning |
| --- | --- |
0 | Everything completed successfully |
8 | An error during program execution |
16 | An error on the command line |
Also interesting is the -t option, with which you can specify the type of the file system to be created yourself. Without -t, an attempt is again made to determine the file system from the partition table. For example, with
> mkfs -t xiafs /dev/hda7
you create an xiafs file system (as an alternative to ext2) on the partition /dev/hda7.
### mkswap – Setting up a swap partition
With mkswap you set up a swap partition. You can use it, for example, to swap sleeping processes that are waiting for other processes to finish out to the swap area on the hard disk, keeping room in main memory for other running processes. If you did not already set up the (usually) suggested swap partition during installation (depending on the distribution), you can do so later with the mkswap command. To activate a swap partition you have to call the swapon command.
If your main memory is exhausted again, you can also set up such swap space at short notice. An example:
# dd bs=1024 if=/dev/zero of=/tmp/myswap count=4096 4096+0 Datensätze ein 4096+0 Datensätze aus # mkswap -c /tmp/myswap 4096 Swapbereich Version 1 wird angelegt, Größe 4190 KBytes # sync # swapon /tmp/myswap
First you use dd to create an empty 4-megabyte swap file filled with null bytes. Then you prepare this area as a swap file. After a call to sync you only have to activate the swap space. How this swap area is actually used is not in your hands, however; the kernel controls that with its »paging«.
Note   A file used as a swap area should only be employed when no partition is available for this purpose, because this method is considerably slower than a swap partition.
### mount, umount – Attaching and detaching a file system
mount attaches individual file systems on a wide variety of media (hard disk, CD-ROM, floppy disk ...) to a single file system tree. The individual partitions are represented by device files in the /dev directory. If you call mount without any arguments, all »mounted« file systems from /etc/mtab are listed. Here, too, it is up to root whether a user may mount a particular file system or not; all that is needed is a corresponding entry in /etc/fstab.
Some examples of how different file systems can be mounted:
Table 14.19   Usage examples of the mount command
Usage | Meaning |
| --- | --- |
mount /dev/fd0 | Mounts the floppy drive |
mount /dev/hda9 /home/you | Here the file system /dev/hda9 is mounted at the directory /home/you |
mount goliath:/progs /home/progs | Mounts a file system via NFS from a machine named »goliath« and attaches it to the local directory /home/progs |
If you want to unmount a file system again, whether a local or a remote partition, you do so with umount:
> umount /dev/fd0
Here the floppy drive is detached from the file system.
Note   If there is an entry for a file system in /etc/fstab, it is sufficient to call mount with the device or the mount point as the only argument: »mount /dev/fd0«.
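Such an /etc/fstab entry could look roughly like the following sketch (the mount point is made up; the user option in the fourth field is what allows ordinary users to mount the device themselves):
> # device     mount point     type  options       dump pass
> /dev/fd0     /media/floppy   auto  noauto,user   0    0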
### parted – Creating, moving, growing or shrinking partitions
With parted you can not only create or delete partitions, as with fdisk or cfdisk, but also grow, shrink, copy and move them. parted is popular when you want to make room on the hard disk for a new operating system or copy all the data of one disk onto a new one. For more, please consult the parted manual page.
### prtvtoc – Printing partition tables
With prtvtoc you can print the partition table of a hard disk, much as you would with fdisk under Linux. This command is available on Solaris, for example.
### swapon, swapoff – Activating and deactivating a swap file or partition
If you have set up a swap partition on the system (see mkswap), it exists, but it still has to be activated with the swapon command. You can deactivate the area activated in this way from the running system at any time with swapoff.
### sync – Flushing all buffered write operations
Normally Linux uses a buffer (cache) in main memory that holds whole data blocks of a mass storage device. Data is therefore often managed temporarily in main memory first, because a constantly writing process would have an extremely negative effect on the performance of the system. Just imagine that with 100 processes! Usually a daemon takes over this work and decides when the changed data blocks are written to the hard disk.
With the sync command you can force changed data to be written to the hard disk (or to any other mass storage device) immediately. This can often be the last lifeline when the system can no longer be shut down properly. If you can still quickly run a sync, all data is saved once more beforehand and the loss of data may be avoided entirely.
Ihre Meinung
Wie hat Ihnen das Openbook gefallen? Wir freuen uns immer über Ihre Rückmeldung. Schreiben Sie uns gerne Ihr Feedback als E-Mail an <EMAIL>.
## 14.7 Filesystem commands
The filesystem commands are mostly Linux-specific and can usually only be run as root. Experience shows, however, that people quickly enjoy playing with root privileges in order to experiment with such commands. I want to warn you against this unless you are sure of what you are doing. With many of these commands you can lose not just a few files but corrupt an entire filesystem, which can go as far as leaving your operating system unable to boot.
### badblocks - checks whether a medium has bad sectors
With the badblocks command you test the physical condition of a storage medium. badblocks scans a floppy disk or hard disk for defective blocks.
> # badblocks -s -o block.log /dev/fd0 1440
Here, for example, a (1.44 MB) floppy disk is scanned for bad blocks. The option -s shows the progress of the check. With -o you write the list of bad blocks to the file block.log, which other programs can then use so that these damaged blocks are no longer allocated. The syntax is:
> badblocks [options] device [startblock]
The "device" is the path to the storage medium (e.g. /dev/hda1 = first partition of the first IDE disk). Optionally, you can also give the "startblock" at which the test should begin. Running this command is, of course, reserved for the superuser.
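A typical follow-up is to hand the list of bad blocks to the filesystem creation tool. The following is only a minimal sketch under the assumption that the target is an ext2 partition; the device /dev/sdb1 and the log path are placeholders:

```sh
# Scan the partition (read-only) and record bad blocks
badblocks -s -o /tmp/bad.log /dev/sdb1
# Create an ext2 filesystem that avoids the blocks listed in the log
mkfs.ext2 -l /tmp/bad.log /dev/sdb1
```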
### cfdisk - partition hard disks
With cfdisk you divide a hard disk into partitions. You can of course also delete an existing partition or modify one (e.g. give it a different system ID). With cfdisk you work on the whole "raw" disk, not only on individual partitions: the device file is therefore /dev/hda for the first IDE disk, /dev/hdb for the second, /dev/sda for the first SCSI disk, /dev/sdb for the second SCSI disk, and so on.
When you start cfdisk, all partitions found are listed together with their sizes. You select a partition with the up and down arrow keys and a command with the right and left arrow keys. Press (Q) to quit cfdisk.
### dd - copy (and convert) data blocks between devices (low level)
With dd you can read a file and transfer its contents, in a given block size and with various conversions, between different storage media (hard disk, floppy, etc.). Besides ordinary files, whole disk partitions can be copied this way, so a complete backup can be realized with this command. An example:
> # dd if=/dev/hda bs=512 count=1
This prints the boot sector to the screen (possible only as root). If you also want to save a backup of the boot sector on a floppy disk, proceed as follows:
> # dd if=/dev/hda of=/dev/fd0 bs=512 count=1
Before looking at further examples of the powerful dd tool, you should first familiarize yourself with its options; the example above used if, of, bs and count.
Option | Meaning |
| --- | --- |
if=file | (input file) Name of the input (source) file; without it, standard input is used. |
of=file | (output file) Name of the output (target) file; without it, standard output is used. |
ibs=size | (input block size) Block size of the input file. |
obs=size | (output block size) Block size of the output file. |
bs=size | (block size) Block size for both the input and the output file. |
cbs=size | (conversion block size) Block size used for the conversion. |
skip=blocks | Number of blocks of the input to skip at the beginning. |
seek=blocks | Number of blocks to skip at the beginning of the output; suppresses writing the given number of blocks at the start of the output. |
count=blocks | Number of blocks to copy. |
Option | Conversion |
| --- | --- |
conv=ascii | EBCDIC to ASCII |
conv=ebcdic | ASCII to EBCDIC |
conv=ibm | ASCII to "big blue special EBCDIC" |
conv=block | Lines are written into fields of size cbs; the line end is replaced by a space and the rest of the field is padded with spaces. |
conv=unblock | Trailing spaces of a block of size cbs are replaced by a line end. |
conv=lcase | Uppercase to lowercase |
conv=ucase | Lowercase to uppercase |
conv=swab | Swaps every two bytes of the input. If the number of bytes read is odd, the last byte is simply copied. |
conv=noerror | Read errors are ignored. |
conv=sync | Pads input blocks with zeros up to the size of ibs |
Now a few more interesting examples of dd:
> # dd if=/vmlinuz of=/dev/fd0
This copies the kernel (here "vmlinuz"; adjust as needed) raw onto the floppy disk, which can then be used as a boot disk.
> # dd if=/dev/hda of=/dev/hdc
Powerful: this effectively clones, in a single step, the first disk on the first IDE controller onto the disk attached as master to the second controller. /dev/hdc then holds the same contents as /dev/hda. Of course, the output can just as well be written somewhere else entirely, for instance to a DVD burner, a disk on the USB port, or a file.
### dd_rescue - fault-tolerant copying of file blocks
If you want to copy a defective hard disk, or a partition on it, dd quickly reaches its limits. In addition, speed matters when rescuing data from a failing medium, because the medium may develop further errors and corrupt even more files. A failed attempt with dd can therefore have fatal consequences.
This is where dd_rescue comes in, which is by now shipped with all common Linux distributions. Like dd, it copies file blocks at a low level to another medium. A file on a different storage medium makes a sensible target. From this image of the defective disk you make a copy, so that the original image stays untouched, and then try to repair the filesystem in one of the images with fsck. If that succeeds, you can copy the image back onto a new hard disk with dd_rescue.
An example:
> # dd_rescue -v /dev/hda1 /mnt/rescue/hda1.img
In this example, the partition /dev/hda1 is copied into the image file /mnt/rescue/hda1.img.
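The workflow described above (image, copy, repair, write back) can be summarized as a short sketch. The paths, the target partition /dev/hdc1 and the assumption of an ext2/ext3 filesystem are illustrative only:

```sh
dd_rescue -v /dev/hda1 /mnt/rescue/hda1.img    # pull an image off the failing partition
cp /mnt/rescue/hda1.img /mnt/rescue/hda1.work  # keep the original image untouched
e2fsck -f /mnt/rescue/hda1.work                # try to repair the working copy
dd_rescue -v /mnt/rescue/hda1.work /dev/hdc1   # write the repaired image to a new disk
```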
### dumpe2fs - display information about an ext2/ext3 filesystem
dumpe2fs prints a lot of internal information about the superblock and the block groups of an ext2/ext3 filesystem (provided, of course, that this filesystem type is actually in use), for example:
> # dumpe2fs -b /dev/hda6
With the option -b, all blocks of /dev/hda6 that are marked as "bad" are printed to the console.
### e2fsck - check and repair an ext2/ext3 filesystem
e2fsck checks an ext2/ext3 filesystem and repairs any errors. For e2fsck to be usable, fsck.ext2 must be installed, which is the actual program; e2fsck is only a front end for it.
> e2fsck device
With "device" you specify the partition whose filesystem is to be checked (which must, of course, be an ext2/ext3 filesystem). Files whose inodes are not recorded in any directory are placed by e2fsck in the lost+found directory, from where they can be recovered. e2fsck returns an exit code when checking, which you can query with echo $? (a small sketch follows after the table). The most important options are:
Option | Meaning |
| --- | --- |
-p | Repair all errors automatically without asking |
-c | Scan the filesystem for bad blocks |
-f | Force a check of the filesystem even if the kernel considers it clean (valid flag set) |
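In a script you would typically evaluate the exit code after an automatic repair. This is a minimal sketch: the device name is a placeholder, and the interpretation of the codes (0 = clean, 1 or 2 = errors corrected, 4 and above = problems remain) follows the usual fsck convention:

```sh
e2fsck -p /dev/sdb1
rc=$?
if [ "$rc" -ge 4 ]; then
    echo "e2fsck reported uncorrected errors (exit code $rc)" >&2
    exit 1
fi
echo "filesystem clean or repaired (exit code $rc)"
```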
### fdformat - format a floppy disk
Even though many computers now ship without a floppy drive, one is still needed now and then (e.g. for a rescue disk with a mini Linux). With the fdformat command you format a floppy disk; the format is derived from parameters stored by the kernel. Note, however, that the disk is only written with empty blocks, not with a filesystem. To create a filesystem, the commands mkfs, mke2fs or mkreiserfs are available on Linux systems.
> fdformat device
### fdisk - partition storage media
fdisk is the somewhat less comfortable alternative to cfdisk for dividing a disk into partitions, deleting them or changing them if necessary. Unlike cfdisk, you cannot navigate with the arrow keys and have to use single-key commands. On the other hand, fdisk has the advantage of being available almost everywhere, almost always.
Another advantage is that fdisk can also be driven non-interactively. You can use it, for example, to partition a whole batch of hard disks automatically, which is handy when an identical system has to be installed on a number of machines: you install on one machine, create an image with dd, write a small script, boot the other machines e.g. from "damnsmall-Linux" (a mini distribution for USB sticks, among other things) and run the script, which partitions with fdisk and installs the prototype image with dd. Afterwards, only the IP address and the hostname need to be adjusted, which can also be done from a script.
A convenient overview of all partitions on all disks is printed with the option -l:
> linux:/home/you # fdisk -l Platte /dev/hda: 30.0 GByte, 30005821440 Byte 16 Köpfe, 63 Sektoren/Spuren, 58140 Zylinder Einheiten = Zylinder von 1008 * 512 = 516096 Bytes Gerät Boot Start End Blocks Id System /dev/hda1 1 26313 13261626 7 HPFS/NTFS /dev/hda2 26314 58140 16040808 f W95 Ext'd (LBA) /dev/hda5 26314 27329 512032+ 82 Linux Swap /dev/hda6 27330 58140 15528712+ 83 Linux
To start partitioning, run fdisk with the device file:
> # fdisk /dev/hda
The most important keys for partitioning are:
Key | Meaning |
| --- | --- |
(b) | Edit the BSD disklabel |
(d) | Delete a partition |
(l) | List the known filesystem types (you need the number) |
(m) | Show a menu of all commands |
(n) | Create a new partition |
(p) | Show the partition table |
(q) | Quit without saving the changes |
(s) | Create a new, empty Sun disklabel |
(t) | Change the filesystem type of a partition |
(u) | Change the unit used for display/input |
(v) | Verify the partition table |
(w) | Write the table to disk and exit the program |
(x) | Extra functionality (experts only) |
Otherwise the same rule applies here: hands off if you are not sure what exactly you are doing, unless you are trying all of this out on a test system.
### fsck - check and repair filesystems
fsck is a filesystem-independent front end for checking and repairing the filesystem structure. Depending on the filesystem, fsck usually calls the appropriate program: for ext2/ext3 this is fsck.ext2, for a Minix system fsck.minix, for ReiserFS reiserfsck, and so on. fsck determines the filesystem type from the partition table or from a command-line option. Most of these programs support the options -a, -A, -l, -r, -s and -v. The combination -a -A is used most of the time: with -a you request an automatic repair where possible, and with -A you check all filesystems listed in /etc/fstab.
fsck returns an exit code when checking, which you can query with echo $?.
It is also very important to run fsck only on filesystems that are not mounted, or that are mounted read-only. Otherwise fsck may change (repair) a filesystem without the running system being aware of it, and a crash is then almost guaranteed. Normally fsck or reiserfsck is run automatically at boot time whenever a partition was not unmounted cleanly, or after every n-th boot. Anyone working with an ext2/ext3 filesystem knows this well enough: after a crash, fsck first checks the complete filesystem at startup (time to go and get a coffee).
### mkfs - create a filesystem
With mkfs you create a filesystem on a previously partitioned hard disk or floppy. Like fsck, mkfs is a filesystem-independent front end that does not build the filesystem itself but calls the specific program for the filesystem in question. Here, too, mkfs goes by the filesystems listed in the partition table, or, if given, by a command-line option.
Depending on the filesystem type, mkfs then calls mkfs.minix (for Minix), mke2fs (for ext2/ext3), mkreiserfs (for ReiserFS) and so on.
> mkfs [option] device [blocks]
For the "device" you must give the corresponding path (e.g. /dev/hda1). Optionally, you can also specify the number of blocks the filesystem should occupy. mkfs also returns an exit code describing how the command went, which you can evaluate with echo $? (see the sketch after the table).
Exit code | Meaning |
| --- | --- |
0 | Everything completed successfully |
8 | An error during program execution |
16 | An error in the command line |
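A minimal sketch of how a script might react to that exit code; the device /dev/sdb1 and the filesystem type are placeholders:

```sh
mkfs -t ext3 /dev/sdb1
rc=$?
if [ "$rc" -ne 0 ]; then
    echo "mkfs failed with exit code $rc" >&2
    exit 1
fi
echo "filesystem created"
```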
Also interesting is the option -t, with which you can choose the type of filesystem to create yourself. Without -t, an attempt is again made to determine the filesystem from the partition table. For example, with
> mkfs -t xiafs /dev/hda7
you create a xiafs filesystem (as an alternative to ext2) on the partition /dev/hda7.
### mkswap - set up a swap partition
With mkswap you set up a swap partition. It can be used, for example, to page out sleeping processes that are waiting for other processes to finish, freeing up RAM for other running processes. If you did not already set up the (usually) suggested swap partition during installation (this depends on the distribution), you can do so later with the mkswap command. To activate a swap partition you have to call the swapon command.
If your RAM fills up again, you can also set up such swap space at short notice. An example:
> # dd bs=1024 if=/dev/zero of=/tmp/myswap count=4096 4096+0 Datensätze ein 4096+0 Datensätze aus # mkswap -c /tmp/myswap 4096 Swapbereich Version 1 wird angelegt, Größe 4190 KBytes # sync # swapon /tmp/myswap
First you use dd to create an empty 4 megabyte swap file filled with null bytes. Then you prepare this area as a swap file with mkswap. After a call to sync you only have to activate the swap space with swapon. How this swap space is actually used is not up to you; the kernel controls it through paging.
Note: a file used as swap space should only be used when no partition is available for it, because this method is considerably slower than a swap partition.
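When the memory pressure is over, the temporary swap file from the example can be removed again; a short sketch using the same path as above:

```sh
swapoff /tmp/myswap   # stop using the temporary swap file
rm /tmp/myswap        # and delete it again
```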
### mount, umount - mount and unmount a filesystem
mount attaches individual filesystems on all kinds of media (hard disk, CD-ROM, floppy ...) to a single filesystem tree. The individual partitions appear as device files in the directory /dev. If you call mount without any arguments, all "mounted" filesystems from /etc/mtab are listed. Here, too, it is up to root whether a particular user may mount a given filesystem or not; all that is needed is a corresponding entry in /etc/fstab (an example entry is sketched at the end of this section).
Some examples of how different filesystems can be mounted:
Usage | Meaning |
| --- | --- |
mount /dev/fd0 | Mounts the floppy drive |
mount /dev/hda9 /home/you | Mounts the filesystem /dev/hda9 on the directory /home/you. |
mount goliath:/progs /home/progs | Mounts a filesystem via NFS from a host named "goliath" and attaches it to the local directory /home/progs |
If you want to unmount a filesystem again, whether a local or a remote partition, you do so with umount:
> umount /dev/fd0
This detaches the floppy drive from the filesystem tree.
Note: if there is an entry for a filesystem in /etc/fstab, it is enough to call mount with either the device or the mount point as its argument: "mount /dev/fd0".
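As a sketch of the /etc/fstab entry mentioned above that lets ordinary users mount a device themselves; the mount point /media/floppy and the option list are illustrative:

```sh
# /etc/fstab - example line: the "user" option allows non-root mounts
/dev/fd0   /media/floppy   auto   noauto,user   0   0
```

With such an entry, a normal user can simply type mount /media/floppy (or mount /dev/fd0).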
### parted - create, move, grow or shrink partitions
With parted you can not only create or delete partitions, as with fdisk or cfdisk, but also grow, shrink, copy and move them. parted is popular when you want to make room on a disk for a new operating system or copy all data from one disk to a new one. For more on this, please consult the parted manual page.
### prtvtoc - print partition tables
With prtvtoc you can print the partition table of a hard disk, much as you would with fdisk under Linux. This command is available on Solaris, for example.
### swapon, swapoff - activate or deactivate a swap file or partition
If you have set up a swap partition on the system (see mkswap), it exists but still has to be activated with the swapon command. You can deactivate the activated area again at any time on the running system with swapoff.
### sync - flush all buffered write operations
Normally Linux keeps a buffer (cache) in RAM that holds whole data blocks of a mass storage device. Data is therefore often managed temporarily in RAM first, because a process that wrote to disk continuously would hurt system performance badly; just imagine that with 100 processes. Usually a daemon takes over this work and decides when the modified data blocks are written to disk.
With the sync command you can force modified data to be written immediately to the hard disk (or any other mass storage device). This is often the last resort when the system can no longer be shut down properly: if you still manage to run a quick sync, all data is saved once more and data loss may be avoided entirely.
## 14.8 Archiving and backup
### bzip2/bunzip2 - compress and decompress files
bzip2 is not unlike the gzip command, except that bzip2 achieves a better compression ratio than gzip. bzip2 works with the Burrows-Wheeler block-sorting algorithm, which does not compress the text itself but makes it easier to compress, combined with Huffman coding. The compression with bzip2 is considerably better, but also considerably slower, in case speed matters to you. All files compressed with bzip2 automatically get the file extension ".bz2". TAR files compressed with bzip2 usually get the extension ".tbz". The "2" in bzip2 and bunzip2 exists because the predecessor was called bzip, which is no longer developed for patent-law reasons. You compress files with bzip2 like this:
> you@host > bzip2 file.txt
You decompress the compressed file again either with bzip2 and the option -d, or with bunzip2.
> you@host > bzip2 -d file.txt.bz2
or
> you@host > bunzip2 file.txt.bz2
You may just as well prefer bzip2 with the option -d, because bunzip2 is nothing more than a link to bzip2 that automatically applies the option -d:
> you@host > which bunzip2 /usr/bin/bunzip2 you@host > ls -l /usr/bin/bunzip2 lrwxrwxrwx 1 root root 5 /usr/bin/bunzip2 -> bzip2
Also interesting in this context is the bzcat command, which lets you read bzip2-compressed files without decompressing them first.
> you@host > bzcat file.txt.bz2 ...
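Two further bzip2 switches worth knowing are -k, which keeps the original file, and -1 to -9, which select the block size and thus the compression level. A short sketch with placeholder file names:

```sh
bzip2 -k file.txt       # compress, but keep file.txt next to file.txt.bz2
bzip2 -9k bigdump.sql   # strongest compression, original also kept
```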
### compress/uncompress - compress and decompress files
compress for compressing and uncompress for decompressing are hardly used any more today, but both should be mentioned here for compatibility reasons, in case you ever come across an ancient archive whose files end in ".Z". compress is based on an older version of the Lempel-Ziv algorithm.
### cpio, afio - archive files and directories
cpio (copy in and out) is excellent for archiving whole directory trees. What is a bit unusual at first glance is that cpio does not read the files to be archived from the command line but from standard input. cpio is therefore frequently used together with the ls or find commands and a pipe. This makes it possible to create an archive that preserves the owner, the permissions, the creation time, the file size and so on. cpio is usually invoked in one of three modes.
# copy out
This mode is used with the option -o (or --create). The path names of the files to copy are read from standard input and the archive is written to standard output; things such as the owner, the permissions, the file size and so on can be taken into account. copy out is normally used with a pipe and a redirection:
> you@host > cd Shellbuch you@host > ls Kap003.txt kap005.txt~ Kap008.txt Kap010.txt~ Kap013.txt Kap001.doc Kap003.txt~ Kap006.txt Kap008.txt~ Kap011.txt Kap013.txt~ Kap001.sxw kap004.txt Kap006.txt~ Kap009.txt Kap011.txt~ Kap014.txt Kap002.doc kap004.txt~ Kap007.txt Kap009.txt~ Kap012.txt Kap014.txt~ Kap002.sxw kap005.txt Kap007.txt~ Kap010.txt Kap012.txt~ Planung_und_Bewerbung chm_pdf you@host > ls *.txt | cpio -o > Shellbuch.cpio 1243 blocks
Here, for example, all text files in the directory Shellbuch were packed into a cpio archive (Shellbuch.cpio). However, only certain files could be captured this way. If you want to archive whole directory trees, use the find command:
> you@host > find $HOME/Shellbuch -print | cpio -o > Shellbuch.cpio cpio: ~/Shellbuch: truncating inode number cpio: ~/Shellbuch/Planung_und_Bewerbung: truncating inode number cpio: ~/Shellbuch/chm_pdf: truncating inode number 130806 blocks
Of course, you can use the usual find expressions here to archive only certain files. Files can just as well be archived onto another drive:
> you@host > ls -a | cpio -o > /dev/fd0
Here, for example, all files of the current directory were copied onto the floppy disk. You could just as well use a streamer, another hard disk or even another machine.
# copy in
If you want to unpack or restore an archive created with cpio -o, cpio is used with the switch -i (or --extract). cpio then reads the archived files from standard input. You can even use wildcard patterns here. The following command unpacks the archive again:
> you@host > cpio -i < Shellbuch.cpio
If you do not want to unpack the whole archive but only certain files, you can specify this with a pattern like this:
> you@host > cpio -i "*.txt" < Shellbuch.cpio
Here you unpack only the text files ending in ".txt". Of course, this also works with all kinds of storage media:
> you@host > cpio -i "*.txt" < /dev/fd0
Here all text files are unpacked from a floppy disk. Note, however, that the files are always restored into the current working directory, so you have to go to the target location beforehand, for example:
> you@host > cd $HOME/Shellbuch/testdir ; \ > cpio -i < /archive/Shellbuch.cpio
Here you first change into the directory where you want to unpack the archive. If you first want to know what you are about to unpack, i.e. which files are in a cpio archive, just call cpio with the option -t:
> you@host > cpio -t < Shellbuch.cpio ...
# copy pass
With copy pass the files are read from standard input and copied into a given directory without creating an archive. The option -p is used for this. The prerequisite, of course, is that the target directory exists; if not, you can additionally use the option -d, which then creates such a directory.
> you@host > ls *.txt | cpio -pd /archive/testdir2
This copies all text files from the current directory into the directory testdir2. The call is equivalent to:
> you@host > cp *.txt /archive/testdir2
You could therefore copy a complete directory tree of the current working directory with the following call:
> you@host > find . -print | cpio -pd /archiv/testdir3
# afio
Unlike cpio, afio offers the possibility of compressing the individual files. This makes afio an interesting tar alternative (for anyone who dislikes tar with its typical option parameters). afio compresses the individual files before they are combined into an archive. A simple example:
> you@host > ls *.txt | afio -o -Z Shellbook.afio
Here you first compress all text files in the current directory and then combine them into the archive Shellbook.afio. The space saved compared with cpio is considerable:
> you@host > ls -l Shellbook* -rw------- 1 tot users 209920 2005-04-24 14:59 Shellbook.afio -rw------- 1 tot users 640512 2005-04-24 15:01 Shellbook.cpio
You unpack the archive again, as with cpio, with the switch -i:
> you@host > afio -i -Z Shellbook.afio
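Since cpio itself does not compress, it is common to pipe its output through gzip. The following is only a minimal backup sketch; the source and target paths are placeholders:

```sh
#!/bin/sh
# Archive a directory tree with cpio and compress the result with gzip.
SRC=$HOME/Shellbuch
OUT=/archive/shellbuch_$(date +%Y%m%d).cpio.gz
find "$SRC" -print | cpio -o | gzip > "$OUT"
# List the archive contents without unpacking:
gunzip -c "$OUT" | cpio -t
```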
### crypt - encrypt files
The crypt command reads the text to be encrypted or decrypted from standard input and writes the encrypted or decrypted result to standard output.
> crypt 32asdf32 < file.txt > file.txt.crypt
Here, for example, the file file.txt is read in; the password "32asdf32" is used for encryption. The encrypted, no longer readable text is then stored as file.txt.crypt. The file file.txt should now be deleted.
When important data is involved, you should also make sure that it cannot be made visible again: first overwrite it with random data, then delete it. Naturally a job for dd and awk:
> you@host > dd if=/dev/urandom of=file.txt bs=1 \ > count=`wc -c file.txt | awk '{ print $1 }'`; rm file.txt
You can decrypt the file file.txt.crypt again with
> crypt 32asdf32 < file.txt.crypt > file.txt
If you have forgotten the password, you will have quite a problem cracking the encryption.
Note: please observe the legal situation regarding encryption in your own country; cryptography is by now forbidden in some countries. In France, for example, you may not use PGP. Incidentally, crypt does not really count as "strong" cryptography; there are tougher tools. That is not to say it would be easy to crack; I would not even know whether it can be cracked at all.
### dump/restore (ufsdump/ufsrestore) - back up and restore a complete filesystem
Note: since dump backs up a complete filesystem, you should make sure that the filesystem is in a stable state. If the filesystem is inconsistent when you run dump and you later restore that dump, you restore all the inconsistencies along with it. The safest way to dump is therefore single-user mode.
A very nice feature of dump are the dump levels, which allow simple incremental backups. A backup level between 0 and 9 is used: 0 stands for a full backup and all other numbers are incremental backups, in which only those files are saved that have changed since the most recent dump with a lower level. If, for example, you run a level 2 backup job with dump, all files are saved that have changed since the last level 0 or level 1 dump.
The syntax for dump/ufsdump:
> dump -[0-9][u]f archive directory
In practice you would write, for example:
> dump -0uf /dev/rmt/1 /
This performs a complete backup of the filesystem; all data is written to a tape, /dev/rmt/1. Fortunately you only have to do this full backup the first time (although you can of course always repeat it), which is time-consuming for large filesystems. If on the next run you only want to save the files that have changed since the last dump level, you just give the next level:
> dump -1uf /dev/rmt/1 /
This lets you set up proper dump strategies, e.g. a full-backup dump with level 0 once a week from a shell script, raising the dump level by 1 on each of the following days (see the sketch below).
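The weekly rotation just described could look roughly like the following sketch, run daily from cron. The tape device is taken from the example above; deriving the level from the weekday (Sunday = 0 = full backup) is an assumption for illustration:

```sh
#!/bin/sh
# Sunday: full backup (level 0); Monday..Saturday: rising incremental levels 1..6.
DEV=/dev/rmt/1
LEVEL=$(date +%w)       # day of week, 0 = Sunday
dump -${LEVEL}uf "$DEV" /
```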
For the dump levels to work properly, the file /etc/dumpdates is needed, in which the date of the last backup is stored. Usually you have to create this file before the first incremental dump with "touch /etc/dumpdates". For the dump call to record it, the option -u is used: it notes the current date in /etc/dumpdates. The -u option is therefore essential if incremental dumping is to work. In addition, the option -f filename was used to select the output; if it is missing, the environment variable TAPE and finally a compiled-in default are used. So a dump does not necessarily have to go to a tape (even if that is a common use); with -f you can also use a different archive file:
> dump -0f /pfadzumArchiv/archiv_usr.121105 /usr
Here, for example, the directory /usr is saved into an archive file.
To restore or read back data saved with dump/ufsdump, restore is used. The syntax of restore resembles that of its counterpart dump, but there are additional options for the mode of operation:
Option | Meaning |
| --- | --- |
i | Interactive restore (interactive) |
r | Rebuild a filesystem (rebuild) |
t | List the contents (table) |
x | Extract individual files given as additional arguments (extract); directories are unpacked recursively. |
For example, you can restore the /usr filesystem with restore as follows:
> # cd /usr # restore rf /pfadzumArchiv/archiv_usr.121105
Finally, some important points about dump and restore. Because these tools work at the filesystem level, the "portability" of the archives is rather unreliable. Keep in mind that under Linux you usually dump at the ext2/ext3 level, while under Solaris ufsdump works at the UFS level. You cannot simply take a dump made on a machine that uses ext2 as its filesystem and restore it on a machine that uses UFS. Sometimes this works, for instance between Linux (ext2) and FreeBSD (UFS), but no general statement can be made. So you should know your filesystems very well before experimenting with dump and restore!
Furthermore, dump needs a lot of RAM to build a complete index of the filesystem. With little main memory you should enlarge the swap space considerably.
### gzip/gunzip - compress and decompress files
gzip compresses files and appends the extension ".gz" to the file name; the original file is replaced by the compressed one. gzip is based on the deflate algorithm, a combination of LZ77 and Huffman coding. The timestamp of a file and its permissions are preserved both when compressing and when decompressing with gunzip.
Note: if you are a software developer (in C, for example) and want to use data compression, have a look at the zlib library, which supports the gzip file format.
You compress one or more files simply with:
> you@host > gzip file1.txt
Decompressing works either with gzip and the option -d
> you@host > gzip -d file1.txt.gz
or with gunzip:
> you@host > gunzip file1.txt.gz
gunzip is not a symbolic link here but a real command. Besides gzip files, gunzip can also decompress files that were compressed with zip (single-file archives), compress or pack.
If you do not want the original file to be touched when compressing or decompressing, you must use the option -c:
> you@host > gzip -c file1.txt > file1.txt.gz
You can use this with gunzip as well:
> you@host > gunzip -c file1.txt.gz > file1_neu.txt
This leaves the gzip-compressed file file1.txt.gz untouched and creates a new, decompressed file file1_neu.txt. The same can be done with zcat:
> you@host > zcat file1.txt.gz > file1_neu.txt
### mt - control a tape streamer
With mt, magnetic tapes can be wound forwards and backwards, positioned and erased. mt is frequently used in shell scripts together with tar, cpio or afio, because each of those commands can write to magnetic tape but cannot control the drive. In practice mt is used as follows:
> mt -f tape command [count]
With "tape" you give the path to your tape drive and with "count" how many times "command" should be executed. Here are the most common commands used with it; a short usage sketch follows after the table:
Command | Meaning |
| --- | --- |
eom | Wind the tape to the end of the last file. From here a backup can be appended with tar, cpio and afio (or even dump/ufsdump). |
fsf count | Wind the tape forward by "count" archives (file marks). Not the same as the last file (eom). |
nbsf count | Wind the tape back by "count" archives (file marks) |
rewind | Rewind the tape to the beginning |
status | Query and print status information from the tape drive (e.g. whether a tape is inserted or not) |
erase | Erase and initialize the tape |
retension | Wind the tape once to the end and back to the beginning in order to re-tension it |
offline | Rewind the tape to the beginning and eject it |
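In a backup script, mt and tar are typically combined along these lines. The non-rewinding tape device /dev/nst0 is an assumption (device names vary between systems); it is used here so that the tape position set by mt is kept:

```sh
TAPE=/dev/nst0
mt -f "$TAPE" eom          # wind to the end of the last archive on the tape
tar cf "$TAPE" /home/you   # append today's backup as a new archive
mt -f "$TAPE" rewind       # rewind when done
```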
### pack/unpack - compress and decompress files
These two programs are mentioned only for the sake of completeness; they are by now rather dated. A file compressed with pack has the extension ".z" and can be unpacked again with unpack (or gunzip).
### tar - archive files and directories
tar (tape archiver) was originally used to manage tape archives. Nowadays, however, tar is also applied to floppy disks and to ordinary files and directories. The command is used to create backups or archives and to read them back.
> tar function [options] [file(s)]
With "function" you specify how archives are to be created or read back. With "file(s)" you determine which files or file trees are written out or read back in; if a directory is given, the entire file tree it contains is used. Archives created with tar are normally not compressed, but here too the input and output can be piped through a compressor. Newer versions support compress as well as gzip and bzip2, which has become quite important by now; this is not limited to GNU tar.
tar (especially GNU tar) is huge in terms of the number of functions and options. Since tar is one of the most popular archiving tools, several of its functions and options are covered here.
Note: so as not to get drawn into a holy war among tar fanatics (they really exist), let me mention that there are many tar versions (GNU tar, star, bsdtar, and so on). For ordinary users GNU tar is usually sufficient, but there are also users who swear by star, which is the "overkill" in terms of feature scope. I will make it easy for myself here and point out that the functions that follow are not supported by every tar version (but they are by GNU tar).
Function | Meaning |
| --- | --- |
-A | Appends a complete archive to a second, existing archive (or appends it at the end of the tape) |
-c | Creates a new archive |
-d | Compares the files stored in the archive with the files given |
--delete file(s) | Deletes the given "file(s)" from the archive (not for magnetic tapes) |
-r | The named files are appended to the end of an already existing archive (not for magnetic tapes) |
-t | Shows the contents of an archive |
-u | The named files are only added to the archive if they are newer than the versions already archived or not yet contained in the archive at all (not for magnetic tapes). |
-x | Specific files are to be read from an archive; if no files are named, all files are extracted from the archive. |
Option | Meaning |
| --- | --- |
-f file | Uses "file" or the device connected with it as the archive; the file may also reside on another machine. |
-l | Does not cross filesystem boundaries while archiving |
-v | tar normally prints no special messages; with this option every action of tar is reported. |
-w | Puts tar into an interactive mode in which every action has to be confirmed |
-y | Compresses or decompresses the files of a tar operation with bzip2 |
-z | Compresses or decompresses the files of a tar operation with gzip or gunzip |
-Z | Compresses or decompresses the files of a tar operation with compress or uncompress |
Besides these additional options, tar offers many more switches; consult the tar man page if you need them. With the options listed here, however, you will get quite far, so here are some examples.
tar is most often used in the following basic form:
> tar cf archive_name directory
This creates an archive named archive_name from the directory tree directory. In practice:
> you@host > tar cf Shellbuch_mai05 Shellbuch
The complete directory tree Shellbuch is turned into the archive Shellbuch_mai05. To be more flexible when restoring the data, you should give a relative directory path:
> cd directory ; tar cf archive_name .
In the example:
> you@host > cd Shellbuch ; tar cf Shellbuch_mai05 .
For the relative directory the option -C directory is also available:
> tar cf archive_name -C directory .
If you want to restore the files and directories of the archive, the usual command is:
> tar xf archive_name
To come back to our example:
> you@host > tar xf Shellbuch_mai05
Restoring individual files from an archive is not much more effort either:
> tar xf Shellbuch_mai05 file1 file2 ...
Note, however, how you saved the file into the archive (relative or absolute path). If, for example, you want to restore the file Shellbook.cpio, which was stored with a relative path, you can do so as follows:
> you@host > tar xf Shellbuch_mai05 ./Shellbook.cpio
Note: a frequent mistake when something does not work with tar is a wrong path. Always consider where the archive is located, where it is to be restored, and how the files were added to the archive (absolute or relative path).
If you also want to follow what happens while an archive is created or unpacked, use the option v. This also lets you see right away whether you saved the files with an absolute or a relative path.
> # create a new archive, reporting every action: tar cvf archive_name directory # restore an archive, reporting every action: tar xvf archive_name
You can inspect the contents of an archive with the option t:
> tar tf archive_name
In our example:
> you@host > tar tf Shellbuch_mai05 ./ ./Planung_und_Bewerbung/ ./Planung_und_Bewerbung/shellprogrammierung.doc ./Planung_und_Bewerbung/shellprogrammierung.sxw ./kap004.txt ./kap005.txt ./testdir2/ ...
So here the relative path name was used. If you want to back up a whole directory onto a floppy disk, do it as follows:
> tar cf /dev/fd0 Shellbuch
This copies the whole directory onto a floppy disk. You can restore it again like this:
> tar xvf /dev/fd0 Shellbuch
If you want to archive a complete directory tree with compression (e.g. with gzip), proceed as follows:
> you@host > tar czvf Shellbuch_mai05.tgz Shellbuch
Thanks to the option z, the whole archive is now also compressed. You can still inspect the compressed archive with the option t:
> you@host > tar tzf Shellbuch_mai05.tgz Shellbuch Shellbuch/ Shellbuch/Planung_und_Bewerbung/ Shellbuch/Planung_und_Bewerbung/shellprogrammierung.doc Shellbuch/Planung_und_Bewerbung/shellprogrammierung.sxw Shellbuch/kap004.txt Shellbuch/kap005.txt Shellbuch/testdir2/ ...
Here, then, the files were stored with the directory name Shellbuch/ as prefix rather than ./. You unpack and restore the compressed archive again with (including messages):
> you@host > tar xzvf Shellbuch_mai05.tgz Shellbuch
If you only want to extract files ending in ".txt" from the archive, you can do it like this:
> you@host > tar xzf Shellbuch_mai05.tgz '*.txt'
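To round off the tar examples, here is a minimal sketch of a date-stamped, gzip-compressed backup of a home directory using the options shown above; the target path /backup is an assumption:

```sh
#!/bin/sh
# Create /backup/home_YYYYMMDD.tgz from the contents of $HOME,
# stored with relative paths so it can be unpacked anywhere.
OUT=/backup/home_$(date +%Y%m%d).tgz
tar czf "$OUT" -C "$HOME" .
tar tzf "$OUT" | head    # quick look at the first entries
```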
### zip/unzip - compress and decompress files
With zip you can compress and archive anything from single files up to whole directories. zip and unzip are particularly popular because they are fully compatible with the Windows and DOS versions. So if you have ever been annoyed that a mail attachment once again contains something in ZIP format, you can turn to unzip here.
You create a ZIP archive from several files like this:
> you@host > zip files.zip file1.txt file2.txt file3.txt adding: file1.txt (deflated 56 %) adding: file2.txt (deflated 46 %) adding: file3.txt (deflated 24 %)
Here you pack and compress the files into an archive named files.zip. Adding a new file to the archive could not be easier:
> you@host > zip files.zip hallo.c adding: hallo.c (deflated 3 %)
If you want to unpack all files of the archive into the current working directory, do it like this:
> you@host > unzip files.zip Archive: files.zip inflating: file1.txt inflating: file2.txt inflating: file3.txt inflating: hallo.c
If you want to pack and compress a whole directory hierarchy, you must use the option -r (recursive):
> you@host > zip -r Shellbuch.zip $HOME/Shellbuch ...
You can unpack the archive again as usual with unzip.
### Overview of file extensions and the packing programs
The following table gives a short overview of the file extensions and the corresponding compression and decompression programs; a small helper script based on it follows after the table.
Extension | packed with | unpacked with |
| --- | --- | --- |
*.bz and *.bz2 | bzip2 | bzip2 |
*.gz | gzip | gzip, gunzip or zcat |
*.zip | Info-Zip, PKZip, zip | Info-Unzip, PKUnzip, unzip, gunzip (single file) |
*.tar | tar | tar |
*.tbz | tar and bzip2 | tar and bzip2 |
*.tgz; *.tar.gz | tar and gzip | tar and g(un)zip |
*.Z | compress | uncompress; gunzip |
*.tar.Z | tar and compress | tar and uncompress |
*.pak | pack | unpack, gunzip |
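As a small practical complement to the table, here is a sketch of an unpack helper that picks the right command from the file extension. It covers only the most common cases, assumes the tools from the table are installed, and deliberately pipes .tbz archives through bunzip2 because the bzip2 switch differs between tar versions:

```sh
#!/bin/sh
# unpack.sh - extract or decompress a file based on its extension
case "$1" in
    *.tar.gz|*.tgz)  tar xzf "$1" ;;
    *.tbz)           bunzip2 -c "$1" | tar xf - ;;
    *.tar.Z)         uncompress -c "$1" | tar xf - ;;
    *.tar)           tar xf "$1" ;;
    *.gz)            gunzip "$1" ;;
    *.bz2)           bunzip2 "$1" ;;
    *.zip)           unzip "$1" ;;
    *.Z)             uncompress "$1" ;;
    *) echo "unknown extension: $1" >&2; exit 1 ;;
esac
```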
## 14.8 Archivierung und BackupÂ
### bzip2/bunzip2 â (De-)Komprimieren von DateienÂ
bzip2 ist dem Kommando gzip nicht unähnlich, nur dass bzip2 einen besseren Komprimierungsgrad als gzip erreicht. bzip arbeitet mit dem Burrows-Wheeler Block-Sorting-Algorithmus, der den Text zwar nicht komprimiert, aber leichter komprimierbar macht, sowie mit der Huffman-Kodierung. Allerdings ist die Kompression mit bzip2 erheblich besser, aber dafür auch erheblich langsamer, sofern die Geschwindigkeit eine Rolle spielen sollte. Alle Dateien, die mit bzip2 komprimiert werden, bekommen automatisch die Dateiendung ».bz2«. TAR-Dateien, die mit bzip2 komprimiert werden, erhalten üblicherweise die Endung ».tbz«. Die »2« hinter bzip2 bzw. bunzip2 kam dadurch, dass der Vorgänger bzip lautete, der allerdings aus patentrechtlichen Gründen nicht mehr weiterentwickelt wurde. Komprimieren können Sie Dateien mit bzip2 folgendermaßen:
> you@host > bzip2 file.txt
Entpacken können Sie die komprimierte Datei wieder mit der Option âd und bzip2 oder mit bunzip2.
> you@host > bzip2 -d file.txt.bz2
oder
> you@host > bunzip2 file.txt.bz2
Sie können gern bzip2 mit der Option âd bevorzugen, weil bunzip2 wieder nichts anderes ist als ein Link auf bzip2, wobei allerdings automatisch die Option âd verwendet wird:
> you@host > which bunzip2 /usr/bin/bunzip2 you@host > ls -l /usr/bin/bunzip2 lrwxrwxrwx 1 root root 5 /usr/bin/bunzip2 -> bzip2
Interessant in diesem Zusammenhang ist auch das Kommando bzcat, womit Sie bzip2-komprimierte Dateien lesen können, ohne diese vorher zu dekomprimieren.
> you@host > bzcat file.txt.bz2 ...
### compress/uncompress â (De-)Komprimieren von DateienÂ
compress zum Komprimieren und uncompress zum Dekomprimieren wird heute zwar kaum noch verwendet, aber man sollte beide aus Kompatibilitätsgründen hier doch erwähnen, falls Sie einmal auf ein Uralt-Archiv stoßen, bei dem die Dateien mit einem ».Z« enden. compress beruht auf einer älteren Version des Lempel-Ziv-Algorithmus.
### cpio, afio â Dateien und Verzeichnisse archivierenÂ
cpio (copy in and out) eignet sich hervorragend, um ganze Verzeichnisbäume zu archivieren. Etwas ungewöhnlich auf de ersten Blick ist, dass cpio die zu archivierenden Dateien nicht von der Kommandozeile, sondern von der Standardeingabe liest. Häufig wird daher cpio in Verbindung mit den Kommandos ls oder find und einer Pipe verwendet. Damit ist es möglich, ein spezielles Archiv zu erzeugen, das den Eigentümer, die Zugriffsrechte, die Erzeugungszeit, die Dateigröße usw. berücksichtigt. Gewöhnlich wird cpio in drei verschiedenen Arten aufgerufen.
# copy out
Diese Art wird mit der Option âo bzw. ââcreate verwendet. So werden die Pfadnamen der zu kopierenden Dateien von der Standardeingabe eingelesen und auf die Standardausgabe kopiert. Dabei können Dinge, wie etwa der Eigentümer, die Zugriffsrechte, die Dateigröße etc., berücksichtigt bzw. ausgegeben werden. Gewöhnlich verwendet man copy out mit einer Pipe und einer Umlenkung:
> you@host > cd Shellbuch you@host > ls Kap003.txt kap005.txt~ Kap008.txt Kap010.txt~ Kap013.txt Kap001.doc Kap003.txt~ Kap006.txt Kap008.txt~ Kap011.txt Kap013.txt~ Kap001.sxw kap004.txt Kap006.txt~ Kap009.txt Kap011.txt~ Kap014.txt Kap002.doc kap004.txt~ Kap007.txt Kap009.txt~ Kap012.txt Kap014.txt~ Kap002.sxw kap005.txt Kap007.txt~ Kap010.txt Kap012.txt~ Planung_und_Bewerbung chm_pdf you@host > ls *.txt | cpio -o > Shellbuch.cpio 1243 blocks
Hier wurden z. B. alle Textdateien im Verzeichnis Shellbuch zu einem cpio-Archiv (Shellbuch.cpio) gepackt. Allerdings konnten hier nur bestimmte Dateien erfasst werden. Wollen Sie ganze Verzeichnisbäume archivieren, dann verwenden Sie das Kommando find:
> you@host > find $HOME/Shellbuch -print | cpio -o > Shellbuch.cpio cpio: ~/Shellbuch: truncating inode number cpio: ~/Shellbuch/Planung_und_Bewerbung: truncating inode number cpio: ~/Shellbuch/chm_pdf: truncating inode number 130806 blocks
Natürlich können Sie hierbei mit find-üblichen Anweisungen nur bestimmte Dateien archivieren. Ebenso lassen sich Dateien auch auf ein anderes Laufwerk archivieren:
> you@host > ls -a | cpio -o > /dev/fd0
Hier wurden beispielsweise alle Dateien des aktuellen Verzeichnisses auf die Diskette kopiert. Dabei können Sie genauso gut einen Streamer, eine andere Festplatte oder gar einen anderen Rechner verwenden.
# copy in
Wollen Sie das mit cpio âo erzeugte Archiv wieder entpacken bzw. zurückspielen, so wird cpio mit dem Schalter âi bzw. ââextract verwendet. Damit liest cpio die archivierten Dateien von der Standardeingabe ein. Es ist sogar möglich, hierbei reguläre Ausdrücke zu verwenden. Mit folgender Befehlsausführung entpacken Sie das Archiv wieder:
> you@host > cpio -i < Shellbuch.cpio
Wollen Sie nicht das komplette Archiv entpacken, sondern nur bestimmte Dateien, können Sie dies mit einem regulären Ausdruck wie folgt angeben:
> you@host > cpio -i "*.txt" < Shellbuch.cpio
Hier entpacken Sie nur Textdateien mit der Endung ».txt«. Natürlich funktioniert das Ganze auch mit den verschiedensten Speichermedien:
> you@host > cpio -i "*.txt" < /dev/fd0
Hier werden alle Textdateien von einer Diskette entpackt. Allerdings werden die Dateien immer ins aktuelle Arbeitsverzeichnis zurückgespielt, sodass Sie den Zielort schon zuvor angeben müssen, zum Beispiel:
> you@host > cd $HOME/Shellbuch/testdir ; \ > cpio -i < /archive/Shellbuch.cpio
Hier wechseln Sie zunächst in entsprechendes Verzeichnis, wo Sie anschließend das Archiv entpacken wollen. Wollen Sie außerdem erst wissen, was Sie entpacken, also was sich in einem cpio-Archiv für Dateien befinden, müssen Sie cpio nur mit der Option ât verwenden:
> you@host > cpio -t < Shellbuch.cpio ...
# copy pass
Mit copy pass werden die Dateien von der Standardeingabe gelesen und in ein entsprechendes Verzeichnis kopiert, ohne dass ein Archiv erzeugt wird. Hierzu wird die Option âp eingesetzt. Voraussetzung ist natürlich, dass ein entsprechendes Verzeichnis existiert. Wenn nicht, können Sie zusätzlich die Option âd verwenden, womit dann ein solches Verzeichnis erzeugt wird.
> you@host > ls *.txt | cpio -pd /archive/testdir2
Hiermit werden aus dem aktuellen Verzeichnis alle Textdateien in das Verzeichnis testdir2 kopiert. Der Aufruf entspricht demselben wie:
> you@host > cp *.txt /archive/testdir2
Einen ganzen Verzeichnisbaum des aktuellen Arbeitsverzeichnisses könnten Sie somit mit folgendem Aufruf kopieren:
> you@host > find . -print | cpio -pd /archiv/testdir3
# afio
afio bietet im Gegensatz zu cpio die Möglichkeit, die einzelnen Dateien zu komprimieren. Somit stellt afio eine interessante tar-Alternative dar (für jeden, der tar mit seinen typischen Optionsparametern nicht mag). afio komprimiert die einzelnen Dateien noch, bevor Sie in ein Archiv zusammengefasst werden. Ein einfaches Beispiel:
> you@host > ls *.txt | afio -o -Z Shellbook.afio
Sie komprimieren zunächst alle Textdateien im aktuellen Verzeichnis und fassen dies dann in das Archiv Shellbook.afio zusammen. Die Platzeinsparung im Vergleich zu cpio ist beachtlich:
> you@host > ls -l Shellbook* -rw------- 1 tot users 209920 2005â04â24 14:59 Shellbook.afio -rw------- 1 tot users 640512 2005â04â24 15:01 Shellbook.cpio
Entpacken können Sie das Archiv wieder wie bei cpio mit dem Schalter âi:
> you@host > afio -i -Z Shellbook.afio
### crypt â Dateien verschlüsselnÂ
Mit dem Kommando crypt wird ein ver-/entschlüsselnder Text von der Standardeingabe gelesen, um diesen wieder ver-/entschlüsselnd auf die Standardausgabe auszugeben.
> crypt 32asdf32 < file.txt > file.txt.crypt
Hier wird beispielsweise die Datei file.txt eingelesen. Zum Verschlüsseln wurde das Passwort »32asdf32« verwendet. Der verschlüsselte, nicht mehr lesbare Text wird nun unter file.txt.crypt gespeichert. Jetzt sollte die Datei file.txt gelöscht werden.
Wenn es um wichtige Daten geht, sollte man auch dafür sorgen, dass diese nicht wieder sichtbar gemacht werden können. Zuerst mit Zufallsdaten überschreiben und dann löschen â natürlich ein Fall für awk:
> you@host > dd if=/dev/urandom of=file.txt bs=1 \ > count=`wc -c file.txt | awk '{ print $1 }'`; rm file.txt
Die Datei file.txt.crypt können Sie mit
> crypt 32asdf32 < file.txt.crypt > file.txt
wieder entschlüsseln. Wenn Sie das Passwort vergessen haben, dann werden Sie wohl ein großes Problem haben, die Verschlüsselung zu knacken.
### dump/restore bzw. ufsdump/ufsrestore â Vollsicherung bzw. Wiederherstellen eines DateisystemsÂ
Eine feine Sache sind auch die Dump-Levels, welche einfache inkrementelle Backups erlauben. Dazu verwendet man ein Backup-Level zwischen 0 bis 9. 0 steht hierbei für ein Full-Backup und alle anderen Zahlen sind inkrementelle Backups. Dabei werden immer nur die Dateien gesichert, die sich seit dem letzten Backup verändert haben â sprich dessen Level kleiner oder gleich dem aktuellen war. Geben Sie bspw. einen Level-2-Backup-Auftrag mit dump an, so werden alle Dateien gesichert, die sich seit dem Level 0, 1, 2 und 3 verändert haben.
Die Syntax zu dump/usfdump:
> dump -[0â9][u]f Archiv Verzeichnis
In der Praxis schreiben Sie bspw. Folgendes:
> dump â0uf /dev/rmt/1 /
Damit führen Sie ein komplettes Backup des Dateisystems durch. Alle Daten werden auf ein Band /dev/rmt/1 gesichert. Dieses Full-Backup müssen Sie zum Glück nur beim ersten Mal vornehmen (können es aber selbstverständlich immer durchführen), was gerade bei umfangreichen Dateisystemen Zeit raubend ist. Wollen Sie beim nächsten Mal jetzt immer die Dateien sichern, die sich seit dem letzten Dump-Level verändert haben, so müssen Sie nur den nächsten Level angeben:
> dump â1uf /dev/rmt/1 /
Hierbei lassen sich richtige dump-Strategien einrichten, z. B. einmal in der Woche mit einem Shellscript einen Full-Backup-dump mit Level 0. In den darauf folgenden Tagen können Sie bspw. den Dump-Level jeweils um den Wert 1 erhöhen.
Damit die Dump-Levels auch tadellos funktionieren, wird die Datei /etc/dumpdates benötigt, worin das Datum der letzten Sicherung gespeichert wird. Gewöhnlich müssen Sie diese Datei vor dem ersten inkrementellen Dump mittels »touch /etc/dumdates« anlegen. Damit dies auch mit dem dump-Aufruf klappt, wird die Option âu verwendet. Mit dieser Option wird das aktuelle Datum in der Datei /etc/dumpdates vermerkt. Die Option âu ist also sehr wichtig, wenn ein inkrementelles »Dumpen« funktionieren soll. Außerdem wurde die Option âf dateiname für die Standardausgabe verwendet. Fehlt diese Option, dann wird auf die Umgebungsvariable TAPE und letztlich auf einen einkompilierten »Standard« zurückgegriffen. Zwangsläufig müssen Sie also einen dump nicht auf ein Tape vornehmen (auch wenn dies ein häufiger Einsatz ist), sondern können auch mit der Angabe von âf eine andere Archiv-Datei nutzen:
> dump â0f /pfadzumArchiv/archiv_usr.121105 /usr
Hier wird bspw. das Verzeichnis /usr in einer Archiv-Datei gesichert.
Zum Wiederherstellen bzw. Zurücklesen von mit dump/ufsdump gesicherten Daten wird restore verwendet. Die Syntax von restore ähnelt der des Gegenstücks dump â allerdings findet man hier weitere Optionen für den Arbeitsmodus:
Option | Bedeutung |
| --- | --- |
i | Interaktives Rückspielen (interactive) |
r | Wiederherstellen eines Dateisystems (rebuild) |
t | Auflisten des Inhalts (table) |
x | Extrahieren einzelner, als zusätzliche Argumente aufgeführter Dateien (extract). Verzeichnisse werden dabei rekursiv ausgepackt. |
So können Sie zum Beispiel das /usr-Dateisystem folgendermaßen mit restore wieder zurücksichern:
> # cd /usr # restore rf /pfadzumArchiv/archiv_usr.121105
Wichtiges zu dump und restore noch zum Schluss. Da diese Werkzeuge auf einer Dateisystemebene arbeiten, ist die »Portabilität« der Archive recht unzuverlässig. Bedenken Sie, dass Sie unter Linux häufig einen dump auf ext2/ext3-Ebene durchführen und unter Solaris mit ufsdump eben auf UFS-Ebene. Sie können also nicht mal eben einen dump auf einem anderen Rechner vornehmen, bei dem bspw. als Dateisystem ext2 verwendet wurde, und diesen auf einen Rechner mit UFS als Dateisystem einspielen. Zwar klappt dies manchmal, wie bspw. zwischen Linux (ext2) und FreeBSD (UFS), doch leider kann man hierbei keine generelle Aussage treffen. Also sollten Sie sich mit den Dateisystemen schon sehr gut auskennen, um hier mit dump und restore zu experimentieren!
Des Weiteren benötigt dump zum Erstellen eines kompletten Indexes des Dateisystems eine Menge Arbeitsspeicher. Bei einem kleinen Hauptspeicher sollten Sie den Swap-Bereich erheblich vergrößern.
### gzip/gunzip – (De-)Komprimieren von Dateien
gzip komprimiert Dateien und fügt am Ende des Dateinamens die Endung ».gz« an. Die Originaldatei wird hierbei durch die komprimierte Datei ersetzt. gzip basiert auf dem deflate-Algorithmus, der eine Kombination aus LZ77 und Huffman-Kodierung ist. Der Zeitstempel einer Datei und auch die Zugriffsrechte bleiben beim Komprimieren und auch beim Entpacken mit gunzip erhalten.
Komprimieren können Sie eine oder mehrere Datei(en) ganz einfach mit:
> you@host > gzip file1.txt
Und Dekomprimieren geht entweder mit gzip und der Option -d
> you@host > gzip -d file1.txt.gz
oder mit gunzip:
> you@host > gunzip file1.txt.gz
Wobei gunzip hier kein symbolischer Link ist, sondern ein echtes Kommando. gunzip kann neben gzip-Dateien auch Dateien dekomprimieren, die mit zip (eine Datei), compress oder pack komprimiert wurden.
Wollen Sie, dass beim (De-)Komprimieren nicht die Originaldatei berührt wird, so müssen Sie die Option -c verwenden:
> you@host > gzip -c file1.txt > file1.txt.gz
Dies können Sie ebenso mit gunzip anwenden:
> you@host > gunzip -c file1.txt.gz > file1_neu.txt
Damit lassen Sie die gzip-komprimierte Datei file1.txt.gz unberührt und erzeugen eine neue dekomprimierte Datei file1_neu.txt. Selbiges können Sie auch mit zcat erledigen:
> you@host > zcat file1.txt.gz > file1_neu.txt
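In Scripts wird gzip gern mit find kombiniert, etwa um ältere Protokolldateien platzsparend abzulegen. Eine kleine Skizze (Verzeichnis und Alter sind frei gewählt; die Originaldateien werden dabei, wie oben beschrieben, durch die .gz-Versionen ersetzt):
> #!/bin/sh
> # logs_packen.sh -- Skizze: .log-Dateien, die aelter als 7 Tage sind, komprimieren
> LOGDIR=${1:-/var/log/myapp}    # Platzhalter-Verzeichnis
> find "$LOGDIR" -type f -name '*.log' -mtime +7 -exec gzip -9 {} \;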
### mt – Streamer steuern
Mit mt können Magnetbänder vor- oder zurückgespult, positioniert und gelöscht werden. mt wird häufig in Shellscripts in Verbindung mit tar, cpio oder afio verwendet, weil jedes dieser Kommandos zwar auf Magnetbänder schreiben, diese aber nicht steuern kann. So wird mt wie folgt in der Praxis verwendet:
> mt -f tape befehl [nummer]
Mit »tape« geben Sie den Pfad zu Ihrem Bandlaufwerk an und mit »nummer«, wie oft »befehl« ausgeführt werden soll. Hierzu die gängigsten Befehle, die dabei eingesetzt werden:
Befehl | Bedeutung |
| --- | --- |
eom | Band bis zum Ende der letzten Datei spulen. Ab hier kann mit tar, cpio und afio (oder gar dump/ufsdump) ein Backup aufgespielt werden. |
fsf Anzahl | Das Band um »Anzahl« Archive (Dateiendemarken) vorspulen. Nicht gleichzusetzen mit der letzten Datei (eom). |
nbsf Anzahl | Das Band um »Anzahl« Archive (Dateiendemarken) zurückspulen |
rewind | Das Band an den Anfang zurückspulen |
status | Statusinformationen vom Magnetlaufwerk ermitteln und ausgeben (bspw. ist ein Band eingelegt oder nicht) |
erase | Das Band löschen und initialisieren |
retension | Das Band einmal ans Ende spulen und wieder zum Anfang zurück, um es neu zu »spannen« |
offline | Band zum Anfang zurückspulen und auswerfen |
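Wie erwähnt, wird mt meist zusammen mit tar & Co. eingesetzt. Die folgende Skizze spult das Band ans Ende der letzten Datei und hängt dort ein weiteres Archiv an; Bandgerät und Verzeichnis sind Platzhalter (unter Linux verwendet man dafür typischerweise ein non-rewinding Device wie /dev/nst0):
> #!/bin/sh
> # band_anhaengen.sh -- Skizze: weiteres tar-Archiv ans Bandende anhaengen
> TAPE=/dev/nst0                      # Platzhalter: non-rewinding Bandgeraet
> DIR=${1:-/home/you/Shellbuch}       # zu sicherndes Verzeichnis
> mt -f "$TAPE" rewind                # zur Sicherheit an den Anfang spulen
> mt -f "$TAPE" eom                   # ans Ende der letzten Datei spulen
> tar cf "$TAPE" "$DIR"               # Archiv anhaengen
> mt -f "$TAPE" offline               # Band zurueckspulen und auswerfen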
### pack/unpack – (De-)Komprimieren von Dateien
Diese beiden Programme werden nur noch der Vollständigkeit halber erwähnt und stammen aus älteren Zeiten. Eine mit pack komprimierte Datei hat die Endung ».z« und kann mit unpack (oder gunzip) wieder entpackt werden.
### tar – Dateien und Verzeichnisse archivieren
tar (tape archiver) wurde ursprünglich zur Verwaltung von Bandarchiven verwendet. Mittlerweile aber wird tar auch auf Disketten oder normale Dateien oder Verzeichnisse angewendet. Das Kommando wird zur Erstellung von Sicherungen bzw. Archiven sowie zu ihrem Zurückladen genutzt.
> tar funktion [optionen] [datei(en)]
Mit »funktion« geben Sie an, wie die Erstellung bzw. das Zurückladen von Archiven erfolgen soll. Mit »datei(en)« bestimmen Sie, welche Dateien oder Dateibäume herausgeschrieben oder wieder eingelesen werden sollen. Gibt man hierbei ein Verzeichnis an, so wird der gesamte darin enthaltene Dateibaum verwendet. Gewöhnlich werden Archive mittels tar nicht komprimiert, aber auch hier kann man die Ein- und Ausgabe von tar durch einen Kompressor leiten. Neuere Versionen unterstützen sowohl compress als auch gzip und bzip2, das inzwischen einen recht hohen Stellenwert hat. Das gilt nicht nur für GNU-tar.
tar (besonders GNU-tar) ist gewaltig, was die Anzahl von Funktionen und Optionen betrifft. Da tar ein sehr beliebtes Archivierungswerkzeug ist, sollen hier auch mehrere Optionen und Funktionen erwähnt werden.
Option | Bedeutung |
| --- | --- |
-A | Hängt ein komplettes Archiv an ein zweites vorhandenes Archiv an (oder fügt es auf dem Band hinten an) |
-c | Erzeugt ein neues Archiv |
-d | Vergleicht die im Archiv abgelegten Dateien mit den angegebenen Dateien |
--delete Datei(en) | Löscht die angegebenen »Datei(en)« aus dem Archiv (nicht für Magnetbänder) |
-r | Erwähnte Dateien werden ans Ende von einem bereits existierenden Archiv angehängt (nicht für Magnetbänder) |
-t | Zeigt den Inhalt eines Archivs an |
-u | Benannte Dateien werden nur dann ins Archiv aufgenommen, wenn diese neuer als die bereits archivierten Versionen oder noch überhaupt nicht im Archiv vorhanden sind (nicht für Magnetbänder). |
-x | Bestimmte Dateien sollen aus einem Archiv gelesen werden; werden keine Dateien erwähnt, werden alle Dateien aus dem Archiv extrahiert. |
Option | Bedeutung |
| --- | --- |
-f Datei | Benutzt »Datei« oder das damit verbundene Gerät als Archiv; die Datei darf auch Teil von einem anderen Rechner sein. |
-l | Geht beim Archivieren nicht über die Dateisystemgrenze hinaus |
-v | tar gibt gewöhnlich keine speziellen Meldungen aus; mit dieser Option wird jede Aktion von tar gemeldet. |
-w | Schaltet tar in einen interaktiven Modus, in dem zu jeder Aktion eine Bestätigung erfolgen muss |
-y | Komprimiert oder dekomprimiert die Dateien bei einer tar-Operation mit bzip2 |
-z | Komprimiert oder dekomprimiert die Dateien bei einer tar-Operation mit gzip bzw. gunzip |
-Z | Komprimiert oder dekomprimiert die Dateien bei einer tar-Operation mit compress bzw. uncompress |
Neben diesen zusätzlichen Optionen bietet tar noch eine Menge Schalter mehr an. Weitere Informationen entnehmen Sie bei Bedarf der man-Seite zu tar. Allerdings dürften Sie mit den hier genannten Optionen recht weit kommen. Daher hierzu einige Beispiele.
Am häufigsten wird tar wohl in folgender Grundform verwendet:
> tar cf Archiv_Name Verzeichnis
Damit wird aus dem Verzeichniszweig Verzeichnis ein Archiv mit dem Namen Archiv_Name erstellt. In der Praxis:
> you@host > tar cf Shellbuch_mai05 Shellbuch
Aus dem kompletten Verzeichniszweig Shellbuch wird das Archiv Shellbuch_mai05 erstellt. Damit Sie beim Zurückspielen der Daten flexibler sind, sollten Sie einen relativen Verzeichnispfad angeben:
> cd Verzeichnis ; tar cf Archiv_Name .
Im Beispiel:
> you@host > cd Shellbuch ; tar cf Shellbuch_mai05 .
Hierfür steht Ihnen für das relative Verzeichnis auch die Option -C Verzeichnis zur Verfügung:
> tar cf Archiv_Name -C Verzeichnis .
Wollen Sie die Dateien und Verzeichnisse des Archivs wiederherstellen, lautet der gängige Befehl hierzu wie folgt:
> tar xf Archiv_Name
Um auf unser Beispiel zurückzukommen:
> you@host > tar xf Shellbuch_mai05
Wollen Sie einzelne Dateien aus einem Archiv wiederherstellen, so ist dies auch kein allzu großer Aufwand:
> tar xf Shellbuch_mai05 datei1 datei2 ...
Beachten Sie allerdings, wie Sie die Datei ins Archiv gesichert haben (relativer oder absoluter Pfad). Wollen Sie bspw. die Datei Shellbook.cpio mit dem relativen Pfad wiederherstellen, so können Sie dies wie folgt tun:
> you@host > tar xf Shellbuch_mai05 ./Shellbook.cpio
Wollen Sie außerdem mitverfolgen, was beim Erstellen oder Auspacken eines Archivs alles passiert, sollten Sie die Option v verwenden. Hierbei können Sie außerdem gleich erkennen, ob Sie die Dateien mit dem absoluten oder relativen Pfad gesichert haben.
> # neues Archiv erzeugen mit Ausgabe aller Aktionen tar cvf Archiv_Name Verzeichnis # Archiv zurückspielen, mit Ausgabe aller Aktionen tar xvf Archiv_Name
Den Inhalt eines Archivs können Sie mit der Option t ansehen:
> tar tf Archiv_Name
In unserem Beispiel:
> you@host > tar tf Shellbuch_mai05 ./ ./Planung_und_Bewerbung/ ./Planung_und_Bewerbung/shellprogrammierung.doc ./Planung_und_Bewerbung/shellprogrammierung.sxw ./kap004.txt ./kap005.txt ./testdir2/ ...
Hier wurde also der relative Pfadname verwendet. Wollen Sie ein ganzes Verzeichnis auf Diskette sichern, erledigen Sie dies folgendermaßen:
> tar cf /dev/fd0 Shellbuch
Hier kopieren Sie das ganze Verzeichnis auf eine Diskette. Zurückspielen können Sie das Ganze wieder wie folgt:
> tar xvf /dev/fd0 Shellbuch
Wollen Sie einen kompletten Verzeichnisbaum mit Kompression archivieren (bspw. mit gzip), gehen Sie so vor:
> you@host > tar czvf Shellbuch_mai05.tgz Shellbuch
Dank der Option z wird jetzt das ganze Archiv auch noch komprimiert. Ansehen können Sie sich das komprimierte Archiv weiterhin mit der Option t:
> you@host > tar tzf Shellbuch_mai05.tgz Shellbuch Shellbuch/ Shellbuch/Planung_und_Bewerbung/ Shellbuch/Planung_und_Bewerbung/shellprogrammierung.doc Shellbuch/Planung_und_Bewerbung/shellprogrammierung.sxw Shellbuch/kap004.txt Shellbuch/kap005.txt Shellbuch/testdir2/ ...
Hier wurde also der absolute Pfadname verwendet. Entpacken und wieder einspielen können Sie das komprimierte Archiv wieder mit (mit Meldungen):
> you@host > tar xzvf Shellbuch_mai05.tgz Shellbuch
Wollen Sie allerdings nur Dateien mit der Endung ».txt« aus dem Archiv extrahieren, können Sie dies so vornehmen:
> you@host > tar xzf Shellbuch_mai05.tgz '*.txt'
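Für regelmäßige Sicherungen bietet sich ein kleines Script an, das ein komprimiertes, mit Datum versehenes Archiv anlegt. Auch dies ist nur eine Skizze mit frei gewählten Pfaden:
> #!/bin/sh
> # tar_backup.sh -- Skizze: datiertes, komprimiertes Archiv eines Verzeichnisses anlegen
> SRC=${1:-$HOME/Shellbuch}      # zu sicherndes Verzeichnis
> ZIEL=${2:-$HOME/backups}       # Ablageort fuer die Archive
> DATUM=`date +%Y%m%d`
> mkdir -p "$ZIEL"
> # relativer Pfad ueber die Option -C, damit sich das Archiv flexibel zurueckspielen laesst
> tar czf "$ZIEL/backup_$DATUM.tgz" -C "$SRC" . || exit 1
> echo "Archiv $ZIEL/backup_$DATUM.tgz angelegt"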
### zip/unzip – (De-)Komprimieren von Dateien
Mit zip können Sie einzelne Dateien bis hin zu ganzen Verzeichnissen komprimieren und archivieren. Besonders gern werden zip und unzip allerdings verwendet, weil diese gänzlich mit den Versionen von Windows und DOS kompatibel sind. Wer sich also immer schon geärgert hat, dass sein Mail-Anhang wieder einmal etwas im ZIP-Format enthält, kann hier auf unzip zurückgreifen.
Ein ZIP-Archiv aus mehreren Dateien können Sie so erstellen:
> you@host > zip files.zip file1.txt file2.txt file3.txt adding: file1.txt (deflated 56 %) adding: file2.txt (deflated 46 %) adding: file3.txt (deflated 24 %)
Hier packen und komprimieren Sie die Dateien zu einem Archiv namens files.zip. Wollen Sie eine neue Datei zum Archiv hinzufügen, nichts einfacher als das:
> you@host > zip files.zip hallo.c adding: hallo.c (deflated 3 %)
Möchten Sie alle Dateien des Archivs in das aktuelle Arbeitsverzeichnis entpacken, dann tun Sie dies so:
> you@host > unzip files.zip Archive: files.zip inflating: file1.txt inflating: file2.txt inflating: file3.txt inflating: hallo.c
Wenn Sie eine ganze Verzeichnishierarchie packen und komprimieren wollen, so müssen Sie die Option -r (rekursiv) verwenden:
> you@host > zip -r Shellbuch.zip $HOME/Shellbuch ...
Entpacken können Sie das Archiv allerdings wieder wie gewohnt mittels unzip.
### Übersicht zu Dateiendungen und den Pack-Programmen
In der folgenden Tabelle wird eine kurze Übersicht zu den Dateiendungen und den zugehörigen (De-)Komprimierungsprogrammen gegeben.
Endung | gepackt mit | entpackt mit |
| --- | --- | --- |
*.bz und *.bz2 | bzip2 | bzip2 |
*.gz | gzip | gzip, gunzip oder zcat |
*.zip | Info-Zip, PKZip, zip | Info-Unzip, PKUnzip, unzip, gunzip (eine Datei) |
*.tar | tar | tar |
*.tbz | tar und bzip2 | tar und bzip2 |
*.tgz; *.tar.gz | tar und gzip | tar und g(un)zip |
*.Z | compress | uncompress; gunzip |
*.tar.Z | tar und compress | tar und uncompress |
*.pak | pack | unpack, gunzip |
# 14.9 Systeminformationen
### cal – zeigt einen Kalender an
Das Kommando cal zeigt Ihnen einen Kalender wie folgt an:
> you@host > cal April 2005 So Mo Di Mi Do Fr Sa 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
Ohne Angaben wird immer der aktuelle Monat aufgeführt. Wünschen Sie einen Kalender zu einem bestimmten Monat und Jahr, müssen Sie nur diese Syntax verwenden:
> cal monat jahr
Wobei »monat« und »jahr« jeweils numerisch angegeben werden müssen. Den Kalender für April 2023 erhalten Sie so:
> you@host > cal 4 2023 April 2023 So Mo Di Mi Do Fr Sa 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
### date – Datum und Uhrzeit
Mit date lesen bzw. setzen Sie die Linux Systemzeit. date wird weniger in der Kommandozeile als vielmehr in Shellscripts eingesetzt, wie Sie in diesem Buch bereits mehrmals gesehen haben. date ist im Grunde »nur« ein Frontend für die C-Bibliotheksfunktion strftime(3). Dabei leitet man durch ein Pluszeichen einen Formatstring ein, welcher durch entsprechende Datumsangaben ergänzt wird. Die Ausgabe erfolgt anschließend auf die Standardausgabe. Ein Beispiel:
> you@host > date +'%Y-%m-%d' 2005-04-26
Die Formatangaben zu date entsprechen denjenigen von strftime, was auch mit awk verwendet wurde. Zur genaueren Erläuterung können Sie daher zu Abschnitt 13.6.3 (Tabelle 13.9) zurückblättern oder gleich die man-Seiten zu date oder strftime(3) lesen.
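In Shellscripts wird date vor allem für Zeitstempel genutzt, etwa in Logmeldungen oder Dateinamen. Eine kleine Skizze (die Logdatei ist frei gewählt):
> #!/bin/sh
> # log_beispiel.sh -- Skizze: Logfunktion mit date-Zeitstempel
> LOGFILE=${LOGFILE:-/tmp/myscript.log}
> log() {
>    echo "`date '+%Y-%m-%d %H:%M:%S'` $*" >> "$LOGFILE"
> }
> log "Script gestartet"
> log "Script beendet"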
### uname – Rechnername, Architektur und OS ausgeben
Mit uname können Sie das Betriebssystem, den Rechnernamen, OS-Releases, OS-Version, Plattform, Prozessortyp und Hardwareklasse des Rechners anzeigen lassen. Hierzu ruft man gewöhnlich uname mit der Option -a auf:
> you@host > uname -a Linux goliath.myhoster.de 2.6.10-1.770_FC3 #1 Thu Feb 24 14:00:06 EST 2005 i686 i686 i386 GNU/Linux
oder unter FreeBSD:
> you@host > uname -a FreeBSD juergen.penguin 4.11-RELEASE FreeBSD 4.11-RELEASE #5: Mon Jan 31 14:06:17 CET 2005 [email protected]:/usr/obj/usr/src/sys/SEMA i386
### uptime – Laufzeit des Rechners
Das Kommando uptime zeigt die Zeit an, wie lange der Rechner bereits läuft.
> you@host > uptime 09:27:25 up 15 days, 15:44, 1 user, load average: 0.37,0.30,0.30
# 14.10 System-Kommandos
### dmesg – letzte Boot-Meldung des Kernels anzeigen
Wollen Sie sich die Kernel-Meldung des letzten Bootvorgangs ansehen, können Sie sich dies mit dem Kommando dmesg anzeigen lassen. Dabei können Sie feststellen, welche Hardware beim Booten erkannt und initialisiert wurde. dmesg wird gern zur Diagnose verwendet, ob eine interne bzw. externe Hardware auch vom Betriebssystem korrekt erkannt wurde. Natürlich setzt dies auch entsprechende Kenntnisse zur Hardware auf dem Computer und ihrer Bezeichnungen voraus.
### halt – alle laufenden Prozesse beenden
Mit dem Kommando halt beenden Sie alle laufenden Prozesse. Damit wird das System komplett angehalten und reagiert auf keine Eingabe mehr. Selbstverständlich ist solch ein Befehl nur vom root ausführbar. Meistens ist halt ein Verweis auf shutdown.
### reboot – alle laufenden Prozesse beenden und System neu starten
Mit reboot werden alle noch laufenden Prozesse auf dem System unverzüglich beendet und das System neu gestartet. Bei einem System im Runlevel 1 bis 5 wird hierzu ein shutdown aufgerufen. Selbstverständlich bleibt auch dieses Kommando dem root vorbehalten.
Hinweis: Runlevel 1 bis 5 trifft nicht auf alle Systeme zu. Die Debian-Distributionen haben z. B. meist als »default runlevel« 2. Auf sie trifft das Geschriebene aber ebenso zu. System-V-init würde es besser treffen, aber wäre auch unpräzise. BSD-style-init ruft auch einen shutdown auf. Bei vielen Desktop-Distributionen ist das »shutdown binary« auch mit dem »suid-bit« versehen, damit auch normale User den Rechner ausschalten dürfen.
### shutdown – System herunterfahren
Mit shutdown können Sie (root-Rechte vorausgesetzt) das System herunterfahren. Mit den Optionen -r und -h kann man dabei zwischen einem »Halt« des Systems und einem »Reboot« auswählen. Damit das System auch ordentlich gestoppt wird, wird jedem Prozess zunächst das Signal SIGTERM gesendet, womit sich ein Prozess noch ordentlich beenden kann. Nach einer bestimmten Zeit (Standard sind zwei Sekunden, einstellbar mit -t <SEKUNDEN>) wird das Signal SIGKILL an die Prozesse gesendet. Natürlich werden auch die Dateisysteme ordentlich abgehängt (umount), sync ausgeführt und in einen anderen Runlevel gewechselt (bei System-V-init). Die Syntax zu shutdown lautet:
> shutdown [Optionen] Zeitpunkt [Nachricht]
Den Zeitpunkt zum Ausführen des shutdown-Kommandos können Sie entweder im Format hh:mm als Uhrzeit übergeben (bspw. 23:30) oder alternativ können Sie auch eine Angabe wie +m vornehmen, womit Sie die noch verbleibenden Minuten angeben (bspw. mit +5 wird in 5 Minuten der shutdown-Befehl ausgeführt). Ein sofortiger shutdown kann auch mit now bewirkt werden. Das Kommando shutdown benachrichtigt außerdem alle Benutzer, dass das System bald heruntergefahren wird, und lässt somit auch keine Neuanmeldungen zu. Hier können Sie gegebenenfalls auch eine eigene Nachricht an die Benutzer senden.
Folgende Optionen stehen Ihnen bei shutdown unter Linux zur Verfügung:
Tabelle 14.25: Optionen für das Kommando shutdown
Option | Bedeutung |
| --- | --- |
-t Sekunden | Zeit in »Sekunden«, die zwischen den SIGTERM- und SIGKILL-Signalen zum Beenden von Prozessen gewartet wird |
-k | Hier wird kein Shutdown ausgeführt, sondern es werden nur Meldungen an alle anderen Benutzer gesendet. |
-r | (reboot) Neustart nach dem Herunterfahren |
-h | System anhalten nach dem Herunterfahren |
-f | Beim nächsten Systemstart keinen Dateisystem-Check ausführen |
-F | Beim nächsten Systemstart einen Dateisystem-Check ausführen |
-c | Wenn möglich, wird der laufende Shutdown abgebrochen. |
Hinweis: Die Optionen sind betriebssystemabhängig. Ein -h unter BSD fährt zwar den Rechner herunter, schaltet ihn aber nicht ab. Hier wird dann -p (power down) verwendet. Keine Angabe bringt den Rechner hier in den »Singleuser-Mode«.
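Ein typischer Aufruf (als root), der das System in zehn Minuten anhält, die angemeldeten Benutzer vorwarnt und sich notfalls wieder abbrechen lässt, könnte so aussehen:
> # shutdown -h +10 "Wartungsarbeiten, bitte Dateien speichern und abmelden"
> # shutdown -c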
# 14.12 Netzwerkbefehle
Netzwerkbefehle erfordern ein tieferes Verständnis. Wenn Sie als Administrator mit Begriffen wie IP-Adresse, MAC-Adresse, DNS, FTP, SSH usw. nichts anfangen können, wäre eine fortführende Lektüre mehr als vonnöten. Leider kann ich aufgrund des eingeschränkten Umfangs nicht auf die Fachbegriffe der Netzwerktechnik eingehen. Nebenbei erwähnt, sind diese Themen zum Teil schon ein ganzes Buch wert, weshalb die Beschreibung hier wohl für die meisten eher enttäuschend ausfallen dürfte.
### arp – Ausgeben von MAC-Adressen
Wenn Sie die Tabelle mit den MAC-Adressen der kontaktierten Rechner benötigen, können Sie das Kommando arp verwenden. Ein Beispiel:
you@host > arp -a ... juergen.penguin (192.168.0.xxx) at 00:30:84:7a:9e:0e on rl0 permanent [ethernet] ...
Die MAC-Adresse ist hierbei die sechsstellige Hexadezimalzahl »00:30:84:7a:9e:0e«. Benötigen Sie hingegen die MAC-Nummer Ihrer eigenen Netzwerkkarte, so können Sie diese mit ifconfig ermitteln:
# ifconfig -a eth0 Protokoll:Ethernet Hardware Adresse 00:00:39:2D:01:A1 ...
In der Zeile »eth0« (unter Linux) finden Sie hierbei die entsprechende MAC-Adresse unter »Hardware Adresse«. Hier sind wieder systemspezifische Kenntnisse von Vorteil.
### ftp – Dateien zu einem anderen Rechner übertragen
Mit Hilfe von ftp (File Transfer Protocol) können Sie Dateien innerhalb eines Netzwerks (bspw. Internet) zwischen verschiedenen Rechnern transportieren. Da ftp über eine Menge Features und Funktionen verfügt, soll hier nur auf den Hausgebrauch eingegangen werden, und zwar wie man Daten von einem entfernten Rechner abholt und hinbringt.
Zunächst müssen Sie sich auf dem Server einloggen. Dies geschieht üblicherweise mit
ftp Server_Name
In meinem Beispiel lautet der Servername (und auch mein Webhoster) myhoster.de:
you@host > ftp myhoster.de Connected to myhoster.de (194.150.178.34). 220 194.150.178.34 FTP server ready Name (myhoster.de:you): us10129 331 Password required for us10129. Password:******** 230 User us10129 logged in. Remote system type is UNIX. Using binary mode to transfer files. ftp>
Nachdem ftp eine Verbindung mit dem Server aufgenommen hat, werden Sie aufgefordert, Ihren Benutzernamen (hier »us10129«) und anschließend das Passwort einzugeben. Wenn alles korrekt war, befinden Sie sich am Prompt ftp>, der auf weitere Anweisungen wartet. Jetzt können Sie im Eingabeprompt Folgendes machen:
* Gewünschtes Verzeichnis durchsuchen: cd, dir, lcd (lokales Verzeichnis)
* FTP-Parameter einstellen: type binary, hash, prompt
* Datei(en) abholen: get, mget
* Datei(en) hinbringen: put, mput
Natürlich bietet ftp weitaus mehr Befehle als diese an, aber alles andere würde hier über den Rahmen des Buchs hinausgehen.
Zunächst werden Sie sich wohl das Inhaltsverzeichnis ansehen wollen. Hierzu können Sie den Befehl dir (welcher auf *nix-Systemen meistens dem Aufruf von ls -l entspricht) zum Auflisten verwenden:
ftp> dir 200 PORT command successful 150 Opening ASCII mode data connection for file list drwxrwx--x 9 4096 Apr 8 20:31 . drwxrwx--x 9 4096 Apr 8 20:31 .. -rw------- 1 26680 Apr 26 09:00 .bash_history ... lrwxrwxrwx 1 18 Aug 10 2004 logs -> /home/logs/us10129 drwxrwxr-x 2 4096 Mar 28 16:03 mysqldump drwxr-xr-x 20 4096 Apr 3 08:13 www.pronix.de 226 Transfer complete.
Wollen Sie nun in ein Verzeichnis wechseln, können Sie auch hier das schon bekannte Kommando cd verwenden. Ebenso sieht es aus, wenn Sie das aktuelle Arbeitsverzeichnis wissen wollen, in dem Sie sich gerade befinden. Hier leistet das bekannte pwd seine Dienste.
Das aktuelle Verzeichnis auf dem lokalen Rechner können Sie mit dem Kommando lcd wechseln. Sie können übrigens auch die Befehle auf Ihrem lokalen Rechner verwenden, wenn Sie ein !-Zeichen davor setzen. Hierzu ein Beispiel, welches die Befehle nochmals demonstriert.
ftp> pwd 257 "/" is current directory. ftp> cd backups/Shellbuch 250 CWD command successful ftp> pwd 257 "/backups/Shellbuch" is current directory. ftp> dir 200 PORT command successful 150 Opening ASCII mode data connection for file list drwxrwxr-x 2 us10129 us10129 4096 Apr 26 09:07 . drwx------ 3 us10129 us10129 4096 Jan 15 14:15 .. ... -rw-r--r-- 1 us10129 us10129 126445 Mar 13 11:40 kap005.txt -rw------- 1 us10129 us10129 3231 Apr 20 05:26 whoami.txt 226 Transfer complete. ftp>
Hier befinde ich mich auf dem Rechner myhoster.de in meinem Heimverzeichnis in ~/backups/Shellbuch. Ich möchte mir jetzt die Datei whoami.txt auf meinen lokalen Rechner kopieren. Ich hole sie mit dem Kommando get. Zuvor will ich aber noch auf meinem lokalen Rechner in ein Verzeichnis namens mydir wechseln.
ftp> lcd mydir Local directory now /home/you/mydir ftp> !pwd /home/you/mydir ftp> !ls file1.txt file2.txt file3.txt files.zip hallo.c ftp> get whoami.txt local: whoami.txt remote: whoami.txt 200 PORT command successful 150 Opening BINARY mode data connection whoami.txt (3231 bytes) 226 Transfer complete. 3231 bytes received in 0.0608 secs (52 Kbytes/sec) ftp> !ls file1.txt file2.txt file3.txt files.zip hallo.c whoami.txt ftp>
Und schon habe ich die Datei whoami.txt auf meinem lokalen Rechner ins Verzeichnis mydir kopiert. Wollen Sie mehrere Dateien oder gar ganze Verzeichnisse holen, müssen Sie mget verwenden. Hierbei stehen Ihnen auch die Wildcard-Zeichen * und ? zur Verfügung. Da mget Sie nicht jedes Mal bei mehreren Dateien fragt, ob Sie diese wirklich holen wollen, können Sie den interaktiven Modus mit prompt abstellen.
Haben Sie jetzt die Datei whoami.txt bearbeitet und wollen diese wieder hochladen, verwenden Sie put (oder bei mehreren Dateien mput).
ftp> put whoami.txt local: whoami.txt remote: whoami.txt 200 PORT command successful 150 Opening BINARY mode data connection for whoami.txt 226 Transfer complete. 3231 bytes sent in 0.000106 secs (3e+04 Kbytes/sec) ftp>
Sie sehen außerdem, dass hierbei die Datenübertragung binär (BINARY) stattgefunden hat. Wollen Sie hier auf ASCII umstellen, müssen Sie nur type verwenden:
ftp> type ascii 200 Type set to A ftp> put whoami.txt local: whoami.txt remote: whoami.txt 200 PORT command successful 150 Opening ASCII mode data connection for whoami.txt 226 Transfer complete. 3238 bytes sent in 0.000487 secs (6.5e+03 Kbytes/sec) ftp>
Zurücksetzen können Sie das wieder mit type binary. Und wie schon erwähnt, beim Übertragen mehrerer Dateien mit mget und mput werden Sie immer gefragt, ob Sie die Datei transferieren wollen. Diese Abfrage können Sie mit prompt abstellen. Je nachdem, ob Sie eine Fortschrittsanzeige wünschen, können Sie dies mit einem Aufruf von hash ab- oder anschalten. Gerade bei umfangreicheren Dateien ist dies sinnvoll. Dabei wird während der Übertragung alle 1024 Zeichen ein »#« ausgegeben.
Den Script-Betrieb können Sie verwenden, wenn sich eine Datei namens .netrc im Heimverzeichnis des Benutzers befindet, der ftp aufruft (also gewöhnlich in Ihrem lokalen Heimverzeichnis). Damit können Sie sich z. B. beim Einloggen die Abfrage von Usernamen und Passwort ersparen. Allerdings darf diese Datei nur vom Eigentümer gelesen werden, also »chmod 0600« für ~/.netrc.
So sieht zum Beispiel der Vorgang, die Datei whoami.txt wie eben demonstriert vom Server zu holen, mit .netrc folgendermaßen aus:
machine myhoster.de login us10129 password asdf1234 macdef init cd $HOME/backups/Shellbuch get whoami.txt bye
Rufen Sie jetzt noch ftp wie folgt auf:
you@host > ftp myhoster.de Connected to myhoster.de (194.150.178.34). 220 194.150.178.34 FTP server ready 331 Password required for us10129. 230 User us10129 logged in. cd $HOME/backups/Shellbuch 550 $HOME/backups/Shellbuch: No such file or directory get whoami.txt local: whoami.txt remote: whoami.txt 200 PORT command successful 550 whoami.txt: No such file or directory Remote system type is UNIX. Using binary mode to transfer files. bye 221 Goodbye.
Und alles geschieht vollautomatisch. Ebenso sieht dies mit dem Hochladen von whoami.txt aus. Wenn die Datei editiert wurde, können Sie diese wieder wie folgt im Script-Modus hochladen. Hier die Datei .netrc:
machine myhoster.de login us10129 password asdf1234 macdef init lcd mydir cd $HOME/backups/Shellbuch put whoami.txt bye
Jetzt nur noch ftp aufrufen:
you@host > ftp myhoster.de Connected to myhoster.de (194.150.178.34). 220 194.150.178.34 FTP server ready 331 Password required for us10129. 230 User us10129 logged in. lcd mydir Local directory now /home/tot/mydir cd $HOME/backups/Shellbuch 550 $HOME/backups/Shellbuch: No such file or directory put whoami.txt local: whoami.txt remote: whoami.txt 200 PORT command successful 150 Opening ASCII mode data connection for whoami.txt 226 Transfer complete. 3238 bytes sent in 0.000557 secs (5.7e+03 Kbytes/sec) bye 221 Goodbye.
Noch mehr Hinweise zum Script-Modus und zur Datei .netrc entnehmen Sie bitte der man-Seite von netrc (man netrc).
Hinweis: Bitte bedenken Sie, dass ftp nicht ganz sicher ist, da Sie bei der Authentifizierung das Passwort unverschlüsselt übertragen.
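Alternativ zur .netrc lässt sich ftp in Scripts auch über ein Here-Dokument steuern. Die folgende Skizze lädt eine Datei hoch; Servername, Benutzer, Passwort und Zielverzeichnis sind frei gewählte Platzhalter, und das Passwort steht, wie im Hinweis angesprochen, unverschlüsselt im Script:
> #!/bin/sh
> # ftp_upload.sh -- Skizze: Datei per ftp im Batch-Betrieb hochladen
> HOST=myhoster.de            # Platzhalter
> FTPUSER=us10129             # Platzhalter
> FTPPASS=geheim              # Platzhalter -- Passwort liegt im Klartext vor!
> DATEI=${1:-whoami.txt}
> ftp -n "$HOST" <<END_FTP
> user $FTPUSER $FTPPASS
> binary
> cd backups/Shellbuch
> put $DATEI
> bye
> END_FTP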
### hostname – Rechnername ermitteln
Das Kommando hostname können Sie verwenden, um den Namen des lokalen Rechners anzuzeigen bzw. zu setzen oder zu verändern. So ein Name hat eigentlich erst im Netzwerkbetrieb seine echte Bedeutung. Im Netz besteht ein vollständiger Rechnername (Fully Qualified Domain Name) aus einem Eigennamen und einem Domainnamen. Der (DNS-)Domainname bezeichnet das lokale Netz, an dem der Rechner hängt.
you@host > hostname goliath.myhoster.de you@host > hostname -s goliath you@host > hostname -d myhoster.de
Ohne Angabe einer Option wird der vollständige Rechnername ausgegeben. Mit der Option -s geben Sie nur den Eigennamen des Rechners aus und mit -d nur den (DNS-)Domainnamen des lokalen Netzes.
### ifconfig – Netzwerkzugang konfigurieren
Mit dem Kommando ifconfig kann man die Einstellungen einer Netzwerk-Schnittstelle abfragen oder setzen. Alle Einstellungen können Sie sich mit der Option -a anzeigen lassen. Die Syntax zu ifconfig:
ifconfig schnittstelle [adresse [parameter]]
Dabei geben Sie den Namen der zu konfigurierenden Schnittstelle an. Befindet sich bspw. auf Ihrem Rechner eine Netzwerkkarte, so lautet unter Linux die Schnittstelle hierzu »eth0«, die zweite Netzwerkkarte im Rechner (sofern eine vorhanden ist) wird mit »eth1« angesprochen. Auf anderen Systemen lautet der Name der Schnittstelle zur Netzwerkkarte wiederum anders. Daher sollte man ifconfig auch mit der Option -a aufrufen, um mehr darüber in Erfahrung zu bringen. Die »adresse« ist die IP-Adresse, die der Schnittstelle zugewiesen werden soll. Hierbei kann man die Dezimalnotation (xxx.xxx.xxx.xxx) verwenden oder einen Namen, den ifconfig in /etc/hosts nachschlägt.
Verwenden Sie ifconfig ohne die Option -a, um sich einen Überblick zu verschaffen, dann werden die inaktiven Schnittstellen nicht mit angezeigt.
Der Aufruf für die Schnittstelle zu Ethernetkarte »eth0« sieht beispielsweise wie folgt aus (Debian Sarge):
# ifconfig eth0 Link encap:Ethernet HWaddr 00:02:2A:D4:2C:EB inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:80 errors:0 dropped:0 overruns:0 frame:0 TX packets:59 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:8656 (8.4 KiB) TX bytes:8409 (8.2 KiB) Interrupt:11 Base address:0xa000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:8 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:560 (560.0 b) TX bytes:560 (560.0 b)
Wenn IPv6 konfiguriert ist, kommt noch die IPv6-Adresse dazu.
Aus der Ausgabe kann man entnehmen, dass auf dieser Netzwerkkarte 59 Pakete gesendet (TX) und 80 empfangen (RX) wurden. Die maximale Größe einzelner Pakete beträgt 1500 bytes (MTU). Die MAC-Adresse (Hardware-Adresse), welche unsere Netzwerkkarte eindeutig identifiziert (außer diese wird manipuliert) lautet »00:02:2A:D4:2C:EB«.
Wollen Sie eine Schnittstelle ein- bzw. ausschalten, können Sie dies mit den zusätzlichen Parametern up (für Einschalten) und down (für Abschalten) vornehmen. Als Beispiel wieder die Netzwerkkarte mit dem Namen »eth0« als Schnittstelle:
ifconfig eth0 down
Hier haben Sie die Netzwerkkarte »eth0« abgeschaltet. Einschalten können Sie diese folgendermaßen:
ifconfig eth0 up
Eine IP-Adresse stellen Sie ein oder verändern Sie ebenfalls mit ifconfig:
ifconfig eth0 192.18.19.91
Wollen Sie bei der Schnittstelle die Netzmaske und Broadcast verändern, so ist dies mit ifconfig wenig Arbeit (unterlassen Sie es, wenn Sie nicht genau wissen, was die Netzmaske und Broadcast ist):
ifconfig eth0 10.25.38.41 netmask \ 255.255.255.0 broadcast 10.25.38.255
Damit weisen Sie der Netzwerkkarte die IP-Adresse 10.25.38.41 aus dem Netz 10.25.38.xxx zu. Mit »netmask« geben Sie an, wie groß das Netz ist (hier ein Netzwerk der Klasse C).
### mail/mailx – E-Mails schreiben und empfangen (und auswerten)
Mit dem Kommando mail können Sie aus einem Shellscript heraus E-Mails versenden. Mithilfe der Option -s können Sie eine einfache Textmail mit Betreff (-s = Subject) an eine Adresse schicken, beispielsweise:
you@host > echo "Hallo" | mail -s "Betreff" <EMAIL>
Da nicht alle mail-Kommandos die Option -s für einen Betreff haben, können Sie gegebenenfalls auch auf mailx oder Mail (mit großem »M«) zurückgreifen, die auf einigen Systemen vorhanden sind. Mit cat können Sie natürlich auch den Inhalt einer ganzen Datei an die Mailadresse senden:
> you@host > cat whoami.txt | mail -s "Eine Textdatei" \ > <EMAIL>
Dabei kann man allerlei Ausgaben eines Kommandos per mail an eine Adresse versenden:
you@host > ps -ef | mail -s "Prozesse 12Uhr" <EMAIL>
Sinnvoll kann dies z. B. sein, wenn auf einem System ein bestimmtes Limit überschritten wurde. Dann können Sie sich (oder einem anderen Benutzer) eine Nachricht zukommen lassen. Ebenso kann überprüft werden, ob ein Server dauerhaft verfügbar ist. Testen Sie etwa stündlich (bspw. mit cron) mittels ping oder nmap (mit nmap kann man nicht nur nachsehen, ob der Rechner überhaupt antwortet, sondern direkt prüfen, ob der Port des betreffenden Dienstes noch offen ist), ob der Server erreichbar ist, und ist er es einmal nicht, können Sie sich eine Nachricht zukommen lassen.
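Eine Skizze zu dem eben erwähnten Fall: Überschreitet die Belegung eines Dateisystems ein Limit, geht eine Mail an den Administrator. Schwellwert, Dateisystem und Adresse sind frei gewählt:
> #!/bin/sh
> # platz_check.sh -- Skizze: Mail verschicken, wenn ein Dateisystem zu voll ist
> FILESYS=${1:-/}
> LIMIT=90                           # Schwellwert in Prozent
> EMPFAENGER=admin@example.com       # Platzhalter-Adresse
> # Belegung in Prozent ermitteln (2. Zeile, 5. Spalte, ohne %-Zeichen)
> BELEGT=`df -P "$FILESYS" | awk 'NR==2 { sub("%","",$5); print $5 }'`
> if [ "$BELEGT" -gt "$LIMIT" ]; then
>    df -P "$FILESYS" | mail -s "Plattenplatz auf `hostname`: ${BELEGT}% belegt" "$EMPFAENGER"
> fi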
Zusätzliche Optionen, die Sie mit mail bzw. mailx verwenden können, sind:
Tabelle 14.27: Optionen für das Kommando mail/mailx
Option | Bedeutung |
| --- | --- |
-s Betreff | Hier können Sie den Betreff (Subject) der E-Mail angeben. |
-c adresse | Diese Adresse bekommt eine Kopie der Mail. |
-b adresse | Diese Adresse bekommt eine blind carbon copy der Mail. |
### uuencode/uudecode – Text- bzw. Binärdateien codieren
Mit den Kommandos uuencode/uudecode können Sie Binärdateien in reine Textdateien (und wieder zurück) umwandeln. Solche Umwandlungen werden zum Beispiel beim Datenaustausch über das Internet notwendig, weil sonst Sonderzeichen (bspw. Umlaute bzw. alle ASCII-Zeichen über 127) auf manchen Rechnern nicht richtig übertragen werden können. Was bei einer Textdatei nicht so wild ist, weil eben nur nicht darstellbare Zeichen »verhunzt« werden. Doch bei einer binären Datei bedeutet dies, dass diese schlicht nicht mehr funktioniert. Die meisten modernen E-Mail-Programme unterstützen MIME und erkennen solche codierten Dateien als Anhang automatisch, daher erfolgt hierbei die Umwandlung von selbst, ohne dass Sie als Benutzer etwas davon mitbekommen.
uuencode macht im Prinzip nichts anderes, als dass es jeweils drei 8-Bit-Zeichen zu vier 6-Bit-Zeichen umwandelt und für jedes Zeichen 32 addiert. Damit werden alle Zeichen in einen Satz von Standardzeichen umgewandelt, die relativ verlässlich übertragen werden.
Gewöhnlich werden Sie wohl uuencode verwenden, um Anhänge zu Ihren E-Mails mit dem Programm mail, Mail oder mailx hinzuzufügen. Wollen Sie einfach eine Datei namens archiv.tgz per Anhang mit mail versenden, gehen Sie wie folgt vor:
you@host > uuencode archiv.tgz archiv.tgz | \ > mail -s 'Anhang: archiv.tgz' user@host
Dass hierbei zweimal archiv.tgz verwendet wurde, ist übrigens kein Fehler, sondern wird von uuencode erwartet.
### netstat – Statusinformationen über das Netzwerk
Für die Anwendung von netstat gibt es viele Möglichkeiten. Mit einem einfachen Aufruf von netstat zeigen Sie den Zustand einer bestehenden Netzwerkverbindung an. Neben der Überprüfung von Netzwerkverbindungen können Sie mit netstat Routentabellen, Statistiken zu Schnittstellen, maskierte Verbindungen und noch vieles mehr anzeigen lassen. In der Praxis lässt sich somit ohne Probleme die IP oder der Port eines ICQ-Users (Opfer) ermitteln oder feststellen, ob ein Rechner mit einem Trojaner infiziert ist. Hier einige Beispiele:
you@host > netstat -nr
Hiermit lassen Sie die Routingtabelle (-r) des Kernels ausgeben.
you@host > netstat -i
Mit der Option -i erhalten Sie die Schnittstellenstatistik.
you@host > netstat -ta
Mit -ta erhalten Sie die Anzeige aller Verbindungen. Die Option -t steht dabei für TCP. Mit -u, -w bzw. -x zeigen Sie die UDP-, RAW- bzw. UNIX-Sockets an. Mit -a werden dabei auch die Sockets angezeigt, die noch auf eine Verbindung warten.
### nslookup (host/dig) – DNS-Server abfragen
Mit nslookup können Sie aus dem Domainnamen die IP-Adresse bzw. aus der IP-Adresse den Domainnamen ermitteln. Zur Auflösung des Namens wird gewöhnlich der DNS-Server verwendet.
Hinweis: Bei der Verwendung von nslookup werden Sie lesen, dass nslookup künftig von den Kommandos host oder dig abgelöst wird.
Hier nslookup und host bei der Ausführung:
you@host > nslookup pronix.de Server: 217.237.150.141 Address: 217.237.150.141#53 Non-authoritative answer: Name: pronix.de Address: 194.150.178.34 you@host > host pronix.de pronix.de has address 194.150.178.34 you@host > host 194.150.178.34 34.178.150.194.in-addr.arpa domain name pointer goliath.myhoster.de.
### ping – Verbindung zu anderem Rechner testen
Wollen Sie die Netzwerkverbindung zu einem anderen Rechner testen oder einfach nur den lokalen TCP/IP-Stack überprüfen, können Sie das Kommando ping (Packet Internet Groper) verwenden.
ping host
ping überprüft dabei, ob »host« (IP-Adresse oder Domainname) antwortet. ping bietet noch eine Menge Optionen an, die noch mehr Infos liefern, die allerdings hier nicht genauer erläutert werden. Zur Überprüfung sendet ping ein ICMP-Paket vom Typ ICMP Echo Request an die Netzwerkstation. Hat die Netzwerkstation das Paket empfangen, sendet sie ebenfalls ein ICMP-Paket, allerdings vom Typ ICMP Echo Reply, zurück.
you@host > ping -c5 www.pronix.de PING www.pronix.de (194.150.178.34) 56(84) bytes of data. 64 bytes from goliath.myhoster.de (194.150.178.34): icmp_seq=1 ttl=56 time=79.0 ms 64 bytes from goliath.myhoster.de (194.150.178.34): icmp_seq=2 ttl=56 time=76.8 ms 64 bytes from goliath.myhoster.de (194.150.178.34): icmp_seq=3 ttl=56 time=78.2 ms 64 bytes from goliath.myhoster.de (194.150.178.34): icmp_seq=4 ttl=56 time=76.8 ms 64 bytes from goliath.myhoster.de (194.150.178.34): icmp_seq=5 ttl=56 time=79.2 ms --- www.pronix.de ping statistics --- 5 packets transmitted, 5 received, 0 % packet loss, time 4001ms rtt min/avg/max/mdev = 76.855/78.058/79.228/1.061 ms
Hier wurden z. B. 5 Pakete (mit der Option -c kann die Anzahl der Pakete angegeben werden) an www.pronix.de gesendet und wieder erfolgreich empfangen, wie aus der Zusammenfassung am Ende zu entnehmen ist. Rufen Sie ping hingegen ohne eine Option auf
ping www.pronix.de
so müssen Sie selbst für eine Beendigung des Datenaustausches zwischen den Rechnern sorgen. Ein einfaches (Strg)+(C) tut da seinen Dienst und man erhält ebenfalls wieder eine Zusammenfassung. Neben der Möglichkeit, auf die Verfügbarkeit eines Rechners und des lokalen TCP/IP-Stacks zu prüfen (ping localhost), können Sie außerdem auch die Laufzeit von Paketen vom Sender zum Empfänger ermitteln. Hierzu wird die Zeit halbiert, bis das »Reply« eintrifft.
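Eine Skizze, wie sich ping in einem Script zur einfachen Erreichbarkeitsprüfung einsetzen lässt (Host und Mailadresse sind Platzhalter); für eine stündliche Prüfung startet man so ein Script am besten per cron:
> #!/bin/sh
> # host_check.sh -- Skizze: Erreichbarkeit pruefen und bei Ausfall eine Mail schicken
> HOST=${1:-www.pronix.de}
> EMPFAENGER=admin@example.com        # Platzhalter
> # zwei Pakete schicken; nur der Rueckgabewert interessiert
> if ping -c2 "$HOST" > /dev/null 2>&1; then
>    echo "$HOST ist erreichbar"
> else
>    echo "$HOST antwortet nicht" | mail -s "WARNUNG: $HOST nicht erreichbar" "$EMPFAENGER"
> fi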
### Die r-Kommandos von Berkeley (rcp, rlogin, rsh, rwho)
Aus Sicherheitsgründen sei empfohlen, diese Tools nicht mehr einzusetzen und stattdessen auf die mittlerweile sichereren Alternativen ssh und scp zu setzen. Es fängt schon damit an, dass hier das Passwort beim Einloggen im Klartext, ohne jede Verschlüsselung übertragen wird. Bedenken Sie, dass ein unverschlüsseltes Passwort, das zwischen zwei Rechnern im Internet übertragen wird, jederzeit (bspw. mit einem »Sniffer«) abgefangen und mitgelesen werden kann. Für Passwörter gilt im Allgemeinen, dass man diese niemals im Netz unverschlüsselt übertragen sollte. Da es mittlerweile zur Passwortübertragung mit Secure Shell (ssh), SecureRPC von SUN und Kerberos vom MIT sehr gute Lösungen gibt, haben die r-Kommandos eigentlich keine Berechtigung mehr.
Schlimmer noch, für die Befehle rsh und rcp war auf den Zielrechnern nicht einmal ein Passwort nötig. Eine Authentifizierung erfolgte hierbei über die Datei /etc/hosts.equiv und ~/.rhosts. Darin wurden einzelne Rechner eingetragen, die als vertrauenswürdig empfunden wurden und so die Passwort-Authentifizierung umgehen konnten.
### ssh – sichere Shell auf anderem Rechner starten
ssh (Secure Shell) zählt mittlerweile zu einem der wichtigsten Dienste überhaupt. Mit diesem Dienst ist es möglich, eine verschlüsselte Verbindung zwischen zwei Rechnern aufzubauen. ssh wurde aus der Motivation heraus entwickelt, sichere Alternativen zu telnet und den r-Kommandos von Berkeley zu schaffen.
Wenn Sie zum ersten Mal eine Verbindung zu einem anderen Rechner herstellen, bekommen Sie gewöhnlich eine Warnung, in der ssh nachfragt, ob Sie dem anderen Rechner vertrauen wollen. Wenn Sie mit »yes« antworten, speichert ssh den Namen und den RSA-Fingerprint (ein Code zur eindeutigen Identifizierung des anderen Rechners) in der Datei ~/.ssh/known_hosts. Beim nächsten Starten von ssh erfolgt diese Abfrage dann nicht mehr.
Im nächsten Schritt erfolgt die Passwortabfrage, welche verschlüsselt übertragen wird. Bei korrekter Eingabe des Passworts beginnt die Sitzung am anderen Rechner (als würde man diesen Rechner vor sich haben). Die Syntax:
ssh -l loginname rechnername
In meinem Fall lautet der »loginname« bspw. »us10129« und der »rechnername« (mein Webhoster, auf dem sich pronix.de befindet) myhoster.de. Das Einloggen mit ssh verläuft hierbei wie folgt:
you@host > hostname linux.home you@host > ssh -l us10129 myhoster.de [email protected]'s password:******** Last login: Sat Apr 30 12:52:05 2005 from p549b6d72.dip.t-dialin.net [us10129@goliath ~]$ hostname goliath.myhoster.de [us10129@goliath ~]$ exit Connection to myhoster.de closed. you@host >
Oder ein weiteres Beispiel – ein Login zu meinem Fachgutachter auf einer FreeBSD-Jail:
you@host > ssh -l juergen123 192.135.147.2 Password:******** Last login: Wed Apr 27 15:26:24 2005 from ftpmirror.speed Copyright (c) 1980, 1983, 1986, 1988, 1990, 1991, 1993, 1994 FreeBSD 4.11-RELEASE (SEMA) #5: Mon Jan 31 14:06:17 CET 2005 juergen@juergen$ hostname juergen123.penguin juergen123@juergen$
Noch ein paar Zeilen für die ganz Ängstlichen. Für jede Verbindung über ssh wird zwischen den Rechnern immer ein neuer Sitzungsschlüssel ausgehandelt. Will man einen solchen Schlüssel knacken, benötigt der Angreifer unglaublich viel Zeit. Sobald Sie sich ausloggen, müsste der Angreifer erneut versuchen, den Schlüssel zu knacken. Dies natürlich nur rein theoretisch, denn hierbei handelt es sich immerhin um Schlüssel wie RSA, BLOWFISH, IDEA und TRIPLEDES, zwischen denen man hier wählen kann. Alle diese Schlüssel gelten als sehr sicher.
### scp – Dateien kopieren zwischen unterschiedlichen Rechnern
Das Kommando scp ist Teil einer ssh-Installation, womit man Dateien sicher zwischen unterschiedlichen Rechnern kopieren kann. scp funktioniert genauso wie das lokale cp. Der einzige Unterschied ist natürlich die Angabe der Pfade auf den entfernten Rechnern. Dabei sieht die Verwendung des Rechnernamens wie folgt aus:
benutzer@rechner:/verzeichnis/zum/ziel
Um auf meinen Account zurückzukommen, mein Benutzername lautet »us10129« und der Rechner myhoster.de:
you@host > scp whoami.txt <EMAIL>:~ <EMAIL>'s password:******** whoami.txt 100 % 3231 3.2KB/s 00:00 you@host > scp <EMAIL>:~/grafik/baum.gif $HOME <EMAIL>'s password:******** baum.gif 100 % 8583 8.4KB/s 00:00 you@host >
Zuerst wurde die Datei whoami.txt aus dem aktuellen lokalen Verzeichnis ins Heimverzeichnis von pronix.de kopiert (/home/us10129). Anschließend habe ich mir aus dem Verzeichnis /home/us10129/grafik die GIF-Datei baum.gif auf meinen lokalen Rechner kopiert. scp ist in der Tat eine interessante Lösung, um Dateien auf mehreren Rechnern mit einem Script zu kopieren.
Was allerdings bei der Scriptausführung stören dürfte (besonders wenn es automatisch geschehen sollte), ist die Passwortabfrage (hierbei würde der Prozess angehalten). Hierzu bietet es sich an, sich mithilfe eines asymmetrischen Verschlüsselungsverfahrens ein Login ohne Passwort zu verschaffen. Dazu stellt man am besten auf dem Clientrechner mit dem Programm ssh-keygen ein entsprechendes Schlüsselpaar (hier mit einem RSA-Schlüssel) bereit:
you@host > ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/you/.ssh/id_rsa):(ENTER) Enter passphrase (empty for no passphrase):(ENTER) Enter same passphrase again:(ENTER) Your identification has been saved in /home/you/.ssh/id_rsa. Your public key has been saved in /home/you/.ssh/id_rsa.pub. The key fingerprint is: bb:d9:6b:b6:61:0e:46:e2:6a:8d:75:f5:b3:41:99:f9 you@linux
Hier wurden zwei RSA-Schlüssel ohne Passphrase erstellt. Jetzt haben Sie zwei Schlüssel, einen privaten (id_rsa) und einen öffentlichen (id_rsa.pub). Damit Sie jetzt alle ssh-Aktionen ohne Passwort durchführen können, müssen Sie den öffentlichen Schlüssel nur noch auf den Benutzeraccount des Servers hochladen.
you@host > scp .ssh/id_rsa.pub <EMAIL>:~/.ssh/ <EMAIL>'s password:******** id_rsa.pub 100 % 219 0.2KB/s 00:00 you@host >
Jetzt nochmals einloggen und die Datei id_rsa.pub an die Datei ~/.ssh/authorized_keys hängen:
you@host > ssh <EMAIL> <EMAIL>'s password:******** Last login: Sat Apr 30 13:25:22 2005 from p549b6d72.dip.t-dialin.net [us10129@goliath ~]$ cd ~/.ssh [us10129@goliath .ssh]$ ls id_rsa.pub known_hosts [us10129@goliath .ssh]$ cat id_rsa.pub >> authorized_keys
Nach erneutem Einloggen über ssh oder dem Kopieren mit scp sollte die Passwortabfrage der Vergangenheit angehören.
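Ist der öffentliche Schlüssel einmal verteilt, lassen sich Dateien bequem per Script auf mehrere Rechner kopieren. Eine Skizze, in der Benutzername und Rechnernamen reine Platzhalter sind:
> #!/bin/sh
> # verteilen.sh -- Skizze: eine Datei per scp auf mehrere Rechner kopieren
> DATEI=${1:-whoami.txt}
> BENUTZER=us10129                                      # Platzhalter
> HOSTS="rechner1.example.com rechner2.example.com"     # Platzhalter
> for h in $HOSTS; do
>    echo "Kopiere $DATEI nach $h ..."
>    scp "$DATEI" "$BENUTZER@$h:~/backups/" || echo "Fehler bei $h" >&2
> done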
### rsync – Replizieren von Dateien und Verzeichnissen
rsync wird verwendet, um Dateien bzw. ganze Verzeichnis(bäume) zu synchronisieren. Hierbei kann sowohl eine lokale als auch eine entfernte Synchronisation vorgenommen werden. Der Ausdruck »synchronisieren« ist hier eigentlich nicht ganz richtig. Man kann zwar einem Verzeichnisbaum »X« Daten hinzufügen, so dass dieser exakt denselben Inhalt erhält wie der Verzeichnisbaum »Y«. Dies funktioniert allerdings nicht gleichzeitig auch umgekehrt. Man spricht hierbei vom Replizieren. Wollen Sie echtes bidirektionales Synchronisieren realisieren (bspw. Daten zwischen zwei PCs), müssen Sie auf unison zurückgreifen.
Die Syntax zu rsync lautet:
rsync [optionen] quelle ziel
Einige Beispiele:
rsync -avzb -e ssh pronix.de:/ /home/you/backups/
Damit wird meine Webseite im Internet pronix.de mit dem lokalen Verzeichnis /home/you/backups synchronisiert. Mit a verwenden Sie den archive-Modus, mit b werden Backups erstellt und mit v (für verbose) wird rsync etwas gesprächiger. Durch die Option z werden die Daten komprimiert übertragen. Außerdem wird mit der Option -e und ssh eine verschlüsselte Datenübertragung verwendet.
Geben Sie bei der Quelle als letztes Zeichen einen Slash (/) an, wird dieses Verzeichnis nicht mitkopiert, sondern nur der darin enthaltene Inhalt, beispielsweise:
rsync -av /home/you/Shellbuch/ /home/you/backups
Hier wird der Inhalt von /home/you/Shellbuch nach /home/you/backups kopiert. Würden Sie hingegen Folgendes schreiben
rsync -av /home/you/Shellbuch /home/you/backups
so würde in /home/you/backups das Verzeichnis Shellbuch angelegt (/home/you/backups/Shellbuch/) und alles dorthin kopiert. Das hat schon vielen einige Nerven gekostet.
Es folgt nun ein Überblick zu einigen Optionen von rsync.
Tabelle 14.28: Gängige Optionen von rsync
Option | Bedeutung |
| --- | --- |
-a | (archive mode): Kopiert alle Unterverzeichnisse mitsamt Attributen (Symlinks, Rechte, Dateidatum, Gruppe, Devices) und (wenn man root ist) den Eigentümer der Datei(en) |
-v | (verbose): Gibt während der Übertragung eine Liste der übertragenen Dateien aus |
-n | (dry-run): Nichts schreiben, sondern den Vorgang nur simulieren – ideal zum Testen |
-e Programm | Wenn in der Quelle oder dem Ziel ein Doppelpunkt enthalten ist, interpretiert rsync den Teil vor dem Doppelpunkt als Hostnamen und kommuniziert über das mit -e spezifizierte Programm. Gewöhnlich wird hierbei als Programm ssh verwendet. Weitere Parameter können Sie diesem Programm in Anführungszeichen gesetzt übergeben. |
-z | Bewirkt, dass rsync die Daten komprimiert überträgt. |
--delete --force --delete-excluded | Damit werden alle Einträge im Zielverzeichnis gelöscht, die im Quellverzeichnis nicht (mehr) vorhanden sind. |
--partial | Wurde die Verbindung zwischen zwei Rechnern getrennt, wird die nicht vollständig empfangene Datei nicht gelöscht. So kann bei einem erneuten rsync die Datenübertragung fortgesetzt werden. |
--exclude=Pattern | Hier kann man Dateien (mit Pattern) angeben, die ignoriert werden sollen. Selbstverständlich sind hierbei reguläre Ausdrücke möglich. |
-x | Damit werden Dateien ausgeschlossen, die auf einem anderen Dateisystem liegen, das in das Quellverzeichnis hineingemountet ist. |
Noch mehr zu rsync finden Sie auf der entsprechenden Webseite von rsync (http://rsync.samba.org/) oder wie üblich auf der Manual-Seite.
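Zum Abschluss eine Skizze für ein einfaches nächtliches Backup per rsync über ssh (z. B. per cron gestartet); Quell- und Zielangaben sind Platzhalter:
> #!/bin/sh
> # rsync_backup.sh -- Skizze: Verzeichnis per rsync ueber ssh replizieren
> QUELLE=${1:-$HOME/Shellbuch/}              # Slash am Ende: nur der Inhalt wird kopiert
> ZIEL=user@backup.example.com:backups/      # Platzhalter
> # -a Archiv-Modus, -z komprimiert, --delete entfernt im Ziel, was in der Quelle fehlt
> rsync -az --delete -e ssh "$QUELLE" "$ZIEL"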
### traceroute – Route zu einem Rechner verfolgen
traceroute ist ein TCP/IP-Tool, mit dem Informationen darüber ermittelt werden können, welche Computer ein Datenpaket über ein Netzwerk passiert, bis es bei einem bestimmten Host ankommt. Beispielsweise:
you@host > traceroute www.microsoft.com traceroute to www.microsoft.com.nsatc.net (207.46.199.30), 30 hops max, 38 byte packets 1 164-138-193.gateway.dus1.myhoster.de (193.138.164.1) 0.350 ms 0.218 ms 0.198 ms 2 ddf-b1-geth3-2-11.telia.net (213.248.68.129) 0.431 ms 0.185 ms 0.145 ms 3 hbg-bb2-pos1-2-0.telia.net (213.248.65.109) 5.775 ms 5.785 ms 5.786 ms 4 adm-bb2-pos7-0-0.telia.net (213.248.65.161) 11.949 ms 11.879 ms 11.874 ms 5 ldn-bb2-pos7-2-0.telia.net (213.248.65.157) 19.611 ms 19.598 ms 19.585 ms ...
## 14.12 NetzwerkbefehleÂ
Netzwerkbefehle erfordern ein tieferes Verständnis. Wenn Sie als Administrator mit Begriffen wie IP-Adresse, MAC-Adresse, DNS, FTP, SSH usw. nichts anfangen können, wäre eine fortführende Lektüre mehr als vonnöten. Leider kann ich aufgrund des eingeschränkten Umfangs nicht auf die Fachbegriffe der Netzwerktechnik eingehen. Nebenbei erwähnt, sind diese Themen zum Teil schon ein ganzes Buch wert, weshalb die Beschreibung hier wohl für die meisten eher enttäuschend ausfallen dürfte.
### arp â Ausgeben von MAC-AdressenÂ
Wenn Sie die Tabelle mit den MAC-Adressen der kontaktierten Rechner benötigen, können Sie das Kommandos arp verwenden. Ein Beispiel:
> you@host > arp -a ... juergen.penguin (192.168.0.xxx) at 00:30:84:7a:9e:0e on rl0 permanent [ethernet] ...
Die MAC-Adresse ist hierbei die sechsstellige Hexadezimalzahl »00:30:84:7a:9e:0e«. Benötigen Sie hingegen die MAC-Nummer Ihrer eigenen Netzwerkkarte, so können Sie diese mit ifconfig ermitteln:
> # ifconfig -a eth0 Protokoll:Ethernet Hardware Adresse 00:00:39:2D:01:A1 ...
In der Zeile »eth0« (unter Linux) finden Sie hierbei die entsprechende MAC-Adresse unter »Hardware Adresse«. Hier sind wieder systemspezifische Kenntnisse von Vorteil.
### ftp - transferring files to another machine

With ftp (File Transfer Protocol) you can move files between different machines within a network (for example the Internet). Since ftp offers a great many features and functions, only everyday use is covered here: fetching files from a remote machine and sending files to it.

First you have to log in on the server. This is usually done with

> ftp Server_Name

In my example the server name (and also my web hoster) is myhoster.de:

> you@host > ftp myhoster.de Connected to myhoster.de (194.150.178.34). 220 194.150.178.34 FTP server ready Name (myhoster.de:you): us10129 331 Password required for us10129. Password:******** 230 User us10129 logged in. Remote system type is UNIX. Using binary mode to transfer files. ftp>

Once ftp has established the connection to the server, you are prompted for your user name (here »us10129«) and then for the password. If everything was correct, the ftp> prompt appears and waits for further instructions. At this prompt you can now do the following:

 * Browse the desired directory: cd, dir, lcd (local directory)
 * Set FTP parameters: type binary, hash, prompt
 * Fetch file(s): get, mget
 * Upload file(s): put, mput

Of course, ftp offers far more commands than these, but anything beyond them would exceed the scope of this book.
First of all you will probably want to look at the directory listing. For this you can use the dir command (which on *nix systems usually corresponds to ls -l):

> ftp> dir 200 PORT command successful 150 Opening ASCII mode data connection for file list drwxrwx--x 9 4096 Apr 8 20:31 . drwxrwx--x 9 4096 Apr 8 20:31 .. -rw------- 1 26680 Apr 26 09:00 .bash_history ... lrwxrwxrwx 1 18 Aug 10 2004 logs -> /home/logs/us10129 drwxrwxr-x 2 4096 Mar 28 16:03 mysqldump drwxr-xr-x 20 4096 Apr 3 08:13 www.pronix.de 226 Transfer complete.

If you want to change to another directory, you can use the familiar cd command. The same goes for finding out the current working directory: the well-known pwd does the job.

You change the current directory on the local machine with the lcd command. You can also run commands on your local machine by prefixing them with an exclamation mark (!). Here is an example that demonstrates these commands once more.

> ftp> pwd 257 "/" is current directory. ftp> cd backups/Shellbuch 250 CWD command successful ftp> pwd 257 "/backups/Shellbuch" is current directory. ftp> dir 200 PORT command successful 150 Opening ASCII mode data connection for file list drwxrwxr-x 2 us10129 us10129 4096 Apr 26 09:07 . drwx------ 3 us10129 us10129 4096 Jan 15 14:15 .. ... -rw-r--r-- 1 us10129 us10129 126445 Mar 13 11:40 kap005.txt -rw------- 1 us10129 us10129 3231 Apr 20 05:26 whoami.txt 226 Transfer complete. ftp>

Here I am on the machine myhoster.de, in ~/backups/Shellbuch inside my home directory. I now want to copy the file whoami.txt to my local machine. I fetch it with the get command. Before that, however, I change to a directory called mydir on my local machine.

> ftp> lcd mydir Local directory now /home/you/mydir ftp> !pwd /home/you/mydir ftp> !ls file1.txt file2.txt file3.txt files.zip hallo.c ftp> get whoami.txt local: whoami.txt remote: whoami.txt 200 PORT command successful 150 Opening BINARY mode data connection whoami.txt (3231 bytes) 226 Transfer complete. 3231 bytes received in 0.0608 secs (52 Kbytes/sec) ftp> !ls file1.txt file2.txt file3.txt files.zip hallo.c whoami.txt ftp>

And with that, the file whoami.txt has been copied into the mydir directory on my local machine. If you want to fetch several files or even whole directories, you have to use mget. The wildcard characters * and ? are available here as well. To stop mget from asking for every single file whether you really want to fetch it, you can switch off the interactive mode with prompt.

If you have now edited the file whoami.txt and want to upload it again, use put (or, for several files, mput).

> ftp> put whoami.txt local: whoami.txt remote: whoami.txt 200 PORT command successful 150 Opening BINARY mode data connection for whoami.txt 226 Transfer complete. 3231 bytes sent in 0.000106 secs (3e+04 Kbytes/sec) ftp>

You can also see that the transfer was performed in binary mode (BINARY). If you want to switch to ASCII instead, simply use type:

> ftp> type ascii 200 Type set to A ftp> put whoami.txt local: whoami.txt remote: whoami.txt 200 PORT command successful 150 Opening ASCII mode data connection for whoami.txt 226 Transfer complete. 3238 bytes sent in 0.000487 secs (6.5e+03 Kbytes/sec) ftp>

You can switch back with type binary. As already mentioned, when transferring several files with mget or mput you are asked for each file whether you want to transfer it; this prompt can be switched off with prompt. Depending on whether you want a progress indicator, you can toggle it with hash, which is useful especially for larger files: during the transfer a »#« is printed for every 1024 characters.
You can use ftp in script mode if a file named .netrc exists in the home directory on the FTP server. Among other things, this saves you from being asked for the user name and password when logging in. Such a file can of course also be created in your local home directory. However, the file must be readable only by its owner, i.e. »chmod 0600« for ~/.netrc.

This is what fetching the file whoami.txt from the server, as demonstrated above, looks like with .netrc:

> machine myhoster.de login us10129 password asdf1234 macdef init cd $HOME/backups/Shellbuch get whoami.txt bye

Now simply call ftp as follows:

> you@host > ftp myhoster.de Connected to myhoster.de (194.150.178.34). 220 194.150.178.34 FTP server ready 331 Password required for us10129. 230 User us10129 logged in. cd $HOME/backups/Shellbuch 550 $HOME/backups/Shellbuch: No such file or directory get whoami.txt local: whoami.txt remote: whoami.txt 200 PORT command successful 550 whoami.txt: No such file or directory Remote system type is UNIX. Using binary mode to transfer files. bye 221 Goodbye.

And everything happens fully automatically. Uploading whoami.txt works the same way: once the file has been edited, you can upload it again in script mode as follows. Here is the .netrc file:

> machine myhoster.de login us10129 password asdf1234 macdef init lcd mydir cd $HOME/backups/Shellbuch put whoami.txt bye

Now just call ftp:

> you@host > ftp myhoster.de Connected to myhoster.de (194.150.178.34). 220 194.150.178.34 FTP server ready 331 Password required for us10129. 230 User us10129 logged in. lcd mydir Local directory now /home/tot/mydir cd $HOME/backups/Shellbuch 550 $HOME/backups/Shellbuch: No such file or directory put whoami.txt local: whoami.txt remote: whoami.txt 200 PORT command successful 150 Opening ASCII mode data connection for whoami.txt 226 Transfer complete. 3238 bytes sent in 0.000557 secs (5.7e+03 Kbytes/sec) bye 221 Goodbye.

For more hints on script mode and the .netrc file, please consult the netrc man page (man netrc).
### hostname - determining the host name

You can use the hostname command to display, set or change the name of the local machine. Such a name only really matters in a networked environment. On a network, a fully qualified domain name consists of the host's own name and a domain name; the (DNS) domain name identifies the local network the machine is attached to.

> you@host > hostname goliath.myhoster.de you@host > hostname -s goliath you@host > hostname -d myhoster.de

Without any option the full host name is printed. With the -s option you print only the host's own name, and with -d only the (DNS) domain name of the local network.
### ifconfig - configuring a network interface

The ifconfig command lets you query or set the configuration of a network interface. You can display all settings with the -a option. The syntax of ifconfig:

> ifconfig interface [address [parameters]]

Here you specify the name of the interface to configure. If there is a network card in your machine, the corresponding interface on Linux is called »eth0«; a second network card (if present) is addressed as »eth1«. On other systems the interface names differ, which is another reason to call ifconfig with the -a option first to find out more. The »address« is the IP address to be assigned to the interface; you can use dotted decimal notation (xxx.xxx.xxx.xxx) or a name that ifconfig looks up in /etc/hosts.

If you call ifconfig without the -a option to get an overview, inactive interfaces are not shown.

The call for the Ethernet card interface »eth0« looks like this, for example (Debian Sarge):

> # ifconfig eth0 Link encap:Ethernet HWaddr 00:02:2A:D4:2C:EB inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:80 errors:0 dropped:0 overruns:0 frame:0 TX packets:59 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:8656 (8.4 KiB) TX bytes:8409 (8.2 KiB) Interrupt:11 Base address:0xa000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:8 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:560 (560.0 b) TX bytes:560 (560.0 b)

If IPv6 is configured, the IPv6 address is shown as well.

From this output you can see that 59 packets were sent (TX) and 80 received (RX) on this network card. The maximum size of a single packet is 1500 bytes (MTU). The MAC address (hardware address) that uniquely identifies the network card (unless it has been tampered with) is »00:02:2A:D4:2C:EB«.

If you want to bring an interface up or down, you can do so with the additional parameters up (enable) and down (disable). Again using the network card »eth0« as the interface:

> ifconfig eth0 down

This shuts down the network card »eth0«. You bring it back up like this:

> ifconfig eth0 up

You also set or change an IP address with ifconfig:

> ifconfig eth0 192.18.19.91

If you want to change the netmask and broadcast address of the interface, that too is little work with ifconfig (leave it alone if you do not know exactly what a netmask and a broadcast address are):

> ifconfig eth0 10.25.38.41 netmask \ 255.255.255.0 broadcast 10.25.38.255

This assigns the network card the IP address 10.25.38.41 from the 10.25.38.xxx network. With »netmask« you specify how large the network is (here a class C network).
### mail/mailx - sending and receiving (and processing) e-mail

With the mail command you can send e-mail from within a shell script. Using the -s option you can send a simple text mail with a subject (-s = subject) to an address, for example:

> you@host > echo "Hallo" | mail -s "Betreff" <EMAIL>

Since not every mail command provides the -s option for a subject, you may need to fall back on mailx or Mail (with a capital »M«), which are available on some systems. With cat you can of course also send the contents of an entire file to the mail address:

> you@host > cat whoami.txt | mail -s "Ein Textdatei" \ > <EMAIL>

In the same way, all kinds of command output can be sent by mail to an address:

> you@host > ps -ef | mail -s "Prozesse 12Uhr" <EMAIL>

This is useful, for example, when a certain limit has been exceeded on a system: you can then send yourself (or another user) a message. Likewise you can check whether a server is permanently available. For instance, test every hour (e.g. with cron) using nmap (which lets you check not only whether the network card answers, but also whether the port of the service in question is still open) whether the server is reachable, and if it ever is not, have a message sent to you.

Additional options you can use with mail or mailx are:

Option | Meaning |
| --- | --- |
-s subject | Lets you specify the subject of the e-mail. |
-c address | This address receives a copy of the mail. |
-b address | This address receives a blind carbon copy of the mail. |
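As a small sketch of the limit-exceeded idea mentioned above: the 90 % threshold and the recipient address admin@example.com are placeholders, and on your system the command may be called mailx or Mail instead of mail.

#!/bin/sh
# Sketch: send a warning mail when the root filesystem gets too full.
LIMIT=90
USAGE=`df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }'`
if [ "$USAGE" -gt "$LIMIT" ]
then
   df -P / | mail -s "Warning: / is ${USAGE}% full" admin@example.com
fi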
### uuencode/uudecode - encoding binary and text files

With the commands uuencode/uudecode you can convert binary files into text files and back again. Such conversions become necessary, for example, when exchanging data over the Internet, because otherwise special characters (e.g. umlauts, or any ASCII characters above 127) may not be represented correctly on some machines. For a text file this is not too bad, since only the characters that cannot be displayed get mangled; for a binary file, however, it means the file simply no longer works. Most modern e-mail programs support MIME and automatically recognize such encoded files as attachments, so the conversion happens by itself without the user noticing anything.

uuencode basically does nothing more than convert groups of three 8-bit characters into four 6-bit characters and add 32 to each character. This maps everything onto a set of standard characters that can be transferred fairly reliably.

You will usually use uuencode to add attachments to e-mails sent with the mail, Mail or mailx program. If you simply want to send a file named archiv.tgz as an attachment with mail, proceed as follows:

> you@host > uuencode archiv.tgz archiv.tgz | \ > mail -s 'Anhang: archiv.tgz' user@host

That archiv.tgz appears twice here is not a mistake, by the way; uuencode expects it.
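If you want to look at the encoding itself, a quick round trip on the command line might look like this (the file names are just examples):

# Encode the binary file into a plain-text representation ...
uuencode archiv.tgz archiv.tgz > archiv.uu
# ... and decode it again; uudecode recreates archiv.tgz
# (the output name is taken from the header line of archiv.uu).
uudecode archiv.uu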
### netstat - status information about the network

There are many ways to use netstat. Called without arguments, netstat shows the state of the existing network connections. Besides checking network connections, netstat can also display routing tables, interface statistics, masqueraded connections and much more. In practice this makes it easy, for instance, to determine the IP address or port of an ICQ contact, or to check whether a machine is infected with a trojan. A few examples:

> you@host > netstat -nr

This prints the kernel's routing table (-r).

> you@host > netstat -i

The -i option gives you the interface statistics.

> you@host > netstat -ta

With -ta you get a listing of all connections. The -t option stands for TCP. With -u, -w and -x you display the UDP, RAW and UNIX sockets respectively. The -a option also includes sockets that are still waiting for a connection.
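The output of netstat is easy to filter in a script. A small sketch that tests whether anything is listening on local TCP port 22; the port number is an example, and the exact options and column layout of netstat differ somewhat between systems:

#!/bin/sh
# Sketch: check whether a service is listening on a local TCP port.
PORT=22
if netstat -tln 2>/dev/null | grep -q ":$PORT "
then
   echo "Port $PORT is open (a service is listening)."
else
   echo "Port $PORT does not appear to be open."
fi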
### nslookup (host/dig) - querying a DNS server

With nslookup you can determine the IP address for a domain name, or the domain name for an IP address. The DNS server is normally used to resolve the name.

Here are nslookup and host in action:

> you@host > nslookup pronix.de Server: 217.237.150.141 Address: 217.237.150.141#53 Non-authoritative answer: Name: pronix.de Address: 194.150.178.34 you@host > host pronix.de pronix.de has address 194.150.178.34 you@host > host 194.150.178.34 34.178.150.194.in-addr.arpa domain name pointer goliath.myhoster.de.
### ping - testing the connection to another machine

If you want to test the network connection to another machine, or simply check the local TCP/IP stack, you can use the ping (Packet Internet Groper) command.

> ping host

ping checks whether »host« (an IP address or a domain name) answers. ping offers a number of additional options that provide even more information, but they are not covered in detail here. For the check, ping sends an ICMP packet of type ICMP Echo Request to the network station. When the network station receives the packet, it likewise sends back an ICMP packet, this time of type ICMP Echo Reply.

> you@host > ping -c5 www.pronix.de PING www.pronix.de (194.150.178.34) 56(84) bytes of data. 64 bytes from goliath.myhoster.de (194.150.178.34): icmp_seq=1 ttl=56 time=79.0 ms 64 bytes from goliath.myhoster.de (194.150.178.34): icmp_seq=2 ttl=56 time=76.8 ms 64 bytes from goliath.myhoster.de (194.150.178.34): icmp_seq=3 ttl=56 time=78.2 ms 64 bytes from goliath.myhoster.de (194.150.178.34): icmp_seq=4 ttl=56 time=76.8 ms 64 bytes from goliath.myhoster.de (194.150.178.34): icmp_seq=5 ttl=56 time=79.2 ms --- www.pronix.de ping statistics --- 5 packets transmitted, 5 received, 0 % packet loss, time 4001ms rtt min/avg/max/mdev = 76.855/78.058/79.228/1.061 ms

Here, for example, 5 packets were sent to www.pronix.de (the -c option sets the number of packets) and successfully received again, as the summary at the end shows. If, on the other hand, you call ping without any option,

> ping www.pronix.de

you have to terminate the exchange between the machines yourself. A simple (Ctrl)+(C) does the job, and you again get a summary. Besides checking the availability of a host and of the local TCP/IP stack (ping localhost), you can also determine how long packets take from sender to receiver: for this, the time until the »reply« arrives is halved.
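Combined with mail, this gives the kind of availability check mentioned in the mail section. A rough sketch: the host name and recipient address are examples, and in practice you would run something like this from cron:

#!/bin/sh
# Sketch: send a mail when a host no longer answers to ping.
HOST=www.pronix.de
if ! ping -c 3 "$HOST" > /dev/null 2>&1
then
   echo "$HOST did not answer to ping at `date`" | \
      mail -s "Host $HOST unreachable" admin@example.com
fi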
### The Berkeley r-commands (rcp, rlogin, rsh, rwho)

For security reasons it is recommended that you no longer use these tools and switch to the more secure alternatives ssh and scp instead. It starts with the fact that the password is transmitted in plain text at login, without any encryption. Keep in mind that an unencrypted password travelling between two machines on the Internet can be intercepted and read at any time (e.g. with a »sniffer«). As a general rule, passwords should never be transmitted unencrypted across the network. Since very good solutions for password transmission now exist with Secure Shell (ssh), SUN's SecureRPC and MIT's Kerberos, there is really no justification for the r-commands any more.

Worse still, for the commands rsh and rcp not even a password was required on the target machines. Authentication was done via the files /etc/hosts.equiv and ~/.rhosts, which listed individual machines that were considered trustworthy and could therefore bypass password authentication.
### ssh - starting a secure shell on another machine

ssh (Secure Shell) has become one of the most important services of all. It allows you to establish an encrypted connection between two machines. ssh was developed out of the desire to provide secure alternatives to telnet and the Berkeley r-commands.

The first time you connect to another machine, you usually get a warning in which ssh asks whether you want to trust the other machine. If you answer »yes«, ssh stores the name and the RSA fingerprint (a code that uniquely identifies the other machine) in the file ~/.ssh/known_hosts. The next time you start ssh, this question is no longer asked.

In the next step you are asked for the password, which is transmitted encrypted. If the password is entered correctly, the session on the other machine begins (as if you were sitting in front of that machine). The syntax:

> ssh -l loginname hostname

In my case the »loginname« is »us10129« and the »hostname« (my web hoster, where pronix.de lives) is myhoster.de. Logging in with ssh then looks like this:

> you@host > hostname linux.home you@host > ssh -l us10129 myhoster.de [email protected]'s password:******** Last login: Sat Apr 30 12:52:05 2005 from p549b6d72.dip.t-dialin.net [us10129@goliath ~]$ hostname goliath.myhoster.de [us10129@goliath ~]$ exit Connection to myhoster.de closed. you@host

Or another example - logging in to my technical reviewer's FreeBSD jail:

> you@host > ssh -l juergen123 192.135.147.2 Password:******** Last login: Wed Apr 27 15:26:24 2005 from ftpmirror.speed Copyright (c) 1980, 1983, 1986, 1988, 1990, 1991, 1993, 1994 FreeBSD 4.11-RELEASE (SEMA) #5: Mon Jan 31 14:06:17 CET 2005 juergen@juergen$ hostname juergen123.penguin juergen123@juergen$

A few lines for the really worried: for every ssh connection a new session key is negotiated between the machines. Cracking such a key would take an attacker an unbelievable amount of time, and as soon as you log out, the attacker would have to start all over again. This is purely theoretical anyway, since we are talking about ciphers such as RSA, BLOWFISH, IDEA and TRIPLEDES, which you can choose between here; all of them are considered very secure.
### scp - copying files between different machines

The scp command is part of an ssh installation and lets you copy files securely between different machines. scp works just like the local cp; the only difference is, of course, how paths on remote machines are specified, using the host name like this:

> user@host:/path/to/target

Coming back to my account: my user name is »us10129« and the machine is myhoster.de:

> you@host > scp whoami.txt <EMAIL>:~ <EMAIL>'s password:******** whoami.txt 100 % 3231 3.2KB/s 00:00 you@host > scp <EMAIL>:~/grafik/baum.gif $HOME <EMAIL>'s password:******** baum.gif 100 % 8583 8.4KB/s 00:00 you@host

First, the file whoami.txt was copied from the current local directory into the home directory on pronix.de (/home/us10129). Then I copied the GIF file baum.gif from the directory /home/us10129/grafik to my local machine. scp is indeed an interesting way to copy files to several machines with a script.

What is likely to be a nuisance when running scripts, though (especially if they are supposed to run automatically), is the password prompt, which halts the process. The answer is to use an asymmetric encryption method to obtain a login without a password. For this, you best generate a suitable key pair (here with an RSA key) on the client machine with the ssh-keygen program:

> you@host > ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/you/.ssh/id_rsa):(ENTER) Enter passphrase (empty for no passphrase):(ENTER) Enter same passphrase again:(ENTER) Your identification has been saved in /home/you/.ssh/id_rsa. Your public key has been saved in /home/you/.ssh/id_rsa.pub. The key fingerprint is: bb:d9:6b:b6:61:0e:46:e2:6a:8d:75:f5:b3:41:99:f9 you@linux

Two RSA keys were created here without a passphrase. You now have two keys, a private one (id_rsa) and a public one (id_rsa.pub). So that you can perform all ssh actions without a password, you only have to upload the public key to your user account on the server.

> you@host > scp .ssh/id_rsa.pub <EMAIL>:~/.ssh/ <EMAIL>'s password:******** id_rsa.pub 100 % 219 0.2KB/s 00:00 you@host

Now log in once more and append the file id_rsa.pub to the file ~/.ssh/authorized_keys:

> you@host > ssh <EMAIL> <EMAIL>'s password:******** Last login: Sat Apr 30 13:25:22 2005 from p549b6d72.dip.t-dialin.net [us10129@goliath ~]$ cd ~/.ssh [us10129@goliath .ssh]$ ls id_rsa.pub known_hosts [us10129@goliath .ssh]$ cat id_rsa.pub >> authorized_keys

After logging in again via ssh, or copying with scp, the password prompt should be a thing of the past.
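Once the password-less login works, scp is easy to use in scripts. A small sketch that distributes one file to several machines; the host names and the file name are placeholders:

#!/bin/sh
# Sketch: copy one file to several hosts via scp (key-based login assumed).
FILE=backup.tgz
for host in server1.example.com server2.example.com
do
   scp "$FILE" us10129@"$host":~/backups/ || \
      echo "Copying to $host failed" >&2
done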
### rsync - replicating files and directories

rsync is used to synchronize files or whole directory trees, either locally or with a remote machine. Strictly speaking, the term »synchronize« is not quite accurate: you can add data to a directory tree »X« so that it ends up with exactly the same content as directory tree »Y«, but this does not work in both directions at the same time. What rsync does is called replicating. If you want true bidirectional synchronization (e.g. of data between two PCs), you have to use unison.

The syntax of rsync:

> rsync [options] source destination

Some examples:

> rsync -avzb -e ssh pronix.de:/ /home/you/backups/

This replicates my web site pronix.de on the Internet into the local directory /home/you/backups. The a option selects archive mode, b creates backups and v (verbose) makes rsync a little more talkative. The z option compresses the data during transfer. In addition, -e ssh selects an encrypted transfer.

If you put a slash (/) at the very end of the source, the directory itself is not copied, only its contents, for example:

> rsync -av /home/you/Shellbuch/ /home/you/backups

Here the contents of /home/you/Shellbuch are copied to /home/you/backups. If, on the other hand, you wrote

> rsync -av /home/you/Shellbuch /home/you/backups

the directory Shellbuch would be created inside /home/you/backups (/home/you/backups/Shellbuch/) and everything copied there. This detail has already cost many people some nerves.

An overview of some rsync options follows.
Option | Meaning |
| --- | --- |
-a | (archive mode): copies all subdirectories, including attributes (symlinks, permissions, file date, group, devices) and, if you are root, the owner of the file(s) |
-v | (verbose): prints a list of the transferred files during the transfer |
-n | (dry run): writes nothing, only simulates the operation - ideal for testing |
-e program | If the source or destination contains a colon, rsync interprets the part before the colon as a host name and communicates via the program specified with -e; usually ssh is used here. Further parameters can be passed to this program in quotes. |
-z | The -z parameter makes rsync compress the data during transfer. |
--delete --force --delete-excluded | Deletes all entries in the destination directory that no longer exist in the source directory. |
--partial | If the connection between the two machines is interrupted, the partially received file is not deleted, so that a subsequent rsync run can resume the transfer. |
--exclude=pattern | Specifies files (as a pattern) to be ignored; shell-style wildcard patterns can be used here. |
-x | Excludes all files on file systems that are mounted into a source directory. |
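Putting a few of these options together, a typical backup run might look like the following sketch; the directories are examples, and running with -n first shows what would happen without writing anything:

# Dry run: show what would be copied or deleted, without changing anything.
rsync -avn --delete --exclude='*.tmp' /home/you/Shellbuch/ /home/you/backups/
# If the output looks right, repeat the command without -n.
rsync -av --delete --exclude='*.tmp' /home/you/Shellbuch/ /home/you/backups/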
You can find more about rsync on the rsync web site (http://rsync.samba.org/) or, as usual, on its manual page.

### traceroute - tracing the route to a host

traceroute is a TCP/IP tool that determines which computers a data packet passes through on a network until it reaches a particular host. For example:

> you@host > traceroute www.microsoft.com traceroute to www.microsoft.com.nsatc.net (207.46.199.30), 30 hops max, 38 byte packets 1 164-138-193.gateway.dus1.myhoster.de (193.138.164.1) 0.350 ms 0.218 ms 0.198 ms 2 ddf-b1-geth3-2-11.telia.net (213.248.68.129) 0.431 ms 0.185 ms 0.145 ms 3 hbg-bb2-pos1-2-0.telia.net (213.248.65.109) 5.775 ms 5.785 ms 5.786 ms 4 adm-bb2-pos7-0-0.telia.net (213.248.65.161) 11.949 ms 11.879 ms 11.874 ms 5 ldn-bb2-pos7-2-0.telia.net (213.248.65.157) 19.611 ms 19.598 ms 19.585 ms ...
## 14.13 Communicating with users

### wall - sending a message to all users

With the wall command you send a message to all active users on the machine. For a user to receive messages, he or she must have enabled this with mesg yes; of course, a user can also switch off message reception with mesg no. After wall is invoked, the message is read from standard input and is finished and sent with the key combination (Ctrl)+(D). wall is usually used by the system administrator to alert users to particular events, such as an impending system restart.
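Because wall reads the message from standard input, a script can feed it with a here document, for example. A small sketch (sending to all users usually requires appropriate privileges; the wording of the message is just an example):

wall << EOF
The system will be rebooted at 18:00 for maintenance.
Please save your work and log out in time.
EOF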
### write - sending a message to individual users

Much like wall, write lets you send a message, but to one specific user or to several users:

> write user1 ...

Otherwise the same rules apply to write as to wall. Here too, the message read from standard input is finished with (Ctrl)+(D), and on the receiving side mesg yes must be in effect for the user to receive the message. The user root, of course, may send a message to any user, even if that user has switched off message reception with mesg no.

### mesg - allowing or blocking messages to your terminal

With the mesg command you can allow another user to write to your terminal (e.g. via write or wall), or block this. If you call mesg without options, it prints how the access rights are currently set: y (for yes) or n (for no). To allow other users to write to your terminal, use mesg like this:

> mesg yes

or

> mesg -y

If, on the other hand, you want to prevent anyone from writing messages to your terminal, mesg is used like this:

> mesg no

or

> mesg -n

For example:

> you@host > mesg is n you@host > mesg yes you@host > mesg is y
## 14.14 Screen and terminal commands
### clear - clearing the screen

The clear command clears the screen, if possible. The command looks up the terminal type in the environment and then consults the terminfo database to find out how to clear the screen for that terminal.

### reset - restoring a terminal's character set

With the reset command you can put any virtual terminal back into a defined state. If there is no command of this name on your system, you can achieve the same with setterm -reset.

### setterm - changing terminal settings

setterm lets you change terminal settings such as the background and foreground colour. Calling setterm without options prints an overview of all the options setterm supports. You can either use setterm interactively,

> you@host > setterm -bold on

(which, for example, switches bold type on), or store the settings permanently in the file ~/.profile. Some important setterm settings are:

Usage | Meaning |
| --- | --- |
setterm -clear | Clears the screen |
setterm -reset | Puts the terminal back into a defined state |
setterm -blank n | Blanks the screen after n minutes of inactivity |

### stty - querying or setting terminal settings

With stty you can query or change the terminal settings. If you call stty without arguments, the line speed of the current terminal is printed. If you call stty with the -a option, you get the complete current terminal configuration.

> you@host > stty -a speed 38400 baud; rows 23; columns 72; line = 0; intr=^C; quit=^\; erase=^?; kill=^U; eof=^D; eol=<undef>; eol2 = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0; -parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts -ignbrk brkint ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff -iuclc -ixany imaxbel -iutf8 opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0 isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke

The individual settings are often hard to describe; that would require a closer look at how character-oriented device drivers in the kernel and the serial interface work, which is not the topic here.

You can list all the flags that stty can change with stty --help. Many of these flags can be switched off by prefixing them with a minus sign and switched on again without the minus. If, while experimenting with the various flags, you can no longer control the terminal properly, the reset command (or setterm -reset) restores the terminal. With

> you@host > stty -echo

for example, you switch off the terminal's echo, and with

> you@host > stty echo

you restore the output on the screen. You need to be fairly confident at the keyboard here, though, because you have just disabled the echo.
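A classic use of this in scripts is reading a password without showing the typed characters on the screen. A small sketch:

#!/bin/sh
# Sketch: read a password without echoing the typed characters.
printf "Password: "
stty -echo                  # switch terminal echo off
read passwd
stty echo                   # ... and back on again
echo                        # print a newline, since Enter was not echoed
echo "A password with ${#passwd} characters was entered."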
### tty - determining the terminal name

With tty you can find out the name (including the path) of the terminal that accepts standard input.

> you@host > tty /dev/pts/36

If you use the -s option, nothing is printed; only the exit status is set. The values have the following meaning:

Status | Meaning |
| --- | --- |
0 | Standard input is a terminal |
1 | Standard input is not a terminal |
2 | An unknown error occurred |

> An example: you@host > tty -s you@host > echo $? 0

This check is popular in scripts for determining whether a terminal is available for standard input; a background process, for example, has no standard input available.
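A sketch of how this is typically used: ask a question only when the script runs interactively, and keep quiet otherwise (e.g. under cron):

#!/bin/sh
# Sketch: prompt for confirmation only when a terminal is attached.
if tty -s
then
   printf "Really continue? (y/n) "
   read answer
   [ "$answer" = "y" ] || exit 1
fi
echo "Continuing ..."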
### tput - terminal and cursor control

The tput command was covered in detail in section 5.2.4.
## 14.15 Online help

### apropos - searching man pages for keywords

The syntax:

> apropos keyword

apropos lists all man pages that contain the word »keyword«. The same can be achieved with the man command and its -k option.

### info - the GNU online manual

info is the help system for the GNU software shipped with Linux.

> info [command]

The most important keys for using info pages are:

Key | Meaning |
| --- | --- |
(SPACE) | Scroll down one page |
(BACKSPACE) | Scroll up one page |
(B) | Go to the beginning of the info text |
(E) | Go to the end of the info text |
(TAB) | Jump to the next cross-reference |
(ENTER) | Follow the cross-reference |
(H) | Tutorial on using info |
(?) | Command overview of info |
(Q) | Quit info |
### man - the traditional online help

With man you display the manual pages for a given name:

> man name

The man page is displayed through a pager, usually less or possibly more. For how to control these pagers, please turn back to the relevant section. You can also choose the pager yourself with the -P option or the PAGER environment variable. The man pages are divided into several categories:

1. User commands
2. System calls
3. C library functions
4. Device file descriptions
5. File formats
7. Macro packages for the text formatters
8. System administration commands
9. Kernel routines
The order in which the sections are searched for a particular manual page is defined in the configuration file /etc/man.config. Each user can choose a different order with the MANSECT environment variable.

Likewise, the directories to be searched for man pages are defined in /etc/man.config. Since /etc/man.config may only be edited by root, a user can again point to a different directory with the MANPATH environment variable.

The man command has a number of options; here are the most important ones:

 * -a - there are often man pages of the same name in different categories. If you type man sleep, for example, you get the first section found (depending on the order defined in /etc/man.config or MANSECT) with that name. If you want to read all man pages for a particular name or command, just use the -a option. With man -a sleep you now get all man pages for sleep (in my case there were three).
 * -k keyword - corresponds to apropos keyword; lists all man pages that contain the word keyword.
 * -f keyword - corresponds to whatis keyword; prints a one-line description of keyword.
### whatis - a short description of a command

The syntax:

> whatis keyword

The whatis command prints the meaning of »keyword« as a single line of text. whatis is equivalent to calling man -f keyword.
## 14.17 Miscellaneous commands

### alias/unalias - defining and removing short names for commands

With alias you can define custom names for simple commands. You remove such a synonym again with unalias. alias/unalias was already described in section 6.6.

### bc - a calculator

bc is a very extensive arithmetic calculator for the console with many mature functions. This calculator was already covered briefly in section 2.2.3.

### printenv and env - displaying environment variables

With printenv you can display the environment variables of a process. If you give no argument, all variables are printed; otherwise the value of the specified environment variable is shown.
> you@host > printenv PAGER less you@host > printenv MANPATH /usr/local/man:/usr/share/man:/usr/X11R6/man:/opt/gnome/share/man
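In scripts this is handy for falling back to a default when a variable is not set. A small sketch:

#!/bin/sh
# Sketch: use the user's PAGER if it is set, otherwise fall back to less.
PAGER=`printenv PAGER`
[ -z "$PAGER" ] && PAGER=less
echo "Using pager: $PAGER"
"$PAGER" /etc/passwd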
## 15.2 File utilities

### 15.2.1 Replacing spaces in file names

If your machine has once again accumulated a lot of files or directories with spaces in their names, as is common with MS Windows or MP3 files, you can either rename each file by hand or deal with all of them at once using a shell script. The following shell script does this work for you: all files and directories in the current directory whose names contain one or more spaces are renamed. The space is replaced by an underscore - you can of course choose a different character yourself.

#!/bin/sh
# Name: replaceSpace
# Replaces spaces in file and directory names with '_'
space=' '
replace='_'       # replacement character
# Process all file and directory names in the current directory
for source in *
do
   case "$source" in
      # Is there a space in the name ...
      *"$space"*)
         # First store the new name in dest ...
         dest=`echo "$source" | sed "s/$space/$replace/g"`
         # ... then check whether a file or directory
         # with this name already exists
         if test -f "$dest"
         then
            echo "Warning: $dest already exists ... (skipping)" 1>&2
            continue
         fi
         # Log the operation on standard output
         echo mv "$source" "$dest"
         # Now rename ...
         mv "$source" "$dest"
         ;;
   esac
done

The script in action:

> you@host > ./replaceSpace mv 01 I Believe I can fly.mp3 01_I_Believe_I_can_fly.mp3 mv Default User Default_User mv Meine Webseiten Meine_Webseiten mv Dokumente und Einstellungen Dokumente_und_Einstellungen mv Eigene Dateien Eigene_Dateien

Of course, the script can be extended considerably. For example, you could specify the replacement character yourself on the command line, as in the sketch below.
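A minimal sketch of such an extension; only the argument handling and the loop core are shown, and the default replacement character remains '_':

#!/bin/sh
# Sketch: take the replacement character from the command line ($1),
# falling back to '_' when no argument is given.
replace=${1:-_}
space=' '
for source in *"$space"*
do
   [ -e "$source" ] || continue          # no file name with a space at all
   dest=`echo "$source" | sed "s/$space/$replace/g"`
   echo mv "$source" "$dest"
   mv "$source" "$dest"
done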
### 15.2.2 Dateiendungen verändernÂ
Manchmal will man z. B. aus Backup-Zwecken die Endung von Dateien verändern. Bei einer Datei ist dies kein Problem, aber wenn hierbei ein paar Dutzend Dateien umbenannt werden sollen, wird einem mit einem Shell-Script geholfen. Das folgende Script ändert alle Dateiendungen eines bestimmten Verzeichnisses.
> #!/bin/sh # renExtension â Alle Dateiendungen eines speziellen # Verzeichnisses ändern # Verwendung : renExtension directory ext1 ext2 # Beispiel: renExt mydir .abc .xyz â "ändert '.abc' nach '.xyz' renExtension() { if [ $# -lt 3 ] then echo "usage: $0 Verzeichnis ext1 ext2" echo "ex: $0 mydir .abc .xyz (ändert .abc zu .xyz)" return 1 fi # Erstes Argument muss ein Verzeichnis sein if [ -d "$1" ] then : else echo "Argument $1 ist kein Verzeichnis!" return 1 fi # Nach allen Dateien mit der Endung $2 in $1 suchen for i in `find . $1 -name "*$2"` do # Suffix $2 vom Dateinamen entfernen base=`basename $i $2` echo "Verändert: $1/$i Nach: $1/${base}$3" # Umbenennen mit Suffix $3 mv $i $1/${base}$3 done return 0 } # Zum Testen # renExtension $1 $2 $3
Das Script bei der Ausführung:
> you@host > ls mydir file1.c file2.c file3.c you@host > ./renExtension mydir .c .cpp Verändert: mydir/mydir/file1.c Nach: mydir/file1.cpp Verändert: mydir/mydir/file2.c Nach: mydir/file2.cpp Verändert: mydir/mydir/file3.c Nach: mydir/file3.cpp you@host > ls mydir file1.cpp file2.cpp file3.cpp
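Wer auf die Funktion und auf find verzichten kann, erledigt die Umbenennung auch direkt mit der Parameter-Expansion der Shell. Die folgende Skizze erwartet Verzeichnis, alte und neue Endung als Argumente (der Scriptname ist frei gewählt):

    #!/bin/sh
    # Skizze: Dateiendungen per Parameter-Expansion ändern
    # Aufruf: ./renext Verzeichnis .alt .neu
    dir=$1; old=$2; new=$3
    for f in "$dir"/*"$old"
    do
       [ -f "$f" ] || continue
       echo "Verändert: $f Nach: ${f%$old}$new"
       mv "$f" "${f%$old}$new"
    done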
### 15.2.3 Nach veränderten Dateien in zwei Verzeichnissen suchen
Gern kopiert man ein Verzeichnis, um eine Sicherungskopie in der Hinterhand zu haben. Wenn Sie nach längerer Zeit wieder an den Dateien in dem Verzeichnis gearbeitet haben oder eventuell mehrere Personen einer Gruppe mit diesen Dateien arbeiten, möchte man doch wissen, welche und wie viele Dateien sich seitdem geändert haben. Sicherlich gibt es hierzu bessere Werkzeuge wie CVS oder Subversion, aber manchmal erscheint mir das doch ein wenig überdimensioniert. Hierzu ein einfaches Shellscript, mit dem Sie sich einen schnellen Überblick verschaffen können.
> #!/bin/sh # diffDir() vergleicht zwei Verzeichnisse mit einfachen # Dateien miteinander diffDir() { if [ -d "$1" -a -d "$2" ] then : else echo "usage: $0 dir1 dir2" return 1 fi count1=0; count2=0 echo echo "Unterschiedliche Dateien : " for i in $1/* do count1=`expr $count1 + 1` base=`basename $i` diff -b $1/$base $2/$base > /dev/null if [ "$?" -gt 0 ] then echo " $1/$base $2/$base" count2=`expr $count2 + 1` fi done echo "-------------------------------" echo "$count2 von $count1 Dateien sind unterschiedlich \ in $1 und $2" return 0 } # Zum Testen ... diffDir $1 $2
Das Script bei der Ausführung:
> you@host > ./diffDir Shellbuch_backup Shellbuch_aktuell Unterschiedliche Dateien : ------------------------------- 0 von 14 Dateien sind unterschiedlich in Shellbuch_backup und Shellbuch_aktuell you@host > cd Shellbuch_aktuell you@host > echo Hallo >> Kap003.txt you@host > echo Hallo >> Kap004.txt you@host > echo Hallo >> Kap005.txt you@host > ./diffDir Shellbuch_backup Shellbuch_aktuell Unterschiedliche Dateien : Shellbuch_backup/Kap003.txt Shellbuch_aktuell/Kap003.txt Shellbuch_backup/kap004.txt Shellbuch_aktuell/kap004.txt Shellbuch_backup/kap005.txt Shellbuch_aktuell/kap005.txt ------------------------------- 3 von 14 Dateien waren unterschiedlich in Shellbuch_backup und Shellbuch_aktuell
Wem das ganze Beispiel zu umfangreich ist, dem kann ich auch noch einen Einzeiler mit rsync anbieten, der ebenfalls zwei Verzeichnisse auf veränderte Dateien hin vergleicht:
> rsync -avnx --numeric-ids --delete $1/ $2
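Zur Einordnung: Die Option -n (--dry-run) sorgt dafür, dass rsync nur anzeigt, was übertragen oder gelöscht würde, ohne tatsächlich etwas zu verändern; lässt man sie weg, gleicht rsync die Verzeichnisse wirklich ab (inklusive Löschen durch --delete). In ein kleines Script verpackt könnte der Einzeiler etwa so aussehen (nur eine Skizze, der Scriptname ist frei gewählt):

    #!/bin/sh
    # Skizze: rsyncdiff - zeigt per Probelauf (-n) an, welche Dateien
    # sich zwischen zwei Verzeichnissen unterscheiden
    # Aufruf: ./rsyncdiff dir1 dir2
    rsync -avnx --numeric-ids --delete "$1"/ "$2"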
## 15.3 Systemadministration
Die Systemadministration dürfte wohl einer der Hauptgründe sein, weshalb Sie sich entschieden haben, die Shellscript-Programmierung zu erlernen. Zur Systemadministration gehören u. a. zentrale Themen wie die Benutzer- und Prozessverwaltung, Systemüberwachung, Backup-Strategien und das Auswerten bzw. Analysieren von Log-Dateien. Zu jedem dieser Themen werden Sie ein Beispiel für die Praxis kennen lernen und, falls das Thema recht speziell ist, auch eine Einführung erhalten.
### 15.3.1 Benutzerverwaltung
Plattenplatzbenutzung einzelner Benutzer auf dem Rechner
Wenn Sie einen Rechner mit vielen Benutzern verwalten müssen, so sollte man dem einzelnen Benutzer auch eine gewisse Grenze setzen, was den Plattenverbrauch betrifft. Die einfachste Möglichkeit ist es, die Heimverzeichnisse der einzelnen User zu überprüfen. Natürlich schließt dies nicht nur das /home-Verzeichnis ein (auch wenn es im Beispiel so verwendet wird). Am einfachsten sucht man in entsprechenden Verzeichnissen nach Dateien, die einen bestimmten Benutzer als Eigentümer haben, und addiert die Größe einer jeden gefundenen Datei.
Damit auch alle Benutzer erfasst werden, deren User-ID größer als 99 ist, werden sie einfach alle aus /etc/passwd extrahiert. Die Werte zwischen 1 und 99 sind gewöhnlich den System-Daemons bzw. dem root vorbehalten. Ein guter Grund übrigens, dass Sie, wenn Sie einen neuen User anlegen, die UID immer über 100 wählen.
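Der Grundgedanke lässt sich mit wenigen Zeilen skizzieren. Das Folgende ist ausdrücklich nur eine stark vereinfachte Skizze (nicht das weiter unten erwähnte Script »DiskQuota«): Alle Benutzer mit einer UID über 100 werden aus /etc/passwd gelesen, anschließend summiert find die Größe aller Dateien des jeweiligen Benutzers unterhalb von /home auf.

    #!/bin/sh
    # Skizze: Plattenverbrauch pro Benutzer (UID > 100) unterhalb von /home
    searchpath="/home"
    minuid=100
    awk -F: -v min=$minuid '$3 > min { print $1 }' /etc/passwd |
    while read user
    do
       # Größe aller regulären Dateien des Benutzers in KByte aufsummieren
       kbytes=`find $searchpath -user "$user" -type f -exec du -k {} + 2>/dev/null | awk '{ sum += $1 } END { print sum+0 }'`
       echo "$user : $kbytes KByte"
    done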
Hinweis   Die meisten Systeme nutzen inzwischen auch UIDs für die User, die größer als 1000 sind. Diese Angabe kann von System zu System variieren. Viele Systeme verwenden als »uid_start« auch den Wert 1000. Es könnte also sein, dass der Wert 100 (wie hier verwendet) zu niedrig ist. Allerdings sollte es für Sie wohl kein Problem darstellen, diesen Wert im Script zu ändern.
Hinweis   An dieser Stelle sollte auch auf das Quota-System hingewiesen werden. Das Quota-System wird verwendet, wenn Benutzer/Gruppen, die an einem System arbeiten und dort ein eigenes Verzeichnis besitzen, zu viele Daten in diesem Verzeichnis speichern/sammeln.
Mit Quota können Sie als Systemadministrator den verfügbaren Plattenplatz für jede/n Benutzer/Gruppe einschränken. Hierbei existieren zwei Grenzen, das Softlimit und das Hardlimit. Beim Softlimit darf der Benutzer die Grenze für eine kurze Zeit überschreiten. Dieser Zeitraum wird durch die »Grace Period« festgelegt. Beim Hardlimit darf der Benutzer (oder die Gruppe) diese Grenze keinesfalls überschreiten. Es gibt also keine Möglichkeit, dieses Limit zu umgehen. Das Quota-System liegt gewöhnlich jeder Distribution bei. Mehr dazu finden Sie auch in einem Mini-Howto in deutscher Sprache.
Hinweis   Bitte beachten Sie, wenn Sie weitere Verzeichnisse angeben, in denen nach Dateien eines bestimmten Benutzers gesucht werden soll (oder gar das Wurzelverzeichnis), dass dies sehr viel Zeit in Anspruch nehmen kann.

Das Script »DiskQuota« bei der Ausführung:

    # ./DiskQuota
    User tot hat den Account überzogen (Ist:543,72MB Soll:100MB)
    ---- Inzwischen beim User tot ----
    tot@host > mail
    ...
    N 14 <EMAIL>   Mon May  2 15:14   22/786   Limit überzogen
    ...
    ? 14
    Message 14:
    From: <EMAIL> (J.Wolf)
    Hallo tot,
    Soeben 2005-Mai-02 15:14:47 wurde festgestellt, dass Sie Ihr Limit
    von 100MB überzogen haben. Derzeit beträgt Ihr verwendeter Speicher
    543,72MB. Bitte beheben Sie den Umstand sobald wie möglich.
    Vielen Dank für das Verständnis.
Prozesse nach User sortiert mit Instanzen ausgeben
Ein weiterer Komfort, den man sich als Systemadministrator gern gönnt, wäre eine Ausgabe der Prozessüberwachung einzelner Benutzer, sortiert nach diesen. Das folgende, gut kommentierte Script sortiert die Prozesse nach Benutzern und fasst sogar die einzelnen Instanzen zusammen, wenn der Benutzer von einem Programm mehrere Instanzen ausführt. Führt der Benutzer z. B. dreimal bash als Shell aus, finden Sie in der Zusammenfassung 3 Instanz(en) von /bin/bash, statt dass jede Instanz einzeln aufgelistet wird.
#!/bin/sh # Name: psusers # Voraussetzung, dass dieses Script funktioniert, ist, dass die # Ausgabe von ps -ef auf Ihrem Rechner folgendes Format hat: # # you@host > ps -ef # UID PID PPID C STIME TTY TIME CMD # root 1 0 0 00:38 ? 00:00:04 init [5] # root 2 1 0 00:38 ? 00:00:00 [ksoftirqd/0] # # Wenn die Ausgabe bei Ihnen etwas anders aussieht, müssen Sie # das Script entsprechend anpassen (erste Zuweisung von USERNAME # und PROGNAME) # Variablen deklarieren # COUNTER=0; CHECKER=0; UCOUNT=1 PSPROG='/bin/ps -ef' SORTPROG='/bin/sort +0 â1 +7 â8' TMPFILE=/tmp/proclist_$$ # Beim ordentlichen Beenden TMPFILE wieder löschen trap "/bin/rm -f $TMPFILE" EXIT # Die aktuelle Prozessliste in TMPFILE speichern # $PSPROG | $SORTPROG > $TMPFILE # Daten in TMPFILE verarbeiten # grep -v 'UID[ ]*PID' $TMPFILE | while read LINE do # Zeilen in einzelne Felder aufbrechen set -- $LINE # Einzelne Felder der Ausgabe von ps -ef lauten: # UID PID PPID C STIME TTY TIME CMD # Anzahl der Parameter einer Zeile größer als 0 ... if [ $# -gt 0 ] then # Erstes Feld (UID) einer Zeile der Variablen # USERNAME zuordnen USERNAME=$1 # Die ersten sieben Felder einer Zeile entfernen shift 7 # Kommandonamen (CMD) der Variablen PROGNAME zuordnen PROGNAME=$* fi # Testet die Kopfzeile # if [ "$USERNAME" = "UID" ] then continue # nächsten Wert in der Schleife holen ... fi # Überprüfen, ob es sich um die erste Zeile von Daten handelt # if [ "$CHECKER" = "0" ] then CHECKER=1 UCOUNT=0 LASTUSERNAME="$USERNAME" # Programmname für die Ausgabe formatieren # auf 40 Zeichen beschränken .... # LASTPROGNAME=`echo $PROGNAME | \ awk '{print substr($0, 0, 40)}'` COUNTER=1; LASTCOUNT=1 echo "" echo "$USERNAME führt aus:....." continue # nächsten Wert von USERNAME holen fi # Logische Überprüfung durchführen # if [ $CHECKER -gt 0 -a "$USERNAME" = "$LASTUSERNAME" ] then if [ "$PROGNAME" = "$LASTPROGNAME" ] then COUNTER=`expr $COUNTER + 1` else # Ausgabe auf dem Bildschirm ... if [ $LASTCOUNT -gt 1 ] then echo " $LASTCOUNT Instanz(en) von ->"\ " $LASTPROGNAME" else echo " $LASTCOUNT Instanz(en) von ->"\ " $LASTPROGNAME" fi COUNTER=1 fi # Programmname für die Ausgabe formatieren # auf 40 Zeichen beschränken .... # LASTPROGNAME=`echo $PROGNAME | \ awk '{print substr($0, 0, 40)}'` LASTCOUNT=$COUNTER elif [ $CHECKER -gt 0 -a "$USERNAME" != "$LASTUSERNAME" ] then if [ $LASTCOUNT -gt 1 ] then echo " $LASTCOUNT Instanz(en) von >> $LASTPROGNAME" else echo " $LASTCOUNT Instanz(en) von >>"\ " $LASTPROGNAME" fi echo echo "$USERNAME führt aus:....." LASTUSERNAME="$USERNAME" # Programmname für die Ausgabe formatieren # auf 40 Zeichen beschränken .... # LASTPROGNAME=`echo $PROGNAME | \ awk '{print substr($0, 0, 40)}'` COUNTER=1 LASTCOUNT=$COUNTER fi done # DISPLAY THE FINAL USER INSTANCE DETAILS # if [ $COUNTER -eq 1 -a $LASTCOUNT -ge 1 ] then if [ $LASTCOUNT -gt 1 ] then echo " $LASTCOUNT Instanz(en) von >> $LASTPROGNAME" else echo " $LASTCOUNT Instanz(en) von >> $LASTPROGNAME" fi fi echo "------" echo "Fertig" echo "------"
Das Script bei der Ausführung:
you@host > ./psusers bin führt aus:..... 1 Instanz(en) von >> /sbin/portmap lp führt aus:..... 1 Instanz(en) von >> /usr/sbin/cupsd postfix führt aus:..... 1 Instanz(en) von -> pickup -l -t fifo -u 1 Instanz(en) von >> qmgr -l -t fifo -u root führt aus:..... 1 Instanz(en) von -> -:0 1 Instanz(en) von -> [aio/0] 1 Instanz(en) von -> /bin/bash /sbin/hotplug pci 1 Instanz(en) von -> /bin/bash /etc/hotplug/pci.agent ... you führt aus:..... 3 Instanz(en) von -> /bin/bash 1 Instanz(en) von -> /bin/ps -ef 1 Instanz(en) von -> /bin/sh /opt/kde3/bin/startkde 1 Instanz(en) von -> /bin/sh ./testscript 1 Instanz(en) von -> gpg-agent --daemon --no-detach 1 Instanz(en) von -> kaffeine -session 117f000002000111 1 Instanz(en) von -> kamix 1 Instanz(en) von -> kdeinit: Running... ...
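Für einen schnellen Überblick ohne eigenes Script genügt übrigens oft schon eine kurze Pipe aus ps, sort und uniq -c. Die Ausgabe ist weniger hübsch aufbereitet, liefert aber im Kern dieselbe Information, nämlich die Anzahl der Instanzen pro Benutzer und Kommando (die Spaltennamen von ps können je nach System leicht abweichen):

    you@host > ps -eo user,comm | sort | uniq -c | sort -rn | head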
Prozesse bestimmter Benutzer beenden
Häufig kommt es vor, dass bei einem Benutzer einige Prozesse »Amok« laufen bzw. man einen Prozess einfach beenden will (warum auch immer). Mit dem folgenden Script können Sie (als root) die Prozesse eines Benutzers mithilfe einer interaktiven Abfrage beenden. Nach der Eingabe des Benutzers werden alle laufenden Prozesse in einem Array gespeichert. Anschließend wird das komplette Array durchlaufen und nachgefragt, ob Sie den Prozess beenden wollen oder nicht. Zuerst wird immer versucht, den Prozess normal mit SIGTERM zu beenden. Gelingt dies nicht mehr, muss SIGKILL herhalten.
Hinweis   Da dieses Script Arrays verwendet, ist es nur in der bash bzw. Korn-Shell ausführbar. Muss das Script unbedingt auch in einer Bourne-Shell laufen, so könnte man die einzelnen Prozesse statt in ein Array auch in eine Datei (zeilenweise) schreiben und aus dieser zeilenweise wieder lesen.
#!/bin/ksh # Name: killuser # Wegen der Benutzung von Arrays "nur" für bash und Korn-Shell # nicht aber für Bourne-Shell (sh) geeignet while true do # Bildschirm löschen clear echo "Diese Script erlaubt Ihnen bestimmte Benutzer Prozesse" echo "zu beenden." echo echo "Name des Benutzers eingeben (mit q beenden) : " | \ tr -d '\n' read unam # Wurde kein Benutzer angegeben unam=${unam:-null_value} export unam case $unam in null_value) echo "Bitte einen Namen eingeben!" ;; [Qq]) exit 0 ;; root) echo "Benutzer 'root' ist nicht erlaubt!" ;; *) echo "Überprüfe $unam ..." typeset -i x=0 typeset -i n=0 if $(ps -ef | grep "^[ ]*$unam" > /dev/null) then for a in $(ps -ef | awk -v unam="$unam" '$1 ~ unam { print $2, $8}'| \ sort -nr +1 â2 ) do if [ $n -eq 0 ] then x=`expr $x + 1` var[$x]=$a n=1 elif [ $n -eq 1 ] then var2[$x]=$a n=0 fi done if [ $x -eq 0 ] then echo "Hier gibt es keine Prozesse zum Beenden!" else typeset -i y=1 clear while [ $y -le $x ] do echo "Prozess beenden PID: ${var[$y]} -> CMD: "\ " ${var2[$y]} (J/N) : " | tr -d '\n' read resp case "$resp" in [Jj]*) echo "Prozess wird beendet ..." # Zuerst versuchen, "normal" zu beenden echo "Versuche, normal zu beenden " \ " (15=SIGTERM)" kill â15 ${var[$y]} 2>/dev/null # Überprüfen, ob es geklappt hat # -> ansonsten # mit dem Hammmer killen if ps -p ${var[$y]} >/dev/null 2>&1 then echo "Versuche, 'brutal' zu beenden"\ " (9=SIGKILL)" kill â9 ${var[$y]} 2>/dev/null fi ;; *) echo "Prozess wird weiter ausgeführt"\ " ( ${var2[y]} )" ;; esac y=`expr $y + 1` echo done fi fi ;; esac sleep 2 done
Das Script bei der Ausführung:
# ./killuser Dieses Script erlaubt Ihnen, bestimmte Benutzer-Prozesse zu beenden. Name des Benutzers eingeben (mit q beenden) : john Prozess beenden PID: 4388 -> CMD: sleep (J/N) : J Prozess wird beendet ... Versuche, normal zu beenden (15=SIGTERM) Prozess beenden PID: 4259 -> CMD: holdonrunning (J/N) : N Prozess wird weiter ausgeführt ( holdonrunning ) Prozess beenden PID: 4203 -> CMD: -csh (J/N) : ...
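Wer keine interaktive Abfrage benötigt, kann auf den meisten Systemen auch direkt pkill verwenden. Die folgende Skizze beendet alle Prozesse eines Benutzers zunächst mit SIGTERM und greift nach einer kurzen Wartezeit zu SIGKILL (Scriptname und Wartezeit sind frei gewählt; pkill ist nicht auf jedem System vorhanden):

    #!/bin/sh
    # Skizze: alle Prozesse eines Benutzers beenden (erst SIGTERM, dann SIGKILL)
    # Aufruf: ./killalluser Benutzername
    if [ $# -lt 1 ] || [ "$1" = "root" ]
    then
       echo "usage: $0 Benutzer (nicht root)"
       exit 1
    fi
    pkill -15 -u "$1"     # zuerst normal beenden
    sleep 5
    pkill -9 -u "$1"      # was noch läuft, mit SIGKILL beenden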
Überwachung, wer sich im System einloggt
Einen Überblick, wer sich alles im System einloggt und eingeloggt hat, können Sie sich wie bekannt mit dem Kommando last ermitteln lassen. Gern würde man sich den Aufruf von last ersparen, um so immer aktuell neu eingeloggte Benutzer im System zu ermitteln. Dies kann man dann z. B. verwenden, um dem Benutzer eine Nachricht zukommen zu lassen, oder eben zu Überwachungszwecken.
Das Überwachen, ob sich ein neuer Benutzer im System eingeloggt hat, lässt sich auch in einem Shellscript mit last relativ leicht ermitteln. Hierzu müssen Sie eigentlich nur die Anzahl von Zeilen von last zählen und in einer bestimmten Zeit wieder miteinander vergleichen, ob eine neue Zeile hinzugekommen ist. Die Differenz beider Werte lässt sich dann mit last und einer Pipe nach head ausgeben. Dabei werden immer nur die neu hinzugekommenen letzten Zeilen mit head ausgegeben. Im Beispiel wird die Differenz beider last-Aufrufe alle 30 Sekunden ermittelt. Dieser Wert lässt sich natürlich beliebig hoch- bzw. runtersetzen. Außerdem wird im Beispiel nur eine Ausgabe auf das aktuelle Terminal (/dev/tty) vorgenommen. Hierzu würde sich beispielsweise das Kommando write oder wall sehr gut eignen. Als root könnten Sie somit jederzeit einem User eine Nachricht zukommen lassen, wenn dieser sich einloggt.
    #! /bin/sh
    # Name: loguser
    # Das Script überprüft, ob sich jemand im System eingeloggt hat

    # Pseudonym für das aktuelle Terminal
    outdev=/dev/tty
    fcount=0; newcount=0; timer=30; displaylines=0

    # Die Anzahl der Zeilen des last-Kommandos zählen
    fcount=`last | wc -l`

    while true
    do
       # Erneut die Anzahl der Zeilen des last-Kommandos zählen ...
       newcount=`last | wc -l`
       # ... und vergleichen, ob neue hinzugekommen sind
       if [ $newcount -gt $fcount ]
       then
          # Wie viele neue Zeilen sind hinzugekommen ...
          displaylines=`expr $newcount - $fcount`
          # Entsprechend viele neue Zeilen auf outdev ausgeben.
          # Hier würde sich auch eine Datei oder das Kommando
          # write sehr gut eignen, damit die Kommandozeile
          # nicht blockiert wird ...
          last | head -$displaylines > $outdev
          # neuen Wert an fcount zuweisen
          fcount=$newcount
       fi
       # timer Sekunden warten bis zur nächsten Überprüfung
       sleep $timer
    done
Das Script bei der Ausführung:
you@host > ./loguser john tty2 Wed May 4 23:46 still logged in root tty4 Wed May 4 23:46 still logged in tot tty2 Wed May 4 23:47 still logged in you tty5 Wed May 4 23:49 still logged in
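Soll die Meldung nicht nur auf dem eigenen Terminal landen, lässt sich die Ausgabezeile des Scripts leicht austauschen, etwa so (nur eine Skizze; wall schreibt an alle angemeldeten Benutzer und setzt je nach System root-Rechte voraus):

    # statt: last | head -$displaylines > $outdev
    last | head -$displaylines | wall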
Benutzer komfortabel anlegen, löschen, sperren und wieder aufheben
Eine ziemlich wichtige und regelmäßige Aufgabe, die Ihnen als Systemadministrator zufällt, dürfte das Anlegen und Löschen neuer Benutzerkonten sein.
Hinweis   Ich habe lange überlegt, ob ich diesen Teil der Benutzerverwaltung wieder streichen soll. Einerseits zeigt das Script hervorragend, wie man sich eine eigene Userverwaltung bauen kann und was dabei alles zu beachten ist. Andererseits bietet mittlerweile jede Distribution mindestens eine solche Userverwaltung an, und das meistens erheblich komfortabler. Das Script dürfte ohnehin nur unter Linux ordentlich laufen, und selbst dort könnten Sie noch Probleme mit der Portabilität bekommen, weil die einzelnen Distributionen ihr eigenes Süppchen kochen: Mal heißt das Kommando useradd, dann wieder adduser, und auch die Optionen von passwd sind teilweise unterschiedlich.
Um einen neuen Benutzer-Account anzulegen, wird ein neuer Eintrag in der Datei /etc/passwd angelegt. Dieser Eintrag beinhaltet gewöhnlich einen Benutzernamen aus acht Zeichen, eine User-ID (UID), eine Gruppen-ID (GID), ein Heimverzeichnis (/home) und eine Login-Shell. Die meisten Linux-/UNIX-Systeme speichern dann noch ein verschlüsseltes Passwort in /etc/shadow, was natürlich bedeutet, dass Sie auch hier einen Eintrag (mit passwd) vornehmen müssen. Beim Anlegen eines neuen Benutzers können Sie entweder zum Teil vorgegebene Standardwerte verwenden oder eben eigene Einträge anlegen. Es ist außerdem möglich, die Dauer der Gültigkeit des Accounts festzulegen.
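Zur Veranschaulichung: Ein Eintrag in /etc/passwd besteht aus sieben durch Doppelpunkte getrennten Feldern. Die folgende Zeile ist frei erfunden und zeigt lediglich das Format (das x im zweiten Feld verweist auf das Passwort in /etc/shadow):

    # Benutzername:Passwort:UID:GID:Kommentar:Heimverzeichnis:Login-Shell
    jack:x:1003:100:J.Wolf:/home/jack:/bin/bash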
Im Beispiel ist es nicht möglich, auch noch eine neue Gruppe anzulegen, sprich, Sie können nur einen neuen Benutzer anlegen und diesem eine bereits vorhandene Gruppe (setzt einen vorhandenen Eintrag in /etc/group voraus) zuweisen. Auf das Anlegen einer neuen Gruppe wurde aus Übersichtlichkeitsgründen verzichtet, da sich das Script sonst unnötig in die Länge ziehen würde. Allerdings sollte es Ihnen mit der Vorlage dieses Scripts nicht schwer fallen, ein recht ähnliches Script für das Anlegen einer Gruppe zu schreiben.
Nebenbei ist es auch realisierbar, einen Benutzer mit passwd zu sperren und die Sperre wieder aufzuheben. Beim Löschen eines Benutzers werden zuvor noch all seine Daten gesucht und gelöscht, bevor der eigentliche Benutzer-Account aus /etc/passwd gelöscht werden kann. Im Beispiel wird die Suche wieder nur auf das /home-Verzeichnis beschränkt, was Sie allerdings in der Praxis wieder den Gegebenheiten anpassen sollten.
#! /bin/sh # Name: account # Mit diesem Script können Sie einen Benutzer # * Anlegen # * Löschen # * Sperren # * Sperre wieder aufheben # Pfade, die beim Löschen eines Accounts benötigt werden, # ggf. erweitern und ergänzen um bspw. /var /tmp ... # überall eben, wo sich Dateien vom Benutzer befinden können searchpath="/home" usage() { echo "Usage: $0 Benutzer (Neuen Benutzer anlegen)" echo "Usage: $0 -d Benutzer (Benutzer löschen)" echo "Usage: $0 -l Benutzer (Benutzer sperren)" echo "Usage: $0 -u Benutzer (Gesperrten Benutzer wieder freigeben)" } # Nur root darf dieses Script ausführen ... # if [ `id -u` != 0 ] then echo "Es werden root-Rechte für dieses Script benötigt!" exit 1 fi # Ist das Kommando useradd auf dem System vorhanden ... # which useradd > /dev/null 2>1& if [ $? -ne 0 ] then echo "Das Kommando 'useradd' konnte auf dem System nicht "\ " gefunden werden!" exit 1 fi if [ $# -eq 0 ] then usage exit 0 fi if [ $# -eq 2 ] then case $1 in -d) # Existiert ein entsprechender Benutzer if [ "`grep $2 /etc/passwd | \ awk -F : '{print $1}'`" = "$2" ] then echo "Dateien und Verzeichnisse von '$2' "\ "werden gelöscht" # Alle Dateien und Verz. des Benutzers löschen find $searchpath -user $2 -print | sort -r | while read file do if [ -d $file ] then rmdir $file else rm $file fi done else echo "Ein Benutzer '$2' existiert nicht in "\ "/etc/passwd!" exit 1 fi # Benutzer aus /etc/passwd und /etc/shadow löschen userdel -r $2 2>/dev/null echo "Benutzer '$2' erfolgreich gelöscht!" exit 0 ;; -l) # Existiert ein entsprechender Benutzer if [ "`grep $2 /etc/passwd | \ awk -F : '{print $1}'`" = "$2" ] then passwd -l $2 fi echo "Benutzer '$2' wurde gesperrt" exit 0 ;; -u) # Existiert ein entsprechender Benutzer if [ "`grep $2 /etc/passwd | \ awk -F : '{print $1}'`" = "$2" ] then passwd -u $2 fi echo "Benutzer '$2': Sperre aufgehoben" exit 0 ;; -h) usage exit 1 ;; -*) usage exit 1 ;; *) usage exit 1 ;; esac fi if [ $# -gt 2 ] then usage exit 1 fi ##################################################### # Einen neuen Benutzer anlegen # # Existiert bereits ein entsprechender Benutzer # if [ "`grep $1 /etc/passwd | awk -F : '{print $1}'`" = "$1" ] then echo "Ein Benutzer '$1' existiert bereits in /etc/passwd ...!" exit 1 fi # Bildschirm löschen clear # Zuerst wird die erste freie verwendbare User-ID gesucht, # vorgeschlagen und bei Bestätigung verwendet, oder es wird eine # eingegebene User-ID verwendet, die allerdings ebenfalls # überprüft wird, ob sie bereits in /etc/passwd existiert. # userid=`tail â1 /etc/passwd |awk -F : '{print $3 + 1}'` echo "Eingabe der UID [default: $userid] " | tr -d '\n' read _UIDOK # ... wurde nur ENTER betätigt if [ "$_UIDOK" = "" ] then _UIDOK=$userid # ... es wurde eine UID eingegeben -> # Überprüfen ob bereits vorhanden ... elif [ `grep $_UIDOK /etc/passwd | awk -F : '{print $3}'` = "" ] then _UIDOK=$userid else echo "UID existiert bereits! ENTER=Neustart / STRG+C=Ende" read $0 $1 fi # Selbiges mit Gruppen-ID # groupid=`grep users /etc/group |awk -F : '{print $3}'` echo "Eingabe der GID: [default: $groupid] " | tr -d '\n' read _GIDOK if [ "$_GIDOK" = "" ] then _GIDOK=$groupid elif [ "`grep $_GIDOK /etc/group`" = "" ] then echo "Dies Gruppe existiert nicht in /etc/group! "\ "ENTER=Neustart / STRG+C=Ende" read $0 $1 fi # Das Benutzer-Heimverzeichnis /home abfragen # echo "Eingabe des Heimverzeichnisses: [default: /home/$1] " | \ tr -d '\n' read _HOME # Wurde nur ENTER gedrückt, default verwenden ... 
if [ "$_HOME" = "" ] then _HOME="/home/$1" fi # Die Standard-Shell für den Benutzer festlegen # echo "Eingabe der Shell: [default: /bin/bash] " | tr -d '\n' read _SHELL # Wurde nur ENTER gedrückt, default verwenden ... if [ "$_SHELL" = "" ] then _SHELL=/bin/bash # Gibt es überhaupt eine solche Shell in /etc/shells ... elif [ "`grep $_SHELL /etc/shells`" = "" ] then echo "'$_SHELL' gibt es nicht in /etc/shells! "\ " ENTER=Neustart / STRG+C=Ende" read $0 $1 fi # Kommentar oder Namen eingeben echo "Eingabe eines Namens: [beliebig] " | tr -d '\n' read _REALNAME # Expire date echo "Ablaufdatum des Accounts: [MM/DD/YY] " | tr -d '\n' read _EXPIRE clear echo echo "Folgende Eingaben wurden erfasst:" echo "---------------------------------" echo "User-ID : [$_UIDOK]" echo "Gruppen-ID : [$_GIDOK]" echo "Heimverzeichnis : [$_HOME]" echo "Login-Shell : [$_SHELL]" echo "Name/Komentar : [$_REALNAME]" echo "Account läuft aus : [$_EXPIRE]" echo echo "Account erstellen? (j/n) " read _verify case $_verify in [nN]*) echo "Account wurde nicht erstellt!" | tr -d '\n' exit 0 ;; [jJ]*) useradd -u $_UIDOK -g $_GIDOK -d $_HOME -s $_SHELL \ -c "$_REALNAME" -e "$_EXPIRE" $1 cp -r /etc/skel $_HOME chown -R $_UIDOK:$_GIDOK $_HOME passwd $1 echo "Benutzer $1 [$_REALNAME] hinzugefügt "\ "am `date`" >> /var/adm/newuser.log finger -m $1 |head â2 sleep 2 echo "Benutzer $1 erfolgreich hinzugefügt!" ;; *) exit 1;; esac
Das Script bei der Ausführung:
linux:/home/you # ./account jack Eingabe der UID [default: 1003](ENTER) Eingabe der GID: [default: 100](ENTER) Eingabe des Heimverzeichnisses: [default: /home/jack](ENTER) Eingabe der Shell: [default: /bin/bash](ENTER) Eingabe eines Namens: [beliebig] J.Wolf Ablaufdatum des Accounts : [MM/DD/YY](ENTER) ... Folgende Eingaben wurden erfasst: --------------------------------- User-ID : [1003] Gruppen-ID : [100] Heimverzeichnis : [/home/jack] Login-Shell : [/bin/bash] Name/Komentar : [J.Wolf] Account läuft aus : [] Account erstellen? (j/n) j Changing password for jack. New password:******** Re-enter new password:******** Password changed Login: jack Name: J.Wolf Directory: /home/jack Shell: /bin/bash Benutzer jack erfolgreich hinzugefügt! linux:/home/you # ./account -l jack Passwort geändert. Benutzer 'jack' wurde gesperrt linux:/home/you # ./account -u jack Passwort geändert. Benutzer 'jack': Sperre aufgehoben linux:/home/you # ./account -d jack Dateien und Verzeichnisse von 'jack' werden gelöscht Benutzer 'jack' erfolgreich gelöscht!
### 15.3.2 Systemüberwachung
Warnung, dass der Plattenplatz des Dateisystems an seine Grenzen stößt
Gerade, wenn man mehrere Dateisysteme oder gar Server betreuen muss, fällt es oft schwer, sich auch noch über den Plattenplatz Gedanken zu machen. Hierzu eignet sich ein Script, mit dem Sie den fünften Wert von »df âk« auswerten und daraufhin überprüfen, ob eine bestimmte von Ihnen festgelegte Grenze erreicht wurde. Ist die Warnschwelle erreicht, können Sie eine Mail an eine bestimmte Adresse verschicken oder eventuell ein weiteres Script starten lassen, welches diverse Aufräum- oder Komprimierarbeiten durchführt. Natürlich macht dieses Script vor allem dann Sinn, wenn es im Intervall mit einem cron-Job gestartet wird.
Hinweis   Auch hier sei nochmals auf das schon beschriebene Quota-System vor dem Script »DiskQuota« hingewiesen.
    #!/bin/sh
    # Name: chcklimit
    # Dieses Script verschickt eine Mail, wenn der Plattenverbrauch
    # eines Filesystems an ein bestimmtes Limit stößt.

    # Ab wie viel Prozent soll eine Warnung verschickt werden
    WARN_CAPACITY=80
    # Wohin soll eine Mail verschickt werden
    [email protected]

    call_mail_fn() {
       servername=`hostname`
       msg_subject="$servername - Dateisystem(${FILESYSTEM}) verwendet ${FN_VAR1}% - festgestellt am: `date`"
       echo $msg_subject | mail -s "${servername}:Warnung" $TOUSER
    }

    if [ $# -lt 1 ]
    then
       echo "usage: $0 FILESYSTEM"
       echo "Bspw.: $0 /dev/hda6"
       exit 1
    fi

    # Zu überwachendes Dateisystem merken (für die Betreffzeile)
    FILESYSTEM=$1

    # Format von df -k:
    # Dateisystem  1K-Blöcke  Benutzt  Verfügbar  Ben%  Eingehängt auf
    # /dev/hda4    15528224   2610376  12917848   17%   /
    # Den fünften Wert wollen wir haben: 'Ben%'
    VAR1=`df -k ${FILESYSTEM} | /usr/bin/tail -1 | /usr/bin/awk '{print $5}'`
    # Prozentzeichen herausschneiden
    VAR2=`echo $VAR1 | /usr/bin/awk '{ print substr($1,1,length($1)-1) }'`

    # Wurde die Warnschwelle erreicht ... ?
    if [ $VAR2 -ge ${WARN_CAPACITY} ]
    then
       FN_VAR1=$VAR2
       call_mail_fn
    fi
Das Script bei der Ausführung:
    you@host > ./chcklimit /dev/hda6
    ...
    you@host > mail
    >N  1 <EMAIL>   Mon May  2 16:18   18/602   linux:Warnung
    ? 1
    Message 1:
    From: <EMAIL> (J.Wolf)
    linux - Dateisystem(/dev/hda6) verwendet 88% - festgestellt am:
    Mo Mai  2 16:17:59 CEST 2005
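Damit die Überwachung regelmäßig läuft, bietet sich ein cron-Eintrag an. Die folgende Zeile ist nur ein Beispiel (Pfad und Intervall müssen Sie an Ihr System anpassen); sie würde das Script stündlich für /dev/hda6 ausführen:

    # Eintrag in der crontab von root (crontab -e):
    # Minute Stunde Tag Monat Wochentag Kommando
    0 * * * * /usr/local/bin/chcklimit /dev/hda6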
Kommandos bzw. Scripts auf einem entfernten Rechner ausführen
Das Ausführen von Kommandos oder gar ganzen Scripts auf mehreren Rechnern, gestartet vom lokalen Rechner aus, hört sich komplizierter an, als es ist. Und vor allem ist es auch sicherer, als so mancher jetzt vielleicht denken mag. Dank guter Verschlüsselungstechnik und weiter Verbreitung bietet sich hierzu ssh an; die r-Tools fallen wegen ihrer Sicherheitslücken flach (siehe Abschnitt 14.12.10). Die Syntax, um mit ssh Kommandos oder Scripts auf einem anderen Rechner auszuführen, sieht wie folgt aus:
ssh username@hostname "kommando1 ; kommando2 ; script"
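Ein konkreter Aufruf könnte beispielsweise so aussehen (Benutzer- und Rechnername sowie das Script sind hier nur beispielhaft gewählt):

    you@host > ssh jwolf@rechner1 "uname -a ; df -k ; ./mein_script.sh"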
Mit diesem Wissen fällt es nicht schwer, sich ein entsprechendes Script zusammenzubasteln. Damit es ein wenig flexibler ist, soll es auch möglich sein, Shellscripts, die noch nicht auf dem entfernten Rechner liegen, zuvor noch mit scp in ein bestimmtes Verzeichnis hochzuladen, um es anschließend auszuführen. Ebenso soll es möglich sein, in ein bestimmtes Verzeichnis zu wechseln, um dann entsprechende Kommandos oder Scripts auszuführen. Natürlich setzt dies voraus, dass auf den Rechnern auch ein entsprechendes Verzeichnis existiert. Die Rechner, auf denen Kommandos oder Scripts ausgeführt werden sollen, tragen Sie in die Datei namens hostlist.txt ein. Bei mir sieht diese Datei wie folgt aus:
you@host > cat hostlist.txt <EMAIL> [email protected]
Im Beispiel finden Sie also zwei entfernte Rechner, bei denen das gleich folgende Script dafür sorgt, dass Sie von Ihrem lokalen Rechner aus schnell beliebige Kommandos bzw. Scripts ausführen können.
Hinweis   Damit Sie nicht andauernd ein Passwort eingeben müssen, empfiehlt es sich auch hier, SSH-Schlüssel zu verwenden (siehe Abschnitt 14.12.12).
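Das Erzeugen und Verteilen eines solchen Schlüssels ist schnell erledigt. Die beiden folgenden Aufrufe sollen nur das Prinzip zeigen, die Details finden Sie im erwähnten Abschnitt (der Zielrechner ist beispielhaft gewählt):

    you@host > ssh-keygen -t rsa
    you@host > ssh-copy-id [email protected]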
#!/bin/sh # Name: sshell # Kommandos bzw. Scripts auf entfernten Rechnern ausführen # ggf. den Pfad zur Datei anpassen HOSTS="hostlist.txt" usage() { echo "usage: progname [-option] [Verzeichnis] "\ " Kommando_oder_Script" echo echo "Option:" echo "-d :in ein bestimmtes Verzeichnis auf dem Host wechseln" echo "-s :Script in ein bestimmtes Verzeichnis hochladen und"\ " Ausführen" echo echo "Syntax der Host-Liste: " echo "Username@hostname1" echo "Username@hostname2" echo "..." exit 1 } if [ $# -eq 0 ] then usage fi # Datei 'hostlist.txt' überprüfen if [ -e $HOSTS ] then : else echo "Datei $HOSTS existiert nicht ..." touch hostlist.txt if [ $? -ne 0 ] then echo "Konnte $HOSTS nicht anlegen ...!" exit 1 else echo "Datei $HOSTS erzeugt, aber noch leer ...!" usage exit 1 fi fi # Optionen überprüfen ... case $1 in -d) if [ $# -lt 3 ] then usage fi DIR=$2 shift; shift ;; -s) if [ $# -lt 3 ] then usage fi DIR=$2 SCRIPT="yes" shift; shift ;; -*) usage ;; esac # Die einzelnen Hosts durchlaufen ... for host in `cat $HOSTS` do echo "$host : " CMD=$* if [ "$SCRIPT" = "yes" ] then scp $CMD ${host}:${DIR} fi ret=`ssh $host "cd $DIR; $CMD"` echo "$ret" done
Das Script bei der Ausführung:
Inhalt des Heimverzeichnisses ausgeben:
you@host > ./sshell ls -l <EMAIL> : total 44 drwx------ 4 us10129 us10129 4096 May 14 13:45 backups drwxr-xr-x 8 us10129 us10129 4096 May 9 10:13 beta.pronix.de -rw-rw-r-- 1 us10129 us10129 66 Dec 2 02:13 db_cms.bak drwxrwxr-x 2 us10129 us10129 4096 Mar 11 07:49 dump -rw------- 1 us10129 us10129 952 May 14 14:00 mbox drwxrwxr-x 2 us10129 us10129 4096 Mar 28 18:03 mysqldump drwxr-xr-x 20 us10129 us10129 4096 May 19 19:56 www.pronix.de [email protected] : total 24 drwxr-xr-x 2 jwolf jwolf 512 May 8 14:03 backups drwxr-xr-x 3 jwolf jwolf 21504 Sep 2 2004 dev
Inhalt des Verzeichnisses $HOME/backups ausgeben:
you@host > ./sshell -d backups ls -l <EMAIL> : total 8 drwxrwxr-x 3 us10129 us10129 4096 May 18 18:38 Shellbuch drwxrwxr-x 3 us10129 us10129 4096 May 14 13:46 Shellbuch_bak -rw-rw-r-- 1 us10129 us10129 0 May 20 12:45 file1 -rw-rw-r-- 1 us10129 us10129 0 May 20 12:45 file2 -rw-rw-r-- 1 us10129 us10129 0 May 20 12:45 file3 [email protected] : total 6 -rw-r--r-- 1 jwolf jwolf 0 May 8 13:38 file1 -rw-r--r-- 1 jwolf jwolf 0 May 8 13:38 file2 -rw-r--r-- 1 jwolf jwolf 0 May 8 13:38 file3 -rwx------ 1 jwolf jwolf 29 May 8 13:58 hallo.sh -rwx------ 1 jwolf jwolf 29 May 8 13:48 mhallo -rwx------ 1 jwolf jwolf 29 May 8 14:03 nhallo
Dateien beginnend mit »file*« im Verzeichnis $HOME/backups löschen:
you@host > ./sshell -d backups rm file* <EMAIL> : [email protected] :
Inhalt des Verzeichnisses $HOME/backups erneut ausgeben:
you@host > ./sshell -d backups ls -l <EMAIL> : total 5 drwxrwxr-x 3 us10129 us10129 4096 May 18 18:38 Shellbuch drwxrwxr-x 3 us10129 us10129 4096 May 14 13:46 Shellbuch_bak [email protected] : total 3 -rwx------ 1 jwolf jwolf 29 May 8 13:58 hallo.sh -rwx------ 1 jwolf jwolf 29 May 8 13:48 mhallo -rwx------ 1 jwolf jwolf 29 May 8 14:03 nhallo
Neues Verzeichnis testdir anlegen:
you@host > ./sshell mkdir testdir <EMAIL> : [email protected] :
Script hallo.sh ins neue Verzeichnis testdir schieben und ausführen:
you@host > ./sshell -s testdir ./hallo.sh <EMAIL> : hallo.sh 100 % 67 0.1KB/s 00:00 Ich bin das Hallo Welt-Script! [email protected] : hallo.sh 100 % 67 0.1KB/s 00:00 Ich bin das Hallo Welt-Script!
Verzeichnis testdir wieder löschen:
tot@linux:~> ./sshell rm -r testdir <EMAIL> : [email protected] :
# 15.4 Backup-Strategien
Date: 2012-12-14
Categories:
Tags:
## 15.4 Backup-Strategien
Beim Thema Backup handelt es sich um ein sehr spezielles und vor allem enorm wichtiges Thema, weshalb hier eine umfassendere Einführung unumgänglich ist. Anhand dieser kurzen Einführung werden Sie schnell feststellen, wie viele Aspekte es gibt, die man bei der richtigen Backup-Lösung beachten muss. Zwar finden Sie anschließend auch einige Scripts in der Praxis dazu, doch handelt es sich bei diesem Thema schon eher um einen sehr speziellen Fall, bei dem man einiges Wissen benötigt und auch so manches berücksichtigen muss, sodass sich keine ultimative Lösung erstellen lässt.
### 15.4.1 Warum ein Backup?
Es gibt drei verschiedene Fälle von Datenverlusten:
 | Eine einzelne Datei (oder wenige Dateien) wird aus Versehen gelöscht oder falsch modifiziert (bspw. von einem Virus infiziert). In solch einem Fall ist das Wiederherstellen einer Datei häufig nicht allzu kompliziert. Bei solchen Daten ist eine tar-, cpio- oder afio-Sicherung recht schnell wieder zurückgeladen. Natürlich ist dies immer abhängig vom Speichermedium. Wenn Sie dieses Backup auf einem Band linear suchen müssen, ist das Wiederherstellen nicht unbedingt vorteilhaft. Hierzu ist es häufig sinnvoll, immer wieder ein Backup in einem anderen (entfernten) Verzeichnis abzuspeichern. |
| --- | --- |
 | Eine Festplatte ist defekt. Häufig denkt man, ist bei mir noch nie vorgekommen, aber wenn es dann mal crasht, ist guter Rat oft teuer. Hier gibt es eine recht elegante Lösung, wenn man RAID-Systeme im Level 1 oder 5 verwendet. Dabei werden ja die Daten redundant auf mehreren Platten gespeichert. Das bedeutet, dass Sie beim Ausfall einer Festplatte die Informationen jederzeit von der Kopie wieder holen können. Sobald Sie die defekte Platte gegen eine neue austauschen, synchronisiert das RAID-System die Platten wieder, sodass alle Daten nach einer gewissen Zeit wieder redundant auf den Platten vorhanden sind. Zur Verwendung von RAID gibt es eine Software- und eine Hardwarelösung mit einem RAID-Controller. Zwar ist die Hardwarelösung erheblich schneller, aber auch teurer. |
| --- | --- |
   | Auf der anderen Seite sollte man bei der Verwendung von RAID bedenken, dass jeder Fehler, der z. B. einem Benutzer oder gar dem Administrator selbst unterläuft, wie bspw. Software-Fehler, Viren, instabiles System, versehentliches Löschen von Daten usw., sofort auf das RAID-System bzw. auf alle Speichermedien im System repliziert wird. |
| --- | --- |
   |    |
 | Datenverlust durch Hardware- oder Softwarefehler oder Elementarschäden (wie Feuer, Wasser oder Überspannung). Hier wird eine Komplettsicherung der Datenbestände nötig, was bei den Giga- bis Terabytes an Daten, die häufig vorliegen, kein allzu leichtes Unterfangen darstellt. Gewöhnlich geht man hierbei zwei Wege: |
| --- | --- |
 | Man schiebt die Daten auf einen entfernten Rechner (bspw. mit cp oder scp), am besten gleich in komprimierter Form (mittels gzip oder bzip2). |
| --- | --- |
 | Die wohl gängigste Methode dürfte das Archivieren auf wechselbaren Datenträgern wie Magnetband, CD oder DVD sein (abhängig vom Datenumfang). Meistens werden hierzu die klassischen Tools wie cp, dd, scp oder rsync verwendet. Auch sollte man unter Umständen eine Komprimierung mit gzip oder bzip2 vorziehen. Für Bänder und Streamer kommen häufig die klassischen Tools wie tar, afio, cpio oder taper zum Einsatz. |
| --- | --- |
### 15.4.2 Sicherungsmedien
Über das Speichermedium macht man sich wohl zunächst keine allzu großen Gedanken. Meistens greift man auf eine billigere Lösung zurück. Ohne auf die einzelnen Speichermedien genauer einzugehen, gibt es hierbei einige Punkte, die es zu überdenken gilt:
 | Maximales Speichervolumen – will man mal eben schnell sein lokales Verzeichnis sichern, wird man wohl mit einer CD bzw. DVD als Speichermedium auskommen. Doch bevor Sie vorschnell urteilen, lesen Sie am besten noch die weiteren Punkte. |
| --- | --- |
 | Zugriffsgeschwindigkeit – wenn Sie wichtige Datenbestände im Umfang von mehreren Gigabytes schnell wiederherstellen müssen, werden Sie wohl kaum ein Speichermedium wählen, welches nur 1 MB in der Sekunde übertragen kann. Hier gilt es also, die Transferrate in MB pro Sekunde im Auge zu behalten. |
| --- | --- |
 | Zuverlässigkeit – wie lange ist die Haltbarkeit von Daten auf einer CD oder DVD oder gar auf einem Magnetband? Darüber werden Sie sich wohl bisher recht selten Gedanken gemacht haben – doch es gibt in der Tat eine durchschnittliche Haltbarkeit von Daten auf Speichermedien. |
| --- | --- |
 | Zulässigkeit – in manchen Branchen, z. B. in der Buchhaltung, ist es gesetzlich vorgeschrieben, nur einmal beschreibbare optische Datenträger zu verwenden. Bei der Verwendung von mehrfach beschreibbaren Medien sollte man immer die Risiken bedenken, dass diese wieder beschrieben werden können (sei es nun mit Absicht oder aus Versehen). |
| --- | --- |
### 15.4.3 Varianten der Sicherungen
Generell kann man von zwei Varianten einer Sicherung sprechen:
 | Vollsicherung – dabei werden alle zu sichernden Daten vollständig gesichert. Der Vorteil ist, dass man jederzeit ohne größeren Aufwand die gesicherten Daten wieder zurückladen kann. Man sollte allerdings bedenken, ob man bestimmte Daten bei einer Vollsicherung ausgrenzen sollte – besonders die sicherheitsrelevanten. Der Nachteil daran ist ganz klar: Eine Vollsicherung kann eine ganz schöne Menge an Platz (und auch Zeit) auf einem Speichermedium verbrauchen. |
| --- | --- |
 | Inkrementelle Sicherung – hierbei wird nur einmal eine Vollsicherung vorgenommen. Anschließend werden immer nur diejenigen Daten gesichert, die sich seit der letzten Sicherung verändert haben. Hierbei wird weniger Platz auf einem Speichermedium (und auch Zeit) benötigt (eine kleine Skizze dazu folgt im Anschluss an diese Aufzählung). |
| --- | --- |
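Wie sich eine solche inkrementelle Sicherung mit einfachen Bordmitteln nachbilden lässt, zeigt die folgende kleine Skizze. Sie arbeitet mit einer Zeitstempel-Datei und find -newer; die Pfade /daten und /backups sind dabei nur angenommene Beispiele und müssen an das eigene System angepasst werden:

> find /daten -type f -newer /backups/.letzte_sicherung -print | cpio -o | gzip > /backups/incr_`date +%d_%m_%Y`.cpio.gz
> touch /backups/.letzte_sicherung

Beim allerersten Lauf existiert die Zeitstempel-Datei natürlich noch nicht; dann führt man einmal eine Vollsicherung durch und legt die Datei anschließend mit touch an.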
### 15.4.4 Bestimmte Bereiche sichern
Hierzu einige kurze Vorschläge, wie man bestimmte Bereiche sinnvoll sichert.
 | Einzelne Dateien – einzelne Dateien kann man mal schnell mit cp oder scp auf eine andere Festplatte, Diskette oder ein anderes Verzeichnis übertragen. Natürlich steht Ihnen auch die Möglichkeit zur Verfügung, die Daten mittels tar, afio oder cpio auf externe Datenträger zu sichern. Gewöhnlich werden diese Archive auch noch mittels gzip oder bzip2 komprimiert. Gern werden hierzu auch Datenträger wie CD oder DVD verwendet. |
| --- | --- |
 | Dateibäume – ganze Dateibäume beinhalten häufig eine Menge Daten. Hier verwendet man als Speichermedium häufig Streamer, Magnetbänder oder optische Datenträger (CD, DVD), je nach Umfang. Als Werkzeuge werden gewöhnlich tar, afio und cpio eingesetzt. Aber auch Programme zur Datensynchronisation wie rsync oder unison werden oft genutzt. Zum Sichern auf optische Speichermedien wie CD oder DVD werden gewöhnlich die Tools mkisofs oder (mit GUI) xcdroast, K3b oder gtoaster verwendet. |
| --- | --- |
 | Ganze Festplatte – zum Sichern ganzer Festplatten verwendet man unter Linux häufig das Programmpaket amanda (Advanced Maryland Automatic Network Disk Archiver), das ein komplettes Backup-System zur Verfügung stellt. Da es nach dem Client-Server-Prinzip arbeitet, ist somit auch eine Datensicherung über das Netzwerk möglich. Der Funktionsumfang von amanda ist gewaltig, weshalb ich hier nur auf die Webseite http://www.amanda.org verweise. |
| --- | --- |
 | Dateisysteme – mithilfe des Kommandos dd lässt sich ein komplettes Dateisystem auf eine andere Platte oder auf ein Band sichern (ein kurzes Beispiel folgt nach dieser Aufzählung). Beachten Sie allerdings, dass beim Duplizieren beide Partitionen die gleiche Größe haben müssen (falls Sie Festplatten duplizieren wollen) und bestenfalls beide Platten nicht gemountet sind (zumindest nicht die Zielplatte). Da dd selbst physikalisch Block für Block kopiert, kann das Tool nicht auf defekte Blöcke überprüfen. Daher sollte man hierbei auch gleich noch mit dem Kommando badblocks nach defekten Blöcken suchen. Natürlich müssen Sie auch beim Zurückspielen darauf achten, dass die Zielpartition nicht kleiner als die Quellpartition ist. Ein anderes Tool, das auch auf Low-Level-Ebene und fehlertolerant arbeitet, ist dd_rescue. |
| --- | --- |
   | Ein weiteres hervorragendes Tool zum Sichern und Wiederherstellen ganzer Partitionen ist partimage, welches wie dd und dd_rescue in der Lage ist, unterschiedliche Dateisysteme zu sichern (ext2/ext3, reiserfs, xfs, UFS (Unix), HFS (Mac), NTFS/FAT16/FAT32 (Win32)), da es wie die beiden anderen genannten Tools auf Low-Level-Ebene arbeitet. Mehr zu partimage entnehmen Sie bitte der Webseite http://www.partimage.org. Natürlich bietet sich Ihnen auch die Möglichkeit, Dateisysteme zu synchronisieren. Hierbei stehen Ihnen Werkzeuge wie rsync, unison, WebDAV usw. zur Verfügung. |
| --- | --- |
   |    |
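Zum eben erwähnten Duplizieren mit dd und zur Suche nach defekten Blöcken mit badblocks hier noch eine kleine, rein beispielhafte Skizze. Die Gerätenamen /dev/hda1 und /dev/hdb1 sind frei gewählt und müssen selbstverständlich an die eigenen Gegebenheiten angepasst werden; beide Partitionen sollten dabei nicht gemountet sein:

> badblocks -v /dev/hda1
> dd if=/dev/hda1 of=/dev/hdb1 bs=64k conv=noerror,sync

Die Angabe conv=noerror,sync sorgt dafür, dass dd bei Lesefehlern nicht abbricht, sondern die betroffenen Blöcke mit Nullen auffüllt und weiterkopiert.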
### 15.4.5 Backup über ssh mit tar
Das Sichern von Dateien zwischen Servern ist eigentlich mit dem Kommando scp recht einfach:
> scp some-archive.tgz user@host:/home/backups
Nachteile von scp beim Durchlaufen ganzer Verzeichnisbäume (mit der Option -r) sind die vielen Kommandoaufrufe (alles wird einzeln kopiert) und die unflexiblen Kompressionsmöglichkeiten.
Hierzu wird in der Praxis oft tar verwendet. Verknüpfen wir nun tar mit ssh, haben wir exakt das, was wir wollen. Wenn Sie nämlich ssh ohne eine interaktive Login-Sitzung starten, erwartet ssh Daten von der Standardeingabe und gibt das Ergebnis auf die Standardausgabe aus. Das hört sich stark nach der Verwendung einer Pipe an. Wollen Sie beispielsweise alle Daten aus Ihrem Heimverzeichnis auf einem entfernten Rechner archivieren, können Sie wie folgt vorgehen:
> tar zcvf - /home | ssh user@host "cat > homes.tgz"
Natürlich ist es möglich, das komprimierte Archiv auch auf ein Magnetband des entfernten Rechners zu schreiben (entsprechendes Medium und Rechte vorausgesetzt):
> tar zcvf - /home | ssh user@host "cat > /dev/tape"
Wollen Sie stattdessen eine Kopie einer Verzeichnisstruktur auf Ihrer lokalen Maschine direkt auf das Filesystem einer anderen Maschine kopieren, so können Sie dies so erreichen (Sie synchronisieren das entfernte Verzeichnis mit dem lokalen):
> cd /home/us10129/www.pronix.de ; tar zcf - html/ \ | ssh user@host \ "cd /home/us10129/www.pronix.de; mv html html.bak; tar zpxvf -"
Hier sichern Sie u. a. auch das Verzeichnis html auf host, indem Sie es umbenennen (html.bak) – für den Fall der Fälle. Dann erstellen Sie eine exakte Kopie von /home/us10129/www.pronix.de/html – Ihrem lokalen Verzeichnis – mit sämtlichen identischen Zugriffsrechten und der Verzeichnisstruktur auf dem entfernten Rechner. Da hierbei tar mit der Option z verwendet wird, werden die Daten vor dem »Hochladen« komprimiert, was natürlich bedeutet, dass eine geringere Datenmenge transferiert werden muss und der Vorgang erheblich schneller vonstatten gehen kann. Natürlich ist dies abhängig von der Geschwindigkeit beider Rechner, also wie schnell bei diesen die (De-)Kompression durchgeführt werden kann.
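Sofern das eingesetzte tar (etwa GNU tar) die Option j für bzip2 unterstützt, lässt sich dasselbe Beispiel auch mit der meist etwas stärkeren bzip2-Kompression formulieren; user@host steht dabei wie oben nur stellvertretend für den entfernten Rechner:

> tar jcvf - /home | ssh user@host "cat > homes.tar.bz2"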
Müssen Sie auf dem entfernten Rechner etwas wiederherstellen und verfügen über ein Backup auf der lokalen Maschine, ist dies mit folgender Kommandoverkettung kein allzu großes Unterfangen mehr:
> ssh user@host "cd /home/us10129/www.pronix.de; tar zpvxf -" \ < big-archive.tgz
So stellen Sie das komplette Verzeichnis /home/us10129/www.pronix.de/ mit dem Archiv big-archive.tgz wieder her. Gleiches können Sie natürlich auch jederzeit in die andere Richtung vornehmen:
> ssh user@host "cat big-archive.tgz" | tar zpvxf -
Damit Sie nicht andauernd ein Passwort eingeben müssen, empfiehlt es sich auch hier, SSH-Schlüssel zu verwenden (siehe Abschnitt 14.12.12). Das folgende Script demonstriert Ihnen die Möglichkeit, ssh und tar in einem Backup-Script zu verwenden.
> #!/bin/sh # Name: ssh_tar # Backups mit tar über ssh # Konfiguration, entsprechend anpassen # SSH_OPT="-l" SSH_HOST="192.135.147.2" SSH_USER="jwolf" # Default-Angaben # LOCAL_DIR="/home/tot/backups" REMOTE_DIR="/home/jwolf/backups" stamp=`date +%d_%m_%Y` BACKUP_FILE="backup_${stamp}.tgz" usage() { echo "usage: star [-ph] [-pl] [-sh] [-sl] [-r] [-l] ..." echo echo "Optionen : " echo " -ph : (lokales) Verzeichnis packen und hochladen " \ " (remote) in 'REMOTE_DIR'" echo " Beispiel: star -ph lokalesVerzeichnis " echo " -pl = (remote) Verzeichnis packen und runterladen"\ " (lokal) in 'LOCAL_DIR'" echo " Beispiel: star -pl remoteVerzeichnis " echo " -sh = Synchronisiert ein Host-Verzeichnis mit einem "\ "lokalen Verzeichnis" echo " Beispiel: star -sh lokalesVerzeichnis "\ "remoteVerzeichnis syncVerzeichnis " echo " -sl = Synchronisiert ein lokales Verzeichnis mit "\ "einem Host-Verzeichnis" echo " Beispiel: star -sl remoteVerzeichnis "\ "lokalesVerzeichnis syncVerzeichnis " echo " -r = (remote) Wiederherstellen eines"\ " Host-Verzeichnisses" echo " Beispiel: star -r remoteVerzeichnis "\ "lokalTarArchiv.tgz" echo " -l = (lokal) Wiederherstellen eines lokalen "\ "Verzeichnisses" echo " Beispiel: star -l lokalesVerzeichnis "\ "remoteTarArchiv.tgz" # ... exit 1 } case "$1" in -ph) if [ $# -ne 2 ] then usage else cd $2; tar zcvf - "." | \ ssh $SSH_OPT $SSH_USER $SSH_HOST \ "cat > ${REMOTE_DIR}/${BACKUP_FILE}" echo "Verzeichnis '$2' nach "\ "${SSH_HOST}:${REMOTE_DIR}/${BACKUP_FILE} "\ "gesichert" fi ;; -pl) if [ $# -ne 2 ] then usage else ssh $SSH_OPT $SSH_USER $SSH_HOST \ "cd $2; tar zcvf - ." | \ cat > ${LOCAL_DIR}/${BACKUP_FILE} echo "Verzeichnis ${SSH_HOST}:${2} nach "\ "${LOCAL_DIR}/${BACKUP_FILE} gesichert" fi ;; -sh) if [ $# -ne 4 ] then usage else cd $2 tar zcf - $4/ | ssh $SSH_OPT $SSH_USER $SSH_HOST \ "cd $3; mv $4 ${4}.bak; tar zpxvf -" echo "Verzeichnis ${2}/${4} mit"\ " ${SSH_HOST}:${3}/${4} synchronisiert" fi ;; -sl) if [ $# -ne 4 ] then usage else cd $3; mv $4 ${4}.bak ssh $SSH_OPT $SSH_USER $SSH_HOST "cd ${2}; tar zcvf - ${4}"\ | tar zpvxf - echo "Verzeichnis ${SSH_HOST}:${2}/${4} mit"\ " ${3}/${4} synchronisiert" fi ;; -r) if [ $# -ne 3 ] then usage else ssh $SSH_OPT $SSH_USER $SSH_HOST \ "cd ${2}; tar zpvxf -" < $3 echo "${SSH_HOST}:$2 mit dem Archiv $3 "\ "Wiederhergestellt" fi ;; -l) if [ $# -ne 3 ] then usage else cd $2 ssh $SSH_OPT $SSH_USER $SSH_HOST "cat $3" | \ tar zpvxf - echo "$2 mit dem Archiv ${SSH_HOST}:${3} "\ "Wiederhergestellt" fi ;; -*) usage;; *) usage;; esac
Das Script bei der Ausführung:
> you@host > ./ssh_tar -ph Shellbuch_aktuell/ ./ ./kap004.txt ./kap005.txt ... ... ./Kap013.txt ./Kap014.txt Verzeichnis 'Shellbuch_aktuell/' nach 192.135.147.2:/home/jwolf/backups/backup_20_05_2005.tgz gesichert
Ein Blick zum Rechner »192.135.147.2«:
> jwolf@jwolf$ ls backups/ backup_20_05_2005.tgz you@host > ./ssh_tar -pl backups/ ./ ./backup_20_05_2005.tgz ./kap004.txt ./kap005.txt ... ... ./Kap012.txt ./Kap013.txt ./Kap014.txt Verzeichnis 192.135.147.2:backups/ nach /home/you/backups/ backup_20_05_2005.tgz gesichert you@host > ls backups/ backup_07_05_2005.tgz backup_20_05_2005.tgz Shellbuch
Erstellt im Remoteverzeichnis backups ein Ebenbild des Verzeichnisses Shellbuch_aktuell aus dem Heimverzeichnis des lokalen Rechners:
> you@host > ./ssh_tar -sh $HOME backups Shellbuch_aktuell Shellbuch_aktuell/ Shellbuch_aktuell/kap004.txt Shellbuch_aktuell/kap005.txt ... Shellbuch_aktuell/Kap013.txt Shellbuch_aktuell/Kap014.txt Verzeichnis /home/you/Shellbuch_aktuell mit 192.135.147.2:backups/Shellbuch_aktuell synchronisiert
Erstellt im lokalen Heimverzeichnis $HOME/backup eine exakte Kopie des entfernten Remote-Verzeichnisses backups/Shellbuch:
> you@host > ./ssh_tar -sl backups $HOME/backups Shellbuch_aktuell Shellbuch_aktuell/ Shellbuch_aktuell/kap004.txt Shellbuch_aktuell/kap005.txt ... ... Shellbuch_aktuell/Kap013.txt Shellbuch_aktuell/Kap014.txt Verzeichnis 192.135.147.2:backups/Shellbuch_aktuell mit /home/tot/backups/Shellbuch_aktuell synchronisiert you@host > ls backups/ backup_07_05_2005.tgz backup_20_05_2005.tgz Shellbuch_aktuell Shellbuch_aktuell.bak you@host > ls backups/Shellbuch_aktuell Kap003.txt kap005.txt Kap007.txt Kap010.txt Kap013.txt ...
Wiederherstellen eines entfernten Verzeichnisses mit einem lokalen Archiv. Im Beispiel wird das entfernte Verzeichnis backups/Shellbuch_aktuell mit dem lokalen Archiv backup_20_05_2005.tgz wiederhergestellt:
> you@host > ls backups/ backup_07_05_2005.tgz backup_20_05_2005.tgz Shellbuch_aktuell Shellbuch_aktuell.bak you@host > ./ssh_tar -r backups/Shellbuch_aktuell > backups/backup_20_05_2005.tgz ./ ./kap004.txt ./kap005.txt ... ... ./Kap013.txt ./Kap014.txt 192.135.147.2:backups/Shellbuch_aktuell mit dem Archiv backups/backup_20_05_2005.tgz wiederhergestellt
Dasselbe Beispiel in anderer Richtung. Hier wird das lokale Verzeichnis backups/Shellbuch_aktuell mit dem Archiv backup_20_05_2005.tgz, welches sich auf dem entfernten Rechner im Verzeichnis backups befindet, wiederhergestellt:
> you@host > ./ssh_tar -l backups/Shellbuch_aktuell > backups/backup_20_05_2005.tgz ./ ./kap004.txt ./kap005.txt ... ... ./Kap013.txt ./Kap014.txt backups/Shellbuch_aktuell mit dem Archiv 192.135.147.2:backups/backup_20_05_2005.tgz wiederhergestellt
Hinweis   Natürlich funktioniert dieses Script wie hier demonstriert nur mit einem Key-Login (siehe Abschnitt 14.12.12). Da Backup-Scripts aber ohnehin chronologisch (z. B. per cron) laufen sollten, ist ein ssh-key immer sinnvoll, da man das Passwort nicht in einer Textdatei speichern muss.
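Ein solches Key-Login ist schnell eingerichtet. Die folgenden beiden Zeilen sind nur eine grobe Skizze: user@host steht stellvertretend für den entfernten Rechner, und ssh-copy-id muss auf dem System vorhanden sein (andernfalls hängen Sie den öffentlichen Schlüssel von Hand an die Datei authorized_keys auf dem entfernten Rechner an):

> ssh-keygen -t rsa
> ssh-copy-id user@host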
### 15.4.6 Daten mit rsync synchronisieren
Was sich mit ssh und tar realisieren lässt, gelingt natürlich auch mit rsync. Das folgende einfache Script synchronisiert entweder ein lokales Verzeichnis mit einem entfernten Verzeichnis oder umgekehrt. Um hierbei auch die Vertraulichkeit zu gewährleisten, »tunnelt« man das Ganze durch ssh. Das folgende Script demonstriert Ihnen, wie Sie rsync zum komfortablen Synchronisieren zweier entfernter Verzeichnisse verwenden können.
> #!/bin/sh # ssyncron # Script zum Synchronisieren von Daten usage() { echo "usage: prgname [-option] [Verzeichnis]" echo echo "-u : Ein Verzeichnis auf dem Server mit einem"\ " lokalen synchronisieren" echo "-d : Ein lokales Verzeichnis mit einem Verzeichnis"\ " auf dem Server synchronisieren" exit 1 } # Konfigurationsdaten # # Pfad zu den Daten (Lokal) local_path="$HOME/" # Pfad zu den Dateien (Server) remote_path="/home/us10129" # Loginname username="<EMAIL>" # Optionen zum Download '-d' D_OPTIONS="-e ssh -av --exclude '*.xvpics' --exclude 'cache' --exclude 'bestellen'" # Optionen zum Hochladen '-u' U_OPTIONS="-e ssh -av" # rsync vorhanden ... if [ "`which rsync`" = "" ] then echo "Das Script benötigt 'rsync' zur Ausführung ...!" exit 1 fi # Pfad zu rsync RSYNC=`which rsync` site=$2 case "$1" in # Webseite herunterladen – Synchronisieren Lokal mit Server -d) [ -z $2 ] && usage # Verzeichnis fehlt ... $RSYNC $D_OPTIONS \ $username:${remote_path}/${site}/ ${local_path}${site}/ ;; # Webseite updaten – Synchronisieren Server mit Lokal -u) $RSYNC $U_OPTIONS \ ${local_path}${site}/ $username:${remote_path}/${site}/ ;; -*) usage ;; *) usage ;; esac
Entferntes Verzeichnis mit dem lokalen Verzeichnis synchronisieren:
> you@host > ./ssyncron -d backups/Shellbuch receiving file list ... done ./ Martin/ Kap001.doc Kap001.sxw Kap002.sxw Kap003.txt Kap007.txt ... ... Martin/Kap001.doc Martin/Kap002.sxw Martin/Kap003.sxw Martin/Kap004.sxw Martin/Kap005.sxw kap004.txt kap005.txt newfile.txt whoami.txt wrote 516 bytes read 1522877 bytes 38566.91 bytes/sec total size is 1521182 speedup is 1.00
Eine neue lokale Datei atestfile.txt erzeugen und in das entfernte Verzeichnis synchronisieren:
> you@host > touch backups/Shellbuch/atestfile.txt you@host > ./ssyncron -u backups/Shellbuch building file list ... done Shellbuch/ Shellbuch/atestfile.txt wrote 607 bytes read 40 bytes 86.27 bytes/sec total size is 1521182 speedup is 2351.13
Deleting some files in the local directory Shellbuch (to simulate data loss) and then restoring them from the remote directory Shellbuch by synchronising again:
> you@host > rm backups/Shellbuch/Kap00[1-9]* you@host > ./ssyncron -d backups/Shellbuch receiving file list ... done ./ Kap001.doc Kap001.sxw Kap002.sxw Kap003.txt Kap007.txt Kap008.txt Kap009.txt wrote 196 bytes read 501179 bytes 28650.00 bytes/sec total size is 3042364 speedup is 6.07
Once there is nothing left to do in either direction, everything is synchronised:
> you@host > ./ssyncron -u backups/Shellbuch building file list ... done wrote 551 bytes read 20 bytes 87.85 bytes/sec total size is 1521182 speedup is 2664.07 you@host > ./ssyncron -d backups/Shellbuch receiving file list ... done wrote 56 bytes read 570 bytes 96.31 bytes/sec total size is 1521182 speedup is 2430.00
Note: In practice it would also make sense to call rsync with the -b (backup) option, which keeps backup copies of old file versions so that you can fall back on several versions if necessary, and with the -z option, which enables compression (see the sketch below).
Note: The same applies here as for the previous script: you should use an ssh key for a key-based login (see Section 14.12.12).
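One way to wire those options into ssyncron is to extend the two OPTIONS variables; this is only a sketch, and the backup directory name is an assumption, not part of the original script:
> # -z compresses during transfer, -b keeps copies of files that would be
> # overwritten, --backup-dir collects those copies in a separate directory
> D_OPTIONS="-e ssh -avzb --backup-dir=rsync_backup \
>    --exclude '*.xvpics' --exclude 'cache' --exclude 'bestellen'"
> U_OPTIONS="-e ssh -avzb --backup-dir=rsync_backup"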
### 15.4.7 Sending files and directories by e-mail
Suppose you want to create a backup with cpio and have it delivered to you automatically. The simplest way is to send it by e-mail as an attachment; thanks to mailboxes that hold many megabytes, this should no longer be a problem these days. An attachment cannot simply be added as is, though: you need uuencode for that (see Section 14.12.6). So, in typical cpio fashion, you first walk through a complete directory and pipe the list of files found (with find) to cpio. To turn the result into a single file suitable as an attachment, cpio in turn pipes its output to gzip, which compresses the whole directory. This gzipped attachment then has to be encoded with uuencode and finally sent with mail or mailx (or, if need be, with sendmail) to the desired recipient. All of these steps are chained together with pipes:
> find "$Dir" -type f -print | cpio -o${option} | gzip | uuencode "$Archive" | $Mail -s "$Archive" "$User" || exit 1
Here is the complete script:
> #!/bin/sh # Name: mailcpio # Archiviert Dateien und Verzeichnisse per cpio, komprimiert mit # gzip und verschickt das Archiv an eine bestimmte E-Mail-Adresse PROGN="$0" # Benötigte Programme: mail oder mailx... if [ "`which mailx`" != "" ] then Mail="mailx" if [ "`which mail`" != "" ] then Mail="mail" fi else echo "Das Script benötigt 'mail' bzw. 'mailx' zur Ausführung!" exit 1 fi # Benötigt 'uuencode' für den Anhang if [ "`which uuencode`" = "" ] then echo "Das Script benötigt 'uuencode' zur Ausführung!" exit 1 fi # Benötigt 'cpio' if [ "`which cpio`" = "" ] then echo "Das Script benötigt 'cpio' zur Ausführung!" exit 1 fi Usage () { echo "$PROGN â Versendet ganze Verzeichnisse per E-Mail" echo "usage: $PROGN [option] e-mail-adresse"\ " {datei|Verzeichnis} [datei|Verzeichnis] ..." echo echo "Hierbei werden alle angegebenen Dateien und "\ "Verzeichnisse (inskl. Unterverzeichnisse)" echo "an eine angegebene Mail-Adresse gesendet. Das"\ " Archiv wird mittels gzip komprimiert." echo "Option:" echo "-s : Keine Ausgabe von cpio" echo "-v : Macht cpio gesprächig" exit 1 } while [ $# -gt 0 ] do case "$1" in -v) option=Bv ;; -s) Silent=yes ;; --) shift; break ;; -*) Usage ;; *) break ;; esac shift done if [ $# -lt 2 ] then Usage fi User="$1"; shift for Dir do Archive="${Dir}.cpio.gz" # Verzeichnis nicht lesbar ... if [ ! -r "$Dir" ] then echo "Kann $Dir nicht lesen â (wird ignoriert)" continue fi [ "$Silent" = "" ] && echo "$Archive -> " find "$Dir" -type f -print | cpio -o${option} | gzip | uuencode "$Archive" | $Mail -s "$Archive" "$User" || exit 1 done
The script in action:
> you@host > ./mailcpio -s <EMAIL> logfiles 3 blocks you@host > ./mailcpio -v <EMAIL>.de logfiles logfiles.cpio.gz -> logfiles/testscript.log.mail logfiles/testscript.log 1 block
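On the receiving side the attachment has to be decoded and unpacked again. A minimal sketch, assuming the mail body has been saved to a file named mailbody.txt and the archive was created from the logfiles directory as above:
> uudecode mailbody.txt      # recreates logfiles.cpio.gz (name taken from the begin line)
> gunzip logfiles.cpio.gz    # yields logfiles.cpio
> cpio -idv < logfiles.cpio  # -i extract, -d create directories, -v verbose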
### 15.4.8 Startup scripts
Once the kernel has been started, it can at first access the root partition only in read-only mode. As its first process, the kernel starts init (/sbin/init) with PID 1. This process is regarded as the parent of all further processes that are started. init first takes care of configuring the system and starting numerous daemon processes.
Note: It has to be said right away that no universally valid statements can be made about init; the individual distributions often do their own thing here. The differences concern in particular the directories in which the init files reside and the configuration files that are taken into account. This also means, of course, that the init packages of different distributions are usually completely incompatible and cannot be exchanged. What follows is therefore a general overview of the init process (more precisely, of System V init).
Before going into more detail, here is a brief overview of a normal system start:
1. After the system has been switched on and the kernel has been loaded, the kernel starts the program (or rather the process) /sbin/init with PID 1.
2. init first evaluates the configuration file /etc/inittab.
3. init then runs a script for system initialisation. The name and path of this script are highly distribution-specific.
4. Next, init runs the rc script, which is also located in different places (and possibly under different names) depending on the distribution.
5. rc then starts the individual scripts located in a directory named rc[n].d, where n is the runlevel. These script files in the rc[n].d directories start the system services (daemons) and are also called startup scripts. The location of these directories is distribution-specific; usually you will find them under /etc or /etc/init.d.
# init and the runlevels
Normally init uses seven runlevels, each of which contains a group of services that the system is to start (or stop). You have surely heard that Linux/UNIX is a multi-user operating system, i.e. it runs in multi-user mode. It is, however, also possible to start Linux/UNIX in single-user mode, and it is the different runlevels that make this possible in the first place.
On Linux, the default runlevel is defined in the file /etc/inittab; which level that is depends heavily on the distribution. Which runlevel starts which services is determined by symlinks in the individual rc[n].d directories. These symlinks usually point to the start scripts of the services, which are generally stored under /etc/init.d.
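To see which default runlevel a system uses and which runlevel it is currently in, a quick check might look like this (standard SysV tools, assumed to be present):
> grep ':initdefault:' /etc/inittab   # e.g. id:5:initdefault:
> runlevel                            # previous and current runlevel, e.g. "N 5"
> who -r                              # alternative way to show the current runlevel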
First, a short overview of the different runlevels and their meaning in the most common distributions:
Runlevel | Meaning |
| --- | --- |
0 | Halts the system completely (shutdown with halt). |
1 or S | Single-user mode |
2 or M | Multi-user mode without networking |
3 | Multi-user mode with networking, but without starting X |
4 | Normally not used and therefore free for your own purposes. |
5 | Multi-user mode with networking and a graphical login (X start). After login, the graphical desktop is usually started. |
6 | Reboots the system (shutdown with reboot). |
As you can see, runlevels 0 and 6 are special cases in which the system does not remain for long: it is either halted (runlevel 0) or rebooted (runlevel 6). For normal multi-user operation, runlevels 2, 3 and 5 are used most of the time, where runlevel 5 also starts an X login manager (such as xdm, kdm or gdm). Runlevel 1 (or S) is defined differently on most systems, and runlevel 4 is almost never used.
Note: Incidentally, Linux supports 10 runlevels; runlevels 7 to 9 are simply not defined.
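Switching between runlevels at run time is done with init or telinit; a short sketch (all of these require root privileges):
> init 1      # or: telinit 1, drop to single-user mode
> telinit 5   # back to multi-user mode with graphical login
> telinit q   # tell init to re-read /etc/inittab after changes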
# Runlevel 1 and S
In runlevel 1, the single-user mode, most multi-user and login processes are terminated, so the system is guaranteed to run with minimal overhead. Runlevel 1 naturally provides full root access to the system. Runlevel S was introduced so that a system also asks for the root password in this mode; under System V init there is no real runlevel S, it exists only to prompt for the root password. This runlevel is usually meant for administrators who want to apply patches.
# /etc/inittab
The file /etc/inittab specifies what init has to do at the various runlevels. When the machine is booted, init works its way up from runlevel 0 to the runlevel defined as the default (default runlevel) in /etc/inittab. To make the transition from one level to the next run smoothly, init executes the actions listed in /etc/inittab. The same happens, of course, in the opposite direction when the system is shut down or rebooted.
Using inittab directly is not exactly ideal, however, which is why additional layers in the form of a script for switching runlevels were introduced. This script (rc, usually found under /etc/init.d/rc or /etc/rc.d/rc) is normally invoked from inittab. rc in turn runs further scripts in a runlevel-specific directory in order to bring the system into its new (runlevel) state.
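For illustration, typical System V inittab entries look roughly like the following lines; labels, paths and the default runlevel are distribution-specific and are only assumptions here:
> id:5:initdefault:                      # default runlevel
> l2:2:wait:/etc/init.d/rc 2             # when entering runlevel 2, run the rc script for level 2
> l5:5:wait:/etc/init.d/rc 5             # the same for runlevel 5
> ca::ctrlaltdel:/sbin/shutdown -r now   # reaction to Ctrl+Alt+Del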
Many books dissect the file /etc/inittab in more detail at this point (its keywords, for example), which is certainly instructive, but in practice you hardly ever have to deal with it as a system administrator, because the interface just mentioned is suitable for every application.
# Creating and running startup scripts
The terminology around startup scripts is often not easy to see through, so here is an example. You will usually find the master copies of the startup scripts in the directory /etc/init.d.
> you@host > ls -l /etc/init.d/ insgesamt 308 -rwxr-xr-x 1 root root 2570 2005â03â02 18:45 acpid -rwxr-xr-x 1 root root 1098 2005â02â24 10:29 acpi-support -rwxr-xr-x 1 root root 11038 2005â03â25 23:08 alsa -rwxr-xr-x 1 root root 1015 2004â11â26 12:36 anacron -rwxr-xr-x 1 root root 1388 2005â03â01 04:11 apmd -rwxr-xr-x 1 root root 1080 2005â02â18 11:37 atd -rw-r--r-- 1 root root 2805 2005â01â07 19:35 bootclean.sh -rwxr-xr-x 1 root root 1468 2005â01â07 19:36 bootlogd -rwxr-xr-x 1 root root 1371 2005â01â07 19:35 bootmisc.sh -rwxr-xr-x 1 root root 1316 2005â01â07 19:35 checkfs.sh -rwxr-xr-x 1 root root 7718 2005â01â07 19:35 checkroot.sh -rwxr-xr-x 1 root root 5449 2004â12â26 14:12 console-screen.sh -rwxr-xr-x 1 root root 1168 2004â10â29 18:05 cron ...
Each of these scripts is responsible for a daemon or some other aspect of the system, and each of them understands the arguments "start" and "stop", with which you can initialise or terminate the corresponding service, for example:
> # /etc/init.d/cron * Usage: /etc/init.d/cron start|stop|restart|reload|force-reload
Here you tried to call the script cron, which is responsible for starting and stopping the cron daemon, and you are shown the possible arguments with which the script can be invoked. Besides "start" and "stop" you will often find "restart", which in essence does the same as a "stop" followed by a "start". There is usually also a "reload" option, which does not terminate the service but tells it to re-read its configuration (usually via the signal SIGHUP).
In the following example, the script sleep_daemon is to be started automatically at system startup and stopped automatically at shutdown. Here is the script:
> #!/bin/sh # Name : sleep_daemon sleep 1000 &
The script does nothing more than create a "sleeping daemon". Besides simple shell scripts you can of course run any other program in practice, whether a binary or a script in any language; in fact you could use any of the shell scripts in this book for this (whether that makes sense or not). I have moved this sleep_daemon script to the directory /usr/sbin and set the appropriate execute permissions (for everyone). You will probably need root privileges for this step. Alternatively, you can use the directory /usr/local/sbin.
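The installation step itself might look like this (a sketch; /usr/local/sbin works just as well as /usr/sbin):
> cp sleep_daemon /usr/sbin/
> chmod 755 /usr/sbin/sleep_daemon   # readable and executable for everyone
> # or in a single step:
> install -m 755 sleep_daemon /usr/sbin/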
The startup script with which you can start, stop and restart sleep_daemon looks like this:
> #!/bin/sh # Name : sleep_daemon DAEMON="/usr/sbin/sleep_daemon" test -f $DAEMON || exit 0 case "$1" in start) echo -n "Starte sleep_daemon" $DAEMON echo "." ;; stop) echo -n "Stoppe sleep_daemon" killall sleep echo "." ;; restart) echo -n "Stoppe sleep_daemon" killall sleep echo "." echo -n "Starte sleep_daemon" $DAEMON echo "." ;; # Hierzu wird auch gern folgende Sntax eingesetzt: # $0 stop # $0 start;; *) echo "Usage: $0 {start|stop|restart}" exit 1 ;; esac
I have used the same name here, which you do not have to do in practice. You can now move this startup script to the directory /etc/init.d and again make it executable. In theory you can already start and stop the service by hand:
> # /etc/init.d/sleep_daemon start Starte sleep_daemon. # /etc/init.d/sleep_daemon restart Stoppe sleep_daemon. Starte sleep_daemon. # /etc/init.d/sleep_daemon stop Stoppe sleep_daemon.
For the service to be started and stopped automatically whenever a certain runlevel is entered or left, the master control script (rc) started by init needs additional information about which scripts to run. Instead of rummaging through the directory /etc/init.d to work out when a script has to be started at which runlevel, the master control script looks into a directory named rc[n].d (e.g. rc1.d, rc2.d, ... rc6.d), where n stands for the runlevel it is about to enter. Here, for example, the directory rc2.d on Ubuntu Linux:
> # ls -l /etc/rc2.d/ lrwxrwxrwx 1 root root 17 K11anacron -> ../init.d/anacron lrwxrwxrwx 1 root root 17 S05vbesave -> ../init.d/vbesave lrwxrwxrwx 1 root root 18 S10sysklogd -> ../init.d/sysklogd lrwxrwxrwx 1 root root 15 S11klogd -> ../init.d/klogd lrwxrwxrwx 1 root root 14 S12alsa -> ../init.d/alsa lrwxrwxrwx 1 root root 13 S14ppp -> ../init.d/ppp lrwxrwxrwx 1 root root 16 S19cupsys -> ../init.d/cupsys lrwxrwxrwx 1 root root 15 S20acpid -> ../init.d/acpid lrwxrwxrwx 1 root root 14 S20apmd -> ../init.d/apmd ...
You can see right away that the entries in rc[n].d are usually symbolic links that point to the startup scripts in the init.d directory. If you look at the other runlevel directories, you will notice that the names of the symbolic links all begin with either an "S" or a "K", followed by a number and the actual service name (e.g. S14alsa). This too is quickly explained: when init climbs from one runlevel to the next higher one, all scripts beginning with "S" are run, with the argument "start". When init drops from a higher runlevel to a lower one, all scripts beginning with "K" (kill) are run, usually with the argument "stop". The numbers act as a kind of priority: for "start", the scripts in a runlevel directory are executed in alphabetical order, for "stop" in reverse order, so the numbering is used to control the order of execution.
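Heavily simplified, the loop inside the rc master script therefore does something like the following; this is only a sketch of the principle, real implementations handle far more cases:
> RUNLEVEL=2
> for script in /etc/rc${RUNLEVEL}.d/K*; do     # services to be stopped ...
>     [ -x "$script" ] && "$script" stop
> done
> for script in /etc/rc${RUNLEVEL}.d/S*; do     # ... then services to be started
>     [ -x "$script" ] && "$script" start
> done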
Now you will of course want to add your own startup script. To tell the system that a daemon is to be started, you likewise place a symbolic link in the appropriate directory. Most services start their daemons in runlevel 2, so an entry is made there as well. To stop the daemon cleanly again, you also need a link in runlevel 0. And since some systems distinguish between shutdown and reboot, you should also create an entry in runlevel 6, to be sure that the daemon is terminated correctly on a reboot.
> # ln -s /etc/init.d/sleep_daemon /etc/rc2.d/S23sleep_daemon # ln -s /etc/init.d/sleep_daemon /etc/rc0.d/K23sleep_daemon # ln -s /etc/init.d/sleep_daemon /etc/rc6.d/K23sleep_daemon
The first line tells the system to run the startup script /etc/init.d/sleep_daemon with the argument "start" once it has reached runlevel 2. The second line specifies that /etc/init.d/sleep_daemon is run with the argument "stop" when the system is shut down and has arrived at runlevel 0. The third line does the same for a reboot of the system into runlevel 6.
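Many distributions ship tools that create exactly these links for you; whether they are available depends on the system, so treat the following only as pointers:
> # Debian/Ubuntu:
> update-rc.d sleep_daemon defaults
> # Red Hat / SUSE (the script needs a chkconfig/LSB header comment):
> chkconfig --add sleep_daemon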
If you now shut the system down and boot it again, you can watch your service being started during the startup sequence (if it is visible). At some point you will come across a line like
> Starte sleep_daemon.
When the system is shut down or rebooted, the same message appears, except that sleep_daemon is then being stopped.
## 15.5 World Wide Web and HTML
Dealing with log files is another important area for a system administrator. After all, the logs are something like a web server's diary: whenever something happens, an entry is written to them, so a quick look at the log files tells you fairly quickly whether everything is in order. Such log files, however, often contain an unbelievable amount of information. The problem is not so much getting at this information as extracting the important parts from it.
The following examples only look at the log files access_log and error_log of the Apache web server, the two log files Apache uses as its diary by default. Since Apache, like many other web servers, uses the Common Log Format (an informal standard), it should not be difficult to analyse the log files of other servers after this chapter. Most readers will probably be content with Apache's log files anyway, since Apache, with a good 60 percent market share (especially among web hosters), is the most widely used server. Where (and, if applicable, what) Apache writes to these two log files is usually defined in httpd.conf.
These two log files are also the basis for the nice statistics your web hoster offers you. There you find hit counts, transfer volumes (total, daily average, ...), the browser, error codes (e.g. "page not found"), the client's IP address or domain, date, time and a lot more.
As the ordinary owner of a domain with ssh access you will probably want to analyse these log files with a script from time to time. As the system administrator of a web server, this will be one of your main tasks, and you will also have to archive the log files. Be that as it may, we will content ourselves with the simplest part: analysing the log files and presenting their content in a reasonably readable form.
### 15.5.1 Analyzing access_log (Apache)
The access_log file is a plain text file in which the web server leaves an entry of the following (or a similar) form for every visitor who requests a page (one line):
> 169.229.76.87 - - [13/May/2001:00:00:37 -0800] "GET /htdocs/inhalt.html HTTP/1.1" 200 15081 "http://www.irgendwoher.de/links.html" "Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)"
What does this line tell us?
- The second dash: the login name of the user, if a username/password authentication is required.
- "http://www.irgendwoher.de/links.html": if the URL was not entered directly, this shows where the client came from (the referrer).
(The remaining fields of the example line are the client address, the timestamp, the request line, the HTTP status code, the number of bytes transferred and the user agent.)
To analyse access_log, all you have to do is extract the values you need from the individual lines and evaluate them accordingly. The following script demonstrates this:
> #!/bin/sh # readaccess # Diese Script analysiert die access_log-Datei des # Apache Webservers mit interessanten Informationen # mathematische Funktion (Wrapper für bc) ... calc() { bc -q << EOF scale=2 $* quit EOF } # Damit verhindern Sie, dass bei der Auswertung der # 'Referred Hits' der eigene Domainname berücksichtigt wird. host="pronix.de" # Anzahl der am meisten besuchten Seiten POPULAR=10 # Anzahl der Referrer-Seiten REFERRER=10 if [ $# -eq 0 ] then echo "Usage: $0 logfile" exit 1 fi # Logfile lesbar oder vorhanden ... if [ -r "$1" ] then : else echo "Fehler: Kann Log-Datei '$1' nicht finden...!" exit 1 fi # Erster Eintrag im Logfile ... dateHead=`head -1 "$1" | awk '{print $4}' | sed 's/\[//'` # Letzter Eintrag im Logfile ... dateTail=`tail -1 "$1" | awk '{print $4}' | sed 's/\[//'` echo "Ergebnis der Log-Datei '$1'" echo echo " Start Datum : `echo $dateHead | sed 's/:/ um /'`" echo " End Datum : `echo $dateTail | sed 's/:/ um /'`" # Anzahl Besucher, einfach mit wc zählen hits=`wc -l < "$1" | sed 's/[^[:digit:]]//g'` echo " Hits : $hits (Zugriffe insgesamt)" # Seitenzugriffe ohne Dateien .txt; .gif; .jpg und .png pages=`grep -ivE '(.txt|.gif|.jpg|.png)' "$1" | wc -l | \ sed 's/[^[:digit:]]//g'` echo "Seitenzugriffe: $pages (Zugriffe ohne Grafiken"\ " (jpg, gif, png und txt)" # Datentransfer - Traffic totalbytes=`awk '{sum+=$10} END {print sum}' "$1"` echo -n " Übertragen : $totalbytes Bytes " # Anzahl Bytes in einem GB = 1073741824 # Anzahl Bytes in einem MB = 1048576 if [ $totalbytes -gt 1073741824 ] ; then ret=`calc "$totalbytes / 1073741824"` echo "($ret GB)" elif [ $totalbytes -gt 1048576 ] ; then ret=`calc "$totalbytes / 1048576"` echo "($ret MB)" else echo fi # Interessante Statistiken echo echo "Die $POPULAR beliebtesten Seiten sind " \ " (ohne gif, jpg, png, css, ico und js):" awk '{print $7}' "$1" | \ grep -ivE '(.gif|.jpg|.png|.css|.ico|.js)' | \ sed 's/\/$//g' | sort | \ uniq -c | sort -rn | head -$POPULAR echo echo "Woher kamen die Besucher ($REFERRER besten URLs'):" awk '{print $11}' "$1" | \ grep -vE "(^\"-\"$|/www.$host|/$host)" | \ sort | uniq -c | sort -rn | head -$REFERRER
The script in action:
> you@host > ./readaccess logs/www.pronix.de/access_log Ergebnis der Log-Datei 'logs/www.pronix.de/access_log' Start Datum : 08/May/2005 um 04:47:45 End Datum : 13/May/2005 um 07:13:27 Hits : 168334 (Zugriffe insgesamt) Seitenzugriffe: 127803 (Zugriffe o. Grafiken (jpg, gif, png, txt) Übertragen : 1126397222 Bytes (1.04 GB) Die 10 beliebtesten Seiten sind (ohne gif,jpg,png,css,ico,js): 3498 /pronix-4.html 1974 /pronix-6.html 1677 /modules/newbb 1154 /userinfo.php?uid=1 1138 /userinfo.php?uid=102 1137 /userinfo.php?uid=109 991 /userinfo.php?uid=15 924 /modules/news 875 /userinfo.php?uid=643 Woher kamen die Besucher (10 besten URLs'): 96 "http://homepages.rtlnet.de/algeyer001937/587.html" 65 "http://www.computer-literatur.de/buecher/... " 47 "http://www.linuxi.de/ebooksprogramm.html" 37 "http://www.programmier-hilfe.de/index.php/C___C__/46/0/" 35 "http://216.239.59.104/search?q=cache:kFL_ot8c2s0J:www... 30 "http://www.tutorials.de/tutorials184964.html" 27 "http://64.233.183.104/search?q=cache:EqX_ZN... 25 "http://216.239.59.104/search?q=cache:KkL81cub2C... 24 "http://216.239.59.104/search?q=cache:R0gWKuwrF...
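The script can easily be extended with further one-liners of the same kind. Counting the HTTP status codes (field 9 in the Common Log Format shown above), for example, takes a single pipe; treat this only as a sketch that assumes the log format above:
> # how often did each HTTP status code occur?
> awk '{print $9}' access_log | sort | uniq -c | sort -rn
> # which URLs produced 404 errors most often?
> awk '$9 == 404 {print $7}' access_log | sort | uniq -c | sort -rn | head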
### 15.5.2 Analyzing error_log (Apache)
The log file error_log contains all errors that occurred when a page or document of your site was requested. With Apache, such an entry has the following form:
> [Fri May 13 02:48:13 2005] [error] [client 66.185.100.10] File does not exist: /home/us10129/www.pronix.de/modules/mylinks/modlink.php
The meaning of the individual fields is quickly explained:
- [Fri May 13 02:48:13 2005]: when did this error occur?
- /home/us10129/www.pronix.de/modules/mylinks/modlink.php: the file that triggered the error.
(In between, [error] indicates the severity and [client 66.185.100.10] the address of the requesting client.)
Analysing error_log is somewhat simpler, because considerably less data is (or at least should be) written to it than to access_log.
> #!/bin/sh # Name: readerrorlog # Wertet error_log des Apaches aus # Anzahl der Einträge, die pro Kategorie angezeigt werden sollen MAXERRORS=10 # Sortierte Ausgabe von jeweils MAXERRORS pro Fehler # ggf. sollte man eine Datei zum Zwischenspeichern, # anstatt wie hier die Variable ret, verwenden ... # print_error_log() { ret=`grep "${2}" "$1" | awk '{print $NF}' |\ sort | uniq -c | sort -rn | head -$MAXERRORS` if [ "$ret" != "" ] ; then echo echo "[$2] Fehler:" echo "$ret" fi } if [ $# -ne 1 ] then echo "usage $0 error_log" exit 1 fi # Anzahl der Einträge in error_log echo "'$1' hat `wc -l < $1` Einträge" # Erster Eintrag in error_log dateHead=`grep -E '\[.*:.*:.*\]' "$1" | head -1 | \ awk '{print $1" "$2" "$3" "$4" "$5}'` # Letzter Eintrag in error_log dateTail=`grep -E '\[.*:.*:.*\]' "$1" | tail -1 | \ awk '{print $1" "$2" "$3" "$4" "$5}'` echo "Einträge vom : $dateHead " echo "bis zum : $dateTail " echo # Wir geben einige Fehler sortiert nach Fehlern aus # Die Liste kann beliebig nach Fehlern erweitert werden ... # print_error_log "$1" "File does not exist" print_error_log "$1" "Invalid error redirection directive" print_error_log "$1" "premature EOF" print_error_log "$1" "script not found or unable to stat" print_error_log "$1" "Premature end of script headers" print_error_log "$1" "Directory index forbidden by rule"
The script in action:
> you@host > ./readerrorlog logs/www.pronix.de/error_log 'logs/www.pronix.de/error_log' hat 2941 Einträge Einträge vom : [Sun May 8 05:08:42 2005] bis zum : [Fri May 13 07:46:45 2005] [File does not exist] Fehler: 71 .../www.pronix.de/cmsimages/grafik/Grafik/bloodshed1.png 69 .../www.pronix.de/cmsimages/grafik/Grafik/bloodshed2.png 68 .../www.pronix.de/cmsimages/grafik/Grafik/bloodshed6.png 68 .../www.pronix.de/cmsimages/grafik/Grafik/bloodshed5.png 68 .../www.pronix.de/cmsimages/grafik/Grafik/bloodshed4.png 68 .../www.pronix.de/cmsimages/grafik/Grafik/bloodshed3.png 53 .../www.pronix.de/cmsimages/grafik/Grafik/borland5.gif 53 .../www.pronix.de/cmsimages/grafik/Grafik/borland4.gif 53 .../www.pronix.de/cmsimages/grafik/Grafik/borland3.gif 53 .../www.pronix.de/cmsimages/grafik/Grafik/borland2.gif [script not found or unable to stat] Fehler: 750 /home/us10129/www.pronix.de/search.php [Directory index forbidden by rule] Fehler: 151 .../www.pronix.de/cmsimages/grafik/ 19 .../www.pronix.de/cmsimages/ 17 .../www.pronix.de/cmsimages/download/ 14 .../www.pronix.de/themes/young_leaves/ 3 .../www.pronix.de/css/ 1 .../www.pronix.de/cmsimages/linux/grafik/ 1 .../www.pronix.de/cmsimages/linux/ ... ...
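If you also want to know which clients cause the most errors, a small one-liner in the same spirit will do; this is only a sketch and assumes GNU grep for the -o option:
> # extract the client address from each error line and count the occurrences
> grep -oE 'client [0-9.]+' error_log | sort | uniq -c | sort -rn | head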
## 15.6 CGI (Common Gateway Interface)
CGI is an interface with which you can, for example, write applications for the Internet. These CGI applications run on a (web) server (such as Apache) and are usually invoked from an HTML page by means of a web browser.
The way the CGI interface works is quite simple. Data is received from standard input (stdin) or from the environment variables and, where necessary, written to standard output (stdout). In general, this output is a dynamically generated HTML document that you can view in your browser. As long as these conditions are met, CGI applications can be written in practically any programming language. The CGI program you write is an executable program on the web server that is not started by a normal user but by the web server itself as a new process.
Normally, CGI scripts are written in Perl, PHP, Java, Python, Ruby or C, but the language is really of secondary importance. What matters is that it has standard input and output and, of course, that it is understood on the machine on which it is executed. For smaller tasks you can of course also use a shell script. That way, you can call and run almost every shell script written in this book from a browser as well.
Note: If you think CGI in C is something "impossible", feel free to visit my web site; the complete content management system (CMS) there was written in C by my technical reviewer.
Even though writing CGI scripts in the shell sounds quite exciting, you will only find a brief outline of the topic here. My main aim is to show you how shell scripts can serve as a kind of interface to the shell scripts you have written, for example to display the analysis of access_log in the browser.
### 15.6.1 Running CGI scripts
To run CGI scripts on a server, unfortunately more than just a browser is required. If you want to test the whole procedure locally, you need a running web server (the examples always refer to the Apache web server). Especially for beginners, this is not entirely easy to follow. It is of course much easier if you can run your scripts on an already configured web server, as is the case with a web hoster.
First you have to find out where on your web server you are allowed to execute shell scripts; usually this is the /cgi-bin directory. A web server can, however, also be configured (httpd.conf) to execute scripts from any subdirectory. The usual file extension is .cgi. If it is not possible to run a CGI script, you may have to adjust the ExecCGI option in the configuration file httpd.conf or enable the interface via the .htaccess file. You also have to keep in mind that the script must be readable and executable by everyone, because most web servers execute a web request as the user nobody or similar. Unfortunately, I cannot go into the details here, so here once more, in brief, are the (possible) steps needed to run a CGI script on a web server (provided you have the corresponding rights):
- Set the access permissions for the script (chmod go+rx script.cgi).
- Start the web server if it is not already running.
### 15.6.2 Printing the CGI environment
Among other things, the CGI environment that the web server passes to the script contains:
- the HTTP request headers of the web browser
Figure 15.1 shows the script in action.
One more note on the line:
> echo "Content-type: text/html"
This tells the web server that your CGI application outputs a particular kind of document (the content type); in this example it is an HTML document. It is also important that this line is followed by two newline characters (produced here by the two echo calls). This signals to the web server that this was the last header line.
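The environment-dumping script itself is only shown as a screenshot (Figure 15.1). A minimal stand-in could look like the following sketch; the name showenv.cgi is an assumption, not taken from the book:
> #!/bin/sh
> # showenv.cgi - print the complete CGI environment as an HTML page
> echo "Content-type: text/html"
> echo
> echo "<html><body><pre>"
> env | sort
> echo "</pre></body></html>"
> exit 0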
### 15.6.3 Einfache Ausgabe als TextÂ
Ähnlich einfach wie die Ausgabe der Umgebungsvariablen lässt sich jede andere Ausgabe auf dem Browser realisieren. Wollen Sie zum Beispiel ein Shellscript über einen Link starten, so müssen Sie sich nur ein HTML-Dokument wie folgt zusammenbasteln:
> <html> <head> <title>TestCGIs</title> </head> <body> <A HREF="/cgi-bin/areader.cgi?/srv/www/htdocs/docs/kap003.txt">Ein Link</A> </body> </html>
The following script, areader.cgi, does nothing more than output the text file kap003.txt in the browser using cat. After the question mark you specify, as the query string, which file is to be read; for the shell script this is simply the first positional parameter $1. The link
> /cgi-bin/areader.cgi?/srv/www/htdocs/docs/kap003.txt
thus consists of two values: the path to the shell script
> /cgi-bin/areader.cgi
and the first positional parameter (the query string)
> /srv/www/htdocs/docs/kap003.txt
A query string like this can of course take completely different forms, but more on that later. Here, then, is the shell script areader.cgi:
> #!/bin/sh
> # Name: areader.cgi
> echo "Content-type: text/plain"
> echo
> cat $1
> exit 0
The execution is shown in Figure 15.3.
# Security concerns
Of course, these examples serve only as a means to an end, to demonstrate how shell scripts can also be used as CGI scripts, without taking the security aspect into account. Because, fundamentally, the line
> /cgi-bin/areader.cgi?/srv/www/htdocs/docs/kap003.txt
is a really nasty foul that opens the door wide to attackers. If this line is manipulated into
> /cgi-bin/areader.cgi?/etc/passwd
> /cgi-bin/areader.cgi?/etc/shadow
the attacker can crack the passwords at home at leisure by »brute force« (provided they are not too strong, which unfortunately is usually the case). At the very least, the attacker can certainly collect system information in order to track down potential weaknesses of the machine.
A safer alternative would be, for example, to check the user input against [a-z]. If any other characters occur, abort; otherwise have the script itself add the path and the file extension.
> if echo $1 | grep -q '[^a-z]'
> then
>    echo Schlingel
>    exit 0
> fi
> file="/pfad/nach/irgendwo/$1.txt"
> if [ ! -e $file ]
> then
>    echo Nueschte
>    exit 0
> fi
Furthermore, CGI programs should always return 0 when everything went smoothly, so that anything is displayed on the screen at all. On many installations, a return value other than 0 produces a 500 error.
### 15.6.4 Formatting the output as HTML
In practice you will hardly ever output plain text only; you will want to dress the output up with plenty of HTML or CSS elements. Compared to the previous example, not much is needed for this (a little HTML knowledge assumed). First, once again, the HTML document (unchanged):
> <html> <head> <title>TestCGIs</title> </head> <body> <A HREF="/cgi-bin/a_html_reader.cgi?/srv/www/htdocs/docs/kap003.txt">Ein Link</A> </body> </html>
Next, the shell script areader.cgi, slightly rewritten (and renamed) with a few HTML constructs in order to serve the text in a browser-friendly way:
> #!/bin/sh
> # a_html_reader.cgi
> echo "Content-type: text/html"
> echo ""
> cat << HEADER
> <HTML>
> <HEAD><TITLE>Ausgabe der Datei: $1</TITLE> </HEAD>
> <BODY bgcolor="#FFFF00" text="#000000">
> <HR SIZE=3>
> <H1>Ausgabe der Datei: $1 </H1>
> <HR SIZE=3>
> <P>
> <SMALL>
> <PRE>
> HEADER
> cat $1
> cat << FOOTER
> </PRE>
> </SMALL>
> <P>
> <HR SIZE=3>
> <H1><B>Ende</B> Ausgabe der Datei: $1 </H1>
> <HR SIZE=3>
> </BODY>
> </HTML>
> FOOTER
> exit 0
Figure 15.4 shows the script in action.
So far you now have HTML-formatted output. It is even more practical, however, to move the HTML part out into separate files. For one thing, this keeps the shell script easy to read; for another, you can swap the »layout« for a different one at any time. I split, for example, the two blocks surrounding the actual working part of the shell script into a header file (header.txt) and a footer file (footer.txt). First the file header.txt:
> <HTML>
> <HEAD><TITLE>Ein Titel</TITLE> </HEAD>
> <BODY bgcolor="#FFFF00" text="#000000">
> <HR SIZE=3>
> <H1>Ausgabe Anfang: </H1>
> <HR SIZE=3>
> <P>
> <SMALL>
> <PRE>
And now the file footer.txt:
> </PRE>
> </SMALL>
> <P>
> <HR SIZE=3>
> <H1><B>Ende</B> der Ausgabe</H1>
> <HR SIZE=3>
> </BODY>
> </HTML>
You can now again simply output both files from the shell script with cat. Compared to the a_html_reader.cgi version, the actual shell script then looks like this:
> #!/bin/sh
> echo "Content-type: text/html"
> echo ""
> cat /srv/www/htdocs/docs/header.txt
> cat $1
> cat /srv/www/htdocs/docs/footer.txt
> exit 0
### 15.6.5 Printing system information
Just as you used the cat command in the example above, you can also run all other commands (depending on your permissions) or even entire shell scripts. The following script, for example, reports on all currently running processes:
> #!/bin/sh
> # Name: html_ps.cgi
> echo "Content-type: text/html"
> echo
> cat /srv/www/htdocs/docs/header.txt
> ps -ef
> cat /srv/www/htdocs/docs/footer.txt
> exit 0
Figure 15.5 shows the script in action.
Just as easily, you can have other shell scripts executed this way. Outputting access_log and error_log strikes me as particularly useful here. The following example uses the readaccess script from Section 15.5.1:
> #!/bin/sh
> # Name: html_access_log.cgi
> # Analyze Apache's access_log and output the result
> echo "Content-type: text/html"
> echo
> cat /srv/www/htdocs/docs/header.txt
> ./readaccess /pfad/zum/logfile/access_log
> cat /srv/www/htdocs/docs/footer.txt
> exit 0
Figure 15.6 shows the script in action.
As you can see, there are almost no limits to how CGI scripts can be used with the shell.
### 15.6.6 A contact form
A contact form requires input from the user, and evaluating that input is, again, not entirely trivial. First you have to decide which method is used to receive the data. One option is GET, where the browser appends the input string(s) to the end of the URL. For a text field »name« with the user input »Wolf«, the URL looks like this:
> http://www.pronix.de/cgi-bin/script.cgi?name=wolf
If there is also another text field called »vorname« and the user input is »John«, the URL looks like this:
> http://www.pronix.de/cgi-bin/script.cgi?name=wolf&vorname=john
The ampersand character & serves as the separator between the individual variable/value pairs. The web server usually strips this string from the URL and passes it to the variable QUERY_STRING. Evaluating QUERY_STRING is the programmer's job.
The second option is to pass the values with the POST method. Here the data is not placed in one of the environment variables but written to standard input (stdin). You can therefore write the CGI application as if the input were typed on the keyboard.
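A minimal sketch of how a shell CGI script could accept the data in both cases (the variable name data is an assumption; the contact.cgi script further below handles only the POST case):

> #!/bin/sh
> # GET: the data is in the environment variable QUERY_STRING
> # POST: the data arrives on standard input
> if [ "$REQUEST_METHOD" = "GET" ]
> then
>    data="$QUERY_STRING"
> else
>    read data
> fi
> echo "Content-type: text/plain"
> echo ""
> echo "$data"
> exit 0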
So that you can already form a picture of the following example, let us first create the HTML form into which the user will then enter the data:
> <html>
> <head> <title>Kontakt</title> </head>
> <body>
> <form method="post" action="/cgi-bin/contact.cgi">
> <h2>Kontakt-Formular</h2>
> <pre>
> Name : <input type="text" name="name"><br>
> Email: <input type="text" name="email"><br>
> Ihre Nachricht:<br>
> <textarea rows="5" cols="50" name="comments"></textarea><br>
> <input type="submit" value="submit">
> </pre>
> </form>
> </body>
> </html>
Figure 15.7 shows the form.
In the line
> <form method="post" action="/cgi-bin/contact.cgi">
you can see that this example uses the POST method, which means you receive the data from standard input.
If you were now to read from standard input in your script and print it, the following would be displayed (on one line):
> name=Juergen+Wolf&email=pronix%40t-online.de&comments= Eine+Tolle+%22Seite%22+hast+Du+da%21
So this is still a long way from the desired result. To extract the values from it, you have to parse the standard input, that is, decode it into a readable form. An overview of the characters that are of special significance here:
> name=Juergen+Wolf
> email=pronix%40t-online.de
> comments=Eine+Tolle+%22Seite%22+hast+Du+da%21
* & : separates the individual variable/value pairs.
* + : spaces in the entered data are encoded with this character.
* = : separates a variable name from its value.
* %XX : special characters are encoded as a two-digit hexadecimal number.
You can decode the first three points, for example, as follows:
> tr '&+=' '\n \t'
This replaces the »&« character with a newline, the »+« character with a space, and the »=« character with a tab.
You decode the hexadecimal numbers with a single sed line:
> echo -e `sed 's/%\(..\)/\\\x\1/g'`
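Taken together, a quick test of the two steps on the command line could look like this (a sketch; note that, because of the unquoted command substitution, the newlines and tabs are collapsed into single spaces here, and echo -e with \x requires bash):

> echo 'name=Juergen+Wolf&email=pronix%40t-online.de' | tr '&+=' '\n \t' | echo -e `sed 's/%\(..\)/\\\x\1/g'`
> name Juergen Wolf email pronix@t-online.de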
Nothing more is needed to parse the input string from an HTML form. The following script evaluates the contents of such a contact form and sends them with sendmail to the address »empfaenger«. At the end, an HTML page is generated in the browser as a confirmation.
> #!/bin/sh
> # Name: contact.cgi
> # Evaluate the submitted form mail
> empfaenger="you@host"
> (
> cat << MAIL
> From: www@`hostname`
> To: $empfaenger
> Subject: Kontakt-Anfrage Ihrer Webseite
> Inhalt der Eingabe lautet:
> MAIL
> # Decode the input string
> cat - | tr '&+=' '\n \t' | echo -e `sed 's/%\(..\)/\\\x\1/g'`
> echo ""
> echo "Abgeschickt am `date`"
> ) | sendmail -t
> echo "Content-type: text/html"
> echo ""
> echo "<html><body>"
> echo "Vielen Dank fuer Ihre Anfrage!"
> echo "</body></html>"
> exit 0
If you now take a look at your mail inbox, you will find a corresponding e-mail showing what was entered in the contact form.
### 15.6.7 One more tip
Of course it should be noted that you have only been given a first glimpse of the CGI topic. As already mentioned, you can get a somewhat more extensive overview on my homepage http://www.pronix.de (albeit in the C programming language) if you need more information on the subject.
# A.3 Shell options
With the shell options you can control the behavior of the shell. The individual options are switched on or off with the set command. A + switches an option off and a - switches it on.
> # Turn an option on
> set -opt
> # Turn an option off
> set +opt
Here is a table of the most common options of the various shells:
Option | Shell | Meaning |
| --- | --- | --- |
-a | sh, ksh, bash | All newly created or modified variables are exported automatically. |
-A array value1 ... | ksh | Assigns values to an array |
-b | ksh, bash | Notifies the user about finished background jobs |
-c argument | sh, ksh, bash | The command list given as argument is used and executed. |
-C | ksh, bash | Prevents overwriting a file via redirection (command > file) |
-e | sh, ksh, bash | Exits the shell if a command did not complete successfully |
-f | sh, ksh, bash | Turns off filename expansion |
-h | sh, ksh, bash | The shell remembers the location of commands that appear inside functions as soon as the function is read, not, as usual, when it is executed. |
-i | sh, ksh, bash | Starts an interactive subshell |
-n | sh, ksh, bash | Reads a script and checks it for syntactic correctness; does not execute it |
-r | sh, ksh, bash | The shell runs as a »restricted« shell |
-s arg(s) | sh | Like -i, starts an interactive subshell, except that any arguments given are passed along as positional parameters |
-s | ksh, bash | Sorts the positional parameters alphabetically |
-t | sh, ksh, bash | Exit the shell after the first command (run it and get out) |
-u | sh, ksh, bash | Using undefined variables produces an error message. |
-v | sh, ksh, bash | Every line is displayed unchanged before it is executed. |
-x | sh, ksh, bash | Every line is displayed after all substitutions, before it is executed. |
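A short illustration of how two of these options behave in bash (a sketch; the exact trace and error messages depend on the shell):

you@host > set -x
you@host > echo "Hello $USER"
+ echo 'Hello you'
Hello you
you@host > set +x
+ set +x
you@host > set -u
you@host > echo $not_defined
bash: not_defined: unbound variable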
# A.5 Editing the command line
Since, as a programmer, you have to deal with the interactive shell, you should also know how to work with the command line more efficiently. Apart from the Bourne shell, bash and ksh offer countless key commands for this, which is why I restrict myself here to the most essential and most common ones.
First of all it must be noted that the pure, »real« Bourne shell has no way of editing the command line at all, nor does it have any command history. Incidentally, this was one of the reasons why further shell variants were developed. This is worth mentioning just in case you suddenly find yourself in front of a shell where your beloved keys or key combinations no longer work and cryptic characters are printed instead.
The following table lists the most important keys and key combinations of the Emacs mode (for example (Ctrl)+(P) for »move up through the history«) as well as the arrow and meta keys. The Emacs mode must be activated with
set -o emacs
If you want to use the arrow keys in the Korn shell as you do in bash, you have to add an entry to the startup file .kshrc in the user's home directory; more on that right after this table. Besides the Emacs mode there is also a vi mode, which, however, is not covered in this book.
Table A.8  Key combinations for editing the command line
Keystroke | Meaning |
| --- | --- |
(↑) (Ctrl)+(P) | Move up through the command history |
(↓) (Ctrl)+(N) | Move down through the command history |
(←) (Ctrl)+(B) | Move one character to the left on the command line |
(→) (Ctrl)+(F) | Move one character to the right on the command line |
(Home) (Ctrl)+(A) | Move the cursor to the beginning of the command line |
(End) (Ctrl)+(E) | Move the cursor to the end of the command line |
(Del) (Ctrl)+(D) | Delete the character to the right of the cursor |
(Backspace) (Ctrl)+(H) | Delete the character to the left of the cursor |
(Ctrl)+(K) | Delete everything up to the end of the line |
(Ctrl)+(R) | Search backward in the history for a command |
(Ctrl)+(S) | Search forward in the history for a command |
(\)+(ENTER) | Continue the command on the next line |
(Ctrl)+(T) | Swap the last two characters |
(Ctrl)+(L) | Clear the screen |
(Ctrl)+(_) | Undo the last change(s) |
Now for the entries in .kshrc so that the arrow keys are also available to you here (Ctrl+P etc. stand for the literal control characters entered with those key combinations):
alias __A=Ctrl+P
alias __B=Ctrl+N
alias __C=Ctrl+F
alias __D=Ctrl+B
alias __H=Ctrl+A
alias __Y=Ctrl+E
set -o emacs
In addition, bash and the Korn shell offer auto-completion in Emacs mode. If, for example, you type the following in bash
you@host > cd(Tab)
cd cd.. cddaslave cdinfo cdparanoia cdrdao cdrecord
all commands beginning with »cd« are listed. If you then continue with
you@host > cdi(Tab)
this automatically becomes:
you@host > cdinfo
This of course also works with file names:
you@host > cp thr(Tab)
This then becomes:
you@host > cp thread1.c
The same of course also works in the Korn shell. However, instead of the (Tab) key you have to press (ESC) twice there for a command, file, or directory name to be completed. The prerequisite here, too, is that the Emacs mode is active.
# A.6 Important keyboard shortcuts (control keys)
# A.7 Shell initialization files
# A.8 Signals
# A.9 Special characters and character classes
plotmo | cran | R | Package ‘plotmo’
October 14, 2022
Version 3.6.2
Title Plot a Model's Residuals, Response, and Partial Dependence Plots
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Depends R (>= 3.4.0), Formula (>= 1.2-3), plotrix, TeachingDemos
Description Plot model surfaces for a wide variety of models
using partial dependence plots and other techniques.
Also plot model residuals and other information on the model.
Suggests C50 (>= 0.1.0-24), earth (>= 5.1.2), gbm (>= 2.1.1), glmnet
(>= 2.0.5), glmnetUtils (>= 1.0.3), MASS (>= 7.3-51), mlr (>=
2.12.1), neuralnet (>= 1.33), partykit (>= 1.2-2), pre (>=
0.5.0), rpart (>= 4.1-15), rpart.plot (>= 3.0.8)
License GPL-3
URL http://www.milbo.users.sonic.net
NeedsCompilation no
Repository CRAN
Date/Publication 2022-05-21 19:30:02 UTC
R topics documented:
plotmo
plotmo.misc
plotres
plot_gbm
plot_glmnet
plotmo Plot a model's response over a range of predictor values (the model surface)
Description
Plot model surfaces for a wide variety of models.
This function plots the model’s response when varying one or two predictors while holding the other
predictors constant (a poor man’s partial-dependence plot).
It can also generate partial-dependence plots (by specifying pmethod="partdep").
Please see the plotmo vignette (also available here).
Usage
plotmo(object=stop("no 'object' argument"),
type=NULL, nresponse=NA, pmethod="plotmo",
pt.col=0, jitter=.5, smooth.col=0, level=0,
func=NULL, inverse.func=NULL, nrug=0, grid.col=0,
type2="persp",
degree1=TRUE, all1=FALSE, degree2=TRUE, all2=FALSE,
do.par=TRUE, clip=TRUE, ylim=NULL, caption=NULL, trace=0,
grid.func=NULL, grid.levels=NULL, extend=0,
ngrid1=50, ngrid2=20, ndiscrete=5, npoints=3000,
center=FALSE, xflip=FALSE, yflip=FALSE, swapxy=FALSE, int.only.ok=TRUE,
...)
Arguments
object The model object.
type Type parameter passed to predict. For allowed values see the predict method
for your object (such as predict.earth). By default, plotmo tries to automat-
ically select a suitable value for the model in question (usually "response") but
this will not always be correct. Use trace=1 to see the type argument passed
to predict.
nresponse Which column to use when predict returns multiple columns. This can be a
column index, or a column name if the predict method for the model returns
column names. The column name may be abbreviated, partial matching is used.
pmethod Plotting method. One of:
"plotmo" (default) Classic plotmo plots i.e. the background variables are fixed
at their medians (or first level for factors).
"partdep" Partial dependence plots, i.e. at each point the effect of the back-
ground variables is averaged.
"apartdep" Approximate partial dependence plots. Faster than "partdep" es-
pecially for big datasets. Like "partdep" but the background variables are av-
eraged over a subset of ngrid1 cases (default 50), rather than all cases in the
training data. The subset is created by selecting rows at equally spaced inter-
vals from the training data after sorting the data on the response values (ties are
randomly broken).
The same background subset of ngrid1 cases is used for both degree1 and de-
gree2 plots.
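For example, assuming a fitted model mod (as in the Examples section below), the three methods can be compared with:
plotmo(mod) # classic plotmo plot, background variables fixed at their medians
plotmo(mod, pmethod="partdep") # partial dependence plot
plotmo(mod, pmethod="apartdep") # approximate partial dependence plot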
pt.col The color of response points (or response sites in degree2 plots). This refers to
the response y in the data used to build the model. Note that the displayed points
are jittered by default (see the jitter argument).
Default is 0, display no response points.
This can be a vector, like all such arguments – for example pt.col = as.numeric(survived)+2
to color points by their survival class.
You can modify the plotted points with pt.pch, pt.cex, etc. (these get passed
via plotmo’s “...” argument). For example, pt.cex = weights to size points
by their weight. To label the points, set pt.pch to a character vector.
jitter Applies only if pt.col is specified.
The default is jitter=.5, automatically apply some jitter to the points. Points
are jittered horizontally and vertically.
Use jitter=0 to disable this automatic jittering. Otherwise something like
jitter=1, but the optimum value is data dependent.
smooth.col Color of smooth line through the response points. (The points themselves will
not be plotted unless pt.col is specified.) Default is 0, no smooth line.
Example:
mod <- lm(Volume~Height, data=trees)
plotmo(mod, pt.color=1, smooth.col=2)
You can adjust the amount of smoothing with smooth.f. This gets passed as f
to lowess. The default is .5. Lower values make the line more wiggly.
level Draw estimated confidence or prediction interval bands at the given level, if
the predict method for the model supports them.
Default is 0, bands not plotted. Else a fraction, for example level=.95. See
“Prediction intervals” in the plotmo vignette. Example:
mod <- lm(log(Volume)~log(Girth), data=trees)
plotmo(mod, level=.95)
You can modify the color of the bands with level.shade and level.shade2.
func Superimpose func(x) on the plot. Example:
mod <- lm(Volume~Girth, data=trees)
estimated.volume <- function(x) .17 * x$Girth^2
plotmo(mod, pt.col=2, func=estimated.volume)
The func is called for each plot with a single argument which is a dataframe
with columns in the same order as the predictors in the formula or x used to
build the model. Use trace=2 to see the column names and first few rows of
this dataframe.
inverse.func A function applied to the response before plotting. Useful to transform a trans-
formed response back to the original scale. Example:
mod <- lm(log(Volume)~., data=trees)
plotmo(mod, inverse.func=exp) # exp() is inverse of log()
nrug Number of ticks in the rug along the bottom of the plot
Default is 0, no rug.
Use nrug=TRUE for all the points.
Else specify the number of quantiles e.g. use nrug=10 for ticks at the 0, 10, 20,
..., 100 percentiles.
Modify the rug ticks with rug.col, rug.lwd, etc.
The special value nrug="density" means plot the density of the points along
the bottom. Modify the density plot with density.adjust (default is .5),
density.col, density.lty, etc.
grid.col Default is 0, no grid. Else add a background grid of the specified color to the
degree1 plots. The special value grid.col=TRUE is treated as "lightgray".
type2 Degree2 plot type. One of "persp" (default), "image", or "contour". You
can pass arguments to these functions if necessary by using persp., image., or
contour. as a prefix. Examples:
plotmo(mod, persp.ticktype="detailed", persp.nticks=3)
plotmo(mod, type2="image")
plotmo(mod, type2="image", image.col=heat.colors(12))
plotmo(mod, type2="contour", contour.col=2, contour.labcex=.4)
degree1 An index vector specifying which subset of degree1 (main effect) plots to in-
clude (after selecting the relevant predictors as described in “Which variables
are plotted?” in the plotmo vignette).
Default is TRUE, meaning all (the TRUE gets recycled). To plot only the third plot
use degree1=3. For no degree1 plots use degree1=0.
Note that degree1 indexes plots on the page, not columns of x. Probably the
easiest way to use this argument (and degree2) is to first use the default (and
possibly all1=TRUE) to plot all figures. This shows how the figures are num-
bered. Then replot using degree1 to select the figures you want, for example
degree1=c(1,3,4).
Can also be a character vector specifying which variables to plot. Examples:
degree1="wind"
degree1=c("wind", "vis").
Variables names are matched with grep. Thus "wind" will match all variables
with "wind" anywhere in their name. Use "^wind$" to match only the variable
named "wind".
all1 Default is FALSE. Use TRUE to plot all predictors, not just those usually selected
by plotmo.
The all1 argument increases the number of plots; the degree1 argument re-
duces the number of plots.
degree2 An index vector specifying which subset of degree2 (interaction) plots to in-
clude.
Default is TRUE meaning all (after selecting the relevant interaction terms as de-
scribed in “Which variables are plotted?” in the plotmo vignette).
Can also be a character vector specifying which variables to plot (grep is used
for matching). Examples:
degree2="wind" plots all degree2 plots for the wind variable.
degree2=c("wind", "vis") plots just the wind:vis plot.
all2 Default is FALSE. Use TRUE to plot all pairs of predictors, not just those usually
selected by plotmo.
do.par One of NULL, FALSE, TRUE, or 2, as follows:
do.par=NULL. Same as do.par=FALSE if the number of plots is one; else the
same as TRUE.
do.par=FALSE. Use the current par settings. You can pass additional graphics
parameters in the “...” argument.
do.par=TRUE (default). Start a new page and call par as appropriate to display
multiple plots on the same page. This automatically sets parameters like mfrow
and mar. You can pass additional graphics parameters in the “...” argument.
do.par=2. Like do.par=TRUE but don’t restore the par settings to their original
state when plotmo exits, so you can add something to the plot.
clip The default is clip=TRUE, meaning ignore very outlying predictions when deter-
mining the automatic ylim. This keeps ylim fairly compact while still covering
all or nearly all the data, even if there are a few crazy predicted values. See “The
ylim and clip arguments” in the plotmo vignette.
Use clip=FALSE for no clipping.
ylim Three possibilities:
ylim=NULL (default). Automatically determine a ylim to use across all graphs.
ylim=NA. Each graph has its own ylim.
ylim=c(ymin,ymax). Use the specified limits across all graphs.
caption Overall caption. By default create the caption automatically. Use caption=""
for no caption. (Use main to set the title of individual plots, can be a vector.)
trace Default is 0.
trace=1 (or TRUE) for a summary trace (shows how predict is invoked for the
current object).
trace=2 for detailed tracing.
trace=-1 inhibits the messages usually issued by plotmo, like the plotmo grid:,
calculating partdep, and nothing to plot messages. Error and warning
messages will be printed as usual.
grid.func Function applied to columns of the x matrix to pin the values of variables not on
the axis of the current plot (the “background” variables).
The default is a function which for numeric variables returns the median and
for logical and factors variables returns the value occurring most often in the
training data.
Examples:
plotmo(mod, grid.func=mean)
grid.func <- function(x, ...) quantile(x)[2] # 25% quantile
plotmo(mod, grid.func=grid.func)
This argument is not related to the grid.col argument.
This argument can be overridden for specific variables—see grid.levels be-
low.
grid.levels Default is NULL. Else a list of variables and their fixed value to be used when
the variable is not on the axis. Supersedes grid.func for variables in the list.
Names and values can be abbreviated, partial matching is used. Example:
plotmo(mod, grid.levels=list(sex="m", age=21))
extend Amount to extend the horizontal axis in each plot. The default is 0, do not
extend (i.e. use the range of the variable in the training data). Else something
like extend=.5, which will extend both the lower and upper xlim of each plot
by 50%.
This argument is useful if you want to see how the model performs on data that is
beyond the training data; for example, you want to see how a time-series model
performs on future data.
This argument is currently implemented only for degree1 plots. Factors and
discrete variables (see the ndiscrete argument) are not extended.
ngrid1 Number of equally spaced x values in each degree1 plot. Default is 50. Also
used as the number of background cases for pmethod="apartdep".
ngrid2 Grid size for degree2 plots (ngrid2 x ngrid2 points are plotted). Default is 20.
The default will sometimes be too small for contour and image plots.
With large ngrid2 values, persp plots look better with persp.border=NA.
npoints Number of response points to be plotted (a sample of npoints points is plotted).
Applies only if pt.col is specified.
The default is 3000 (not all, to avoid overplotting on large models). Use npoints=TRUE
or -1 for all points.
ndiscrete Default 5 (a somewhat arbitrary value). Variables with no more than ndiscrete
unique values are plotted as quantized in plots (a staircase rather than a curve).
Factors are always considered discrete. Variables with non-integer values are
always considered non-discrete.
Use ndiscrete=0 if you want to plot the response for a variable with just a few
integer values as a line or a curve, rather than a staircase.
int.only.ok Plot the model even if it is an intercept-only model (no predictors are used in the
model). Do this by plotting a single degree1 plot for the first predictor.
The default is TRUE. Use int.only.ok=FALSE to instead issue an error message
for intercept-only models.
center Center the plotted response. Default is FALSE.
xflip Default FALSE. Use TRUE to flip the direction of the x axis. This argument (and
yflip and swapxy) is useful when comparing to a plot from another source and
you want the axes to be the same. (Note that xflip and yflip cannot be used
on the persp plots, a limitation of the persp function.)
yflip Default FALSE. Use TRUE to flip the direction of the y axis of the degree2 graphs.
swapxy Default FALSE. Use TRUE to swap the x and y axes on the degree2 graphs.
... Dot arguments are passed to the predict and plot functions. Dot argument names,
whether prefixed or not, should be specified in full and not abbreviated.
“Prefixed” arguments are passed directly to the associated function. For exam-
ple the prefixed argument persp.col="pink" passes col="pink" to persp(),
overriding the global col setting. To send an argument to predict whose name
may alias with plotmo’s arguments, use predict. as a prefix. Example:
plotmo(mod, s=1) # error: arg matches multiple formal args
plotmo(mod, predict.s=1) # ok now: s=1 will be passed to predict()
The prefixes recognized by plotmo are:
predict. passed to the predict method for the model
degree1. modifies degree1 plots e.g. degree1.col=3, degree1.lwd=2
persp. arguments passed to persp
contour. arguments passed to contour
image. arguments passed to image
pt. see the pt.col argument (arguments passed to points and text)
smooth. see the smooth.col argument (arguments passed to lines and lowess)
level. see the level argument (level.shade, level.shade2, and arguments for polygon)
func. see the func argument (arguments passed to lines)
rug. see the nrug argument (rug.jitter, and arguments passed to rug)
density. see the nrug argument (density.adjust, and arguments passed to lines)
grid. see the grid.col argument (arguments passed to grid)
caption. see the caption argument (arguments passed to mtext)
par. arguments passed to par (only necessary if a par argument name clashes with a plotmo argument)
prednames. Use prednames.abbreviate=FALSE for full predictor names in graph axes.
The cex argument is relative, so specifying cex=1 is the same as not specifying
cex.
For backwards compatibility, some dot arguments are supported but not explic-
itly documented. For example, the old argument col.response is no longer
in plotmo’s formal argument list, but is still accepted and treated like the new
argument pt.col.
Note
In general this function won’t work on models that don’t save the call and data with the model in a
standard way. For further discussion please see “Accessing the model data” in the plotmo vignette.
Package authors may want to look at Guidelines for S3 Regression Models (also available here).
By default, plotmo tries to use sensible model-dependent defaults when calling predict. Use
trace=1 to see the arguments passed to predict. You can change the defaults by using plotmo’s
type argument, and by using dot arguments prefixed with predict. (see the description of “...”
above).
See Also
Please see the plotmo vignette (also available here).
Examples
if (require(rpart)) {
data(kyphosis)
rpart.model <- rpart(Kyphosis~., data=kyphosis)
# pass type="prob" to plotmo's internal calls to predict.rpart, and
# select the column named "present" from the matrix returned by predict.rpart
plotmo(rpart.model, type="prob", nresponse="present")
}
if (require(earth)) {
data(ozone1)
earth.model <- earth(O3 ~ ., data=ozone1, degree=2)
plotmo(earth.model)
# plotmo(earth.model, pmethod="partdep") # partial dependence plots
}
plotmo.misc Ignore
Description
Miscellaneous functions exported for internal use by earth and other packages. You can ignore
these.
Usage
# for earth
plotmo_fitted(object, trace, nresponse, type, ...)
plotmo_cum(rinfo, info, nfigs=1, add=FALSE,
cum.col1, grid.col, jitter=0, cum.grid="percentages", ...)
plotmo_nresponse(y, object, nresponse, trace, fname, type="response")
plotmo_rinfo(object, type=NULL, residtype=type, nresponse=1,
standardize=FALSE, delever=FALSE, trace=0,
leverage.msg="returned as NA", expected.levs=NULL, labels.id=NULL, ...)
plotmo_predict(object, newdata, nresponse,
type, expected.levs, trace, inverse.func=NULL, ...)
plotmo_prolog(object, object.name, trace, ...)
plotmo_resplevs(object, plotmo_fitted, yfull, trace)
plotmo_rsq(object, newdata, trace=0, nresponse=NA, type=NULL, ...)
plotmo_standardizescale(object)
plotmo_type(object, trace, fname="plotmo", type, ...)
plotmo_y(object, nresponse=NULL, trace=0, expected.len=NULL,
resp.levs=NULL, convert.glm.response=!is.null(nresponse))
## Default S3 method:
plotmo.pairs(object, x, nresponse, trace, all2, ...)
## Default S3 method:
plotmo.singles(object, x, nresponse, trace, all1, ...)
## Default S3 method:
plotmo.y(object, trace, naked, expected.len, ...)
# plotmo methods
plotmo.convert.na.nresponse(object, nresponse, yhat, type="response", ...)
plotmo.pairs(object, x, nresponse, trace, all2, ...)
plotmo.pint(object, newdata, type, level, trace, ...)
plotmo.predict(object, newdata, type, ..., TRACE)
plotmo.prolog(object, object.name, trace, ...)
plotmo.residtype(object, ..., TRACE)
plotmo.singles(object, x, nresponse, trace, all1, ...)
plotmo.type(object, ..., TRACE)
plotmo.x(object, trace, ...)
plotmo.y(object, trace, naked, expected.len, nresponse=1, ...)
Arguments
... -
add -
all1 -
all2 -
convert.glm.response
-
cum.col1 -
cum.grid -
delever -
expected.len -
expected.levs -
fname -
grid.col -
info -
inverse.func -
jitter -
labels.id -
level -
leverage.msg -
naked -
newdata -
nfigs -
nresponse -
object.name -
object -
plotmo_fitted -
residtype -
resp.levs -
rinfo -
standardize -
TRACE -
trace -
type -
x -
yfull -
yhat -
y -
plotres Plot the residuals of a regression model
Description
Plot the residuals of a regression model.
Please see the plotres vignette (also available here).
Usage
plotres(object = stop("no 'object' argument"),
which = 1:4, info = FALSE, versus = 1,
standardize = FALSE, delever = FALSE, level = 0,
id.n = 3, labels.id = NULL, smooth.col = 2,
grid.col = 0, jitter = 0,
do.par = NULL, caption = NULL, trace = 0,
npoints = 3000, center = TRUE,
type = NULL, nresponse = NA,
object.name = quote.deparse(substitute(object)), ...)
Arguments
object The model object.
which Which plots to draw. Default is 1:4.
1 Model plot. What gets plotted here depends on the model class. For example,
for earth models this is a model selection plot. Nothing will be displayed for
some models. For details, please see the plotres vignette.
2 Cumulative distribution of abs residuals
3 Residuals vs fitted
4 QQ plot
5 Abs residuals vs fitted
6 Sqrt abs residuals vs fitted
7 Abs residuals vs log fitted
8 Cube root of the squared residuals vs log fitted
9 Log abs residuals vs log fitted
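For example, to draw only the residuals-versus-fitted and QQ plots for a fitted model mod (an illustrative call):
plotres(mod, which=c(3, 4))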
info Default is FALSE. Use TRUE to print extra information as follows:
i) Display the distribution of the residuals along the bottom of the plot.
ii) Display the training R-Squared.
iii) Display the Spearman Rank Correlation of the absolute residuals with the
fitted values. Actually, correlation is measured against the absolute values of
whatever is on the horizontal axis — by default this is the fitted response, but
may be something else if the versus argument is used.
iv) In the Cumulative Distribution plot (which=2), display additional informa-
tion on the quantiles.
v) Only for which=5 or 9. Regress the absolute residuals against the fitted values
and display the regression slope. Robust linear regression is used via rlm in the
MASS package.
vi) Add various annotations to the other plots.
versus What do we plot the residuals against? One of:
1 Default. Plot the residuals versus the fitted values (or the log values when
which=7 to 9).
2 Residuals versus observation number, after observations have been sorted on
the fitted value. Same as versus=1, except that the residuals are spaced uni-
formly along the horizontal axis.
3 Residuals versus the response.
4 Residuals versus the hat leverages.
"b:" Residuals versus the basis functions. Currently only supported for earth,
mda::mars, and gam::gam models. A optional regex can follow the "b:"
to specify a subset of the terms, e.g. versus="b:wind" will plot terms with
"wind" in their name.
Else a character vector specifying which predictors to plot against.
Example 1: versus="" plots against all predictors (since the regex versus=""
matches anything).
Example 2: versus=c("wind", "vis") plots predictors with wind or vis in
their name.
Example 3: versus=c("wind|vis") equivalent to the above.
Note: These are regexs. Thus versus="wind" will match all variables that
have "wind" in their names. Use "^wind$" to match only the variable named
"wind".
standardize Default is FALSE. Use TRUE to standardize the residuals. Only supported for
some models, an error message will be issued otherwise.
Each residual is divided by by se_i * sqrt(1 - h_ii), where se_i is the stan-
dard error of prediction and h_ii is the leverage (the diagonal entry of the hat
matrix). When the variance model holds, the standardized residuals are ho-
moscedastic with unity variance.
The leverages are obtained using hatvalues. (For earth models the leverages
are for the linear regression of the response on the basis matrix bx.) A standard-
ized residual with a leverage of 1 is plotted as a star on the axis.
This argument applies to all plots where the residuals are used (including the
cumulative distribution and QQ plots, and to annotations displayed by the info
argument).
delever Default is FALSE. Use TRUE to “de-lever” the residuals. Only supported for some
models, an error message will be issued otherwise.
Each residual is divided by sqrt(1 - h_ii). See the standardize argument
for details.
level Draw estimated confidence or prediction interval bands at the given level, if
the model supports them.
Default is 0, bands not plotted. Else a fraction, for example level=0.90. Ex-
ample:
mod <- lm(log(Volume)~log(Girth), data=trees)
plotres(mod, level=.90)
You can modify the color of the bands with level.shade and level.shade2.
See also “Prediction intervals” in the plotmo vignette (but note that plotmo
needs prediction intervals on new data, whereas plotres requires only that the
model supports prediction intervals on the training data).
id.n The largest id.n residuals will be labeled in the plot. Default is 3. Special values TRUE and -1 both mean all.
If id.n is negative (but not -1) the id.n most positive and most negative resid-
uals will be labeled in the plot.
A current implementation restriction is that id.n is ignored when there are more
than ten thousand cases.
labels.id Residual labels. Only used if id.n > 0. Default is the case names, or the case
numbers if the cases are unnamed.
smooth.col Color of the smooth line through the residual points. Default is 2, red. Use
smooth.col=0 for no smooth line.
You can adjust the amount of smoothing with smooth.f. This gets passed as f
to lowess. The default is 2/3. Lower values make the line more wiggly.
grid.col Default is 0, no grid. Else add a background grid of the specified color to the
degree1 plots. The special value grid.col=TRUE is treated as "lightgray".
jitter Default is 0, no jitter. Passed as factor to jitter to jitter the plotted points
horizontally and vertically. Useful for discrete variables and responses, where
the residual points tend to be overlaid.
do.par One of NULL, FALSE, TRUE, or 2, as follows:
do.par=NULL (default). Same as do.par=FALSE if the number of plots is one;
else the same as TRUE.
do.par=FALSE. Use the current par settings. You can pass additional graphics
parameters in the “...” argument.
do.par=TRUE. Start a new page and call par as appropriate to display multiple
plots on the same page. This automatically sets parameters like mfrow and mar.
You can pass additional graphics parameters in the “...” argument.
do.par=2. Like do.par=TRUE but don’t restore the par settings to their original
state when plotres exits, so you can add something to the plot.
caption Overall caption. By default create the caption automatically. Use caption=""
for no caption. (Use main to set the title of an individual plot.)
trace Default is 0.
trace=1 (or TRUE) for a summary trace (shows how predict and friends are
invoked for the model).
trace=2 for detailed tracing.
npoints Number of points to be plotted. A sample of npoints is taken; the sample in-
cludes the biggest twenty or so residuals.
The default is 3000 (not all, to avoid overplotting on large models). Use npoints=TRUE
or -1 for all points.
center Default is TRUE, meaning center the horizontal axis in the residuals plot, so
asymmetry in the residual distribution is more obvious.
type Type parameter passed first to residuals and if that fails to predict. For al-
lowed values see the residuals and predict methods for your object (such as
residuals.rpart or predict.earth). By default, plotres tries to automati-
cally select a suitable value for the model in question (usually "response"), but
this will not always be correct. Use trace=1 to see the type argument passed
to residuals and predict.
nresponse Which column to use when residuals or predict returns multiple columns.
This can be a column index or column name (which may be abbreviated, partial
matching is used).
object.name The name of the object for error and trace messages. Used internally by
plot.earth.
... Dot arguments are passed to the plot functions. Dot argument names, whether
prefixed or not, should be specified in full and not abbreviated.
“Prefixed” arguments are passed directly to the associated function. For ex-
ample the prefixed argument pt.col="pink" passes col="pink" to points(),
overriding the global col setting. The prefixes recognized by plotres are:
residuals. passed to residuals
predict. passed to predict (predict is called if the call to residuals fails)
w1. sent to the model-dependent plot for which=1 e.g. w1.col=2
pt. modify the displayed points e.g. pt.col=as.numeric(survived)+2 or pt.cex=.8.
smooth. modify the smooth line e.g. smooth.col=0 or smooth.f=.5.
level. modify the interval bands, e.g. level.shade="gray" or level.shade2="lightblue"
legend. modify the displayed legend e.g. legend.cex=.9
cum. modify the Cumulative Distribution plot (arguments for plot.stepfun)
qq. modify the QQ plot, e.g. qq.pch=1
qqline modify the qqline in the QQ plot, e.g. qqline.col=0
label. modify the point labels, e.g. label.cex=.9 or label.font=2
cook. modify the Cook’s Distance annotations. This affects only the leverage plot (versus=3) for lm models with sta
caption. modify the overall caption (see the caption argument) e.g. caption.col=2.
par. arguments for par (only necessary if a par argument name clashes with a plotres argument)
The cex argument is relative, so specifying cex=1 is the same as not specifying
cex.
For backwards compatibility, some dot arguments are supported but not explic-
itly documented.
Value
If the which=1 plot was plotted, the return value of that plot (model dependent).
Else if the which=3 plot was plotted, return list(x,y) where x and y are the coordinates of the
points in that plot (but without jittering even if the jitter argument was used).
Else return NULL.
Note
This function is designed primarily for displaying standard response - fitted residuals for models
with a single continuous response, although it will work for a few other models.
In general this function won’t work on models that don’t save the call and data with the model
in a standard way. It uses the same underlying mechanism to access the model data as plotmo.
For further discussion please see “Accessing the model data” in the plotmo vignette (also available
here). Package authors may want to look at Guidelines for S3 Regression Models (also available
here).
See Also
Please see the plotres vignette (also available here).
plot.lm
plot.earth
Examples
# we use lm in this example, but plotres is more useful for models
# that don't have a function like plot.lm for plotting residuals
lm.model <- lm(Volume~., data=trees)
plotres(lm.model)
plot_gbm Plot a gbm model
Description
Plot a gbm model showing the training and other error curves.
Usage
plot_gbm(object=stop("no 'object' argument"),
smooth = c(0, 0, 0, 1),
col = c(1, 2, 3, 4), ylim = "auto",
legend.x = NULL, legend.y = NULL, legend.cex = .8,
grid.col = NA,
n.trees = NA, col.n.trees ="darkgray",
...)
Arguments
object The gbm model.
smooth Four-element vector specifying if smoothing should be applied to the train, test,
CV, and OOB curves respectively. When smoothing is specified, a smoothed
curve is plotted and the minimum is calculated from the smoothed curve.
The default is c(0, 0, 0, 1) meaning apply smoothing only to the OOB curve
(same as gbm.perf).
Note that smooth=1 (which gets recyled to c(1,1,1,1)) will smooth all the
curves.
col Four-element vector specifying the colors for the train, test, CV, and OOB curves
respectively.
The default is c(1, 2, 3, 4).
Use a color of 0 to remove the corresponding curve, e.g. col=c(1,2,3,0) to
not display the OOB curve.
If col=0 (which gets recycled to c(0,0,0,0)) nothing will be plotted, but plot_gbm
will return the number-of-trees at the minima as usual (as described in the Value
section below).
ylim The default ylim="auto" shows more detail around the minima.
Use ylim=NULL for the full vertical range of the curves.
Else specify ylim as usual.
legend.x The x position of the legend. The default positions the legend automatically.
Use legend.x=NA for no legend.
See the x and y arguments of xy.coords for other options, for example legend.x="topright".
legend.y The y position of the legend.
legend.cex The legend cex (the default is 0.8).
grid.col Default NA. Color of the optional grid, for example grid.col=1.
n.trees For use by plotres.
The x position of the gray vertical line indicating the n.trees passed by plotres
to predict.gbm to calculate the residuals. Plotres defaults to all trees.
col.n.trees For use by plotres.
Color of the vertical line showing the n.trees argument. Default is "darkgray".
... Dot arguments are passed internally to plot.default.
Value
This function returns a four-element vector specifying the number of trees at the train, test, CV, and
OOB minima respectively.
The minima are calculated after smoothing as specified by this function’s smooth argument. By
default, only the OOB curve is smoothed. The smoothing algorithm for the OOB curve differs
slightly from gbm.perf, so can give a slightly different number of trees.
Note
The OOB curve
The OOB curve is artificially rescaled to force it into the plot. See Chapter 7 in the plotres vignette.
Interaction with plotres
When invoking this function via plotres, prefix any argument of plotres with w1. to tell plotres
to pass the argument to this function. For example give w1.ylim=c(0,10) to plotres (plain
ylim=c(0,10) in this context gets passed to the residual plots).
Acknowledgments
This function is derived from code in the gbm package authored by <NAME> and others.
See Also
Chapter 7 in plotres vignette discusses this function.
Examples
if (require(gbm)) {
n <- 100 # toy model for quick demo
x1 <- 3 * runif(n)
x2 <- 3 * runif(n)
x3 <- sample(1:4, n, replace=TRUE)
y <- x1 + x2 + x3 + rnorm(n, 0, .3)
data <- data.frame(y=y, x1=x1, x2=x2, x3=x3)
mod <- gbm(y~., data=data, distribution="gaussian",
n.trees=300, shrinkage=.1, interaction.depth=3,
train.fraction=.8, verbose=FALSE)
plot_gbm(mod)
# plotres(mod) # plot residuals
# plotmo(mod) # plot regression surfaces
}
plot_glmnet Plot a glmnet model
Description
Plot the coefficient paths of a glmnet model.
An enhanced version of plot.glmnet.
Usage
plot_glmnet(x = stop("no 'x' argument"),
xvar = c("rlambda", "lambda", "norm", "dev"),
label = 10, nresponse = NA, grid.col = NA, s = NA, ...)
Arguments
x The glmnet model.
xvar What gets plotted along the x axis. One of:
"rlambda" (default) decreasing log lambda (lambda is the glmnet penalty)
"lambda" log lambda
"norm" L1-norm of the coefficients
"dev" percent deviance explained
The default xvar differs from plot.glmnet to allow s to be plotted when this
function is invoked by plotres.
label Default 10. Number of variable names displayed on the right of the plot. One
of:
FALSE display no variables
TRUE display all variables
integer (default) number of variables to display (default is 10)
nresponse Which response to plot for multiple response models.
grid.col Default NA. Color of the optional grid, for example grid.col="lightgray".
s For use by plotres. The x position of the gray vertical line indicating the
lambda s passed by plotres to predict.glmnet to calculate the residuals.
Plotres defaults to s=0.
... Dot arguments are passed internally to matplot.
Use col to change the color of curves; for example col=1:4. The six default
colors are intended to be distinguishable yet harmonious (to my eye at least),
with adjacent colors as different as easily possible.
Note
Limitations
For multiple response models use the nresponse argument to specify which response should be
plotted. (Currently each response must be plotted one by one.)
The type.coef argument of plot.glmnet is currently not supported.
Currently xvar="norm" is not supported for multiple response models (you will get an error mes-
sage).
Interaction with plotres
When invoking this function via plotres, prefix any argument of plotres with w1. to tell plotres
to pass the argument to this function. For example give w1.col=1:4 to plotres (plain col=1:4 in
this context gets passed to the residual plots).
Acknowledgments
This function is based on plot.glmnet in the glmnet package authored by <NAME>,
<NAME>, and <NAME>.
See Also
Chapter 6 in plotres vignette discusses this function.
Examples
if (require(glmnet)) {
x <- matrix(rnorm(100 * 10), 100, 10) # n=100 p=10
y <- x[,1] + x[,2] + 2 * rnorm(100) # y depends only on x[,1] and x[,2]
mod <- glmnet(x, y)
plot_glmnet(mod)
# plotres(mod) # plot the residuals
} |
ferenda | readthedoc | Python | Ferenda 0.3.0 documentation
[Ferenda](index.html#document-index)
---
Ferenda[¶](#ferenda)
===
Ferenda is a python library and framework for transforming unstructured document collections into structured
[Linked Data](http://en.wikipedia.org/wiki/Linked_data). It helps with downloading documents, parsing them to add explicit semantic structure and RDF-based metadata, finding relationships between documents, and republishing the results.
Introduction to Ferenda[¶](#introduction-to-ferenda)
---
It uses the XHTML and RDFa standards for representing semantic structure, and republishes content using Linked Data principles and a REST-based API.
Ferenda works best for large document collections that have some degree of internal standardization, such as the laws of a particular country, technical standards, or reports published in a series. It is particularly useful for collections that contains explicit references between documents, within or across collections.
It is designed to make it easy to get started with basic downloading,
parsing and republishing of documents, and then to improve each step incrementally.
### Example[¶](#example)
Ferenda can be used either as a library or as a command-line tool. This code uses the Ferenda API to create a website containing all(*) RFCs and W3C recommended standards.
```
from ferenda.sources.tech import RFC, W3Standards from ferenda.manager import makeresources, frontpage, runserver, setup_logger from ferenda.errors import DocumentRemovedError, ParseError, FSMStateError
config = {'datadir':'netstandards/exampledata',
'loglevel':'DEBUG',
'force':False,
'storetype':'SQLITE',
'storelocation':'netstandards/exampledata/netstandards.sqlite',
'storerepository':'netstandards',
'downloadmax': 50 # remove this to download everything
}
setup_logger(level='DEBUG')
# Set up two document repositories docrepos = (RFC(**config), W3Standards(**config))
for docrepo in docrepos:
# Download a bunch of documents
docrepo.download()
# Parse all downloaded documents
for basefile in docrepo.store.list_basefiles_for("parse"):
try:
docrepo.parse(basefile)
except ParseError as e:
pass # or handle this in an appropriate way
# Index the text content and metadata of all parsed documents
for basefile in docrepo.store.list_basefiles_for("relate"):
docrepo.relate(basefile, docrepos)
# Prepare various assets for web site navigation
makeresources(docrepos,
resourcedir="netstandards/exampledata/rsrc",
sitename="Netstandards",
sitedescription="A repository of internet standard documents")
# Relate for all repos must run before generate for any repo
for docrepo in docrepos:
# Generate static HTML files from the parsed documents,
# with back- and forward links between them, etc.
for basefile in docrepo.store.list_basefiles_for("generate"):
docrepo.generate(basefile)
# Generate a table of contents of all available documents
docrepo.toc()
# Generate feeds of new and updated documents, in HTML and Atom flavors
docrepo.news()
# Create a frontpage for the entire site
frontpage(docrepos,path="netstandards/exampledata/index.html")
# Start WSGI app at http://localhost:8000/ with navigation,
# document viewing, search and API
# runserver(docrepos, port=8000, documentroot="netstandards/exampledata")
```
Alternately, using the command line tools and the project framework:
```
$ ferenda-setup netstandards
$ cd netstandards
$ ./ferenda-build.py ferenda.sources.tech.RFC enable
$ ./ferenda-build.py ferenda.sources.tech.W3Standards enable
$ ./ferenda-build.py all all --downloadmax=50
# $ ./ferenda-build.py all runserver &
# $ open http://localhost:8000/
```
Note
(*) actually, it only downloads the 50 most recent of each. Downloading, parsing, indexing and re-generating close to 7000 RFC documents takes several hours. In order to process all documents, remove the `downloadmax` configuration parameter/command line option, and be prepared to wait. You should also set up an external triple store (see [Triple stores](index.html#external-triplestore)) and an external fulltext search engine (see [Fulltext search engines](index.html#external-fulltext)).
### Prerequisites[¶](#prerequisites)
Operating system: Ferenda is tested and works on Unix, Mac OS and Windows.
Python: Version 2.6 or newer required, 3.4 recommended. The code base is primarily developed with python 3, and is heavily dependent on all forward compatibility features introduced in Python 2.6. Python 3.0 and 3.1 are not supported.
Third-party libraries:
`beautifulsoup4`, `rdflib`, `html5lib`,
`lxml`, `requests`, `whoosh`, `pyparsing`, `jsmin`,
`six` and their respective requirements. If you install ferenda using `easy_install` or `pip` they should be installed automatically. If you’re working with a clone of the source repository you can install them with a simple `pip install -r requirements.py3.txt` (substitute with
`requirements.py2.txt` if you’re not yet using python 3).
Command-line tools: For some functionality, certain executables must be present and in your `$PATH`:
* [`PDFReader`](index.html#ferenda.PDFReader) requires `pdftotext` and
`pdftohtml` (from [poppler](http://poppler.freedesktop.org/), version 0.21 or newer).
+ The [`crop()`](index.html#ferenda.pdfreader.Page.crop) method requires
`convert` (from [ImageMagick](http://www.imagemagick.org/)).
+ The `convert_to_pdf` parameter to
`read()` requires the `soffice`
binary from either OpenOffice or LibreOffice
+ The `ocr_lang` parameter to
`read()` requires `tesseract` (from
[tesseract-ocr](https://code.google.com/p/tesseract-ocr/)),
`convert` (see above) and `tiffcp` (from [libtiff](http://www.libtiff.org/))
* [`WordReader`](index.html#ferenda.WordReader) requires [antiword](http://www.winfield.demon.nl/) to handle old `.doc` files.
* [`TripleStore`](index.html#ferenda.TripleStore) can perform some operations
(bulk up- and download) much faster if [curl](http://curl.haxx.se/) is installed.
Once you have a large number of documents and metadata about those documents, you’ll need an RDF triple store, either [Sesame](http://www.openrdf.org/) (at least version 2.7) or [Fuseki](http://jena.apache.org/documentation/serving_data/index.html) (at least version 1.0). For document collections small enough to keep all metadata in memory you can get by with only rdflib, using either a Sqlite or a Berkeley DB (aka Sleepycat/bsddb) backend. For further information, see [Triple stores](index.html#external-triplestore).
Similarly, once you have a large collection of text (either many short documents, or fewer long documents), you’ll need a fulltext search engine to use the search feature (enabled by default). For small document collections the embedded [whoosh](https://bitbucket.org/mchaput/whoosh/wiki/Home) library is used. Right now, [ElasticSearch](http://www.elasticsearch.org/) is the only supported external fulltext search engine.
As a rule of thumb, if your document collection contains over 100 000 RDF triples or 100 000 words, you should start thinking about setting up an external triple store or a fulltext search engine. See
[Fulltext search engines](index.html#external-fulltext).
### Installing[¶](#installing)
Ferenda should preferably be installed with [pip](http://www.pip-installer.org/en/latest/installing.html) (in fact,
it’s the only method tested):
```
pip install ferenda
```
You should definitely consider installing ferenda in a [virtualenv](http://www.virtualenv.org/en/latest/).
Note
If you want to use the Sleepycat/bsddb backend for storing RDF data together with python 3, you need to install the `bsddb3`
module. Even if you’re using python 2 on Mac OS X, you might need to install this module, as the built-in `bsddb` module often has problems on this platform. It’s not automatically installed by
`easy_install`/`pip` as it has requirements of its own and is not essential.
On Windows, we recommend using a binary distribution of
`lxml`. Unfortunately, at the time of writing, no such official distribution is available for Python 3.3 or later. However, the unofficial distributions available at
<http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml> have been tested with ferenda on python 3.3 and later, and seem to work great.
The binary distribution installs lxml into the system python library path. To make lxml available for your virtualenv, use the
`--system-site-packages` command line switch when creating the virtualenv.
### Features[¶](#features)
* Handles downloading, structural parsing and regeneration of large document collections.
* Contains libraries to make reading of plain text, MS Word and PDF documents (including scanned text) as easy as HTML.
* Uses established information standards like XHTML, XSLT, XML namespaces, RDF and SPARQL as much as possible.
* Leverages your favourite python libraries: [requests](http://docs.python-requests.org/en/latest/), [beautifulsoup](http://www.crummy.com/software/BeautifulSoup/), [rdflib](https://rdflib.readthedocs.org/en/latest/), [lxml](http://lxml.de/), [pyparsing](http://pyparsing.wikispaces.com/)
and [whoosh](https://bitbucket.org/mchaput/whoosh/wiki/Home).
* Handles errors in upstream sources by creating one-off patch files for individual documents.
* Easy to write reference/citation parsers and run them on document text.
* Documents in the same and other collections are automatically cross-referenced.
* Uses caches and dependency management to avoid performing the same work over and over.
* Once documents are downloaded and structured, you get a usable web site with REST API, Atom feeds and search for free.
* Web site generation can create a set of static HTML pages for offline use.
### Next step[¶](#next-step)
See [First steps](index.html#document-firststeps) to set up a project and create your own simple document repository.
First steps[¶](#first-steps)
---
Ferenda can be used in a project-like manner with a command-line tool
(similar to how projects based on [Django](https://www.djangoproject.com/), [Sphinx](http://sphinx-doc.org/)
and [Scrapy](http://scrapy.org) are used), or it can be used programmatically through a simple API. In this guide, we’ll primarily be using the command-line tool, and then show how to achieve the same thing using the API.
The first step is to create a project. Lets make a simple website that contains published standards from W3C and IETF, called
“netstandards”. Ferenda installs a system-wide command-line tool called `ferenda-setup` whose sole purpose is to create projects:
```
$ ferenda-setup netstandards
Prerequisites ok
Selected SQLITE as triplestore
Selected WHOOSH as search engine
Project created in netstandards
$ cd netstandards
$ ls
ferenda-build.py ferenda.ini wsgi.py
```
The three files created by `ferenda-setup` are another command line tool (`ferenda-build.py`) used for management of the newly created project, a WSGI application (`wsgi.py`, see [The WSGI app](index.html#document-wsgi)) and a configuration file (`ferenda.ini`). The default configuration file specifies most, but not all, of the available configuration parameters. See [Configuration](index.html#configuration) for a full list of the standard configuration parameters.
Note
When using the API, you don’t create a project or deal with configuration files in the same way. Instead, your client code is responsible for keeping track of which docrepos to use, and providing configuration when calling their methods.
### Creating a Document repository class[¶](#creating-a-document-repository-class)
Any document collection is handled by a
[DocumentRepository](index.html#keyconcept-documentrepository) class (or *docrepo* for short), so our first task is to create a docrepo for W3C standards.
A docrepo class is responsible for downloading documents in a specific document collection. These classes can inherit from
[`DocumentRepository`](index.html#ferenda.DocumentRepository), which amongst other things provides the method [`download()`](index.html#ferenda.DocumentRepository.download) for this. Since the details of how documents are made available on the web differ greatly from collection to collection, you’ll often have to override the default implementation, but in this particular case, it suffices. The default implementation assumes that all documents are available from a single index page, and that the URLs of the documents follow a set pattern.
The W3C standards are set up just like that: All standards are available at `http://www.w3.org/TR/tr-status-all`. There are a lot of links to documents on that page, and not all of them are links to recommended standards. A simple way to find only the recommended standards is to see if the link follows the pattern
`http://www.w3.org/TR/<year>/REC-<standardid>-<date>`.
Creating a docrepo that is able to download all web standards is then as simple as creating a subclass and setting three class properties. Create this class in the current directory (or anywhere else on your python path) and save it as `w3cstandards.py`
```
from ferenda import DocumentRepository
class W3CStandards(DocumentRepository):
alias = "w3c"
start_url = "http://www.w3.org/TR/tr-status-all"
document_url_regex = "http://www.w3.org/TR/(?P<year>\d{4})/REC-(?P<basefile>.*)-(?P<date>\d+)"
```
The first property, [`alias`](index.html#ferenda.DocumentRepository.alias), is required for all docrepos and controls the alias used by the command line tool for that docrepo, as well as the path where files are stored, amongst other things. If your project has a large collection of docrepos, it’s important that they all have unique aliases.
The other two properties are parameters which the default implementation of [`download()`](index.html#ferenda.DocumentRepository.download) uses in order to find out which documents to download. [`start_url`](index.html#ferenda.DocumentRepository.start_url) is just a simple regular URL, while
[`document_url_regex`](index.html#ferenda.DocumentRepository.document_url_regex) is a standard
[`re`](https://docs.python.org/3/library/re.html#module-re) regex with named groups. The group named `basefile` has special meaning, and will be used as a base for stored files and elsewhere as a short identifier for the document. For example, the web standard found at URL
<http://www.w3.org/TR/2012/REC-rdf-plain-literal-20121211/> will have the basefile `rdf-plain-literal`.
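To see how the named group picks out the basefile, here is a small standalone illustration using the plain `re` module and the regex from the class above (this is just an illustration of the pattern, not ferenda internals):
```
import re

document_url_regex = r"http://www.w3.org/TR/(?P<year>\d{4})/REC-(?P<basefile>.*)-(?P<date>\d+)"
m = re.match(document_url_regex,
             "http://www.w3.org/TR/2012/REC-rdf-plain-literal-20121211/")
if m:
    print(m.group("basefile"))  # -> rdf-plain-literal
```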
### Using ferenda-build.py and registering docrepo classes[¶](#using-ferenda-build-py-and-registering-docrepo-classes)
Next step is to enable our class. Like most tasks, this is done using the command line tool present in your project directory. To register the class (together with a short alias) in your `ferenda.ini`
configuration file, run the following:
```
$ ./ferenda-build.py w3cstandards.W3CStandards enable
22:16:26 root INFO Enabled class w3cstandards.W3CStandards (alias 'w3c')
```
This creates a new section in ferenda.ini that just looks like the following:
```
[w3c]
class = w3cstandards.W3CStandards
```
From this point on, you can use the class name or the alias “w3c”
interchangeably:
```
$ ./ferenda-build.py w3cstandards.W3CStandards status # verbose
22:16:27 root INFO w3cstandards.W3CStandards status finished in 0.010 sec
Status for document repository 'w3c' (w3cstandards.W3CStandards)
download: None.
parse: None.
generated: None.
$ ./ferenda-build.py w3c status # terse, exactly the same result
```
Note
When using the API, there is no need (nor possibility) to register docrepo classes. Your client code directly instantiates the class(es) it uses and calls methods on them.
### Downloading[¶](#downloading)
To test the downloading capabilities of our class, you can run the download method directly from the command line using the command line tool:
```
$ ./ferenda-build.py w3c download
22:16:31 w3c INFO Downloading max 3 documents
22:16:32 w3c INFO emotionml: downloaded from http://www.w3.org/TR/2014/REC-emotionml-20140522/
22:16:33 w3c INFO MathML3: downloaded from http://www.w3.org/TR/2014/REC-MathML3-20140410/
22:16:33 w3c INFO xml-entity-names: downloaded from http://www.w3.org/TR/2014/REC-xml-entity-names-20140410/
# and so on...
```
After a few minutes of downloading, the result is a bunch of files in
`data/w3c/downloaded`:
```
$ ls -1 data/w3c/downloaded
MathML3.html
MathML3.html.etag
emotionml.html
emotionml.html.etag
xml-entity-names.html
xml-entity-names.html.etag
```
Note
The `.etag` files are created in order to support [Conditional GET](http://en.wikipedia.org/wiki/HTTP_ETag), so that we don’t waste our time or remote server bandwidth by re-downloading documents that haven’t changed. They can be ignored and might go away in future versions of Ferenda.
We can get an overview of the status of our docrepo using the
`status` command:
```
$ ./ferenda-build.py w3cstandards.W3CStandards status # verbose
22:16:27 root INFO w3cstandards.W3CStandards status finished in 0.010 sec
Status for document repository 'w3c' (w3cstandards.W3CStandards)
download: None.
parse: None.
generated: None.
$ ./ferenda-build.py w3c status # terse, exactly the same result
```
Note
To do the same using the API:
```
from w3cstandards import W3CStandards
repo = W3CStandards()
repo.download()
repo.status()
# or use repo.get_status() to get all status information in a nested dict
```
Finally, if the logging information scrolls by too quickly and you want to read it again, take a look in the `data/logs` directory.
Each invocation of `ferenda-build.py` creates a new log file containing the same information that is written to stdout.
### Parsing[¶](#parsing)
Let’s try the next step in the workflow, to parse one of the documents we’ve downloaded.
```
$ ./ferenda-build.py w3c parse rdfa-core
22:16:45 w3c INFO rdfa-core: parse OK (4.863 sec)
22:16:45 root INFO w3c parse finished in 4.935 sec
```
By now, you might have realized that our command line tool generally is called in the following manner:
```
$ ./ferenda-build.py <docrepo> <command> [argument(s)]
```
The parse command resulted in one new file being created in
`data/w3c/parsed`.
```
$ ls -1 data/w3c/parsed
rdfa-core.xhtml
```
And we can again use the `status` command to get a comprehensive overview of our document repository.
```
$ ./ferenda-build.py w3c status
22:16:47 root INFO w3c status finished in 0.032 sec
Status for document repository 'w3c' (w3cstandards.W3CStandards)
download: xml-entity-names, rdfa-core, emotionml... (1 more)
parse: rdfa-core. Todo: xml-entity-names, emotionml, MathML3.
generated: None. Todo: rdfa-core.
```
Note that by default, subsequent invocations of parse won’t actually parse documents that don’t need parsing.
```
$ ./ferenda-build.py w3c parse rdfa-core
22:16:50 root INFO w3c parse finished in 0.019 sec
```
But during development, when you change the parsing code frequently,
you’ll need to override this through the `--force` flag (or set the
`force` parameter in `ferenda.ini`).
```
$ ./ferenda-build.py w3c parse rdfa-core --force
22:16:56 w3c INFO rdfa-core: parse OK (5.123 sec)
22:16:56 root INFO w3c parse finished in 5.166 sec
```
Note
To do the same using the API:
```
from w3cstandards import W3CStandards
repo = W3CStandards(force=True)
repo.parse("rdfa-core")
```
Note also that you can parse all downloaded documents through the
`--all` flag, and control logging verbosity by the `--loglevel`
flag.
```
$ ./ferenda-build.py w3c parse --all --loglevel=DEBUG
22:16:59 w3c DEBUG xml-entity-names: Starting
22:16:59 w3c DEBUG xml-entity-names: Created data/w3c/parsed/xml-entity-names.xhtml
22:17:00 w3c DEBUG xml-entity-names: 6 triples extracted to data/w3c/distilled/xml-entity-names.rdf
22:17:00 w3c INFO xml-entity-names: parse OK (0.717 sec)
22:17:00 w3c DEBUG emotionml: Starting
22:17:00 w3c DEBUG emotionml: Created data/w3c/parsed/emotionml.xhtml
22:17:01 w3c DEBUG emotionml: 11 triples extracted to data/w3c/distilled/emotionml.rdf
22:17:01 w3c INFO emotionml: parse OK (1.174 sec)
22:17:01 w3c DEBUG MathML3: Starting
22:17:01 w3c DEBUG MathML3: Created data/w3c/parsed/MathML3.xhtml
22:17:01 w3c DEBUG MathML3: 8 triples extracted to data/w3c/distilled/MathML3.rdf
22:17:01 w3c INFO MathML3: parse OK (0.332 sec)
22:17:01 root INFO w3c parse finished in 2.247 sec
```
Note
To do the same using the API:
```
import logging
from w3cstandards import W3CStandards
# client code is responsible for setting the effective log level -- ferenda
# just emits log messages, and depends on the caller to setup the logging
# subsystem in an appropriate way
logging.getLogger().setLevel(logging.INFO)
repo = W3CStandards()
for basefile in repo.store.list_basefiles_for("parse"):
    # You might want to try/catch the exception
    # ferenda.errors.ParseError or any of its children here
repo.parse(basefile)
```
Note that the API makes you explicitly list and iterate over any available files. This is so that client code has the opportunity to parallelize this work in an appropriate way.
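For example, a minimal sketch of parallel parsing using the standard library could look like this (assumptions: the `W3CStandards` class from above, and that `parse()` is safe to run in separate worker processes):
```
from concurrent.futures import ProcessPoolExecutor
from w3cstandards import W3CStandards

def parse_one(basefile):
    # each worker process gets its own repo instance
    repo = W3CStandards()
    try:
        repo.parse(basefile)
    except Exception as e:
        return basefile, str(e)
    return basefile, "OK"

if __name__ == "__main__":
    repo = W3CStandards()
    basefiles = list(repo.store.list_basefiles_for("parse"))
    with ProcessPoolExecutor() as pool:
        for basefile, status in pool.map(parse_one, basefiles):
            print(basefile, status)
```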
If we take a look at the files created in `data/w3c/distilled`, we see some metadata for each document. This metadata has been automatically extracted from RDFa statements in the XHTML documents,
but is so far very spartan.
Now take a look at the files created in `data/w3c/parsed`. The default implementation of `parse()` processes the DOM of the main body of the document, but some tags and attributes that are used only for formatting are stripped, such as `<style>` and `<script>`.
These documents have quite a lot of “boilerplate” text such as table of contents and links to latest and previous versions which we’d like to remove so that just the actual text is left (problem 1). And we’d like to explicitly extract some parts of the document and represent these as metadata for the document – for example the title, the publication date, the authors/editors of the document and its abstract, if available (problem 2).
Just like the default implementation of
[`download()`](index.html#ferenda.DocumentRepository.download) allowed for some customization using class variables, we can solve problem 1 by setting two additional class variables:
```
parse_content_selector="body"
parse_filter_selectors=["div.toc", "div.head"]
```
The [`parse_content_selector`](index.html#ferenda.DocumentRepository.parse_content_selector)
member specifies, using [CSS selector syntax](http://www.w3.org/TR/CSS2/selector.html), the part of the document which contains our main text. It defaults to `"body"`, and can often be set to `".content"` (the first element that has a class=”content”
attribute”), `"#main-text"` (any element with the id
`"main-text"`), `"article"` (the first `<article>` element) or similar. The
[`parse_filter_selectors`](index.html#ferenda.DocumentRepository.parse_filter_selectors) is a list of similar selectors, with the difference that all matching elements are removed from the tree. In this case, we use it to remove some boilerplate sections that often appear within the content specified by
[`parse_content_selector`](index.html#ferenda.DocumentRepository.parse_content_selector), but which we don’t want to appear in the final result.
In order to solve problem 2, we can override one of the methods that the default implementation of parse() calls:
```
def parse_metadata_from_soup(self, soup, doc):
from rdflib import Namespace
from ferenda import Describer
from ferenda import util
import re
DCTERMS = Namespace("http://purl.org/dc/terms/")
FOAF = Namespace("http://xmlns.com/foaf/0.1/")
d = Describer(doc.meta, doc.uri)
d.rdftype(FOAF.Document)
d.value(DCTERMS.title, soup.find("title").text, lang=doc.lang)
d.value(DCTERMS.abstract, soup.find(True, "abstract"), lang=doc.lang)
# find the issued date -- assume it's the first thing that looks
# like a date on the form "22 August 2013"
re_date = re.compile(r'(\d+ \w+ \d{4})')
datenode = soup.find(text=re_date)
datestr = re_date.search(datenode).group(1)
d.value(DCTERMS.issued, util.strptime(datestr, "%d %B %Y"))
editors = soup.find("dt", text=re.compile("Editors?:"))
for editor in editors.find_next_siblings("dd"):
editor_name = editor.text.strip().split(", ")[0]
d.value(DCTERMS.editor, editor_name)
```
[`parse_metadata_from_soup()`](index.html#ferenda.DocumentRepository.parse_metadata_from_soup) is called with a document object and the parsed HTML document in the form of a BeautifulSoup object. It is the responsibility of
[`parse_metadata_from_soup()`](index.html#ferenda.DocumentRepository.parse_metadata_from_soup) to add document-level metadata for this document, such as its title,
publication date, and similar. Note that
[`parse_metadata_from_soup()`](index.html#ferenda.DocumentRepository.parse_metadata_from_soup) is run before the
[`parse_content_selector`](index.html#ferenda.DocumentRepository.parse_content_selector) and
[`parse_filter_selectors`](index.html#ferenda.DocumentRepository.parse_filter_selectors) are applied, so the BeautifulSoup object passed into it contains the entire document.
Note
The selectors are passed to [BeautifulSoup.select()](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors),
which supports a subset of the CSS selector syntax. If you stick with simple tag, id and class-based selectors you should be fine.
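As a standalone illustration of that selector subset (plain BeautifulSoup, not tied to any particular docrepo; the HTML snippet below is made up):
```
from bs4 import BeautifulSoup

html = "<body><div class='toc'>TOC</div><div id='main-text'><p>text</p></div></body>"
soup = BeautifulSoup(html, "html.parser")
print(soup.select("body"))        # tag selector
print(soup.select("div.toc"))     # tag + class selector
print(soup.select("#main-text"))  # id selector
```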
Now, if you run `parse --force` again, both documents and metadata are in better shape. Further down the line the value of properly extracted metadata will become more obvious.
### Republishing the parsed content[¶](#republishing-the-parsed-content)
The XHTML contains metadata in RDFa format. As such, you can extract all that metadata and put it into a triple store. The relate command does this, as well as creating a full text index of all textual content:
```
$ ./ferenda-build.py w3c relate --all
22:17:03 w3c INFO xml-entity-names: relate OK (0.618 sec)
22:17:04 w3c INFO rdfa-core: relate OK (1.542 sec)
22:17:06 w3c INFO emotionml: relate OK (1.647 sec)
22:17:08 w3c INFO MathML3: relate OK (1.604 sec)
22:17:08 w3c INFO Dumped 34 triples from context http://localhost:8000/dataset/w3c to data/w3c/distilled/dump.nt (0.007 sec)
22:17:08 root INFO w3c relate finished in 5.555 sec
```
The next step is to create a number of *resource files* (placed under
`data/rsrc`). These resource files include css and javascript files for the new website we’re creating, as well as a xml configuration file used by the XSLT transformation done by `generate` below:
```
$ ./ferenda-build.py w3c makeresources
22:17:08 root INFO Wrote data/rsrc/resources.xml
$ find data/rsrc -print
data/rsrc
data/rsrc/api
data/rsrc/api/common.json
data/rsrc/api/context.json
data/rsrc/api/terms.json
data/rsrc/css
data/rsrc/css/ferenda.css
data/rsrc/css/main.css
data/rsrc/css/normalize-1.1.3.css
data/rsrc/img
data/rsrc/img/navmenu-small-black.png
data/rsrc/img/navmenu.png
data/rsrc/img/search.png
data/rsrc/js
data/rsrc/js/ferenda.js
data/rsrc/js/jquery-1.10.2.js
data/rsrc/js/modernizr-2.6.3.js
data/rsrc/js/respond-1.3.0.js
data/rsrc/resources.xml
```
Note
It is possible to combine and minify both javascript and css files using the `combineresources` option in the configuration file.
Running `makeresources` is needed for the final few steps.
```
$ ./ferenda-build.py w3c generate --all
22:17:14 w3c INFO xml-entity-names: generate OK (1.728 sec)
22:17:14 w3c INFO rdfa-core: generate OK (0.242 sec)
22:17:14 w3c INFO emotionml: generate OK (0.336 sec)
22:17:14 w3c INFO MathML3: generate OK (0.216 sec)
22:17:14 root INFO w3c generate finished in 2.535 sec
```
The `generate` command creates browser-ready HTML5 documents from our structured XHTML documents, using our site’s navigation.
```
$ ./ferenda-build.py w3c toc
22:17:17 w3c INFO Created data/w3c/toc/dcterms_issued/2013.html
22:17:17 w3c INFO Created data/w3c/toc/dcterms_issued/2014.html
22:17:17 w3c INFO Created data/w3c/toc/dcterms_title/e.html
22:17:17 w3c INFO Created data/w3c/toc/dcterms_title/m.html
22:17:17 w3c INFO Created data/w3c/toc/dcterms_title/r.html
22:17:17 w3c INFO Created data/w3c/toc/dcterms_title/x.html
22:17:18 w3c INFO Created data/w3c/toc/index.html
22:17:18 root INFO w3c toc finished in 2.059 sec
$ ./ferenda-build.py w3c news
21:43:55 w3c INFO feed type/document: 4 entries
22:17:19 w3c INFO feed main: 4 entries
22:17:19 root INFO w3c news finished in 0.115 sec
$ ./ferenda-build.py w3c frontpage
22:17:21 root INFO frontpage: wrote data/index.html (0.112 sec)
```
The `toc` and `news` commands create static files for general indexes/tables of contents of all documents in our docrepo as well as Atom feeds, and the `frontpage` command creates a suitable frontpage for the site as a whole.
Note
To do all of the above using the API:
```
from ferenda import manager
from w3cstandards import W3CStandards
repo = W3CStandards()
for basefile in repo.store.list_basefiles_for("relate"):
repo.relate(basefile)
manager.makeresources([repo], sitename="Standards", sitedescription="W3C standards, in a new form")
for basefile in repo.store.list_basefiles_for("generate"):
repo.generate(basefile)
repo.toc()
repo.news()
manager.frontpage([repo])
```
Finally, to start a development web server and check out the finished result:
```
$ ./ferenda-build.py w3c runserver
$ open http://localhost:8080/
```
Now you’ve created your own web site with structured documents. It contains listings of all documents, feeds with updated documents (in both HTML and Atom flavors), full text search, and an API. In order to deploy your site, you can run it under Apache+mod_wsgi, nginx+uWSGI,
Gunicorn or just about any WSGI capable web server, see [The WSGI app](index.html#document-wsgi).
Note
Using [`runserver()`](index.html#ferenda.manager.runserver) from the API does not really make any sense. If your environment supports running WSGI applications, see the above link for information about how to get the ferenda WSGI application. Otherwise, the app can be run by any standard WSGI host.
To keep it up-to-date whenever the W3C issues new standards, use the following command:
```
$ ./ferenda-build.py w3c all
22:17:25 w3c INFO Downloading max 3 documents
22:17:25 root INFO w3cstandards.W3CStandards download finished in 2.648 sec
22:17:25 root INFO w3cstandards.W3CStandards parse finished in 0.019 sec
22:17:25 root INFO w3cstandards.W3CStandards relate: Nothing to do!
22:17:25 root INFO w3cstandards.W3CStandards relate finished in 0.025 sec
22:17:25 root INFO Wrote data/rsrc/resources.xml
22:17:29 root INFO w3cstandards.W3CStandards generate finished in 0.006 sec
22:17:32 root INFO w3cstandards.W3CStandards toc finished in 3.376 sec
22:17:34 w3c INFO feed type/document: 4 entries
22:17:32 w3c INFO feed main: 4 entries
22:17:32 root INFO w3cstandards.W3CStandards news finished in 0.063 sec
22:17:32 root INFO frontpage: wrote data/index.html (0.017 sec)
```
The “all” command is an alias that runs `download`, `parse --all`,
`relate --all`, `generate --all`, `toc` and `news` in sequence.
Note
The API doesn’t have any corresponding method. Just run all of the above code again. As long as you don’t pass the `force=True`
parameter when creating the docrepo instance, ferenda’s dependency management should make sure that documents aren’t needlessly re-parsed etc.
This 20-line example of a docrepo took a lot of shortcuts by depending on the default implementation of the
[`download()`](index.html#ferenda.DocumentRepository.download) and
[`parse()`](index.html#ferenda.DocumentRepository.parse) methods. Ferenda tries to make it really easy to get *something* up and running quickly, and then to improve each step incrementally.
In the next section [Creating your own document repositories](index.html#document-createdocrepos) we will take a closer look at each of the six main steps (`download`, `parse`, `relate`,
`generate`, `toc` and `news`), including how to completely replace the built-in methods. You can also take a look at the source code for [`ferenda.sources.tech.W3Standards`](index.html#ferenda.sources.tech.W3Standards), which contains a more complete (and substantially longer) implementation of
[`download()`](index.html#ferenda.DocumentRepository.download),
[`parse()`](index.html#ferenda.DocumentRepository.parse) and the others.
Creating your own document repositories[¶](#creating-your-own-document-repositories)
---
The next step is to do more substantial adjustments to the download/parse/generate cycle. As the source for our next docrepo we’ll use the [collected RFCs](http://www.ietf.org/rfc.html),
as published by [IETF](http://www.ietf.org/). These documents are mainly available in plain text format (formatted for printing on a line printer), as is the document index itself. This means that we cannot rely on the default implementation of download and parse. Furthermore, RFCs are categorized and refer to each other using varying semantics. This metadata can be captured, queried and used in a number of ways to present the RFC collection in a better way.
### Writing your own `download` implementation[¶](#writing-your-own-download-implementation)
The purpose of the [`download()`](index.html#ferenda.DocumentRepository.download) method is to fetch source documents from a remote source and store them locally, possibly under different filenames but otherwise bit-for-bit identical with how they were stored at the remote source (see
[File storage](index.html#file-storage) for more information about how and where files are stored locally).
The default implementation of
[`download()`](index.html#ferenda.DocumentRepository.download) uses a small number of methods and class variables to do the actual work. By selectively overriding these, you can often avoid rewriting a complete implementation of [`download()`](index.html#ferenda.DocumentRepository.download).
#### A simple example[¶](#a-simple-example)
We’ll start out by creating a class similar to our W3C class in
[First steps](index.html#document-firststeps). All RFC documents are listed in the index file at
<http://www.ietf.org/download/rfc-index.txt>, while an individual document (such as RFC 6725) is available at
<http://tools.ietf.org/rfc/rfc6725.txt>. Our first attempt will look like this (save as `rfcs.py`)
```
import re
from datetime import datetime, date
import requests
from ferenda import DocumentRepository, TextReader
from ferenda import util
from ferenda.decorators import downloadmax
class RFCs(DocumentRepository):
alias = "rfc"
start_url = "http://www.ietf.org/download/rfc-index.txt"
document_url_template = "http://tools.ietf.org/rfc/rfc%(basefile)s.txt"
downloaded_suffix = ".txt"
```
And we’ll enable it and try to run it like before:
```
$ ./ferenda-build.py rfcs.RFCs enable
$ ./ferenda-build.py rfc download
```
This doesn’t work! This is because the start page contains no actual HTML links – it’s a plaintext file. We need to parse the index text file to find out all available basefiles. In order to do that, we must override [`download()`](index.html#ferenda.DocumentRepository.download).
```
def download(self):
self.log.debug("download: Start at %s" % self.start_url)
indextext = requests.get(self.start_url).text
reader = TextReader(string=indextext) # see TextReader class
iterator = reader.getiterator(reader.readparagraph)
if not isinstance(self.config.downloadmax, (int, type(None))):
self.config.downloadmax = int(self.config.downloadmax)
for basefile in self.download_get_basefiles(iterator):
self.download_single(basefile)
@downloadmax
def download_get_basefiles(self, source):
for p in reversed(list(source)):
if re.match("^(\d{4}) ",p): # looks like a RFC number
if not "Not Issued." in p: # Skip RFC known to not exist
basefile = str(int(p[:4])) # eg. '0822' -> '822'
yield basefile
```
Since the RFC index is a plain text file, we use the
[`TextReader`](index.html#ferenda.TextReader) class, which contains a bunch of functionality to make it easier to work with plain text files. In this case, we’ll iterate through the file one paragraph at a time, and if the paragraph starts with a four-digit number (and the number hasn’t been marked “Not Issued.”) we’ll download it by calling
[`download_single()`](index.html#ferenda.DocumentRepository.download_single).
Like the default implementation, we offload the main work to
[`download_single()`](index.html#ferenda.DocumentRepository.download_single), which will check whether the file already exists on disk and, only if it does not, attempt to download it. If the `--refresh` parameter is provided, a [conditional get](http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.3) is performed, and the document is re-downloaded only if the server says it has changed.
Note
In many cases, the URL for the downloaded document is not easily constructed from a basefile identifier. [`download_single()`](index.html#ferenda.DocumentRepository.download_single)
therefore takes an optional url argument. The above could be written more verbosely like:
```
url = "http://tools.ietf.org/rfc/rfc%s.txt" % basefile
self.download_single(basefile, url)
```
In other cases, a document to be downloaded could consist of several resources (e.g. an HTML document with images, or a PDF document with the actual content combined with an HTML document with document metadata). For these cases, you need to override
[`download_single()`](index.html#ferenda.DocumentRepository.download_single).
#### The main flow of the download process[¶](#the-main-flow-of-the-download-process)
The main flow is that the [`download()`](index.html#ferenda.DocumentRepository.download)
method itself does some source-specific setup, which often include downloading some sort of index or search results page. The location of that index resource is given by the class variable
[`start_url`](index.html#ferenda.DocumentRepository.start_url).
[`download()`](index.html#ferenda.DocumentRepository.download) then calls
[`download_get_basefiles()`](index.html#ferenda.DocumentRepository.download_get_basefiles) which returns an iterator of basefiles.
For each basefile, [`download_single()`](index.html#ferenda.DocumentRepository.download_single)
is called. This method is responsible for downloading everything related to a single document. Most of the time, this is just a single file, but can occasionally be a set of files (like a HTML document with accompanying images, or a set of PDF files that conceptually is a single document).
The default implementation of
[`download_single()`](index.html#ferenda.DocumentRepository.download_single) assumes that a document is just a single file, and calculates the URL of that document by calling the [`remote_url()`](index.html#ferenda.DocumentRepository.remote_url)
method.
The default [`remote_url()`](index.html#ferenda.DocumentRepository.remote_url) method uses the class variable
[`document_url_template`](index.html#ferenda.DocumentRepository.document_url_template). This string template should use %-style string formatting and expect a variable called `basefile`. The default implementation of
[`remote_url()`](index.html#ferenda.DocumentRepository.remote_url) can in other words only be used if the URLs of the remote source are predictable and directly based on the `basefile`.
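The template mechanism itself is just %-style formatting with a `basefile` key; a simplified sketch of the idea (not the actual ferenda implementation):
```
# Simplified sketch of the idea behind remote_url(), not ferenda's
# actual implementation.
document_url_template = "http://tools.ietf.org/rfc/rfc%(basefile)s.txt"
print(document_url_template % {"basefile": "6725"})
# -> http://tools.ietf.org/rfc/rfc6725.txt
```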
Note
In many cases, the URL for the remote version of a document can be impossible to calculate from the basefile only, but be readily available from the main index page or search result page. For those cases, [`download_get_basefiles()`](index.html#ferenda.DocumentRepository.download_get_basefiles)
should return an iterator that yields `(basefile, url)`
tuples. The default implementation of
[`download()`](index.html#ferenda.DocumentRepository.download) handles this and uses
`url` as the second, optional argument to download_single.
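A hypothetical override yielding such tuples might look like this (the `.pdf` filtering rule below is made up purely for illustration; `source` is the link iterator that the default `download()` passes in):
```
def download_get_basefiles(self, source):
    # source yields (element, attribute, link, pos) tuples from
    # lxml.html.iterlinks. The ".pdf" rule is illustrative only.
    for element, attribute, link, pos in source:
        if link.endswith(".pdf"):
            basefile = link.rsplit("/", 1)[-1][:-len(".pdf")]
            yield basefile, link
```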
Finally, the actual downloading of individual files is done by the
[`download_if_needed()`](index.html#ferenda.DocumentRepository.download_if_needed) method. As the name implies, this method tries to avoid downloading anything from the network if it’s not strictly needed. If there is a file in-place already, a conditional GET is done (using the timestamp of the file for a `If-modified-since` header, and an associated .etag file for a
`If-none-match` header). This avoids re-downloading the (potentially large) file if it hasn’t changed.
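The conditional GET idea can be sketched with plain `requests` like this (a simplified stand-in, not ferenda’s actual `download_if_needed()`; the `conditional_get` helper name and the `.etag` file handling are assumptions for the sketch):
```
import os
import email.utils
import requests

def conditional_get(url, filename):
    headers = {}
    if os.path.exists(filename):
        mtime = os.path.getmtime(filename)
        headers["If-Modified-Since"] = email.utils.formatdate(mtime, usegmt=True)
        if os.path.exists(filename + ".etag"):
            with open(filename + ".etag") as f:
                headers["If-None-Match"] = f.read().strip()
    resp = requests.get(url, headers=headers)
    if resp.status_code == 304:
        return False  # not modified, keep the local copy
    with open(filename, "wb") as f:
        f.write(resp.content)
    if "ETag" in resp.headers:
        with open(filename + ".etag", "w") as f:
            f.write(resp.headers["ETag"])
    return True
```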
To summarize: The main chain of calls looks something like this:
```
download
start_url (class variable)
download_get_basefiles (instancemethod) - iterator
download_single (instancemethod)
remote_url (instancemethod)
document_url_template (class variable)
download_if_needed (instancemethod)
```
These are the methods that you may override, and when you might want to do so:
| method | Default behaviour | Override when |
| --- | --- | --- |
| download | Downloads the contents of `start_url` and extracts all links by `lxml.html.iterlinks`, which are passed to `download_get_basefiles`. For each item that is returned, calls download_single. | All your documents are not linked from a single index page (i.e. paged search results). In these cases, you should override `download_get_basefiles` as well and make that method responsible for fetching all pages of search results. |
| download_get_basefiles | Iterates through the (element, attribute, link, url) tuples from the source and examines whether the link matches `basefile_regex` or the url matches `document_url_regex`. If so, yields a (text, url) tuple. | The basefile/url extraction is more complicated than what can be achieved through the `basefile_regex` / `document_url_regex` mechanism, or when you’ve overridden download to pass a different argument than a link iterator. Note that you must return an iterator by using the `yield` statement for each basefile found. |
| download_single | Calculates the url of the document to download (or, if a URL is provided, uses that), and calls `download_if_needed` with it. Afterwards, updates the `DocumentEntry` of the document to reflect source url and download timestamps. | The complete contents of your document are contained in several different files. In these cases, you should start with the main one and call `download_if_needed` for that, then calculate urls and file paths (using the `attachment` parameter to `store.downloaded_path`) for each additional file, then call `download_if_needed` for each. Finally, you must update the `DocumentEntry` object. |
| remote_url | Calculates a URL from a basefile using `document_url_template`. | The rules for producing a URL from a basefile are more complicated than what string formatting can achieve. |
| download_if_needed | Downloads an individual URL to a local file. Makes sure the local file has the same timestamp as the Last-modified header from the server. If an older version of the file is present, this can either be archived (the default) or overwritten. | You really shouldn’t. |
#### The optional basefile argument[¶](#the-optional-basefile-argument)
During early stages of development, it’s often useful to just download a single document, both in order to check out that download_single works as it should, and to have sample documents for parse. When using the ferenda-build.py tool, the download command can take a single optional parameter, ie.:
```
./ferenda-build.py rfc download 6725
```
If provided, this parameter is passed to the download method as the optional basefile parameter. The default implementation of download checks if this parameter is provided, and if so, simply calls download_single with that parameter, skipping the full download procedure. If you’re overriding download, you should support this usage, by starting your implementation with something like this:
```
def download(self, basefile=None):
if basefile:
return self.download_single(basefile)
# the rest of your code
```
#### The [`downloadmax()`](index.html#ferenda.decorators.downloadmax) decorator[¶](#the-downloadmax-decorator)
As we saw in [Introduction to Ferenda](index.html#document-intro), the built-in docrepos support a
`downloadmax` configuration parameter. The effect of this parameter is simply to interrupt the downloading process after a certain amount of documents have been downloaded. This can be useful when doing integration-type testing, or if you just want to make it easy for someone else to try out your docrepo class. The separation between the main [`download()`](index.html#ferenda.DocumentRepository.download) method anbd the
[`download_get_basefiles()`](index.html#ferenda.DocumentRepository.download_get_basefiles) helper method makes this easy – just add the
`@`[`downloadmax()`](index.html#ferenda.decorators.downloadmax) decorator to the latter. This decorator reads the `downloadmax` configuration parameter (it also looks for a `FERENDA_DOWNLOADMAX` environment variable) and, if set,
limits the number of basefiles returned by
[`download_get_basefiles()`](index.html#ferenda.DocumentRepository.download_get_basefiles).
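Conceptually, such a decorator only needs to wrap the generator in `itertools.islice`. A rough sketch under those assumptions (this is not the real `ferenda.decorators.downloadmax`, and the config attribute access is simplified):
```
import functools
import itertools
import os

def downloadmax_sketch(f):
    # Illustrative stand-in for the real decorator.
    @functools.wraps(f)
    def wrapper(self, *args, **kwargs):
        limit = os.environ.get("FERENDA_DOWNLOADMAX") or getattr(self.config, "downloadmax", None)
        basefiles = f(self, *args, **kwargs)
        if limit:
            return itertools.islice(basefiles, int(limit))
        return basefiles
    return wrapper
```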
### Writing your own `parse` implementation[¶](#writing-your-own-parse-implementation)
The purpose of the
[`parse()`](index.html#ferenda.DocumentRepository.parse) method is to take the downloaded file(s) for a particular document and parse it into a structured document with proper metadata, both for the document as a whole, but also for individual sections of the document.
```
# In order to properly handle our RDF data, we need to tell
# ferenda which namespaces we'll be using. These will be available
# as rdflib.Namespace objects in the self.ns dict, which means you
# can state that something is eg. a dcterms:title by using
# self.ns['dcterms'].title. See
# :py:data:`~ferenda.DocumentRepository.namespaces`
namespaces = ('rdf', # always needed
'dcterms', # title, identifier, etc
'bibo', # Standard and DocumentPart classes, chapter prop
'xsd', # datatypes
'foaf', # rfcs are foaf:Documents for now
('rfc','http://example.org/ontology/rfc/')
)
from rdflib import Namespace
rdf_type = Namespace('http://example.org/ontology/rfc/').RFC
from ferenda.decorators import managedparsing
@managedparsing
def parse(self, doc):
# some very simple heuristic rules for determining
# what an individual paragraph is
def is_heading(p):
# If it's on a single line and it isn't indented with spaces
# it's probably a heading.
if p.count("\n") == 0 and not p.startswith(" "):
return True
def is_pagebreak(p):
# if it contains a form feed character, it represents a page break
return "\f" in p
# Parsing a document consists mainly of two parts:
# 1: First we parse the body of text and store it in doc.body
from ferenda.elements import Body, Preformatted, Title, Heading
from ferenda import Describer
reader = TextReader(self.store.downloaded_path(doc.basefile))
# First paragraph of an RFC is always a header block
header = reader.readparagraph()
# Preformatted is a ferenda.elements class representing a
# block of preformatted text. It is derived from the built-in
# list type, and must thus be initialized with an iterable, in
# this case a single-element list of strings. (Note: if you
# try to initialize it with a string, because strings are
# iterables as well, you'll end up with a list where each
# character in the string is an element, which is not what you
# want).
preheader = Preformatted([header])
# Doc.body is a ferenda.elements.Body class, which is also
# is derived from list, so it has (amongst others) the append
# method. We build our document by adding to this root
# element.
doc.body.append(preheader)
# Second paragraph is always the title, and we don't include
# this in the body of the document, since we'll add it to the
# medata -- once is enough
title = reader.readparagraph()
# After that, just iterate over the document and guess what
# everything is. TextReader.getiterator is useful for
# iterating through a text in other chunks than single lines
for para in reader.getiterator(reader.readparagraph):
if is_heading(para):
# Heading is yet another of these ferenda.elements
# classes.
doc.body.append(Heading([para]))
elif is_pagebreak(para):
# Just drop these remnants of a page-and-paper-based past
pass
else:
# If we don't know that it's something else, it's a
# preformatted section (the safest bet for RFC text).
doc.body.append(Preformatted([para]))
# 2: Then we create metadata for the document and store it in
# doc.meta (in this case using the convenience
# ferenda.Describer class).
desc = Describer(doc.meta, doc.uri)
# Set the rdf:type of the document
desc.rdftype(self.rdf_type)
# Set the title we've captured as the dcterms:title of the document and
# specify that it is in English
desc.value(self.ns['dcterms'].title, util.normalize_space(title), lang="en")
# Construct the dcterms:identifier (eg "RFC 6991") for this document from the basefile
desc.value(self.ns['dcterms'].identifier, "RFC " + doc.basefile)
# find and convert the publication date in the header to a datetime
# object, and set it as the dcterms:issued date for the document
re_date = re.compile("(January|February|March|April|May|June|July|August|September|October|November|December) (\d{4})").search
# This is a context manager that temporarily sets the system
# locale to the "C" locale in order to be able to use strptime
# with a string on the form "August 2013", even though the
# system may use another locale.
dt_match = re_date(header)
if dt_match:
with util.c_locale():
dt = datetime.strptime(re_date(header).group(0), "%B %Y")
pubdate = date(dt.year,dt.month,dt.day)
# Note that using some python types (cf. datetime.date)
# results in a datatyped RDF literal, ie in this case
# <http://localhost:8000/res/rfc/6994> dcterms:issued "2013-08-01"^^xsd:date
desc.value(self.ns['dcterms'].issued, pubdate)
# find any older RFCs that this document updates or obsoletes
obsoletes = re.search("^Obsoletes: ([\d+, ]+)", header, re.MULTILINE)
updates = re.search("^Updates: ([\d+, ]+)", header, re.MULTILINE)
# Find the category of this RFC, store it as dcterms:subject
cat_match = re.search("^Category: ([\w ]+?)( |$)", header, re.MULTILINE)
if cat_match:
desc.value(self.ns['dcterms'].subject, cat_match.group(1))
for predicate, matches in ((self.ns['rfc'].updates, updates),
(self.ns['rfc'].obsoletes, obsoletes)):
if matches is None:
continue
# add references between this document and these older rfcs,
# using either rfc:updates or rfc:obsoletes
for match in matches.group(1).strip().split(", "):
uri = self.canonical_uri(match)
# Note that this uses our own unofficial
# namespace/vocabulary
# http://example.org/ontology/rfc/
desc.rel(predicate, uri)
# And now we're done. We don't need to return anything as
# we've modified the Document object that was passed to
# us. The calling code will serialize this modified object to
# XHTML and RDF and store it on disk
```
This implementation builds a very simple object model of an RFC document, which is serialized to an XHTML1.1+RDFa document by the
[`managedparsing()`](index.html#ferenda.decorators.managedparsing) decorator. If you run it (by calling `ferenda-build.py rfc parse --all`) after having downloaded the rfc documents, the result will be a set of documents in
`data/rfc/parsed`, and a set of RDF files in
`data/rfc/distilled`. Take a look at them! The above might appear to be a lot of code, but it also accomplishes much. Furthermore, it should be obvious how to extend it, for instance to create more metadata from the fields in the header (such as capturing the RFC category, the publishing party, the authors etc) and better semantic representation of the body (such as marking up regular paragraphs,
line drawings, bulleted lists, definition lists, EBNF definitions and so on).
Next up, we’ll extend this implementation in two ways: First by representing the nested nature of the sections and subsections in the documents, secondly by finding and linking citations/references to other parts of the text or other RFCs in full.
Note
How does `./ferenda-build.py rfc parse --all` work? It calls
[`list_basefiles_for()`](index.html#ferenda.DocumentStore.list_basefiles_for) with the argument `parse`, which lists all downloaded files, and extracts the basefile for each of them, then calls parse for each in turn.
#### Handling document structure[¶](#handling-document-structure)
The main text of a RFC is structured into sections, which may contain subsections, which in turn can contain subsubsections. The start of each section is easy to identify, which means we can build a model of this structure by extending our parse method with relatively few lines:
```
from ferenda.elements import Section, Subsection, Subsubsection
# More heuristic rules: Section headers start at the beginning
# of a line and are numbered. Subsections and subsubsections
# have dotted numbers, optionally with a trailing period, ie
# '9.2.' or '11.3.1'
def is_section(p):
return re.match(r"\d+\.? +[A-Z]", p)
def is_subsection(p):
return re.match(r"\d+\.\d+\.? +[A-Z]", p)
def is_subsubsection(p):
return re.match(r"\d+\.\d+\.\d+\.? +[A-Z]", p)
def split_sectionheader(p):
# returns a tuple of title, ordinal, identifier
ordinal, title = p.split(" ",1)
ordinal = ordinal.strip(".")
return title.strip(), ordinal, "RFC %s, section %s" % (doc.basefile, ordinal)
# Use a list as a simple stack to keep track of the nesting
# depth of a document. Every time we create a Section,
# Subsection or Subsubsection object, we push it onto the
# stack (and clear the stack down to the appropriate nesting
# depth). Every time we create some other object, we append it
# to whatever object is at the top of the stack. As your rules
# for representing the nesting of structure become more
# complicated, you might want to use the
# :class:`~ferenda.FSMParser` class, which lets you define
# heuristic rules (recognizers), states and transitions, and
# takes care of putting your structure together.
stack = [doc.body]
for para in reader.getiterator(reader.readparagraph):
if is_section(para):
title, ordinal, identifier = split_sectionheader(para)
s = Section(title=title, ordinal=ordinal, identifier=identifier)
stack[1:] = [] # clear all but bottom element
stack[0].append(s) # add new section to body
stack.append(s) # push new section on top of stack
elif is_subsection(para):
title, ordinal, identifier = split_sectionheader(para)
s = Subsection(title=title, ordinal=ordinal, identifier=identifier)
stack[2:] = [] # clear all but bottom two elements
stack[1].append(s) # add new subsection to current section
stack.append(s)
elif is_subsubsection(para):
title, ordinal, identifier = split_sectionheader(para)
s = Subsubsection(title=title, ordinal=ordinal, identifier=identifier)
stack[3:] = [] # clear all but bottom three
stack[-1].append(s) # add new subsubsection to current subsection
stack.append(s)
elif is_heading(para):
stack[-1].append(Heading([para]))
elif is_pagebreak(para):
pass
else:
pre = Preformatted([para])
stack[-1].append(pre)
```
This enhances parse so that instead of outputting a single long list of elements directly under `body`:
```
<h1>2. Overview</h1>
<h1>2.1. Date, Location, and Participants</h1>
<pre>
The second ForCES interoperability test meeting was held by the IETF
ForCES Working Group on February 24-25, 2011...
</pre>
<h1>2.2. Testbed Configuration</h1>
<h1>2.2.1. Participants' Access</h1>
<pre>
NTT and ZJSU were physically present for the testing at the Internet
Technology Lab (ITL) at Zhejiang Gongshang University in China.
</pre>
```
…we have a properly nested element structure, as well as much more metadata represented in RDFa form:
```
<div class="section" property="dcterms:title" content=" Overview"
typeof="bibo:DocumentPart" about="http://localhost:8000/res/rfc/6984#S2.">
<span property="bibo:chapter" content="2."
about="http://localhost:8000/res/rfc/6984#S2."/>
<div class="subsection" property="dcterms:title" content=" Date, Location, and Participants"
typeof="bibo:DocumentPart" about="http://localhost:8000/res/rfc/6984#S2.1.">
<span property="bibo:chapter" content="2.1."
about="http://localhost:8000/res/rfc/6984#S2.1."/>
<pre>
The second ForCES interoperability test meeting was held by the
IETF ForCES Working Group on February 24-25, 2011...
</pre>
<div class="subsection" property="dcterms:title" content=" Testbed Configuration"
typeof="bibo:DocumentPart" about="http://localhost:8000/res/rfc/6984#S2.2.">
<span property="bibo:chapter" content="2.2."
about="http://localhost:8000/res/rfc/6984#S2.2."/>
<div class="subsubsection" property="dcterms:title" content=" Participants' Access"
typeof="bibo:DocumentPart" about="http://localhost:8000/res/rfc/6984#S2.2.1.">
<span content="2.2.1." about="http://localhost:8000/res/rfc/6984#S2.2.1."
property="bibo:chapter"/>
<pre>
NTT and ZJSU were physically present for the testing at the
Internet Technology Lab (ITL) at Zhejiang Gongshang
University in China...
</pre>
</div>
</div>
</div>
</div>
```
Note in particular that every section and subsection now has a defined URI (in the `@about` attribute). This will be useful later.
#### Handling citations in text[¶](#handling-citations-in-text)
References / citations in RFC text is often of the form `"are to be interpreted as described in [RFC2119]"` (for citations to other RFCs in whole), `"as described in Section 7.1"` (for citations to other parts of the current document) or `"Section 2.4 of [RFC2045] says"`
(for citations to a specific part in another document). We can define a simple grammar for these citations using [pyparsing](http://pyparsing.wikispaces.com/):
```
from pyparsing import Word, CaselessLiteral, nums
section_citation = (CaselessLiteral("section") + Word(nums+".").setResultsName("Sec")).setResultsName("SecRef")
rfc_citation = ("[RFC" + Word(nums).setResultsName("RFC") + "]").setResultsName("RFCRef")
section_rfc_citation = (section_citation + "of" + rfc_citation).setResultsName("SecRFCRef")
```
The above productions have named results for different parts of the citation, ie a citation of the form “Section 2.4 of [RFC2045] says”
will result in the named matches Sec = “2.4” and RFC = “2045”. The CitationParser class can be used to extract these matches into a dict,
which is then passed to a uri formatter function like:
```
def rfc_uriformatter(parts):
uri = ""
if 'RFC' in parts:
uri += self.canonical_uri(parts['RFC'].lstrip("0"))
if 'Sec' in parts:
uri += "#S" + parts['Sec']
return uri
```
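Before wiring this into a citation parser, it can help to sanity-check the grammar on its own (illustrative snippet only, reusing the productions defined above):
```
from pyparsing import Word, CaselessLiteral, nums

section_citation = (CaselessLiteral("section") + Word(nums + ".").setResultsName("Sec")).setResultsName("SecRef")
rfc_citation = ("[RFC" + Word(nums).setResultsName("RFC") + "]").setResultsName("RFCRef")
section_rfc_citation = (section_citation + "of" + rfc_citation).setResultsName("SecRFCRef")

res = section_rfc_citation.parseString("Section 2.4 of [RFC2045]")
print(res["Sec"], res["RFC"])  # -> 2.4 2045
```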
And to initialize a citation parser and have it run over the entire structured text, finding citations and formatting them into URIs as we go along, just use:
```
from ferenda import CitationParser, URIFormatter
citparser = CitationParser(section_rfc_citation,
section_citation,
rfc_citation)
citparser.set_formatter(URIFormatter(("SecRFCRef", rfc_uriformatter),
("SecRef", rfc_uriformatter),
("RFCRef", rfc_uriformatter)))
citparser.parse_recursive(doc.body)
```
The result of these lines is that the following block of plain text:
```
<pre>
The behavior recommended in Section 2.5 is in line with generic error
treatment during the IKE_SA_INIT exchange, per Section 2.21.1 of
[RFC5996].
</pre>
```
…transform into this hyperlinked text:
```
<pre>
The behavior recommended in <a href="#S2.5"
rel="dcterms:references">Section 2.5</a> is in line with generic
error treatment during the IKE_SA_INIT exchange, per <a
href="http://localhost:8000/res/rfc/5996#S2.21.1"
rel="dcterms:references">Section 2.21.1 of [RFC5996]</a>.
</pre>
```
Note
The uri formatting function uses
[`canonical_uri()`](index.html#ferenda.DocumentRepository.canonical_uri) to create the base URI for each external reference. Proper design of the URIs you’ll be using is a big topic, and you should think through what URIs you want to use for your documents and their parts. Ferenda provides a default implementation to create URIs from document properties, but you might want to override this.
The parse step is probably the part of your application which you’ll spend the most time developing. You can start simple (like above) and then incrementally improve the end result by processing more metadata,
modelling the semantic document structure better, and handling in-line references in text more correctly. See also [Building structured documents](index.html#document-elementclasses),
[Parsing document structure](index.html#document-fsmparser) and [Citation parsing](index.html#document-citationparsing).
### Calling [`relate()`](index.html#ferenda.DocumentRepository.relate)[¶](#calling-relate)
The purpose of the [`relate()`](index.html#ferenda.DocumentRepository.relate)
method is to make sure that all document data and metadata is properly stored and indexed, so that it can be easily retrieved in later steps. This consists of three steps: Loading all RDF metadata into a triplestore, loading all document content into a full text index, and making note of how documents refer to each other.
Since the output of parse is well structured XHTML+RDFa documents that, on the surface level, do not differ much from docrepo to docrepo, you should not have to change anything about this step.
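Run from the command line, this step follows the same pattern as the others:
```
$ ./ferenda-build.py rfc relate --all
```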
Note
You might want to configure whether to load everything into a fulltext index – this operation takes a lot of time, and this index is not even used when creating a static site. You do this by setting `fulltextindex` to `False`, either in ferenda.ini or on the command line:
```
./ferenda-build.py rfc relate --all --fulltextindex=False
```
### Calling [`makeresources()`](index.html#ferenda.manager.makeresources)[¶](#calling-makeresources)
This method needs to run at some point before generate and the rest of the methods. Unlike the other methods described above and below, which are run for one docrepo at a time, this method is run for the project as a whole (that is why it is a function in
[`ferenda.manager`](index.html#module-ferenda.manager) instead of a
[`DocumentRepository`](index.html#ferenda.DocumentRepository) method). It constructs a set of site-wide resources such as minified js and css files, and configuration for the site-wide XSLT template. It is easy to run using the command-line tool:
```
$ ./ferenda-build.py all makeresources
```
If you use the API, you need to provide a list of instances of the docrepos that you’re using, and the path to where generated resources should be stored:
```
from ferenda.manager import makeresources
config = {'datadir':'mydata'}
myrepos = [RFC(**config), W3C(**config)]
makeresources(myrepos,'mydata/myresources')
```
### Customizing [`generate()`](index.html#ferenda.DocumentRepository.generate)[¶](#customizing-generate)
The purpose of the
[`generate()`](index.html#ferenda.DocumentRepository.generate) method is to create new browser-ready HTML files from the structured XHTML+RDFa files created by
[`parse()`](index.html#ferenda.DocumentRepository.parse). Unlike the files created by [`parse()`](index.html#ferenda.DocumentRepository.parse), these files will contain site-branded headers, footers, navigation menus and such. They will also contain related content not directly found in the parsed files themselves: Sectioned documents will have an automatically-generated table of contents, and other documents that refer to a particular document will be listed in a sidebar in that document. If the references are made to individual sections, there will be sidebars for all such referenced sections.
The default implementation does this in two steps. In the first,
[`prep_annotation_file()`](index.html#ferenda.DocumentRepository.prep_annotation_file)
fetches metadata about other documents that relates to the document to be generated into an *annotation file*. In the second,
[`Transformer`](index.html#ferenda.Transformer) runs an XSLT transformation on the source file (which sources the annotation file and a configuration file created by
[`makeresources()`](index.html#ferenda.manager.makeresources)) in order to create the browser-ready HTML file.
You should not need to override the general
[`generate()`](index.html#ferenda.DocumentRepository.generate) method, but you might want to control how the annotation file and the XSLT transformation is done.
#### Getting annotations[¶](#getting-annotations)
The [`prep_annotation_file()`](index.html#ferenda.DocumentRepository.prep_annotation_file) step is driven by a [SPARQL construct query](http://www.w3.org/TR/rdf-sparql-query/#construct). The default query fetches metadata about every other document that refers to the document (or sections thereof) you’re generating, using the
`dcterms:references` predicate. By setting the class variable
[`sparql_annotations`](index.html#ferenda.DocumentRepository.sparql_annotations) to the file name of SPARQL query file of your choice, you can override this query.
Since our metadata contains more specialized statements on how documents refer to each other, in the form of `rfc:updates` and
`rfc:obsoletes` statements, we want a query that’ll fetch this metadata as well. When we query for metadata about a particular document, we want to know if there is any other document that updates or obsoletes this document. Using a CONSTRUCT query, we create
`rfc:isUpdatedBy` and `rfc:isObsoletedBy` references to such documents.
```
sparql_annotations = "rfc-annotations.rq"
```
The contents of `rfc-annotations.rq`, placed in the current directory, should be:
```
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX bibo: <http://purl.org/ontology/bibo/>
PREFIX rfc: <http://example.org/ontology/rfc/>

CONSTRUCT {?s ?p ?o .
<%(uri)s> rfc:isObsoletedBy ?obsoleter .
<%(uri)s> rfc:isUpdatedBy ?updater .
<%(uri)s> dcterms:isReferencedBy ?referencer .
}
WHERE
{
# get all literal metadata where the document is the subject
{ ?s ?p ?o .
# FILTER(strstarts(str(?s), "%(uri)s"))
FILTER(?s = <%(uri)s> && !isUri(?o))
}
UNION
# get all metadata (except unrelated dcterms:references) about
# resources that dcterms:references the document or any of its
# sub-resources.
{ ?s dcterms:references+ <%(uri)s> ;
?p ?o .
BIND(?s as ?referencer)
FILTER(?p != dcterms:references || strstarts(str(?o), "%(uri)s"))
}
UNION
# get all metadata (except dcterms:references) about any resource that
# rfc:updates or rfc:obsoletes the document
{ ?s ?x <%(uri)s> ;
?p ?o .
FILTER(?x in (rfc:updates, rfc:obsoletes) && ?p != dcterms:references)
}
# finally, bind obsoleting and updating resources to new variables for
# use in the CONSTRUCT clause
UNION { ?obsoleter rfc:obsoletes <%(uri)s> . }
UNION { ?updater rfc:updates <%(uri)s> . }
}
```
Note that `%(uri)s` will be replaced with the URI for the document we’re querying about.
Now, when querying the triplestore for metadata about RFC 6021, the
(abbreviated) result is:
```
<graph xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rfc="http://example.org/ontology/rfc/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<resource uri="http://localhost:8000/res/rfc/6021">
<rfc:isObsoletedBy ref="http://localhost:8000/res/rfc/6991"/>
<dcterms:published fmt="datatype">
<date xmlns="http://www.w3.org/2001/XMLSchema#">2010-10-01</date>
</dcterms:published>
<dcterms:title xml:lang="en">Common YANG Data Types</dcterms:title>
</resource>
<resource uri="http://localhost:8000/res/rfc/6991">
<a><rfc:RFC/></a>
<rfc:obsoletes ref="http://localhost:8000/res/rfc/6021"/>
<dcterms:published fmt="datatype">
<date xmlns="http://www.w3.org/2001/XMLSchema#">2013-07-01</date>
</dcterms:published>
<dcterms:title xml:lang="en">Common YANG Data Types</dcterms:title>
</resource>
</graph>
```
Note
You can find this file in `data/rfc/annotations/6021.grit.xml`. It’s in the [Grit](http://code.google.com/p/oort/wiki/Grit) format for easy inclusion in XSLT processing.
Even if you’re not familiar with the format, or with RDF in general,
you can see that it contains information about two resources: first the document we’ve queried about (RFC 6021), then the document that obsoletes the same document (RFC 6991).
Note
If you’re coming from a relational database/SQL background, it can be a little difficult to come to grips with graph databases and SPARQL. The book “Learning SPARQL” by <NAME> is highly recommended.
#### Transforming to HTML[¶](#transforming-to-html)
The [`Transformer`](index.html#ferenda.Transformer) step is driven by a XSLT stylesheet. The default stylesheet uses a site-wide configuration file
(created by [`makeresources()`](index.html#ferenda.manager.makeresources)) for things like site name and top-level navigation, and lists the document content,
section by section, alongside other documents that contain references (in the form of `dcterms:references`) for each section. The SPARQL query and the XSLT stylesheet often go hand in hand – if your stylesheet needs a certain piece of data, the query must be adjusted to fetch it. By setting the class variable
[`xslt_template`](index.html#ferenda.DocumentRepository.xslt_template) in the same way as you did for the SPARQL query, you can override the default.
```
xslt_template = "rfc.xsl"
```
The contents of `rfc.xsl`, placed in the current directory, should be:
```
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0"
xmlns:xhtml="http://www.w3.org/1999/xhtml"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rfc="http://example.org/ontology/rfc/"
xml:space="preserve"
exclude-result-prefixes="xhtml rdf">
<xsl:include href="base.xsl"/>
<!-- Implementations of templates called by base.xsl -->
<xsl:template name="headtitle"><xsl:value-of select="//xhtml:title"/> | <xsl:value-of select="$configuration/sitename"/></xsl:template>
<xsl:template name="metarobots"/>
<xsl:template name="linkalternate"/>
<xsl:template name="headmetadata"/>
<xsl:template name="bodyclass">rfc</xsl:template>
<xsl:template name="pagetitle">
<h1><xsl:value-of select="../xhtml:head/xhtml:title"/></h1>
</xsl:template>
<xsl:template match="xhtml:a"><a href="{@href}"><xsl:value-of select="."/></a></xsl:template>
<xsl:template match="xhtml:pre[1]">
<pre><xsl:apply-templates/>
</pre>
<xsl:if test="count(ancestor::*) = 2">
<xsl:call-template name="aside-annotations">
<xsl:with-param name="uri" select="../@about"/>
</xsl:call-template>
</xsl:if>
</xsl:template>
<!-- everything that has an @about attribute, i.e. _is_ something
(with a URI) gets a <section> with an <aside> for inbound links etc -->
<xsl:template match="xhtml:div[@about]" <div class="section-wrapper" about="{@about}"><!-- needed? -->
<section id="{substring-after(@about,'#')}">
<xsl:variable name="sectionheading"><xsl:if test="xhtml:span[@property='bibo:chapter']/@content"><xsl:value-of select="xhtml:span[@property='bibo:chapter']/@content"/>. </xsl:if><xsl:value-of select="@content"/></xsl:variable>
<xsl:if test="count(ancestor::*) = 2">
<h2><xsl:value-of select="$sectionheading"/></h2>
</xsl:if>
<xsl:if test="count(ancestor::*) = 3">
<h3><xsl:value-of select="$sectionheading"/></h3>
</xsl:if>
<xsl:if test="count(ancestor::*) = 4">
<h4><xsl:value-of select="$sectionheading"/></h4>
</xsl:if>
<xsl:apply-templates select="*[not(@about)]"/>
</section>
<xsl:call-template name="aside-annotations">
<xsl:with-param name="uri" select="@about"/>
</xsl:call-template>
</div>
<xsl:apply-templates select="xhtml:div[@about]"/>
</xsl:template>
<!-- remove spans whose only purpose is to contain RDFa data -->
<xsl:template match="xhtml:span[@property and @content and not(text())]"/ <!-- construct the side navigation -->
<xsl:template match="xhtml:div[@about]" mode="toc">
<li><a href="#{substring-after(@about,'#')}"><xsl:if test="xhtml:span/@content"><xsl:value-of select="xhtml:span[@property='bibo:chapter']/@content"/>. </xsl:if><xsl:value-of select="@content"/></a><xsl:if test="xhtml:div[@about]">
<ul><xsl:apply-templates mode="toc"/></ul>
</xsl:if></li>
</xsl:template>
<!-- named template called from other templates which match
xhtml:div[@about] and pre[1] above, and which creates -->
<xsl:template name="aside-annotations">
<xsl:param name="uri"/>
<xsl:if test="$annotations/resource[@uri=$uri]/dcterms:isReferencedBy">
<aside class="annotations">
<h2>References to <xsl:value-of select="$annotations/resource[@uri=$uri]/dcterms:identifier"/></h2>
<xsl:for-each select="$annotations/resource[@uri=$uri]/rfc:isObsoletedBy">
<xsl:variable name="referencing" select="@ref"/>
Obsoleted by
<a href="{@ref}">
<xsl:value-of select="$annotations/resource[@uri=$referencing]/dcterms:identifier"/>
</a><br/>
</xsl:for-each>
<xsl:for-each select="$annotations/resource[@uri=$uri]/rfc:isUpdatedBy">
<xsl:variable name="referencing" select="@ref"/>
Updated by
<a href="{@ref}">
<xsl:value-of select="$annotations/resource[@uri=$referencing]/dcterms:identifier"/>
</a><br/>
</xsl:for-each>
<xsl:for-each select="$annotations/resource[@uri=$uri]/dcterms:isReferencedBy">
<xsl:variable name="referencing" select="@ref"/>
Referenced by
<a href="{@ref}">
<xsl:value-of select="$annotations/resource[@uri=$referencing]/dcterms:identifier"/>
</a><br/>
</xsl:for-each>
</aside>
</xsl:if>
</xsl:template>
<!-- default template: translate everything from whatever namespace
it's in (usually the XHTML1.1 NS) into the default namespace
-->
<xsl:template match="*"><xsl:element name="{local-name(.)}"><xsl:apply-templates select="node()"/></xsl:element></xsl:template <!-- default template for toc handling: do nothing -->
<xsl:template match="@*|node()" mode="toc"/</xsl:stylesheet>
```
This XSLT stylesheet depends on `base.xsl` (which resides in
`ferenda/res/xsl` in the source distribution of ferenda – take a look if you want to know how everything fits together). The main responsibility of this stylesheet is to format individual elements of the document body.
`base.xsl` takes care of the main chrome of the page, and it has a default implementation (that basically transforms everything from XHTML1.1 to HTML5, and removes some RDFa-only elements). It also loads and provides the annotation file in the global variable
$annotations. The above XSLT stylesheet uses this to fetch information about referencing documents. In particular, when processing an older document, it lists whether later documents have updated or obsoleted it
(see the named template `aside-annotations`).
You might notice that this XSLT template flattens the nested structure of sections which we spent so much effort to create in the parse step. This is to make it easier to put up the aside boxes next to each part of the document, independent of the nesting level.
Note
While both the SPARQL query and the XSLT stylesheet might look complicated (and unless you’re an RDF/XSL expert, they are…), most of the time you can get a good result using the default generic query and stylesheet.
### Customizing [`toc()`](index.html#ferenda.DocumentRepository.toc)[¶](#customizing-toc)
The purpose of the [`toc()`](index.html#ferenda.DocumentRepository.toc)
method is to create a set of pages that act as tables of contents for all documents in your docrepo. For large document collections there are often several different ways of creating such tables, eg. sorted by title, publication date, document status, author and similar. The pages use the same site branding, headers, footers, navigation menus etc. used by [`generate()`](index.html#ferenda.DocumentRepository.generate).
The default implementation is generic enough to handle most cases, but you’ll have to override other methods which it calls, primarily
[`facets()`](index.html#ferenda.DocumentRepository.facets) and
[`toc_item()`](index.html#ferenda.DocumentRepository.toc_item). These methods depend on the metadata you’ve created by your parse implementation,
but in the simplest cases it’s enough to specify that you want one set of pages organized by the `dcterms:title` of each document
(alphabetically sorted) and another by `dcterms:issued`
(numerically/calendarically sorted). The default implementation does exactly this.
In our case, we wish to create four kinds of sorting: By identifier
(RFC number), by date of issue, by title and by category. These map directly to four kinds of metadata that we’ve stored about each and every document. By overriding
[`facets()`](index.html#ferenda.DocumentRepository.facets) we can specify these four
*facets*, aspects of documents used for grouping and sorting.
```
def facets(self):
from ferenda import Facet
return [Facet(self.ns['dcterms'].title),
Facet(self.ns['dcterms'].issued),
Facet(self.ns['dcterms'].subject),
Facet(self.ns['dcterms'].identifier)]
```
After running toc with this change, you can see that three sets of index pages are created. By default, the `dcterms:identifier`
predicate isn’t used for the TOC pages, as it’s often derived from the document title. Furthermore, you’ll get some error messages along the lines of “Best Current Practice does not look like a valid URI”, which is because the `dcterms:subject` predicate normally should have URIs as values, and we are using plain string literals.
We can fix both these problems by customizing our facet objects a little. We specify that we wish to use `dcterms:identifier` as a TOC facet, and provide a simple method to group RFCs by their identifier in groups of 100, ie one page for RFC 1-99, another for RFC 100-199,
and so on. We also specify that we expect our `dcterms:subject`
values to be plain strings.
```
def facets(self):
def select_rfcnum(row, binding, resource_graph):
# "RFC 6998" -> "6900"
return row[binding][4:-2] + "00"
from ferenda import Facet
return [Facet(self.ns['dcterms'].title),
Facet(self.ns['dcterms'].issued),
Facet(self.ns['dcterms'].subject,
selector=Facet.defaultselector,
                      identificator=Facet.defaultidentificator,
key=Facet.defaultselector),
Facet(self.ns['dcterms'].identifier,
use_for_toc=True,
selector=select_rfcnum,
pagetitle="RFC %(selected)s00-%(selected)s99")]
```
The above code gives some example of how [`Facet`](index.html#ferenda.Facet)
objects can be configured. However, a [`Facet`](index.html#ferenda.Facet)
object does not control how each individual document is listed on a toc page. The default formatting just lists the title of the document,
linked to the document in question. For RFCs, which are mainly referenced by their RFC number rather than their title, we’d like to add the RFC number to this display. This is done by overriding
[`toc_item()`](index.html#ferenda.DocumentRepository.toc_item).
```
def toc_item(self, binding, row):
from ferenda.elements import Link
return [row['dcterms_identifier'] + ": ",
Link(row['dcterms_title'],
uri=row['uri'])]
```
See also [Customizing the table(s) of content](index.html#document-toc) and [Grouping documents with facets](index.html#document-facets).
### Customizing [`news()`](index.html#ferenda.DocumentRepository.news)[¶](#customizing-news)
The purpose of [`news()`](index.html#ferenda.DocumentRepository.news),
the next to final step, is to provide a set of news feeds for your document repository.
The default implementation gives you one single news feed for all documents in your docrepo, and creates both browser-ready HTML (using the same headers, footers, navigation menus etc used by
[`generate()`](index.html#ferenda.DocumentRepository.generate)) and [Atom syndication format](http://www.ietf.org/rfc/rfc4287.txt) files.
The facets you’ve defined for your docrepo are re-used to create news feeds for eg. all documents published by a particular entity, or all documents of a certain type. Only facet objects which have the
`use_for_feed` property set to a truthy value are used to construct newsfeeds.
In this example, we adjust the facet based on `dcterms:subject` so that it can be used for newsfeed generation.
```
def facets(self):
def select_rfcnum(row, binding, resource_graph):
# "RFC 6998" -> "6900"
return row[binding][4:-2] + "00"
from ferenda import Facet
return [Facet(self.ns['dcterms'].title),
Facet(self.ns['dcterms'].issued),
Facet(self.ns['dcterms'].subject,
selector=Facet.defaultselector,
identificator=Facet.defaultidentificator,
key=Facet.defaultselector,
use_for_feed=True),
Facet(self.ns['dcterms'].identifier,
use_for_toc=True,
selector=select_rfcnum,
pagetitle="RFC %(selected)s00-%(selected)s99")]
```
When running `news`, this will create five different atom feeds
(which are mirrored as HTML pages) under `data/rfc/news`: One containing all documents, and four others that contain documents in a particular category (eg. having a particular `dcterms:subject` value).
Note
As you can see, the resulting HTML pages are a little rough around the edges. Also, there isn’t currently any way of discovering the Atom feeds or HTML pages from the main site – you need to know the URLs. This will all be fixed in due time.
See also [Customizing the news feeds](index.html#document-news).
### Customizing [`frontpage()`](index.html#ferenda.manager.frontpage)[¶](#customizing-frontpage)
Finally, [`frontpage()`](index.html#ferenda.manager.frontpage) creates a front page for your entire site with content from the different docrepos. Each docrepo’s [`frontpage_content()`](index.html#ferenda.DocumentRepository.frontpage_content) method will be called, and should return an XHTML fragment with information about the repository and its content. Below is a simple example that uses functionality we’ve used in other contexts to create a list of the five latest documents, as well as a total count of documents.
```
def frontpage_content(self, primary=False):
from rdflib import URIRef, Graph
from itertools import islice
items = ""
for entry in islice(self.news_entries(),5):
graph = Graph()
with self.store.open_distilled(entry.basefile) as fp:
graph.parse(data=fp.read())
data = {'identifier': graph.value(URIRef(entry.id), self.ns['dcterms'].identifier).toPython(),
'uri': entry.id,
'title': entry.title}
items += '<li>%(identifier)s <a href="%(uri)s">%(title)s</a></li>' % data
return ("""<h2><a href="%(uri)s">Request for comments</a></h2>
<p>A complete archive of RFCs in Linked Data form. Contains %(doccount)s documents.</p>
<p>Latest 5 documents:</p>
<ul>
%(items)s
</ul>""" % {'uri':self.dataset_uri(),
'items': items,
'doccount': len(list(self.store.list_basefiles_for("_postgenerate")))})
```
### Next steps[¶](#next-steps)
When you have written code and customized downloading, parsing and all the other steps, you’ll want to run all these steps for all your docrepos in a single command by using the special value `all` for docrepo, and again `all` for action:
```
./ferenda-build.py all all
```
By now, you should have a basic idea about the key concepts of ferenda. In the next section, [Key concepts](index.html#document-keyconcepts), we’ll explore them further.
Key concepts[¶](#key-concepts)
---
### Project[¶](#project)
A collection of docrepos and configuration that is used to make a useful web site. The first step in creating a project is running
`ferenda-setup <projectname>`.
A project is primarily defined by its configuration file at
`<projectname>/ferenda.ini`, which specifies which docrepos are used, and settings for them as well as settings for the entire project.
A project is managed using the `ferenda-build.py` tool.
If using the API instead of these command line tools, there is no concept of a project except for what your code provides. Your client code is responsible for creating the docrepo classes and providing them with proper settings. These can be loaded from a
`ferenda.ini`-style file, be hard-coded, or handled in any other way you see fit.
Note
Ferenda uses the `layeredconfig` module internally to handle all settings.
### Configuration[¶](#configuration)
A ferenda docrepo object can be configured in two ways - either when creating the object, eg:
```
d = DocumentSource(datadir="mydata", loglevel="DEBUG",force=True)
```
Note
Parameters that are not provided when creating the object default to the built-in configuration values (see below)
Or it can be configured using the `LayeredConfig`
class, which takes configuration data from three places:
* built-in configuration values (provided by
[`get_default_options()`](index.html#ferenda.DocumentRepository.get_default_options))
* values from a configuration file (normally `ferenda.ini`, placed alongside `ferenda-build.py`)
* command-line parameters, eg `--force --datadir=mydata`
```
d = DocumentSource()
d.config = LayeredConfig(defaults=d.get_default_options(),
inifile="ferenda.ini",
commandline=sys.argv)
```
(This is what `ferenda-build.py` does behind the scenes)
Configuration values from the configuration file override built-in configuration values, and command line parameters override configuration file values.
By setting the `config` property, you override any parameters provided when creating the object.
These are the normal configuration options:
| option | description | default |
| --- | --- | --- |
| datadir | Directory for all downloaded/parsed etc. files | ‘data’ |
| patchdir | Directory containing patch files used by patch_if_needed | ‘patches’ |
| parseforce | Whether to re-parse downloaded files, even if resulting XHTML1.1 files exist and are newer than downloaded files | False |
| compress | Whether to compress intermediate files. Can be either an empty string (don’t compress) or ‘bz2’ (compress using bz2). | ‘’ |
| serializejson | Whether to serialize document data as a JSON document in the parse step. | False |
| generateforce | Whether to re-generate browser-ready HTML5 files, even if they exist and are newer than all dependencies | False |
| force | If True, overrides both parseforce and generateforce. | False |
| fsmdebug | Whether to display debugging information from FSMParser | False |
| refresh | Whether to re-download all files even if previously downloaded. | False |
| lastdownload | The datetime when this repo was last downloaded (stored in conf file) | None |
| downloadmax | Maximum number of documents to download (None means download all of them). | None |
| conditionalget | Whether to use Conditional GET (through the If-modified-since and/or If-none-match headers) | True |
| url | The basic URL for the created site, used as template for all managed resources in a docrepo (see `canonical_uri()`). | ‘<http://localhost:8000/>’ |
| fulltextindex | Whether to index all text in a fulltext search engine. Note: This can take a lot of time. | True |
| useragent | The user-agent used with any external HTTP requests. Please change this into something containing your contact info. | ‘ferenda-bot’ |
| storetype | Any of the supported types: ‘SQLITE’, ‘SLEEPYCAT’, ‘SESAME’ or ‘FUSEKI’. See [Triple stores](index.html#external-triplestore). | ‘SQLITE’ |
| storelocation | The file path or URL to the triple store, dependent on the storetype | ‘data/ferenda.sqlite’ |
| storerepository | The repository/database to use within the given triple store (if applicable) | ‘ferenda’ |
| indextype | Any of the supported types: ‘WHOOSH’ or ‘ELASTICSEARCH’. See [Fulltext search engines](index.html#external-fulltext). | ‘WHOOSH’ |
| indexlocation | The location of the fulltext index | ‘data/whooshindex’ |
| republishsource | Whether the Atom files should contain links to the original, unparsed, source documents | False |
| combineresources | Whether to combine and minify all css and js files into a single file each | False |
| cssfiles | A list of all required css files | [‘<http://fonts.googleapis.com/css?family=Raleway:200,100>’, ‘res/css/normalize.css’, ‘res/css/main.css’, ‘res/css/ferenda.css’] |
| jsfiles | A list of all required js files | [‘res/js/jquery-1.9.0.js’, ‘res/js/modernizr-2.6.2-respond-1.1.0.min.js’, ‘res/js/ferenda.js’] |
| staticsite | Whether to generate static HTML files suitable for offline usage (removes search and uses relative file paths instead of canonical URIs) | False |
| legacyapi | Whether the REST API should provide a simpler API for legacy clients. See [The WSGI app](index.html#document-wsgi). | False |
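As a rough sketch, these options are available as plain attributes on the docrepo’s `config` object (a `LayeredConfig` instance) inside any [`DocumentRepository`](index.html#ferenda.DocumentRepository) method; `self.log` is assumed here to be the docrepo’s standard logger:

```
def download(self, basefile=None):
    # minimal sketch: read options set via ferenda.ini, the command line
    # or the built-in defaults
    if not self.config.refresh and self.config.lastdownload:
        self.log.info("Last download was at %s, skipping" %
                      self.config.lastdownload)
        return False
```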
### DocumentRepository[¶](#documentrepository)
A document repository (docrepo for short) is a class that handles all aspects of a document collection: Downloading the documents (or acquiring them in some other way), parsing them into structured documents, and then re-generating HTML documents with added niceties,
for example references from documents in other docrepos.
You add support for a new collection of documents by subclassing
[`DocumentRepository`](index.html#ferenda.DocumentRepository). For more details, see [Creating your own document repositories](index.html#document-createdocrepos)
### Document[¶](#document)
A [`Document`](index.html#ferenda.Document) is the main unit of information in Ferenda. A document is primarily represented in serialized form as an XHTML 1.1 file with embedded metadata in RDFa format, and in code by the [`Document`](index.html#ferenda.Document) class. The class has five properties:
* `meta` (a RDFLib [`Graph`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph))
* `body` (a tree of building blocks, normally instances of
[`ferenda.elements`](index.html#module-ferenda.elements) classes, representing the structure and content of the document)
* `lang` (an [IETF language](http://en.wikipedia.org/wiki/IETF_language_tag) tag, eg `sv` or
`en-GB`)
* `uri` (a string representing the canonical URI for this document)
* `basefile` (a short internal id)
The method [`render_xhtml()`](index.html#ferenda.DocumentRepository.render_xhtml) (which is called automatically, as long as your `parse` method uses the
[`managedparsing()`](index.html#ferenda.decorators.managedparsing) decorator) renders a
[`Document`](index.html#ferenda.Document) object into an XHTML 1.1+RDFa document.
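A small illustrative sketch (with hypothetical values) of the five properties:

```
from rdflib import Graph
from ferenda import Document
from ferenda.elements import Body, Paragraph

# hypothetical values, mirroring the RFC example used in this document
doc = Document(basefile="4711")
doc.uri = "http://localhost:8000/res/rfc/4711"
doc.lang = "en"
doc.meta = Graph()
doc.body = Body([Paragraph(["The text of the document goes here."])])
```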
### Identifiers[¶](#identifiers)
Documents, and parts of documents, in ferenda have a couple of different identifiers, and it’s useful to understand the difference and relation between them.
* `basefile`: The *internal id* for a document. This is internal to the document repository and is used as the base for the filenames of the stored files. The basefile isn’t totally random and is expected to have some relationship with a human-readable identifier for the document. As an example from the RFC docrepo, the basefile for RFC 1147 would simply be “1147”. By the rules encoded in
[`DocumentStore`](index.html#ferenda.DocumentStore), this results in the downloaded file `rfc/downloads/1147.txt`, the parsed file
`rfc/parsed/1147.xhtml` and the generated file
`rfc/generated/1147.html`. Only documents themselves, not parts of documents, have basefile identifiers.
* `uri`: The *canonical URI* for a document **or** a part of a document (generally speaking, a *resource*). This identifier is used when storing data related to the resource in a triple store and a fulltext search engine, and is also used as the external URL for the document when republishing (see [The WSGI app](index.html#document-wsgi) and also
[Document URI](index.html#parsing-uri)). URIs for documents can be set by setting the
`uri` property of the Document object. URIs for parts of documents are set by setting the `uri` property on any
[`elements`](index.html#module-ferenda.elements) based object in the body tree. When rendering the document into XHTML, render_xhtml creates RDFa statements based on this property and the `meta` property.
* `dcterms:identifier`: The *human readable* identifier for a document or a part of a document. If the document has an established human-readable identifier, such as “RFC 1147” or “2003/98/EC” (The EU directive on the re-use of public sector information), the dcterms:identifier is used for this. Unlike `basefile` and `uri`,
this identifier isn’t set directly as a property on an object. Instead, you add a triple with `dcterms:identifier` as the predicate to the object’s `meta` property, see [Parsing and representing document metadata](index.html#document-docmetadata)
and also [DCMI Terms](http://dublincore.org/documents/2012/06/14/dcmi-terms/#terms-identifier).
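To summarize with a small sketch (values taken from the RFC 1147 example above; `DCTERMS` is rdflib’s Dublin Core namespace):

```
from rdflib import URIRef
from rdflib.namespace import DCTERMS

doc.basefile   # "1147" -- internal id, used for file names
doc.uri        # "http://localhost:8000/res/rfc/1147" -- canonical URI
# the human-readable identifier is a triple in the metadata graph:
doc.meta.value(URIRef(doc.uri), DCTERMS.identifier)  # Literal("RFC 1147")
```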
### DocumentEntry[¶](#documententry)
Apart from information about what a document contains, there is also information about how it has been handled, such as when a document was first downloaded or updated from a remote source, the URL from where it came, and when it was made available through Ferenda. This information is encapsulated in the [`DocumentEntry`](index.html#ferenda.DocumentEntry)
class. Such objects are created and updated by various methods in
[`DocumentRepository`](index.html#ferenda.DocumentRepository). The objects are persisted to JSON files, stored alongside the documents themselves, and are used by the [`news()`](index.html#ferenda.DocumentRepository.news) method in order to create valid Atom feeds.
### File storage[¶](#file-storage)
During the course of processing, data about each individual document is stored in many different files in various formats. The
[`DocumentStore`](index.html#ferenda.DocumentStore) class handles most aspects of this file handling. A configured DocumentStore object is available as the
`store` property on any DocumentRepository object.
Example: If a created docrepo object `d` has the alias `foo`, and handles a document with the basefile identifier `bar`, data about the document is then stored:
* When downloaded, the original data as retrieved from the remote server, is stored as `data/foo/downloaded/bar.html`, as determined by `d.store.`[`downloaded_path()`](index.html#ferenda.DocumentStore.downloaded_path)
* At the same time, a DocumentEntry object is serialized as
`data/foo/entries/bar.json`, as determined by
`d.store.`[`documententry_path()`](index.html#ferenda.DocumentStore.documententry_path)
* If the downloaded source needs to be transformed into some intermediate format before parsing (which is the case for eg. PDF or Word documents), the intermediate data is stored as
`data/foo/intermediate/bar.xml`, as determined by
`d.store.`[`intermediate_path()`](index.html#ferenda.DocumentStore.intermediate_path)
* When the downloaded data has been parsed, the parsed XHTML+RDFa document is stored as `data/foo/parsed/bar.xhtml`, as determined by `d.store.`[`parsed_path()`](index.html#ferenda.DocumentStore.parsed_path)
* From the parsed document, an RDF/XML file containing all RDFa statements from the parsed file is automatically distilled and stored as `data/foo/distilled/bar.rdf`, as determined by `d.store.`[`distilled_path()`](index.html#ferenda.DocumentStore.distilled_path)
* During the `relate` step, all documents which are referred to by any other document are marked as dependencies of that document. If the `bar` document is dependent on another document, then this dependency is recorded in a dependency file stored at
`data/foo/deps/bar.txt`, as determined by
`d.store.`[`dependencies_path()`](index.html#ferenda.DocumentStore.dependencies_path).
* Just prior to the generation of browser-ready HTML5 files, all metadata in the system as a whole which is relevant to `bar` is serialized in an annotation file in GRIT/XML format at
`data/foo/annotations/bar.grit.txt`, as determined by
`d.store.`[`annotation_path()`](index.html#ferenda.DocumentStore.annotation_path).
* Finally, the generated HTML5 file is created at
`data/foo/generated/bar.html`, as determined by
`d.store.`[`generated_path()`](index.html#ferenda.DocumentStore.generated_path). (This step also updates the serialized DocumentEntry object described above)
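As a sketch, the mapping described above can be checked directly on the store object (return values as per the list above):

```
d.store.downloaded_path("bar")     # "data/foo/downloaded/bar.html"
d.store.documententry_path("bar")  # "data/foo/entries/bar.json"
d.store.parsed_path("bar")         # "data/foo/parsed/bar.xhtml"
d.store.distilled_path("bar")      # "data/foo/distilled/bar.rdf"
d.store.generated_path("bar")      # "data/foo/generated/bar.html"
```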
#### Archiving[¶](#archiving)
Whenever a new version of an existing document is downloaded, an archiving process takes place when
[`archive()`](index.html#ferenda.DocumentStore.archive) is called (by
[`download_if_needed()`](index.html#ferenda.DocumentRepository.download_if_needed)). This method requires a version id, which can be any string that uniquely identifies a certain revision of the document. The version id is calculated by [`get_archive_version()`](index.html#ferenda.DocumentRepository.get_archive_version); the default implementation just provides a simple incrementing integer, but if the documents in your docrepo already have a more suitable version identifier, you should override [`get_archive_version()`](index.html#ferenda.DocumentRepository.get_archive_version) to return it. When called, all of the above files (if they have been generated) are moved into the `archive` subdirectory in the following way (assuming that the version id is “42”):
`data/foo/downloaded/bar.html` -> `data/foo/archive/downloaded/bar/42.html`
The archive path is calculated by providing the optional `version`
parameter to any of the `*_path` methods above.
To list all archived versions for a given basefile, use the
[`list_versions()`](index.html#ferenda.DocumentStore.list_versions) method.
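A short sketch of both mechanisms (the version id “42” is the example value used above):

```
# the optional version parameter gives the archived location of a file
d.store.downloaded_path("bar", version="42")
# -> "data/foo/archive/downloaded/bar/42.html"

# list_versions() iterates over all archived version ids for a basefile
for version_id in d.store.list_versions("bar"):
    print(version_id)
```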
#### The `open_*` methods[¶](#the-open-methods)
In many cases, you don’t really need to know the filename that the
`*_path` methods return, because you only want to read from or write to it. For these cases, you can use the `open_*` methods instead. These work as context managers just as the builtin open method does, and can be used in the same way:
Instead of:
```
path = self.store.downloaded_path(basefile)
with open(path, mode="wb") as fp:
fp.write(b"...")
```
use:
```
with self.store.open_downloaded(basefile, mode="wb") as fp:
fp.write(b"...")
```
#### Attachments[¶](#attachments)
In many cases, a single file cannot represent the entirety of a document. For example, a downloaded HTML file may need a series of inline images. These can be handled as attachments by the download method. Just use the optional attachment parameter to the appropriate
*_path / open_* methods:
```
from __future__ import unicode_literals
from urllib.parse import urljoin   # assumed import (urlparse on python 2)
from bs4 import BeautifulSoup      # assumed import
from ferenda import DocumentRepository

class TestDocrepo(DocumentRepository):
    storage_policy = "dir"

    def download_single(self, basefile):
        mainurl = self.document_url_template % {'basefile': basefile}
        self.download_if_needed(basefile, mainurl)
        with self.store.open_downloaded(basefile) as fp:
            soup = BeautifulSoup(fp.read())
        for img in soup.find_all("img"):
            imgurl = urljoin(mainurl, img["src"])
            # open eg. data/foo/downloaded/bar/hello.jpg for writing
            with self.store.open_downloaded(basefile,
                                            attachment=img["src"],
                                            mode="wb") as fp:
                ...  # fetch imgurl and write the image bytes to fp
```
Note
The DocumentStore object must be configured to handle attachments by setting the `storage_policy` property to `dir`. This alters the behaviour of all `*_path` methods, so that eg. the main downloaded path becomes `data/foo/downloaded/bar/index.html`
instead of `data/foo/downloaded/bar.html`
To list all attachments for a document, use the
[`list_attachments()`](index.html#ferenda.DocumentStore.list_attachments) method.
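For example (a sketch; the `"downloaded"` argument is an assumption about which file area to list):

```
for name in d.store.list_attachments("bar", "downloaded"):
    print(name)   # e.g. "hello.jpg"
```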
Note that only some of the `*_path` / `open_*` methods supports the
`attachment` parameter (it doesn’t make sense to have attachments for DocumentEntry files or distilled RDF/XML files).
Parsing and representing document metadata[¶](#parsing-and-representing-document-metadata)
---
Every document has a number of properties, such as its title,
authors, publication date, type and much more. These properties are called metadata. Ferenda does not have a fixed set of which metadata properties are available for any particular document type. Instead, it encourages you to describe the document using RDF and any suitable vocabulary (or vocabularies). If you are new to RDF, a good starting point is the [RDF Primer](http://www.w3.org/TR/rdf-primer/)
document.
Each document has a `meta` property which initially is an empty RDFLib [`Graph`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph) object. As part of the
[`parse()`](index.html#ferenda.DocumentRepository.parse) method, you should fill this graph with *triples* (metadata statements) about the document.
### Document URI[¶](#document-uri)
In order to create these metadata statements, you should first create a suitable URI for your document. Preferably, this should be a URI based on the URL where your web site will be published, ie if you plan on publishing it on
`http://mynetstandards.org/`, a URI for RFC 4711 might be
`http://mynetstandards.org/res/rfc/4711` (ie based on the base URL, the docrepo alias, and the basefile). By changing the `url` variable in your project configuration file, you can set the base URL from which all document URIs are derived. If you wish to have more control over the exact way URIs are constructed, you can override
[`canonical_uri()`](index.html#ferenda.DocumentRepository.canonical_uri).
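A minimal sketch of such an override (not the built-in implementation; it assumes `self.config.url` and `self.alias` hold the base URL and docrepo alias described above):

```
def canonical_uri(self, basefile):
    # e.g. "http://mynetstandards.org/" + "res/rfc/" + "4711"
    return "%sres/%s/%s" % (self.config.url, self.alias, basefile)
```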
Note
In some cases, there will be another *canonical URI* for the document you’re describing, used by other people in other contexts. In these cases, you should specify that the metadata you’re publishing is about the exact same object by adding a triple of the type `owl:sameAs` with that other canonical URI as value.
The URI for any document is available as a `uri` property.
### Adding metadata using the RDFLib API[¶](#adding-metadata-using-the-rdflib-api)
With this, you can create metadata for your document using the RDFLib Graph API.
```
# A sketch using the plain RDFLib Graph API (the Describer-based version
# in the next section is the more succinct, recommended form)
def parse_metadata_from_soup(self, soup, doc):
    from rdflib import URIRef, Literal, RDF
    from datetime import datetime
    title = "My Document title"
    authors = ["<NAME>", "<NAME>"]
    identifier = "Docno 2013:4711"
    pubdate = datetime(2013,1,6,10,8,0)
    doc.meta.add((URIRef(doc.uri), RDF.type, self.rdf_type))
    doc.meta.add((URIRef(doc.uri), self.ns['prov'].wasGeneratedBy,
                  Literal(self.qualified_class_name())))
    doc.meta.add((URIRef(doc.uri), self.ns['dcterms'].title,
                  Literal(title, lang=doc.lang)))
    doc.meta.add((URIRef(doc.uri), self.ns['dcterms'].identifier,
                  Literal(identifier)))
    for author in authors:
        doc.meta.add((URIRef(doc.uri), self.ns['dcterms'].author,
                      Literal(author)))
```
### A simpler way of adding metadata[¶](#a-simpler-way-of-adding-metadata)
The default RDFLib graph API is somewhat cumbersome for adding triples to a metadata graph. Ferenda has a convenience wrapper,
[`Describer`](index.html#ferenda.Describer) (itself a subclass of
[`rdflib.extras.describer.Describer`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.extras.html#rdflib.extras.describer.Describer)) that makes this somewhat easier. The `ns` class property also contains a number of references to popular vocabularies. The above can be made more succint like this:
```
# Simpler way
def parse_metadata_from_soup(self, soup, doc):
from ferenda import Describer
from datetime import datetime
title = "My Document title"
authors = ["<NAME>", "<NAME>"]
identifier = "Docno 2013:4711"
pubdate = datetime(2013,1,6,10,8,0)
d = Describer(doc.meta, doc.uri)
d.rdftype(self.rdf_type)
d.value(self.ns['prov'].wasGeneratedBy, self.qualified_class_name())
d.value(self.ns['dcterms'].title, title, lang=doc.lang)
d.value(self.ns['dcterms'].identifier, identifier)
for author in authors:
d.value(self.ns['dcterms'].author, author)
```
Note
[`parse_metadata_from_soup()`](index.html#ferenda.DocumentRepository.parse_metadata_from_soup)
doesn’t return anything. It only modifies the `doc` object passed to it.
### Vocabularies[¶](#vocabularies)
Each RDF vocabulary is defined by a URI, and all terms (types and properties) of that vocabulary are typically directly derived from it. The vocabulary URI therefore acts as a namespace. Like namespaces in XML, a shorter prefix is often assigned to the namespace so that one can use `rdf:type` rather than
`http://www.w3.org/1999/02/22-rdf-syntax-ns#type`. The DocumentRepository object keeps a dictionary of common
(prefix, namespace) pairs in the class property `ns` – your code can modify this dictionary in order to add vocabulary terms relevant for your documents.
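A sketch of one way to do this, assuming the `namespaces` class attribute follows the (prefix, uri) tuple convention:

```
from ferenda import DocumentRepository

class RFC(DocumentRepository):
    # plain strings refer to well-known vocabularies; a (prefix, uri)
    # tuple registers a custom one, available as self.ns['rfc']
    namespaces = ('rdf', 'rdfs', 'xsd', 'dcterms', 'bibo',
                  ('rfc', 'http://example.org/ontology/rfc/'))
```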
### Serialization of metadata[¶](#serialization-of-metadata)
The [`render_xhtml()`](index.html#ferenda.DocumentRepository.render_xhtml) method serializes all information in `doc.body` and `doc.meta` to an XHTML+RDFa file (the exact location given by
[`parsed_path()`](index.html#ferenda.DocumentStore.parsed_path)). The metadata specified by doc.meta ends up in the `<head>` section of this XHTML file.
The actual RDF statements are also *distilled* to a separate RDF/XML file found alongside this file (the location given by
[`distilled_path()`](index.html#ferenda.DocumentStore.distilled_path)) for convenience.
### Metadata about parts of the document[¶](#metadata-about-parts-of-the-document)
Just like the main Document object, individual parts of the document
(represented as [`ferenda.elements`](index.html#module-ferenda.elements) objects) can have `uri`
and `meta` properties. Unlike the main Document objects, these properties are not initialized beforehand. But if you do create these properties, they are used to serialize metadata into RDFa properties for each such part:
```
def parse_document_from_soup(self, soup, doc):
from ferenda.elements import Page
from ferenda import Describer
part = Page(["This is a part of a document"],
ordinal=42,
uri="http://example.org/doc#42",
meta=self.make_graph())
d = Describer(part.meta, part.uri)
d.rdftype(self.ns['bibo'].DocumentPart)
# the dcterms:identifier for a document part is often whatever
# would be the preferred way to cite that part in another
# document
d.value(self.ns['dcterms'].identifier, "Doc:4711, p 42")
```
This results in the following document fragment:
```
<div xmlns="http://www.w3.org/1999/xhtml"
about="http://example.org/doc#42"
typeof="bibo:DocumentPart"
class="page">
<span property="dcterms:identifier"
content="Doc:4711, p 42"
xml:lang=""/>
This is a part of a document
</div>
```
Building structured documents[¶](#building-structured-documents)
---
Any structured documents can be viewed as a tree of higher-level elements (such as chapters or sections) that contains smaller elements
(like subsections or lists) that each in turn contains even smaller elements (like paragraphs or list items). When using ferenda, you can create documents by creating such trees of elements. The
[`ferenda.elements`](index.html#module-ferenda.elements) module contains classes for such elements.
Most of the classes can be used like python lists (and are, in fact,
subclasses of [`list`](https://docs.python.org/3/library/stdtypes.html#list)). Unlike the approach used by
`xml.etree.ElementTree` and `BeautifulSoup`, where all objects are of a specific class, and an object property determines the type of element, the element objects are of different classes if the elements are different. This means that elements representing a paragraph are `ferenda.elements.Paragraph`, and elements representing a document section are
`ferenda.elements.Section` and so on. The core
[`ferenda.elements`](index.html#module-ferenda.elements) module contains around 15 classes that covers many basic document elements, and the submodule
[`ferenda.elements.html`](index.html#module-ferenda.elements.html) contains classes that correspond to all HTML tags. There is some functional overlap between these two modules, but [`ferenda.elements`](index.html#module-ferenda.elements) contains several constructs which aren’t directly expressible as HTML elements
(eg. `Page`, `SectionalElement` and `Footnote`).
Each element constructor (or at least those derived from
`CompoundElement`) takes a list as an argument (same as [`list`](https://docs.python.org/3/library/stdtypes.html#list)), but also any number of keyword arguments. This enables you to construct a simple document like this:
```
from ferenda.elements import Body, Heading, Paragraph, Footnote
doc = Body([Heading(["About Doc 43/2012 and it's interpretation"],predicate="dcterms:title"),
Paragraph(["According to Doc 43/2012",
Footnote(["Available at http://example.org/xyz"]),
" the bizbaz should be frobnicated"])
])
```
Note
Since `CompoundElement` works like
[`list`](https://docs.python.org/3/library/stdtypes.html#list), which is initialized with any iterable, you should normally initialize it with a single-element list of strings. If you initialize it directly with a string, the constructor will treat that string as an iterable and create one child element for every character in the string.
### Creating your own element classes[¶](#creating-your-own-element-classes)
The exact structure of documents differ greatly. A general document format such as XHTML or ODF cannot contain special constructs for preamble recitals of EC directives or patent claims of US patents. But your own code can create new classes for this. Example:
```
from ferenda.elements import CompoundElement, OrdinalElement
class Preamble(CompoundElement): pass

class PreambleRecital(CompoundElement, OrdinalElement):
    tagname = "div"
    rdftype = "eurlex:PreambleRecital"

doc = Preamble([PreambleRecital(["Un"], ordinal=1),
                PreambleRecital(["Deux"], ordinal=2),
                PreambleRecital(["Trois"], ordinal=3)])
```
### Mixin classes[¶](#mixin-classes)
As the above example shows, it’s possible and even recommended to use multiple inheritance to compose objects by subclassing two classes –
one main class whose semantics you’re extending, and one mixin class that contains particular properties. The following classes are useful as mixins:
* `OrdinalElement`: for representing elements with some sort of ordinal numbering. An ordinal element has an `ordinal` property, and different ordinal objects can be compared or sorted. The sort is based on the ordinal property. The ordinal property is a string, but comparisons/sorts are done in a natural way, i.e. “2” < “2 a” < “10”.
* `TemporalElement`: for representing things that has a start and/or a end date. A temporal element has an `in_effect` method which takes a date (or uses today’s date if none given) and returns true if that date falls between the start and end date.
### Rendering to XHTML[¶](#rendering-to-xhtml)
The built-in classes are rendered as XHTML by the built-in method
[`render_xhtml()`](index.html#ferenda.DocumentRepository.render_xhtml), which first creates a `<head>` section containing all document-level metadata
(i.e. the data you have specified in your documents `meta`
property), and then calls the `as_xhtml` method on the root body element. The method is called with `doc.uri` as a single argument,
which is then used as the RDF subject for all triples in the document
(except for those sub-elements which themselves have a `uri`
property)
All built-in element classes derive from
`AbstractElement`, which contains a generic implementation of `as_xhtml()`,
that recursively creates an lxml element tree from itself and its children.
Your own classes can specify how they are to be rendered in XHTML by overriding the `tagname` and
`classname` properties, or for full control, the `as_xhtml()`
method.
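For instance, a minimal sketch of a custom element that renders as a `<div class="recital">`:

```
from ferenda.elements import CompoundElement

class Recital(CompoundElement):
    tagname = "div"        # the XHTML element to render as
    classname = "recital"  # becomes the @class attribute
```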
As an example, the class `SectionalElement`
overrides `as_xhtml` to the effect that if you provide
`identifier`, `ordinal` and `title` properties for the object, a resource URI is automatically constructed and four RDF triples are created (rdf:type, dcterms:title, dcterms:identifier, and bibo:chapter):
```
from ferenda.elements import Body, SectionalElement

p = SectionalElement(["Some content"],
                     ordinal = "1a",
                     identifier = "Doc pt 1(a)",
                     title="Title or name of the part")
body = Body([p])
from lxml import etree
etree.tostring(body.as_xhtml("http://example.org/doc"))
```
…which results in:
```
<body xmlns="http://www.w3.org/1999/xhtml" about="http://example.org/doc">
<div about="http://example.org/doc#S1a"
typeof="bibo:DocumentPart"
property="dcterms:title"
content="Title or name of the part"
class="sectionalelement">
<span href="http://example.org/doc"
rel="dcterms:isPartOf" />
<span about="http://example.org/doc#S1a"
property="dcterms:identifier"
content="Doc pt 1(a)" />
<span about="http://example.org/doc#S1a"
property="bibo:chapter"
content="1a" />
Some content
</div>
</body>
```
However, this is a convenience method of SectionalElement, and may not be appropriate for your needs. The general way of attaching metadata to document parts, as specified in [Metadata about parts of the document](index.html#parsing-metadata-parts), is to provide each document part with a `uri` and `meta` property. These are then automatically serialized as RDFa statements by the default
`as_xhtml` implementation.
### Convenience methods[¶](#convenience-methods)
Your element tree structure can be serialized to well-formed XML using the `serialize()` method. Such a serialization can be turned back into the same tree using
`deserialize()`. This is primarily useful during debugging.
You might also find the
`as_plaintext` method useful. It works similarly to
`as_xhtml`, but returns a plaintext string with the contents of an element, including all sub-elements.
The [`ferenda.elements.html`](index.html#module-ferenda.elements.html) module contains the method
[`elements_from_soup()`](index.html#ferenda.elements.html.elements_from_soup) which converts a BeautifulSoup tree into the equivalent tree of element objects.
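A short sketch of these methods in use:

```
from ferenda.elements import serialize, Body, Paragraph

body = Body([Paragraph(["Hello world"])])
print(serialize(body))        # well-formed XML, useful for debugging
print(body.as_plaintext())    # "Hello world"
```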
Parsing document structure[¶](#parsing-document-structure)
---
In many scenarios, the basic steps in parsing source documents are similar. If your source does not contain properly nested structures that accurately represent the structure of the document (such as well-authored XML documents), you will have to re-create the structure that the author intended. Usually, text formatting, section numbering and other clues contain just enough information to do that.
In many cases, your source document will naturally be split up in a large number of “chunks”. These chunks may be lines or paragraphs in a plaintext document, or tags of a certain type in a certain location in an HTML document. Regardless, it is often easy to generate a list of such chunks. See, in particular, [Reading files in various formats](index.html#document-readers).
Note
For those with a background in computer science and formal languages, a chunk is sort of the same thing as a token, but whereas a token typically is a few characters in length, a chunk is typically one to several sentences long. Splitting up a document into chunks is also typically much simpler than the process of tokenization.
These chunks can be fed to a *finite state machine*, which looks at each chunk, determines what kind of *structural element* it probably is (eg. a headline, the start of a chapter, an item in a bulleted list…) by looking at the chunk in the context of previous chunks,
and then explicitly re-creates the document structure that the author
(presumably) intended.
### FSMParser[¶](#fsmparser)
The framework contains a class for creating such state machines,
[`FSMParser`](index.html#ferenda.FSMParser). It is used with a set of the following objects:
| Object | Purpose |
| --- | --- |
| Recognizers | Functions that look at a chunk and determine if it is a particular structural element. |
| Constructors | Functions that create a document element from a chunk (or series of chunks) |
| States | Identifiers for the current state of the document being parsed, ie. “in-preamble”, “in-ordered-list” |
| Transitions | A mapping (current state(s), recognizer) -> (new state, constructor) |
You initialize the parser with the transition table (which contains the other objects), then call its parse() method with an iterator of chunks, an initial state, and an initial constructor. The result of parse is a nested document object tree.
### A simple example[¶](#a-simple-example)
Consider a very simple document format that only has three kinds of structural elements: a normal paragraph, preformatted text, and sections. Each section has a title and may contain paragraphs or preformatted text, which in turn may not contain anything else. All chunks are separated by double newlines.
The section is identified by a header, which is any single-line string followed by a line of = characters of the same length. Any time a new header is encountered, this signals the end of the current section:
```
This is a header
================
```
A preformatted section is any chunk where each line starts with at least two spaces:
```
  # some example of preformatted text
  def world(name):
      return "Hello", name
```
A paragraph is anything else:
```
This is a simple paragraph.
It can contain short lines and longer lines.
```
(You might recognize this format as a very simple form of ReStructuredText).
Recognizers for these three elements are easy to build:
```
from ferenda import elements, FSMParser
def is_section(parser):
chunk = parser.reader.peek()
lines = chunk.split("\n")
return (len(lines) == 2 and
len(lines[0]) == len(lines[1]) and
lines[1] == "=" * len(lines[0]))
def is_preformatted(parser):
    chunk = parser.reader.peek()
    lines = chunk.split("\n")
    not_indented = lambda x: not x.startswith("  ")
    return len(list(filter(not_indented, lines))) == 0
def is_paragraph(parser):
return True
```
The `elements` module contains ready-built classes which we can use to build our constructors:
```
def make_body(parser):
b = elements.Body()
return parser.make_children(b)
def make_section(parser):
chunk = parser.reader.next()
title = chunk.split("\n")[0]
s = elements.Section(title=title)
return parser.make_children(s)
setattr(make_section,'newstate','section')
def make_paragraph(parser):
return elements.Paragraph([parser.reader.next()])
def make_preformatted(parser):
return elements.Preformatted([parser.reader.next()])
```
Note that any constructor which may contain sub-elements must itself call the [`make_children()`](index.html#ferenda.FSMParser.make_children) method of the parser. That method takes a parent object, and then repeatedly creates child objects which it attaches to that parent object, until an exit condition is met. Each call to create a child object may, in turn,
call make_children (not so in this very simple example).
The final step in putting this together is defining the transition table, and then creating, configuring and running the parser:
```
transitions = {("body", is_section): (make_section, "section"),
("section", is_paragraph): (make_paragraph, None),
("section", is_preformatted): (make_preformatted, None),
("section", is_section): (False, None)}
text = """First section
===
This is a regular paragraph. It will not be matched by is_section
(unlike the above chunk) or is_preformatted (unlike the below chunk),
but by the catch-all is_paragraph. The recognizers are run in the order specified by FSMParser.set_transitions().
This is a preformatted section.
It could be used for source code,
+---+
| line drawings |
+---+
or what have you.
Second section
===
The above new section implicitly closed the first section which we were in. This was made explicit by the last transition rule, which stated that any time a section is encountered while in the "section"
state, we should not create any more children (False) but instead return to our previous state (which in this case is "body", but for a more complex language could be any number of states)."""
p = FSMParser()
p.set_recognizers(is_section, is_preformatted, is_paragraph)
p.set_transitions(transitions)
p.initial_constructor = make_body
p.initial_state = "body"
body = p.parse(text.split("\n\n"))
# print(elements.serialize(body))
```
The result of this parse is the following document object tree (passed through `serialize()`):
```
<Body>
<Section title="First section">
<Paragraph>
<str>This is a regular paragraph. It will not be matched by is_section
(unlike the above chunk) or is_preformatted (unlike the below chunk),
but by the catch-all is_paragraph. The recognizers are run in the order specified by FSMParser.set_transitions().</str>
</Paragraph><Preformatted>
<str>  This is a preformatted section.
  It could be used for source code,
  +---+
  | line drawings |
  +---+
  or what have you.</str>
</Preformatted>
</Section>
<Section title="Second section">
<Paragraph>
<str>The above new section implicitly closed the first section which we were in. This was made explicit by the last transition rule, which stated that any time a section is encountered while in the "section"
state, we should not create any more children (False) but instead return to our previous state (which in this case is "body", but for a more complex language could be any number of states).</str>
</Paragraph>
</Section>
</Body>
```
### Writing complex parsers[¶](#writing-complex-parsers)
#### Recognizers[¶](#recognizers)
Recognizers are any callables that can be called with the parser object as the only parameter (so no class or instance methods). Objects that implement `__call__` are OK, as are `lambda` functions.
One pattern to use when creating parsers is to have a method on your docrepo class which defines a number of nested functions, then creates a transition table using those functions, creates the parser with that transition table, and then returns the initialized parser object. Your main parse method can then call this method, break the input document into suitable chunks, and then call parse on the received parser object.
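A minimal sketch of that pattern might look like the following (the recognizers, the blank-line chunking, and the use of `self.store.downloaded_path()` are illustrative assumptions, not a canonical implementation):
```
from ferenda import DocumentRepository, FSMParser, elements
from ferenda.decorators import managedparsing

class MyRepo(DocumentRepository):
    alias = "myrepo"

    def get_parser(self):
        # nested recognizers/constructors keep everything parser-related
        # in one place
        def is_preformatted(parser):
            chunk = parser.reader.peek()
            return all(line.startswith("  ") for line in chunk.split("\n"))

        def is_paragraph(parser):
            return True

        def make_body(parser):
            return parser.make_children(elements.Body())

        def make_preformatted(parser):
            return elements.Preformatted([parser.reader.next()])

        def make_paragraph(parser):
            return elements.Paragraph([parser.reader.next()])

        p = FSMParser()
        p.set_recognizers(is_preformatted, is_paragraph)
        p.set_transitions({("body", is_preformatted): (make_preformatted, None),
                           ("body", is_paragraph): (make_paragraph, None)})
        p.initial_state = "body"
        p.initial_constructor = make_body
        return p

    @managedparsing
    def parse(self, doc):
        parser = self.get_parser()
        # chunk the downloaded file on blank lines (an assumption made
        # for this sketch) and feed the chunks to the parser
        with open(self.store.downloaded_path(doc.basefile)) as fp:
            chunks = [c for c in fp.read().split("\n\n") if c.strip()]
        doc.body = parser.parse(chunks)
        return True
```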
#### Constructors[¶](#constructors)
Like recognizers, constructors may be any callable, and they are called with the parser object as the only parameter.
Constructors that return elements which in themselves do not contain sub-elements are simple to write – just return the created element
(see eg `make_paragraph` or `make_preformatted` above).
Constructors that are to return elements that may contain sub-elements must first create the element, then call the parser's [`make_children()`](index.html#ferenda.FSMParser.make_children) method with that element as a single argument. `make_children` will treat that element as a list,
and append any sub-elements created to that list, before returning it.
#### The parser object[¶](#the-parser-object)
The parser object is passed to every recognizer and constructor. The most common use is to read the next available chunk from its reader property – this is an instance of a simple wrapper around the stream of chunks. The reader has two methods: `peek` and `next`, which both return the next available chunk, but `next` also consumes the chunk in question. A recognizer typically calls
`parser.reader.peek()`, a constructor typically calls
`parser.reader.next()`.
The parser object also has the following properties
| Property | Description |
| --- | --- |
| currentstate | The current state of the parser, using whatever value for state that was defined in the transition table (typically a string) |
| debug | Boolean that indicates whether to emit debug messages (by default False) |
There is also a `parser._debug()` method that emits debug messages,
indicating current parser nesting level and current state, if
`parser.debug` is `True`.
#### The transition table[¶](#the-transition-table)
The transition table is a mapping between `(currentstate(s), successful recognizer)` and `(constructor-or-false,newstate-or-None)`
The transition table is used in the following way: All recognizers that can be applicable in the current state are tried in the specified order until one of them returns True. Using this pair of
(currentstate, recognizer), the corresponding value tuple is looked up in the transition table.
`constructor-or-False`: …
`newstate-or-None`: …
The key in the transition table can also be a callable, which is called with (currentstate,symbol,parser?) and is expected to return a
`(constructor-or-false,newstate-or-None)` tuple
### Tips for debugging your parser[¶](#tips-for-debugging-your-parser)
Two useful commands in the [`Devel`](index.html#ferenda.Devel) module:
```
$ # sets debug, prints serialize(parser.parse(...))
$ ./ferenda-build.py devel fsmparse parser < chunks
$ # sets debug, returns name of matching function
$ ./ferenda-build.py devel fsmanalyze parser <currentstate> < chunk
```
Citation parsing[¶](#citation-parsing)
---
In many cases, the text in the body of a document contains references
(citations) to other documents in the same or related document collections. A good implementation of a document repository needs to find and express these references. In ferenda, references are expressed as basic hyperlinks which uses the `rel` attribute to specify the sort of relationship that the reference expresses. The process of citation parsing consists of analysing the raw text,
finding references within that text, constructing sensible URIs for each reference, and formatting these as `<a href="..."
rel="...">[citation]</a>` style links.
Since standards for expressing references / citations are very diverse, Ferenda requires that the docrepo programmer specifies the basic rules of how to recognize a reference, and how to put together the properties from a reference (such as year of publication, or page)
into a URI.
### The built-in solution[¶](#the-built-in-solution)
Ferenda uses the [Pyparsing](http://pyparsing.wikispaces.com/)
library in order to find and process citations. As an example, we’ll specify citation patterns and URI formats for references that occur in RFC documents. These are primarily of three different kinds
(examples come from RFC 2616):
1. URL references, eg “GET <http://www.w3.org/pub/WWW/TheProject.html> HTTP/1.1”
2. IETF document references, eg “STD 3”, “BCP 14” and “RFC 2068”
3. Internal endnote references, eg “[47]” and “[33]”
We’d like to make sure that any URL reference gets turned into a link to that same URL, that any IETF document reference gets turned into the canonical URI for that document, and that internal endnote references get turned into document-relative links, eg “#endnote-47”
and “#endnote-33”. (This requires that other parts of the
[`parse()`](index.html#ferenda.DocumentRepository.parse) process has created IDs for these in `doc.body`, which we assume has been done).
Turning URL references in plain text into real links is so common that ferenda has built-in support for this. The support comes in two parts:
First running a parser that detects URLs in the textual content, and secondly, for each match, running a URL formatter on the parse result.
At the end of your [`parse()`](index.html#ferenda.DocumentRepository.parse) method,
do the following.
```
from ferenda import CitationParser
from ferenda import URIFormatter
import ferenda.citationpatterns
import ferenda.uriformats
# CitationParser is initialized with a list of pyparsing
# ParserElements (or any other object that has a scanString method
# that returns a generator of (tokens,start,end) tuples, where start
# and end are integer string indicies and tokens are dict-like
# objects)
citparser = CitationParser(ferenda.citationpatterns.url)
# URIFormatter is initialized with a list of tuples, where each
# tuple is a string (identifying a named ParseResult) and a function
# (that takes as a single argument a dict-like object and returns a
# URI string (possibly relative)
citparser.set_formatter(URIFormatter(("URLRef", ferenda.uriformats.url)))
citparser.parse_recursive(doc.body)
```
The [`parse_recursive()`](index.html#ferenda.CitationParser.parse_recursive) takes any
[`elements`](index.html#module-ferenda.elements) document tree and modifies it in-place to mark up any references to proper `Link`
objects.
### Extending the built-in support[¶](#extending-the-built-in-support)
Building your own citation patterns and URI formats is fairly simple. First, specify your patterns in the form of a pyparsing parseExpression, and make sure that both the expression as a whole,
and any individual significant properties, are named by calling
`.setResultsName()`.
Then, create a set of formatting functions that takes the named properties from the parse expressions above and use them to create a URI.
Finally, initialize a [`CitationParser`](index.html#ferenda.CitationParser) object from your parse expressions and a [`URIFormatter`](index.html#ferenda.URIFormatter) object that maps named parse expressions to their corresponding URI formatting function, and call
[`parse_recursive()`](index.html#ferenda.CitationParser.parse_recursive)
```
from pyparsing import Word, nums
from ferenda import CitationParser
from ferenda import URIFormatter
import ferenda.citationpatterns
import ferenda.uriformats
# Create two ParserElements for IETF document references and internal
# endnote references
rfc_citation = "RFC" + Word(nums).setResultsName("RFCRef")
bcp_citation = "BCP" + Word(nums).setResultsName("BCPRef")
std_citation = "STD" + Word(nums).setResultsName("STDRef")
ietf_doc_citation = (rfc_citation | bcp_citation | std_citation).setResultsName("IETFRef")
endnote_citation = ("[" + Word(nums).setResultsName("EndnoteID") + "]").setResultsName("EndnoteRef")
# Create a URI formatter for IETF documents (URI formatter for endnotes
# is so simple that we just use a lambda function below)
def rfc_uri_formatter(parts):
# parts is a dict-like object created from the named result parts
# of our grammar, eg those ParserElement for which we've called
# .setResultsName(), in this case eg. {'RFCRef':'2068'}
# NOTE: If your document collection contains documents of this
# type and you're republishing them, feel free to change these
# URIs to URIs under your control,
# eg. "http://mynetstandards.org/rfc/%(RFCRef)s/" and so on
if 'RFCRef' in parts:
return "http://www.ietf.org/rfc/rfc%(RFCRef)s.txt" % parts
elif 'BCPRef' in parts:
return "http://tools.ietf.org/rfc/bcp/bcp%(BCPRef)s.txt" % parts
elif 'STDRef' in parts:
return "http://rfc-editor.org/std/std%(STDRef)s.txt" % parts
else:
return None
# CitationParser is initialized with a list of pyparsing
# ParserElements (or any other object that has a scanString method
# that returns a generator of (tokens,start,end) tuples, where start
# and end are integer string indicies and tokens are dict-like
# objects)
citparser = CitationParser(ferenda.citationpatterns.url,
ietf_doc_citation,
endnote_citation)
# URIFormatter is initialized with a list of tuples, where each
# tuple is a string (identifying a named ParseResult) and a function
# (that takes as a single argument a dict-like object and returns a
# URI string (possibly relative)
citparser.set_formatter(URIFormatter(("url", ferenda.uriformats.url),
("IETFRef", rfc_uri_formatter),
("EndnoteRef", lambda d: "#endnote-%(EndnoteID)s" % d)))
citparser.parse_recursive(doc.body)
```
This turns this document
```
<body xmlns="http://www.w3.org/1999/xhtml" about="http://example.org/doc/">
<h1>Main document</h1>
<p>A naked URL: http://www.w3.org/pub/WWW/TheProject.html.</p>
<p>Some IETF document references: See STD 3, BCP 14 and RFC 2068.</p>
<p>An internal endnote reference: ...relevance ranking, cf. [47]</p>
<h2>References</h2>
<p id="endnote-47">47: Malmgren, Towards a theory of jurisprudential
ranking</p>
</body>
```
Into this document:
```
<body xmlns="http://www.w3.org/1999/xhtml" about="http://example.org/doc/">
<h1>Main document</h1>
<p>
A naked URL: <a href="http://www.w3.org/pub/WWW/TheProject.html"
rel="dcterms:references"
>http://www.w3.org/pub/WWW/TheProject.html</a>.
</p>
<p>
Some IETF document references: See
<a href="http://rfc-editor.org/std/std3.txt"
rel="dcterms:references">STD 3</a>,
<a href="http://tools.ietf.org/rfc/bcp/bcp14.txt"
rel="dcterms:references">BCP 14</a> and
<a href="http://www.ietf.org/rfc/rfc2068.txt"
rel="dcterms:references">RFC 2068</a>.
</p>
<p>
An internal endnote reference: ...relevance ranking, cf.
<a href="#endnote-47"
rel="dcterms:references">[47]</a>
</p>
<h2>References</h2>
<p id="endnote-47">47: Malmgren, Towards a theory of jurisprudential
ranking</p>
</body>
```
### Rolling your own[¶](#rolling-your-own)
For more complicated situations you can skip calling
[`parse_recursive()`](index.html#ferenda.CitationParser.parse_recursive) and instead do your own processing with the optional support of
[`CitationParser`](index.html#ferenda.CitationParser).
This is needed in particular for complicated `ParserElement` objects which may contain several sub-`ParserElement` which needs to be turned into individual links. As an example, the text “under Article 56 (2), Article 57 or Article 100a of the Treaty establishing the European Community” may be matched by a single top-level ParseResult
(and probably must be, if “Article 56 (2)” is to actually reference article 56(2) in the Treaty), but should be turned in to three separate links.
In those cases, iterate through your `doc.body` yourself, and for each text part do something like the following:
```
from ferenda import CitationParser, URIFormatter, citationpatterns, uriformats
from ferenda.elements import Link
citparser = CitationParser()
citparser.add_grammar(citationpatterns.url)
formatter = URIFormatter(("url", uriformats.url))
res = []
text = "An example: http://example.org/. That is all."
for node in citparser.parse_string(text):
if isinstance(node,str):
# non-linked text, add and continue
res.append(node)
if isinstance(node, tuple):
(text, match) = node
uri = formatter.format(match)
if uri:
res.append(Link(uri, text, rel="dcterms:references"))
```
Reading files in various formats[¶](#reading-files-in-various-formats)
---
The first step of parsing a document is often getting actual text from a file. For plain text files, this is not a difficult process, but for eg. Word and PDF documents some sort of library support is useful.
Ferenda contains three different classes that all deal with this problem. They do not have a unified interface, but instead contain different methods depending on the structure and capabilities of the file format they’re reading.
### Reading plain text files[¶](#reading-plain-text-files)
The [`TextReader`](index.html#ferenda.TextReader) class works sort of like a regular file object, and can read a plain text file line by line, but contains extra methods for reading files paragraph by paragraph or page by page. It can also produce generators that yield the file contents divided into arbitrary chunks, which is suitable as input for
[`FSMParser`](index.html#ferenda.FSMParser).
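A minimal sketch of that kind of usage (the filename and encoding are assumptions made for illustration):
```
from ferenda import TextReader

reader = TextReader("downloaded/4711.txt", encoding="utf-8")
# getiterator() turns a bound read method into a generator of chunks,
# here one chunk per paragraph -- suitable as input for FSMParser
chunks = reader.getiterator(reader.readparagraph)
```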
### Microsoft Word documents[¶](#microsoft-word-documents)
The [`WordReader`](index.html#ferenda.WordReader) class can read both old-style
`.doc` files and newer, XML-based `.docx` files. The former requires that [antiword](http://www.winfield.demon.nl/) is installed, but the latter has no additional dependencies.
This class does not present any interface for actually reading the word document – instead, it converts the document to an XML file which is either based on the `docbook` output of `antiword`, or the raw OOXML found inside of the `.docx` file.
### PDF documents[¶](#pdf-documents)
[`PDFReader`](index.html#ferenda.PDFReader) reads PDF documents and makes them available as a list of pages, where each page contains a list of
[`Textbox`](index.html#ferenda.pdfreader.Textbox) objects, which in turn contains a list of [`Textelement`](index.html#ferenda.pdfreader.Textelement) objects.
Its [`textboxes()`](index.html#ferenda.PDFReader.textboxes) method is a flexible way of getting a generator of suitable text chunks. By passing a “glue”
function to that method, you can specify exact rules on which rows of text should be combined to form larger suitable chunks
(eg. paragraphs). This stream of chunks can be fed directly as input to [`FSMParser`](index.html#ferenda.FSMParser).
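As a rough sketch, a glue function could look something like this (the argument order, the layout heuristics, and the `filename`/`workdir` keyword arguments are assumptions made for illustration):
```
from ferenda import PDFReader

def glue(textbox, nextbox, prevbox):
    # combine two rows into one chunk if they share font size and the
    # next row starts close enough below the current one
    return (textbox.font.size == nextbox.font.size and
            nextbox.top - (textbox.top + textbox.height) < 5)

reader = PDFReader(filename="downloaded/sample.pdf", workdir="intermediate")
for chunk in reader.textboxes(glue):
    ...  # feed each chunk to an FSMParser
```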
#### Handling non-PDFs and scanned documents[¶](#handling-non-pdfs-and-scanned-documents)
The class can also handle any other type of document (such as Word/OOXML/WordPerfect/RTF) that OpenOffice or LibreOffice handles by first converting it to PDF using the `soffice` command line tool. This is done by specifying the `convert_to_pdf` parameter.
If the PDF contains only scanned pages (without any OCR information),
the pages can be run through the `tesseract` command line tool. You need to provide the main language of the document as the `ocr_lang`
parameter, and you need to have installed the tesseract language files for that language.
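As a sketch, and assuming both parameters are accepted directly by the constructor, usage might look like:
```
from ferenda import PDFReader

# a non-PDF office document, converted via soffice before reading
reader = PDFReader(filename="downloaded/report.doc",
                   workdir="intermediate",
                   convert_to_pdf=True)

# a scanned PDF without OCR information, run through tesseract
reader = PDFReader(filename="downloaded/scanned.pdf",
                   workdir="intermediate",
                   ocr_lang="eng")
```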
#### Analyzing PDF documents[¶](#analyzing-pdf-documents)
When processing a PDF file, the information contained in eg a
[`Textbox`](index.html#ferenda.pdfreader.Textbox) object (position, size, font)
is useful to determine what kind of content it might be, eg. if it’s set in a header-like font, it probably signals the start of a section,
and if it’s a digit-like text set in a small font outside of the main content area, it’s probably a page number.
Information about eg page margins, header styles etc can be hardcoded in your processing code, but the companion class [`PDFAnalyzer`](index.html#ferenda.PDFAnalyzer) can also be used to statistically analyze a complete document and then make educated guesses about these metrics. It can also output histogram plots and an annotated version of the original PDF file with lines marking the identified margins,
styles and text chunks (given a provided “glue” function identical to the one provided to [`textboxes()`](index.html#ferenda.PDFReader.textboxes)).
The class is designed to be overridden if your document has particular rules about eg. header styles or additional margin metrics.
Grouping documents with facets[¶](#grouping-documents-with-facets)
---
A collection of documents can be arranged in a set of groups, such as by year of publication, by document author, or by keyword. With Ferenda,
each such method of grouping is described in the form of a
[`Facet`](index.html#ferenda.Facet). By providing a list of Facet objects in its [`facets()`](index.html#ferenda.DocumentRepository.facets) method, your docrepo can specify multiple ways of arranging the documents it’s handling. These facets are used to construct a static Table of contents for your site, as well as to create Atom feeds of all documents and to define the fields available for querying when using the REST API.
A facet object is initialized with a set of parameters that, taken together, define the method of grouping. These include the RDF predicate that contains the data used for grouping, the datatype to be used for that data, functions (or other callables) that sorts the data into discrete groups, and other parameters that affect eg. the sorting order or if a particular facet is used in a particular context.
### Applying facets[¶](#applying-facets)
Facets are used in several different contexts (see below) but the general steps for applying them are similar. First, all the data that might be needed by the total set of facets is collected. This is normally done by querying the triple store for it. Each facet contains information about which RDF predicate holds the data it needs, so that this data can be included in the query.
Once this set of data is retrieved, as a giant table with one row for each resource (document), each facet is used to create a set of groups and place each document in zero or more of these groups.
### Selectors and identificators[¶](#selectors-and-identificators)
The grouping is primarily done through a *selector function*. The selector function receives three arguments:
* a dict with some basic information about one document (corresponding to one row),
* the name of the current facet (binding), and
* optionally some repo-dependent extra data in the form of an RDF graph.
It should return a single string, which should be a human-readable label for a grouping. The selector is called once for every document in the docrepo, and each document is sorted into one (or more, see below) group identified by that string. As a simple example, a selector may group documents into years of publication by finding the date of the `dcterms:issued` property and extracting the year part of it. The string returned by the selector should be suitable for end-user display.
Each facet also has a similar function called the *identificator function*. It receives the same arguments as the selector function,
but should return a string that is well suited for eg. a URI fragment,
ie. one that does not contain spaces or non-ASCII characters.
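A minimal sketch of such a pair, assuming the row value for the binding is an ISO 8601 date string and that the argument names match the description above:
```
def year_selector(row, binding, resource_graph=None):
    # eg row[binding] == "2014-01-04" -> "2014", a human-readable
    # group label
    return str(row[binding])[:4]

def year_identificator(row, binding, resource_graph=None):
    # the same value happens to be usable as a URI fragment as well
    return str(row[binding])[:4]
```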
The [`Facet`](index.html#ferenda.Facet) class has a number of classmethods that can act as selectors and/or identificators.
### Contexts where facets are used[¶](#contexts-where-facets-are-used)
#### Table of contents[¶](#table-of-contents)
Each docrepo will have their own set of Table of contents pages. The TOC for a docrepo will contain one set of pages for each defined facet, unless `use_for_toc` is set to `False`.
#### Atom feeds[¶](#atom-feeds)
Each docrepo will have a set of feedsets, where each feedset is based on a facet (only those that have the property `use_for_feed` set to
`True`). The structure of each feedset will mirror the structure of each set of TOC pages, and re-uses the same selector and identificator methods. It makes sense to have a separate feed for eg. each publisher or subject matter in a repository that comprises a reasonable number of publishers and subject matters (using `dcterms:publisher` or
`dcterms:subject` as the base for facets), but it does not make much sense to eg. have a feed for all documents published in 1975 (using
`dcterms:published` as the base for a facet). Therefore, the default value for `use_for_feed` is `False`.
Furthermore, a “main” feedset with a single feed containing all documents is also constructed.
The feeds are always sorted by the updated property (most recent updated first), taken from the corresponding
[`DocumentEntry`](index.html#ferenda.DocumentEntry) object.
#### The fulltext index[¶](#the-fulltext-index)
The metadata that each facet uses is stored as a separate field in the fulltext index. Each facet can specify exactly how its field should be indexed (ie if the field should be boosted in any particular way). Note that the data stored in the fulltext index is not passed through the selector function; the original RDF data is stored as-is.
#### The ReST API[¶](#the-rest-api)
The ReST API uses all defined facets for all repos simultaneously. This means that you can query eg. all documents published in a certain year, and get results from all docrepos. This requires that the defined facets don’t clash, eg. that you don’t have two facets based on `dcterms:publisher` where one uses URI references and the other uses literal strings.
### Grouping a document in several groups[¶](#grouping-a-document-in-several-groups)
If a docrepo uses a facet that has `multiple_values` set to
`True`, it’s possible for that facet to categorize the document in more than one group (a typical use case is documents that have multiple
`dcterms:subject` keywords, or articles that have multiple
`dcterms:creator` authors).
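A minimal sketch of such a facet definition (assuming `dcterms:subject` keywords):
```
def facets(self):
    from ferenda import Facet
    # one document may have several dcterms:subject keywords and will
    # then appear in one group per keyword
    return [Facet(self.ns['dcterms'].subject, multiple_values=True)]
```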
### Combining facets from different docrepos[¶](#combining-facets-from-different-docrepos)
Facets that map to the same fulltextindex field must be equal. The rules for equality: if the `rdftype`, the `dimension_type`, the
`dimension_label` and the `selector` are equal, then the facets are equal. `selector` functions are only equal if they are the same function object, ie it’s not enough that they are two functions that work identically.
Customizing the table(s) of content[¶](#customizing-the-table-s-of-content)
---
In order to make the processed documents in a docrepo accessible for a website visitors, some sort of index or table of contents (TOC) that lists all available documents must be created. It’s often helpful to create different lists depending on different facets of the information in documents, eg. to sort document by title, publication date, document status, author and similar properties.
Ferenda contains a number of methods that help with this task. The general process has three steps:
1. Determine the criteria for how to group and sort all documents
2. Query the triplestore for basic information about all documents in the docrepo
3. Apply these criteria on the basic information from the database
It should be noted that you don’t need to do anything in order to get a very basic TOC. As long as your
[`parse()`](index.html#ferenda.DocumentRepository.parse) step has extracted a
`dcterms:title` string and optionally a `dcterms:issued` date for each document, you’ll get basic “Sorted by title” and “Sorted by date of publication” TOCs for free.
### Defining facets for grouping and sorting[¶](#defining-facets-for-grouping-and-sorting)
A facet in this case is a method for grouping a set of documents into distinct categories, then sorting the documents, as well as the categories themselves.
Each facet is represented by a [`Facet`](index.html#ferenda.Facet) object. If you want to customize the table of contents, you have to provide a list of these by overriding
[`facets()`](index.html#ferenda.DocumentRepository.facets).
The basic way to do this is to initialize each Facet object with an RDF predicate. Ferenda has some basic knowledge about some common predicates and knows how to construct sensible Facet objects for them – ie. if you specify the predicate `dcterms:issued`, you get a Facet object that groups documents by year of publication and sorts each group by date of publication.
```
def facets(self):
from ferenda import Facet
return [Facet(self.ns['dcterms'].issued),
Facet(self.ns['dcterms'].identifier)]
```
You can customize the behaviour of each Facet by providing extra arguments to the constructor.
The `label` and `pagetitle` parameters are useful to control the headings and labels for the generated pages. They should hopefully be self-explanatory.
The `selector` and `key` parameters should be functions (or any other callable) that accept three arguments: a dictionary of string values, one string which is generally a key in that dictionary, and one rdflib graph containing whatever
[`commondata`](index.html#ferenda.DocumentRepository.commondata) the repo defines. These functions are called once for each row in the result set generated in the next step (see below), with the contents of that row. They should each return a single string value. The `selector` function should return the label of a group that the document belongs to, i.e. the initial letter of the title, or the year of a publication date. The `key` function should return a value that will be used for sorting, i.e. for document titles it could return the title without any leading “The”, lowercased, spaces removed etc. See also [Grouping documents with facets](index.html#document-facets).
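As an illustrative sketch (the `pagetitle` template and the lambda bodies are just examples; the lambdas follow the three-argument signature described above), a title-based facet could be customized like this:
```
def facets(self):
    from ferenda import Facet
    return [Facet(self.ns['dcterms'].title,
                  label="Sorted by title",
                  pagetitle='Documents starting with "%(selected)s"',
                  # group by the initial letter of the title
                  selector=lambda row, binding, graph: row[binding][0].lower(),
                  # sort case-insensitively within each group
                  key=lambda row, binding, graph: row[binding].lower())]
```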
### Getting information about all documents[¶](#getting-information-about-all-documents)
The next step is to perform a single SELECT query against the triplestore that retrieves a single large table, where each document is a row with a number of properties.
(This is different from the case of getting information related to a particular document; in that case, a CONSTRUCT query that retrieves a small RDF graph is used.)
Your list of Facet objects returned by
[`facets()`](index.html#ferenda.DocumentRepository.facets) is used to automatically select all data from the SPARQL store.
### Making the TOC pages[¶](#making-the-toc-pages)
The final step is to apply these criteria to the table of document properties in order to create a set of static HTML5 pages. This is in turn done in three different sub-steps, none of which you’ll have to override.
The first sub-step, [`toc_pagesets()`](index.html#ferenda.DocumentRepository.toc_pagesets),
applies the defined criteria to the data fetched from the triple store to calculate the complete set of TOC pages needed for each criteria
(in the form of a [`TocPageset`](index.html#ferenda.TocPageset) object, filled with
[`TocPage`](index.html#ferenda.TocPage) objects). If your criteria groups documents by year of publication date, this method will yield one page for every year that at least one document was published in.
The next sub-step,
[`toc_select_for_pages()`](index.html#ferenda.DocumentRepository.toc_select_for_pages), applies the criteria on the data again, and adds each document to the appropriate
[`TocPage`](index.html#ferenda.TocPage) object.
The final sub-step transforms each of these [`TocPage`](index.html#ferenda.TocPage)
objects into a HTML5 file. In the process, the method
[`toc_item()`](index.html#ferenda.DocumentRepository.toc_item) is called for every single document listed on every single TOC page. This method controls how each document is presented when laid out. It’s called with a dict and a binding (same as used on the `selector` and `key`
functions), and is expected to return a list of
[`elements`](index.html#module-ferenda.elements) objects.
As an example, if you want to group by `dcterms:identifier`, but present each document with `dcterms:identifier` + `dcterms:title`:
```
def toc_item(self, binding, row):
# note: look at binding to determine which pageset is being
# constructed in case you want to present documents in
# different ways depending on that.
from ferenda.elements import Link
return [row['identifier'] + ": ",
Link(row['title'],
uri=row['uri'])]
```
The generated TOC pages automatically get a visual representation of each calculated TocPageset in the left navigational column.
### The first page[¶](#the-first-page)
The main way in to each docrepo’s set of TOC pages is through the tabs in the main header. That link goes to a special copy of the first page in the first pageset. The order of criteria specified by
[`facets()`](index.html#ferenda.DocumentRepository.facets) is therefore important.
Customizing the news feeds[¶](#customizing-the-news-feeds)
---
During the `news` step, all documents in a docrepo are published in one or more feeds. Each feed is made available in both Atom and HTML formats. You can control which feeds are created, and which documents are included in each feed, by the facets defined for your repo. The process is similar to defining criteria for the TOC pages.
The main differences are:
* Most properties/RDF predicates of a document are not suitable as facets for news feed (it makes little sense to have a feed for eg. `dcterms:title` or `dcterms:issued`). By default, only
`rdf:type` and `dcterms:publisher` based facets are used for news feed generation. You can control this by specifying the `use_for_feed`
constructor argument.
* The dict that is passed to the selector and identificator functions contains extra fields from the corresponding
[`DocumentEntry`](index.html#ferenda.DocumentEntry) object. Particularly, the `updated`
value might be used by your key function in order to sort all entries by last-updated-date. The `summary` value might be used to contain a human-readable summary/representation of the entire document.
* Each row is passed through the [`news_item()`](index.html#ferenda.DocumentRepository.news_item) method. You may override this in order to change the `title` or `summary` of each feed entry for the particular feed being constructed (as determined by the `binding` argument); see the sketch after this list.
* A special feed, containing all entries within the docrepo, is always created.
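A minimal sketch of such a `news_item()` override (assuming the method receives the binding name and the [`DocumentEntry`](index.html#ferenda.DocumentEntry) object and returns the possibly modified entry; the binding name checked for is just an example):
```
def news_item(self, binding, entry):
    # prefix entry titles in the publisher-based feeds, leave other
    # feeds untouched
    if binding == "dcterms_publisher":
        entry.title = "New document: " + entry.title
    return entry
```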
The WSGI app[¶](#the-wsgi-app)
---
All ferenda projects contain a built-in web application. This app provides navigation, document display and search.
### Running the web application[¶](#running-the-web-application)
During development, you can just run `ferenda-build.py runserver`. This starts up a single-threaded web server in the foreground with the web application, by default accessible as `http://localhost:8000/`.
You can also run the web application under any [WSGI](http://wsgi.readthedocs.org/en/latest/) server, such as [mod_wsgi](http://code.google.com/p/modwsgi/), [uWSGI](https://uwsgi-docs.readthedocs.org/en/latest/index.html) or
[Gunicorn](http://gunicorn.org/). `ferenda-setup` creates a file called `wsgi.py` alongside `ferenda-build.py` which is used to serve the ferenda web app using WSGI. This is the contents of that file:
```
import os
from ferenda.manager import make_wsgi_app

inifile = os.path.join(os.path.dirname(__file__), "ferenda.ini")
application = make_wsgi_app(inifile=inifile)
```
#### Apache and mod_wsgi[¶](#apache-and-mod-wsgi)
In your httpd.conf:
```
WSGIScriptAlias / /path/to/project/wsgi.py
WSGIPythonPath /path/to/project
<Directory /path/to/project>
<Files wsgi.py>
Order deny,allow
Allow from all
</Files>
</Directory>
```
The ferenda web app consists mainly of static files. Only search and API requests are dynamically handled. By default though, all static files are served by the ferenda web app. This is simple to set up, but isn’t optimal performance-wise.
#### Gunicorn[¶](#id1)
Just run `gunicorn wsgi:application`
### URLs for retrieving resources[¶](#urls-for-retrieving-resources)
In keeping with [Linked Data principles](http://www.w3.org/DesignIssues/LinkedData.html), all URIs for your documents should be retrievable. By default, all URIs for your documents start with `http://localhost:8000/res`
(e.g. `http://localhost:8000/res/rfc/4711` – this is controlled by the `url` parameter in `ferenda.ini`). These URIs are retrievable when you run the built-in web server during development, as described above.
#### Document resources[¶](#document-resources)
For each resource, use the `Accept` header to retrieve different versions of it:
* `curl -H "Accept: text/html" http://localhost:8000/res/rfc/4711`
returns `rfc/generated/4711.html`
* `curl -H "Accept: application/xhtml+xml"
http://localhost:8000/res/rfc/4711` returns
`rfc/parsed/4711.xhtml`
* `curl -H "Accept: application/rdf+xml"
http://localhost:8000/res/rfc/4711` returns
`rfc/distilled/4711.rdf`
* `curl -H "Accept: text/turtle" http://localhost:8000/res/rfc/4711`
returns `rfc/distilled/4711.rdf`, but in Turtle format
* `curl -H "Accept: text/plain" http://localhost:8000/res/rfc/4711`
returns `rfc/distilled/4711.rdf`, but in NTriples format
You can also get *extended information* about a single document in various RDF flavours. This extended information includes everything that [`construct_annotations()`](index.html#ferenda.DocumentRepository.construct_annotations)
returns, i.e. information about documents that refer to this document.
* `curl -H "Accept: application/rdf+xml"
http://localhost:8000/res/rfc/4711/data` returns a RDF/XML combination of `rfc/distilled/4711.rdf` and
`rfc/annotation/4711.grit.xml`
* `curl -H "Accept: text/turtle"
http://localhost:8000/res/rfc/4711/data` returns the same in Turtle format
* `curl -H "Accept: text/plain"
http://localhost:8000/res/rfc/4711/data` returns the same in NTriples format
* `curl -H "Accept: application/json"
http://localhost:8000/res/rfc/4711/data` returns the same in JSON-LD format.
#### Dataset resources[¶](#dataset-resources)
Each docrepo exposes information about the data it contains through its dataset URI. This is a single URI (controlled by
[`dataset_uri()`](index.html#ferenda.DocumentRepository.dataset_uri)) which can be queried in a similar way as the document resources above:
* `curl -H "Accept: application/html" http://localhost/dataset/rfc`
returns a HTML view of a Table of Contents for all documents (see
[Customizing the table(s) of content](index.html#document-toc))
* `curl -H "Accept: text/plain" http://localhost/dataset/rfc`
returns `rfc/distilled/dump.nt` which contains all RDF statements for all documents in the repository.
* `curl -H "Accept: application/rdf+xml"
http://localhost/dataset/rfc` returns the same, but in RDF/XML format.
* `curl -H "Accept: text/turtle" http://localhost/dataset/rfc`
returns the same, but in turtle format.
#### File extension content negotiation[¶](#file-extension-content-negotiation)
In some environments, it might be difficult to set the Accept header. Therefore, it’s also possible to request different versions of a resource using a file extension suffix. Ie. requesting
`http://localhost:8000/res/base/123.ttl` gives the same result as requesting the resource `http://localhost:8000/res/base/123` using the `Accept: text/turtle` header. The following extensions can be used
| Content-type | Extension |
| --- | --- |
| application/xhtml+xml | .xhtml |
| application/rdf+xml | .rdf |
| text/turtle | .ttl |
| text/plain | .nt |
| application/json | .json |
See also [The ReST API for querying](index.html#document-restapi).
The ReST API for querying[¶](#the-rest-api-for-querying)
---
Ferenda tries to adhere to Linked Data principles, which makes it easy to explain how to get information about any individual document or any complete dataset (see [URLs for retrieving resources](index.html#urls-used)). Sometimes it’s desirable to query for all documents matching a particular criteria, including full text search. Ferenda has a simple API, based on the `rinfo-service`
component of [RDL](https://github.com/rinfo/rdl), and inspired by
[Linked data API](https://code.google.com/p/linked-data-api/wiki/Specification), that enables you to do that. This API only provides search/select operations that returns a result list. For information about each individual result in that list, use the methods described in
[URLs for retrieving resources](index.html#urls-used).
Note
Much of the things described below are also possible to do in pure SPARQL. Ferenda does not expose any open SPARQL endpoints to the world, though. But if you find the below API lacking in some aspect, it’s certainly possible to directly expose your chosen triplestores SPARQL endpoint (as long as you’re using Fuseki or Sesame) to the world.
The default endpoint to query is your main URL + `/api/`,
eg. `http://localhost:8000/api/`. The requests always use GET and encode their parameters in the URL, and the responses are always in JSON format.
### Free text queries[¶](#free-text-queries)
The simplest form of query is a free text query that is run against all text of all documents. Use the parameter `q`,
eg. `http://localhost:8000/api/?q=tail` returns all documents
(and document fragments) containing the word “tail”.
### Result lists[¶](#result-lists)
The result of a query will be a JSON document containing some general properties of the result, and a list of result items, eg:
```
{
"current": "/myapi/?q=tail",
"duration": null,
"items": [
{
"dcterms_identifier": "123(A)",
"dcterms_issued": "2014-01-04",
"dcterms_publisher": {
"iri": "http://example.org/publisher/A",
"label": "http://example.org/publisher/A"
},
"dcterms_title": "Example",
"matches": {
"text": "<em class=\"match\">tail</em> end of the main document"
},
"rdf_type": "http://purl.org/ontology/bibo/Standard",
"iri": "http://example.org/base/123/a"
}
],
"itemsPerPage": 10,
"startIndex": 0,
"totalResults": 1
}
```
Each result item contains all fields that have been indexed (as specified by your docrepos’ facets, see [Grouping documents with facets](index.html#document-facets)), the document URI (as the field `iri`) and optionally a field `matches` that provides a snippet of the matching text.
### Parameters[¶](#parameters)
Any indexed property, as defined by your facets, can be used for querying. The parameter is the same as the qname for the RDF predicate with
`_` instead of `:`, eg. to search all documents that have
`dcterms:publisher` set to `http://example.org/publisher/A`, use
`http://localhost:8000/api/?dcterms_publisher=http%3A%2F%2Fexample.org%2Fpublisher%2FA`
You can use * as a wildcard for any string data, eg. the above could be shortened to
`http://localhost:8000/api/?dcterms_publisher=*%2Fpublisher%2FA`.
If you have a facet with a set `dimension_label`, you can use that label directly as a parameter, eg `http://localhost:8000/api/?aprilfools=true`.
### Paging[¶](#paging)
By default, the result list only contains 10 results. You can inspect the properties `startIndex` and `totalResults` of the response to find out if there are more results, and use the special parameter
`_page` to request subsequent pages of results. You can also request a different length of the result list through the `_pageSize`
parameter.
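For example, assuming `_page` is zero-based like `startIndex`, the second page of twenty results for the query above could be requested with:
```
http://localhost:8000/api/?q=tail&_page=1&_pageSize=20
```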
### Statistics[¶](#statistics)
By requesting the special resource `;stats`, eg
`http://localhost:8000/api/;stats`, you can get a statistics view over all documents in all your docrepos for each of your defined facets, including the number of documents for each value of its selector, eg:
```
{
"type": "DataSet",
"slices": [
{
"dimension": "rdf_type",
"observations": [
{"count": 3,
"term": "bibo:Standard"}
]
},
{
"dimension": "dcterms_publisher",
"observations": [ {
"count": 1,
"ref": "http://example.org/publisher/A"
}, {
"count": 2,
"ref": "http://example.org/publisher/B"
} ]
}, {
"dimension": "dcterms_issued",
"observations": [ {
"count": 1,
"year": "2013"
}, {
"count": 2,
"year": "2014"
} ]
} ]
}
```
You can also get the same information for the documents in any result list by setting the special parameter `_stats=on`.
### Ranges[¶](#ranges)
For some parameters, particularly those that use datetime values, it’s useful to specify ranges instead of exact values. By prefixing the parameter name with `min-`, `max-` or `year-`, it’s possible to do that,
eg. `http://localhost:8000/api/?min-dcterms_issued=2012-04-01` to retrieve all documents that have a dcterms:issued later than 2012-04-01, or `http://localhost:8000/api/?year-dcterms_issued=2012`
to retrieve all documents that are dcterms:issued during 2012.
### Support resources[¶](#support-resources)
The special resources `common.json` and `terms.json`
(eg. `http://localhost:8000/api/common.json` and
`http://localhost:8000/api/terms.json`) contain all the extra data
(see [Custom common data](index.html#custom-common-data)) and ontologies (see
[Custom ontologies](index.html#custom-ontologies)) that your repositories use, in JSON-LD format. You can use these to display user-friendly labels for properties and things in your application.
### Legacy mode[¶](#legacy-mode)
Ferenda can be made directly compatible with the API used by
`rinfo-service` (mentioned above) by activating the setting
`legacyapi`, eg by setting `legacyapi = True` in `ferenda.ini` or using the option `--legacyapi` on the command line.
Note that this setting is used both during the `makeresources` step as well as when serving the API eg with the `runserver` command. If you want to play with this setting, you’ll need to re-run
`makeresources --force` with this enabled.
Running `makeresources` with this setting enabled also installs an API explorer app, taken from `rinfo-service`. You can try it out at
`http://localhost:8000/rsrc/ui/`.
Setting up external databases[¶](#setting-up-external-databases)
---
Ferenda stores data in three substantially different ways:
* Documents are stored in the file system
* RDF Metadata is stored in a [triple store](http://en.wikipedia.org/wiki/Triplestore)
* Document text is stored in a fulltext search engine.
There are many capable and performant triple stores and fulltext search engines available, and ferenda supports a few of them. The default choice for both are embedded solutions (using RDFLib + SQLite for a triple store and Whoosh for a fulltext search engine) so that you can get a small system going without installing and configuring additional server processes. However, these choices do not work well with medium to large datasets, so when you start feeling that indexing and searching is getting slow, you should run an external triplestore and an external fulltext search engine.
If you’re using the project framework, you set the configuration values `storetype` and `indextype` to new values. You’ll find that the `ferenda-setup` tool creates a `ferenda.ini` that specifies
`storetype` and `indextype`, based on whether it can find Fuseki,
Sesame and/or ElasticSearch running on their default ports on localhost. You still might have to do extra configuration,
particularly if you’re using Sesame as a triple store.
If you setup any of the external databases after running
`ferenda-setup`, or you want to use some other configuration than what `ferenda-setup` selected for you, you can still set the configuration values in `ferenda.ini` by editing the file as described below.
If you are running any of the external databases, but in a non-default location (including remote locations) you can set the environment variables `FERENDA_TRIPLESTORE_LOCATION` and/or
`FERENDA_FULLTEXTINDEX_LOCATION` to the full URL before running
`ferenda-setup`.
### Triple stores[¶](#triple-stores)
There are four choices.
#### RDFLib + SQLite[¶](#rdflib-sqlite)
In `ferenda.ini`:
```
[__root__]
storetype = SQLITE
storelocation = data/ferenda.sqlite # single file
storerepository = <projectname>
```
This is the simplest way to get up and running, requiring no configuration or installs on any platform.
#### RDFLib + Sleepycat (aka `bsddb`)[¶](#rdflib-sleepycat-aka-bsddb)
In `ferenda.ini`:
```
[__root__]
storetype = SLEEPYCAT
storelocation = data/ferenda.db # directory
storerepository = <projectname>
```
This requires that `bsddb` (part of the standard library for python 2) or `bsddb3` (separate package) is available and working (which can be a bit of a pain on many platforms). Furthermore it’s less stable and slower than RDFLib + SQLite, so it can’t really be recommended. But since it’s the only persistent storage directly supported by RDFLib, it’s supported by Ferenda as well.
#### Sesame[¶](#sesame)
In `ferenda.ini`:
```
[__root__]
storetype = SESAME
storelocation = http://localhost:8080/openrdf-sesame
storerepository = <projectname>
```
[Sesame](http://www.openrdf.org/index.jsp) is a framework and a set of java web applications that normally runs within a Tomcat application server. If you’re comfortable with Tomcat and servlet containers you can get started with this quickly, see their [installation instructions](http://www.openrdf.org/doc/sesame2/users/ch06.html). You’ll need to install both the actual Sesame Server and the OpenRDF workbench.
After installing it and configuring `ferenda.ini` to use it, you’ll need to use the OpenRDF workbench app (at `http://localhost:8080/openrdf-workbench` by default) to create a new repository. The recommended settings are:
```
Type: Native Java store
ID: <projectname> # eg same as storerepository in ferenda.ini
Title: Ferenda repository for <projectname>
Triple indexes: spoc,posc,cspo,opsc,psoc
```
It’s much faster than the RDFLib-based stores and is fairly stable (although Ferenda’s usage patterns seem to sometimes make simple operations take a disproportionate amount of time).
#### Fuseki[¶](#fuseki)
In `ferenda.ini`:
```
[__root__]
storetype = FUSEKI
storelocation = http://localhost:3030
storerepository = ds
```
[Fuseki](http://jena.apache.org/documentation/serving_data/) is a simple java server that implements most SPARQL standards and can be run [without any complicated setup](http://jena.apache.org/documentation/serving_data/#getting-started-with-fuseki). It can keep data purely in memory or store it on disk. The above configuration works with the default configuration of Fuseki - just download it and run `fuseki-server`
Fuseki seems to be the fastest triple store that Ferenda supports, at least with Ferendas usage patterns. Since it’s also the easiest to set up, it’s the recommended triple store once RDFLib + SQLite isn’t enough.
### Fulltext search engines[¶](#fulltext-search-engines)
There are two choices.
#### Whoosh[¶](#whoosh)
In `ferenda.ini`:
```
[__root__]
indextype = WHOOSH
indexlocation = data/whooshindex
```
Whoosh is an embedded python fulltext search engine, which requires no setup (it’s automatically installed when installing ferenda with `pip` or `easy_install`), works reasonably well with small to medium amounts of data, and performs quick searches. However, once the index grows beyond a few hundred MB, indexing of new material begins to slow down.
#### Elasticsearch[¶](#elasticsearch)
In `ferenda.ini`:
```
[__root__]
indextype = ELASTICSEARCH
indexlocation = http://localhost:9200/ferenda/
```
Elasticsearch is a distributed fulltext search engine in java which can run in a distributed fashion and which is accessed through a simple JSON/REST API. It’s easy to setup – just download it and run `bin/elasticsearch` as per the [instructions](http://www.elasticsearch.org/guide/reference/setup/installation/). Ferenda’s support for Elasticsearch is new and not yet stable, but it should be able to handle much larger amounts of data.
Testing your docrepo[¶](#testing-your-docrepo)
---
The module [`ferenda.testutil`](index.html#module-ferenda.testutil) contains an assortment of classes and functions that can be useful when testing code written against the Ferenda API.
### Extra assert methods[¶](#extra-assert-methods)
The [`FerendaTestCase`](index.html#ferenda.testutil.FerendaTestCase) is intended to be used by your [`unittest.TestCase`](https://docs.python.org/3/library/unittest.html#unittest.TestCase) based testcases. Your testcase inherits from both `TestCase` and `FerendaTestCase`, and thus gains new assert methods:
| Method | Description |
| --- | --- |
| [`assertEqualGraphs()`](index.html#ferenda.testutil.FerendaTestCase.assertEqualGraphs) | Compares two [`Graph`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph) objects |
| [`assertEqualXML()`](index.html#ferenda.testutil.FerendaTestCase.assertEqualXML) | Compares two XML documents (in string or `lxml.etree` form) |
| [`assertEqualDirs()`](index.html#ferenda.testutil.FerendaTestCase.assertEqualDirs) | Compares the files and contents of those files in two directories |
| [`assertAlmostEqualDatetime()`](index.html#ferenda.testutil.FerendaTestCase.assertAlmostEqualDatetime) | Compares two datetime objects to a specified precision |
### Creating parametric tests[¶](#creating-parametric-tests)
A parametric test case is a single unit of test code that, during test execution, is run several times with different arguments
(parameters). The function [`parametrize()`](index.html#ferenda.testutil.parametrize)
creates a single new testcase, based upon a template method, and binds the specified parameters to the template method. Each testcase is uniquely named based on the given parameters. Since each invocation creates a new test case method, specific parameters can be tested in isolation, and the normal unittest test runner reports exactly which parameters the test succeeds or fails with.
Often, the parameters to the test are best stored in files. The function [`file_parametrize()`](index.html#ferenda.testutil.file_parametrize) creates one testcase, based upon a template method, for each file found in a specified directory.
### RepoTester[¶](#repotester)
Functional tests are written to test a specific functionality of a software system as a whole. This means that functional tests exercise a larger portion of the code and are focused on what the behaviour
(output) of the code should be, given a particular input. A typical repository has at least three large units of code that benefit from functional-level testing: code that performs downloading of documents,
code that extracts metadata from downloaded documents, and code that generates structured XHTML documents from the downloaded documents.
The [`RepoTester`](index.html#ferenda.testutil.RepoTester) contains generic,
parametric tests for all three of these. In order to use them, you create test data in some directory of your choice, create a subclass of `RepoTester` specifying the location of your test data and the docrepo class you want to test, and finally call
[`parametrize_repotester()`](index.html#ferenda.testutil.parametrize_repotester) in your top-level test code to set up one test for each test data file that you’ve created.
```
from ferenda.testutil import RepoTester, parametrize_repotester
from ferenda.sources.tech import RFC
class RFCTester(RepoTester):
repoclass = RFC
docroot = "myrepo/tests/files"
parametrize_repotester(RFCTester)
```
### Download tests[¶](#download-tests)
See [`download_test()`](index.html#ferenda.testutil.RepoTester.download_test).
For each download test, you need to create a JSON file under the
`source` directory of your docroot, eg:
`myrepo/tests/files/source/basic.json` that should look something like this:
```
{
"http://www.ietf.org/download/rfc-index.txt": {
"file":"index.txt",
"content-type":"text/plain"
},
"http://tools.ietf.org/rfc/rfc6953.txt": {
"file":"rfc6953.txt",
"content-type": "text/plain",
"expect": "downloaded/6953.txt"
}
}
```
Each key of the JSON object should be a URL, and the value should be another JSON object, that should have the key `file` that specifies the relative location of a file that corresponds to that URL.
When each download test runs, calls to requests.get et al are intercepted and the given file is returned instead. This allows you to run the download tests without hitting the remote server.
Each JSON object might also have the key `expect`, which indicates that the URL represents a document to be stored. The value specifies the location, underneath the
`downloaded` directory, where the download method should store the corresponding file. In the above example, the index file is not expected to be stored, while the RFC document is expected to be stored as `downloaded/6953.txt`.
If you want to test your download code under any specific condition,
you can specify a special `@settings` key. Each key and sub-key underneath this will be set directly on the repo object being tested. For example, this sets the `next_sfsnr` key of the
[`config`](index.html#ferenda.DocumentRepository.config) object on the repo to
`2014:913`.
```
{
"@settings": {
"config": {"next_sfsnr": "2014:913"}
}
}
```
#### Recording download tests[¶](#recording-download-tests)
If the environment variable `FERENDA_SET_TESTFILE` is set, the download code runs like normal (calls to requests.get et al are not intercepted) and instead each accessed URL is stored in the JSON file. URL accesses that result in downloaded files result in
`expect` entries in the JSON file. This allows you to record the behaviour of existing download code, either to examine it or just to make sure it doesn’t change inadvertently.
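A small sketch of driving this from Python rather than from the shell (the test module name is hypothetical, and any non-empty value is assumed to enable recording):

```
import os
import unittest

os.environ["FERENDA_SET_TESTFILE"] = "1"  # assumption: any non-empty value enables recording
unittest.main(module="test_rfc", exit=False)  # "test_rfc" is a hypothetical test module
```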
### Distill and parse tests[¶](#distill-and-parse-tests)
See [`distill_test()`](index.html#ferenda.testutil.RepoTester.distill_test) and
[`parse_test()`](index.html#ferenda.testutil.RepoTester.parse_test).
To create a distill or parse test, you first need to create whatever files that your parse methods will need in the `download` directory of your docroot.
Both [`distill_test()`](index.html#ferenda.testutil.RepoTester.distill_test) and
[`parse_test()`](index.html#ferenda.testutil.RepoTester.parse_test) will run your parse method, and then compare it to expected results. For distill tests,
the expected result should be placed under
`distilled/[basefile].ttl`. For parse tests, the expected result should be placed under `parsed/[basefile].xhtml`.
#### Recording distill/parse tests[¶](#recording-distill-parse-tests)
If the environment variable `FERENDA_SET_TESTFILE` is set, the parse code runs like normal and the result of the parse is stored in eg. `distilled/[basefile].ttl` or `parsed/[basefile].xhtml`. This is a quick way of recording existing behaviour as a baseline for your tests.
### Py23DocChecker[¶](#py23docchecker)
[`Py23DocChecker`](index.html#ferenda.testutil.Py23DocChecker) is a small helper to enable you to write doctest-style tests that run unmodified under python 2 and 3. The main problem with cross-version compatible doctests is with functions that return (unicode) strings. These are formatted `u'like this'` in Python 2, and `'like this'` in Python 3. Writing doctests for functions that return unicode strings requires you to choose one of these syntaxes, and the result will fail on the other platform. By strictly running doctests from within the
[`unittest`](https://docs.python.org/3/library/unittest.html#module-unittest) framework through the `load_tests` mechanism, and using this checker when loading your doctests, the tests will work even under Python 2:
```
import doctest
import mymodule
from ferenda.testutil import Py23DocChecker

def load_tests(loader, tests, ignore):
    tests.addTests(doctest.DocTestSuite(mymodule, checker=Py23DocChecker()))
    return tests
```
### testparser[¶](#testparser)
[`testparser()`](index.html#ferenda.testutil.testparser) is a simple helper that tests
[`FSMParser`](index.html#ferenda.FSMParser) based parsers.
Advanced topics[¶](#advanced-topics)
---
### Composite docrepos[¶](#composite-docrepos)
In some cases, a document collection may be available from multiple sources, with varying degrees of completeness and/or quality. For example, in a collection of US patents, some patents may be available in structured XML with good metadata through an easy-to-use API, some in tag-soup style HTML with no metadata, requiring screenscraping, and some in the form of TIFF files that you scanned yourself. The implementation of both download() and parse() will differ wildly for these sources. You’ll have something like this:
```
from ferenda import DocumentRepository, CompositeRepository
from ferenda.decorators import managedparsing

class XMLPatents(DocumentRepository):
    alias = "patxml"
    def download(self, basefile=None):
        download_from_api()
    @managedparsing
    def parse(self, doc):
        return self.transform_patent_xml_to_xhtml(doc)

class HTMLPatents(DocumentRepository):
    alias = "pathtml"
    def download(self, basefile=None):
        screenscrape()
    @managedparsing
    def parse(self, doc):
        return self.analyze_tagsoup(doc)

class ScannedPatents(DocumentRepository):
    alias = "patscan"
    # Assume that we, when we scanned the documents, placed them in their
    # correct place under data/patscan/downloaded
    def download(self, basefile=None): pass
    @managedparsing
    def parse(self, doc):
        x = self.ocr_and_structure(doc)
        return True
```
But since the results of all three parse() implementations are XHTML1.1+RDFa documents (possibly with varying degrees of data fidelity), the implementation of generate() will be substantially the same. Furthermore, you probably want to present a unified document collection to the end user, presenting documents derived from structured XML if they’re available, documents derived from tagsoup HTML if an XML version wasn’t available, and finally documents derived from your scanned documents if nothing else is available.
The class `CompositeRepository` makes this possible. You specify a number of subordinate docrepo classes using the `subrepos` class property.
```
class CompositePatents(CompositeRepository):
    alias = "pat"
    # Specify the classes in order of preference for parsed documents.
    # Only if XMLPatents does not have a specific patent will HTMLPatents
    # get the chance to provide it through its parse method
    subrepos = XMLPatents, HTMLPatents, ScannedPatents

    def generate(self, basefile, otherrepos=[]):
        # Optional code to transform parsed XHTML1.1+RDFa documents
        # into browser-ready HTML5, regardless of whether these are
        # derived from structured XML, tagsoup HTML or scanned
        # TIFFs. If your parse() method can make these parsed
        # documents sufficiently alike and generic, you might not need
        # to implement this method at all.
        self.do_the_work(basefile)
```
The CompositeRepository docrepo then acts as a proxy for all of your specialized repositories:
```
$ ./ferenda-build.py patents.CompositePatents enable
# calls download() for all subrepos
$ ./ferenda-build.py pat download
# selects the best subrepo that has patent 5,723,765, calls parse()
# for that, then copies the result to pat/parsed/5723765 (or links)
$ ./ferenda-build.py pat parse 5723765
# uses the pat/parsed/5723765 data. From here on, we're just like any
# other docrepo.
$ ./ferenda-build.py pat generate 5723765
```
Note that `patents.XMLPatents` and the other subrepos are never registered in `ferenda.ini`. They’re just called behind-the-scenes by
`patents.CompositePatents`.
### Patch files[¶](#patch-files)
It is not uncommon that source documents in a document repository contain formatting irregularities, sensitive information that must be redacted, or just outright errors. In some cases, your parse implementation can detect and correct these things, but in other cases, the irregularities are so uncommon or unique that this is not possible to do in a general way.
As an alternative, you can patch the source document (or its intermediate representation) before the main part of your parsing logic.
The method [`patch_if_needed()`](index.html#ferenda.DocumentRepository.patch_if_needed)
automates most of this work for you. It expects a basefile and the corresponding source document as a string, looks in a *patch directory* for a corresponding patch file, and applies it if found.
By default, the patch directory is alongside the data directory. The patch file for document foo in repository bar should be placed in
`patches/bar/foo.patch`. An optional description of the patch (as a plaintext, UTF-8 encoded file) can be placed in
`patches/bar/foo.desc`.
[`patch_if_needed()`](index.html#ferenda.DocumentRepository.patch_if_needed) returns a tuple
(text, description). If there was no available patch, text is identical to the text passed in and description is None. If there was a patch available and it applied cleanly, text is the patched text and description is a description of the patch (or “(No patch description available)”). If there was a patch, but it didn’t apply cleanly, a PatchError is raised.
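A hedged sketch of calling it early in a parse implementation (the `self.log` logger, the `doc.basefile` attribute and the read-the-downloaded-file step are assumptions used for illustration):

```
from ferenda import DocumentRepository
from ferenda.decorators import managedparsing

class MyRepo(DocumentRepository):
    @managedparsing
    def parse(self, doc):
        # read the downloaded document as bytes
        with open(self.store.downloaded_path(doc.basefile), "rb") as fp:
            text = fp.read()
        # apply a patch from the patch directory, if one exists for this basefile
        text, description = self.patch_if_needed(doc.basefile, text)
        if description is not None:
            self.log.info("%s: patched (%s)" % (doc.basefile, description))
        # ... main parsing logic on the (possibly patched) text goes here ...
        return True
```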
Note
There is a `mkpatch` command in the Devel class which aims to automate the creation of patch files. It does not work at the moment.
### External annotations[¶](#external-annotations)
Ferenda contains a general docrepo class that fetches data from a separate MediaWiki server and stores this as annotations/descriptions related to the documents in your main docrepos. This makes it possible to present a source document and commentary on it (including annotations about individual sections) side-by-side.
See [`ferenda.sources.general.MediaWiki`](index.html#ferenda.sources.general.MediaWiki)
### Keyword hubs[¶](#keyword-hubs)
Ferenda also contains a general docrepo class that lists all keywords used by documents in your main docrepos (by default, it looks for all
`dcterms:subject` properties used in any document) and generates documents for each of them. These documents have no content of their own, but act as hub pages that list all documents that use a certain keyword in one place.
When used together with the MediaWiki module above, this makes it possible to write editorial descriptions about each keyword used, which are presented alongside the list of documents that use that keyword.
See [`ferenda.sources.general.Keyword`](index.html#ferenda.sources.general.Keyword)
### Custom common data[¶](#custom-common-data)
In many cases, you want to describe documents using references to other things that are not documents, but which should be named using URIs rather than plain text literals. This includes things like companies, publishing entities, print series and abstract things like the topic/keyword of a document. You can define a RDF graph containing more information about each such thing that you know of beforehand, eg if we want to model that some RFCs are published in the Internet Architecture Board (IAB) stream, we can define the following small graph:
```
<http://localhost:8000/ext/iab> a foaf:Organization;
foaf:name "Internet Architecture Board (IAB)";
skos:altLabel "IAB";
foaf:homepage <https://www.iab.org/> .
```
If this is placed in `res/extra/[alias].ttl`, eg
`res/extra/rfc.ttl`, the graph is made available as
[`commondata`](index.html#ferenda.DocumentRepository.commondata), and is also provided as the third `resource_graph` argument to any selector/key functions of your [`Facet`](index.html#ferenda.Facet) objects.
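As an illustration, a selector function could use that third argument to turn a resource URI into a human-readable label (a hedged sketch; the `(row, binding, resource_graph)` argument order and the row contents are assumptions):

```
from rdflib import URIRef
from rdflib.namespace import FOAF

def stream_label(row, binding, resource_graph):
    # look up the foaf:name of the resource referenced in this row, e.g.
    # "Internet Architecture Board (IAB)" for the graph shown above
    name = resource_graph.value(URIRef(row[binding]), FOAF.name)
    return str(name) if name is not None else row[binding]
```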
### Custom ontologies[¶](#custom-ontologies)
Some parts of ferenda, notably [The ReST API for querying](index.html#document-restapi), can make use of ontologies that your docrepo uses. This is so far only used to provide human-readable descriptions of predicates used (as determined by
`rdfs:label` or `rdfs:comment`). Ferenda will try to find an ontology for any namespace you use in
[`namespaces`](index.html#ferenda.DocumentRepository.namespaces), and directly supports many common vocabularies (`bibo`, `dc`, `dcterms`,
`foaf`, `prov`, `rdf`, `rdfs`, `schema` and `skos`). If you have defined your own custom ontology, place it (in Turtle format)
as `res/vocab/[alias].ttl`, eg. `res/vocab/rfc.ttl` to make Ferenda read it.
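Once loaded, such descriptions can be read straight off the ontology graph, e.g. (a small sketch where `repo` stands for any docrepo instance):

```
from rdflib import URIRef
from rdflib.namespace import RDFS

# assumes the dcterms vocabulary is listed in the repo's namespaces property
label = repo.ontologies.value(URIRef("http://purl.org/dc/terms/title"), RDFS.label)
print(label)  # e.g. "Title"
```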
### Parallel processing[¶](#parallel-processing)
It’s common to use ferenda with document collections with tens of thousands of documents. If a single document takes a second to parse,
it means the entire document collection will take three hours or more,
which is not ideal for quick turnaround. Ferenda, and in particular the `ferenda-build.py` tool, can run tasks in parallel to speed things up.
#### Multiprocessing on a single machine[¶](#multiprocessing-on-a-single-machine)
The simplest way of speeding up processing is to use the `processes`
parameter, eg:
```
./ferenda-build.py rfc parse --all --processes=4
```
This will create 4 processes (started by a fifth control process),
each processing individual documents as instructed by the control process. As a rule of thumb, you should create as many processes as you have CPU cores.
#### Distributed processing[¶](#distributed-processing)
A more complex, but also more scalable way, is to set up a bunch of computers acting as processing clients, together with a main (control)
system. Each of these clients must have access to the same code and data directory as the main system (ie they should all mount the same network file system). On each client, you then run (assuming that your main system has the IP address 192.168.1.42, and that this particular client has 4 CPU cores):
```
./ferenda-build.py all buildclient --serverhost=192.168.1.42 --processes=4
```
On the main system, you first start a message queue with:
```
./ferenda-build.py all buildqueue
```
Then you can run `ferenda-build.py` as normal but with the `buildqueue`
parameter, eg:
```
./ferenda-build.py rfc parse --all --buildqueue
```
This will put each file to be processed in the message queue, where all clients will pick up these jobs and process them.
The clients and the message queue can be kept running indefinitely
(although the clients will need to be restarted when you change the code that they’re running).
If you’re not running ferenda on windows, you can skip the separate message queue process. Just start your clients like above, then start
`ferenda-build.py` on your main system with the `buildserver` parameter,
eg:
```
./ferenda-build.py rfc parse --all --buildserver
```
This sets up a temporary in-subprocess message queue that your clients will connect to as soon as it’s up.
Note
Because of reasons, this in-subprocess queue does not work on Windows. On that platform you’ll need to run the message queue separately, as described initially.
API reference[¶](#api-reference)
===
Classes[¶](#classes)
---
### The `DocumentRepository` class[¶](#the-documentrepository-class)
*class* `ferenda.DocumentRepository`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository)[¶](#ferenda.DocumentRepository)
Base class for downloading, parsing and generating HTML versions of a repository of documents.
Start building your application by subclassing this class, and then override methods in order to customize the downloading,
parsing and generation behaviour.
| Parameters: | ****kwargs** – Any named argument overrides any similarly-named configuration file parameter. |
Example:
```
>>> class MyRepo(DocumentRepository):
... alias="myrepo"
...
>>> d = MyRepo(datadir="/tmp/ferenda")
>>> d.store.downloaded_path("mybasefile").replace(os.sep,'/')
'/tmp/ferenda/myrepo/downloaded/mybasefile.html'
```
Note
This class has a ridiculous amount of properties and methods that you can override to control most of Ferenda’s behaviour in all stages. For basic usage, you need only a fraction of them. Please don’t be intimidated/horrified.
`downloaded_suffix` *= '.html'*[¶](#ferenda.DocumentRepository.downloaded_suffix)
File suffix for the main document format. Determines the suffix of downloaded files.
`storage_policy` *= 'file'*[¶](#ferenda.DocumentRepository.storage_policy)
Some repositories have documents in several formats, documents split amongst several files or embedded resources. If
`storage_policy` is set to `dir`, then each document gets its own directory (the default filename being `index` +suffix),
otherwise each doc gets stored as a file in a directory with other files. Affects
[`ferenda.DocumentStore.path()`](index.html#ferenda.DocumentStore.path) (and therefore all other `*_path` methods)
`alias` *= 'base'*[¶](#ferenda.DocumentRepository.alias)
A short name for the class, used by the command line
`ferenda-build.py` tool. Also determines where to store downloaded, parsed and generated files. When you subclass
[`DocumentRepository`](#ferenda.DocumentRepository) you *must* override this.
`namespaces` *= ['rdf', 'rdfs', 'xsd', 'xsi', 'dcterms', 'skos', 'foaf', 'xhv', 'owl', 'prov', 'bibo']*[¶](#ferenda.DocumentRepository.namespaces)
The namespaces that are included in the XHTML and RDF files generated by [`parse()`](#ferenda.DocumentRepository.parse). This can be a list of strings, in which case the strings are assumed to be well-known prefixes to established namespaces, or a list of
*(prefix, namespace)* tuples. All well-known prefixes are available in [`ferenda.util.ns`](index.html#ferenda.util.ns).
If you specify a namespace for a well-known ontology/vocabulary,
that ontology will be available as a
[`Graph`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph) from the
[`ontologies`](#ferenda.DocumentRepository.ontologies) property.
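For example, a subclass could mix well-known prefixes with an explicit *(prefix, namespace)* tuple (a sketch; the `ex` vocabulary is hypothetical):

```
from ferenda import DocumentRepository

class MyRepo(DocumentRepository):
    namespaces = ['rdf', 'rdfs', 'dcterms', 'foaf',
                  ('ex', 'http://example.org/vocab#')]
```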
`required_predicates` *= [rdflib.term.URIRef('http://www.w3.org/1999/02/22-rdf-syntax-ns#type')]*[¶](#ferenda.DocumentRepository.required_predicates)
A list of RDF predicates that should be present in the outdata. If any of these are missing from the result of
[`parse()`](#ferenda.DocumentRepository.parse), a warning is logged. You can add to this list as a form of simple validation of your parsed data.
`start_url` *= 'http://example.org/'*[¶](#ferenda.DocumentRepository.start_url)
The main entry page for the remote web store of documents. May be a list of documents, a search form or whatever. If it’s something more complicated than a simple list of documents, you need to override [`download()`](#ferenda.DocumentRepository.download)
in order to tell which documents are to be downloaded.
`document_url_template` *= 'http://example.org/docs/%(basefile)s.html'*[¶](#ferenda.DocumentRepository.document_url_template)
A string template for creating URLs for individual documents on the remote web server. Directly used by
[`remote_url()`](#ferenda.DocumentRepository.remote_url) and indirectly by [`download_single()`](#ferenda.DocumentRepository.download_single).
`document_url_regex` *= 'http://example.org/docs/(?P<basefile>\\w+).html'*[¶](#ferenda.DocumentRepository.document_url_regex)
A regex that matches URLs for individual documents – the reverse of what
[`document_url_template`](#ferenda.DocumentRepository.document_url_template) is used for. Used by
[`download()`](#ferenda.DocumentRepository.download) to find suitable links if [`basefile_regex`](#ferenda.DocumentRepository.basefile_regex)
doesn’t match. Must define the named group `basefile` using the
`(?P<basefile>...)` syntax
`basefile_regex` *= '^ID: ?(?P<basefile>[\\w\\d\\:\\/]+)$'*[¶](#ferenda.DocumentRepository.basefile_regex)
A regex for matching document names in link text, as used by
[`download()`](#ferenda.DocumentRepository.download). Must define a named group `basefile`, just like
[`document_url_template`](#ferenda.DocumentRepository.document_url_template).
`rdf_type` *= rdflib.term.URIRef('http://xmlns.com/foaf/0.1/Document')*[¶](#ferenda.DocumentRepository.rdf_type)
The RDF type of the documents you are handling (expressed as a
[`rdflib.term.URIRef`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.URIRef) object).
Note
If your repo produces documents of several different types, you can define this as a list (or other iterable) of
[`URIRef`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.URIRef)
objects. [`faceted_data()`](#ferenda.DocumentRepository.faceted_data)
will only find documents that are any of the types.
`source_encoding` *= 'utf-8'*[¶](#ferenda.DocumentRepository.source_encoding)
The character set that the source HTML documents use (if applicable).
`lang` *= 'en'*[¶](#ferenda.DocumentRepository.lang)
The language which the source documents are assumed to be written in (unless otherwise specified), and the language which output document should use.
`parse_content_selector` *= 'body'*[¶](#ferenda.DocumentRepository.parse_content_selector)
CSS selector used to select the main part of the document content by the default
[`parse()`](#ferenda.DocumentRepository.parse) implementation.
`parse_filter_selectors` *= ['script']*[¶](#ferenda.DocumentRepository.parse_filter_selectors)
CSS selectors used to filter/remove certain parts of the document content by the default
[`parse()`](#ferenda.DocumentRepository.parse) implementation.
`xslt_template` *= 'res/xsl/generic.xsl'*[¶](#ferenda.DocumentRepository.xslt_template)
A template used by
[`generate()`](#ferenda.DocumentRepository.generate) to transform the XML file into browser-ready HTML. If your document type is complex, you might want to override this (and write your own XSLT transform). You should include `base.xslt` in that template,
though.
`sparql_annotations` *= 'res/sparql/annotations.rq'*[¶](#ferenda.DocumentRepository.sparql_annotations)
A template SPARQL CONSTRUCT query for document annotations.
`documentstore_class`[¶](#ferenda.DocumentRepository.documentstore_class)
alias of `ferenda.documentstore.DocumentStore`
`ontologies`[¶](#ferenda.DocumentRepository.ontologies)
Provides a [`Graph`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph) loaded with the ontologies/vocabularies that this docrepo uses (as determined by the
`namespaces` property).
If you’re using your own vocabularies, you can place them (in Turtle format) as `res/vocab/[prefix].ttl` to have them loaded into the graph.
Note
Some system-like vocabularies (`rdf`, `rdfs` and `owl`)
are never loaded into the graph.
`commondata`[¶](#ferenda.DocumentRepository.commondata)
Provides a [`Graph`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph) containing any extra data that is common to documents in this docrepo – this can be information about different entities that publishes the documents, the printed series in which they’re published, and so on. The data is taken from `res/extra/[repoalias].ttl`.
`config`[¶](#ferenda.DocumentRepository.config)
The [`LayeredConfig`](https://layeredconfig.readthedocs.io/en/latest/api.html#layeredconfig.LayeredConfig) object that contains the current configuration for this docrepo instance. You can read or write individual properties of this object, or replace it with a new
[`LayeredConfig`](https://layeredconfig.readthedocs.io/en/latest/api.html#layeredconfig.LayeredConfig) object entirely.
`lookup_resource`(*label*, *predicate=rdflib.term.URIRef('http://xmlns.com/foaf/0.1/name')*, *cutoff=0.8*, *warn=True*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.lookup_resource)[¶](#ferenda.DocumentRepository.lookup_resource)
Given a textual identifier (ie. the name for something), lookup the canonical uri for that thing in the RDF graph containing extra data (i.e. the graph that
[`commondata`](#ferenda.DocumentRepository.commondata)
provides). The graph should have a `foaf:name` statement about the url with the sought label as the object.
Since data is imperfect, the textual label may be spelled or expressed different in different contexts. This method therefore performs fuzzy matching (using
[`difflib.get_close_matches()`](https://docs.python.org/3/library/difflib.html#difflib.get_close_matches)); the cutoff parameter determines exactly how fuzzy this matching is.
If no resource matches the given label, a
[`KeyError`](https://docs.python.org/3/library/exceptions.html#KeyError) is raised.
| Parameters: | * **label** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The textual label to lookup
* **predicate** ([*rdflib.term.URIRef*](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.URIRef)) – The RDF predicate to use when looking for the label
* **cutoff** ([*float*](https://docs.python.org/3/library/functions.html#float)) – How fuzzy the matching may be (1 = must match exactly, 0 = anything goes)
* **warn** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Whether to log a warning when an inexact match is performed
|
| Returns: | The matching resource |
| Return type: | [rdflib.term.URIRef](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.URIRef) |
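A usage sketch, assuming the small `res/extra/rfc.ttl` graph shown earlier in this document has been loaded as [`commondata`](#ferenda.DocumentRepository.commondata):

```
# fuzzy-matches the label against foaf:name values in commondata
uri = repo.lookup_resource("Internet Architecture Board")
# expected (under that assumption): rdflib.term.URIRef('http://localhost:8000/ext/iab')
```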
`get_default_options`()[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.get_default_options)[¶](#ferenda.DocumentRepository.get_default_options)
Returns the class’ default configuration properties. These can be overridden by a configuration file, or by named arguments to
`__init__()`. See
[Configuration](index.html#configuration) for a list of standard configuration properties (your subclass is free to define and use additional configuration properties).
| Returns: | default configuration properties |
| Return type: | [dict](https://docs.python.org/3/library/stdtypes.html#dict) |
*classmethod* `setup`(*action*, *config*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.setup)[¶](#ferenda.DocumentRepository.setup)
Runs before any of the `*_all` methods starts executing. It just calls the appropriate setup method, ie if *action* is `parse`, then this method calls `parse_all_setup` (if defined) with the *config*
object as single parameter.
*classmethod* `teardown`(*action*, *config*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.teardown)[¶](#ferenda.DocumentRepository.teardown)
Runs after any of the `*_all` methods has finished executing. It just calls the appropriate teardown method, ie if *action* is
`parse`, then this method calls `parse_all_teardown` (if defined)
with the *config* object as single parameter.
`get_archive_version`(*basefile*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.get_archive_version)[¶](#ferenda.DocumentRepository.get_archive_version)
Get a version identifier for the current version of the document identified by `basefile`.
The default implementation simply increments the most recent archived version identifier, starting at “1”. If versions in your docrepo are normally identified in some other way (such as SCM revision numbers, dates or similar) you should override this method to return those identifiers.
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile of the document to archive |
| Returns: | The version identifier for the current version of the document. |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`qualified_class_name`()[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.qualified_class_name)[¶](#ferenda.DocumentRepository.qualified_class_name)
The qualified class name of this class
| Returns: | class name (e.g. `ferenda.DocumentRepository`) |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`canonical_uri`(*basefile*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.canonical_uri)[¶](#ferenda.DocumentRepository.canonical_uri)
The canonical URI for the document identified by `basefile`.
| Returns: | The canonical URI |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`dataset_uri`(*param=None*, *value=None*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.dataset_uri)[¶](#ferenda.DocumentRepository.dataset_uri)
Returns the URI that identifies the dataset that this docrepository provides. The default implementation is based on the url config parameter and the alias attribute of the class,
c.f. `http://localhost:8000/dataset/base`.
| Parameters: | * **param** – An optional parameter name representing a way of creating a subset of the dataset (eg. all documents whose title starts with a particular letter)
* **value** – A value for *param* (eg. “a”)
|
```
>>> d = DocumentRepository()
>>> d.alias
'base'
>>> d.config.url = "http://example.org/"
>>> d.dataset_uri()
'http://example.org/dataset/base'
>>> d.dataset_uri("title","a")
'http://example.org/dataset/base?title=a'
```
`basefile_from_uri`(*uri*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.basefile_from_uri)[¶](#ferenda.DocumentRepository.basefile_from_uri)
The reverse of [`canonical_uri()`](#ferenda.DocumentRepository.canonical_uri).
Returns `None` if the uri doesn’t map to a basefile in this repo.
```
>>> d = DocumentRepository()
>>> d.alias
'base'
>>> d.config.url = "http://example.org/"
>>> d.basefile_from_uri("http://example.org/res/base/123/a")
'123/a'
>>> d.basefile_from_uri("http://example.org/res/base/123/a#S1")
'123/a'
>>> d.basefile_from_uri("http://example.org/res/other/123/a") # None
```
`dataset_params_from_uri`(*uri*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.dataset_params_from_uri)[¶](#ferenda.DocumentRepository.dataset_params_from_uri)
Given a parametrized dataset URI, return the parameter and value used (or an empty tuple, if it is a dataset URI handled by this repo, but without any parameters).
```
>>> d = DocumentRepository()
>>> d.alias
'base'
>>> d.config.url = "http://example.org/"
>>> d.dataset_params_from_uri("http://example.org/dataset/base?title=a")
('title', 'a')
>>> d.dataset_params_from_uri("http://example.org/dataset/base")
()
```
`download`(*basefile=None*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.download)[¶](#ferenda.DocumentRepository.download)
Downloads all documents from a remote web service.
The default generic implementation assumes that all documents are linked from a single page (which has the url of
[`start_url`](#ferenda.DocumentRepository.start_url)), that they all have URLs matching the
[`document_url_regex`](#ferenda.DocumentRepository.document_url_regex) or that the link text is always equal to basefile (as determined by [`basefile_regex`](#ferenda.DocumentRepository.basefile_regex)). If these assumptions don’t hold, you need to override this method.
If you do override it, your download method should read and set the
`lastdownload` parameter to either the datetime of the last download or any other module-specific string (id number or similar).
You should also read the `refresh` parameter. If it is
`True` (the default), then you should call
[`download_single()`](#ferenda.DocumentRepository.download_single) for every basefile you encounter, even though they may already exist in some form on disk. [`download_single()`](#ferenda.DocumentRepository.download_single)
will normally be using conditional GET to see if there is a newer version available.
See [Writing your own download implementation](index.html#implementing-download) for more details.
| Returns: | True if any document was downloaded, False otherwise. |
| Return type: | [bool](https://docs.python.org/3/library/functions.html#bool) |
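A hedged sketch of an overridden `download()` along those lines (this is not the library’s default implementation; `discover_basefiles()` is a hypothetical helper standing in for whatever listing logic your source needs):

```
import os
from datetime import datetime

from ferenda import DocumentRepository

class MyRepo(DocumentRepository):
    def download(self, basefile=None):
        updated = False
        for basefile, url in self.discover_basefiles():  # hypothetical helper
            # honour config.refresh: skip documents already on disk unless refreshing
            if self.config.refresh or not os.path.exists(self.store.downloaded_path(basefile)):
                if self.download_single(basefile, url):
                    updated = True
        # record when we last ran (assumption: stored on the config object)
        self.config.lastdownload = datetime.now()
        return updated
```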
`download_get_basefiles`(*source*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.download_get_basefiles)[¶](#ferenda.DocumentRepository.download_get_basefiles)
Given *source* (an iterator that provides (element, attribute, link,
pos) tuples, like `lxml.etree.iterlinks()`), generate tuples
(basefile, link) for all document links found in *source*.
`download_single`(*basefile*, *url=None*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.download_single)[¶](#ferenda.DocumentRepository.download_single)
Downloads the document from the web (unless explicitly specified, the URL to download is determined by
[`document_url_template`](#ferenda.DocumentRepository.document_url_template) combined with basefile, the location on disk is determined by the function
[`downloaded_path()`](index.html#ferenda.DocumentStore.downloaded_path)).
If the document exists on disk, but the version on the web is unchanged (determined using a conditional GET), the file on disk is left unchanged (i.e. the timestamp is not modified).
| Parameters: | * **basefile** (*string*) – The basefile of the document to download
* **url** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The URL to download (optional)
|
| Returns: | `True` if the document was downloaded and stored on disk, `False` if the file on disk was not updated. |
`download_if_needed`(*url*, *basefile*, *archive=True*, *filename=None*, *sleep=1*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.download_if_needed)[¶](#ferenda.DocumentRepository.download_if_needed)
Downloads a remote resource to a local file. If a different version is already in place, archive that old version.
| Parameters: | * **url** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The url to download
* **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile of the document to download
* **archive** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Whether to archive existing older versions of the document, or just delete the previously downloaded file.
* **filename** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The filename to download to. If not provided,
the filename is derived from the supplied basefile
|
| Returns: | True if the local file was updated (and archived),
False otherwise. |
| Return type: | [bool](https://docs.python.org/3/library/functions.html#bool) |
`download_is_different`(*existing*, *new*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.download_is_different)[¶](#ferenda.DocumentRepository.download_is_different)
Returns True if the new file is semantically different from the existing file.
`remote_url`(*basefile*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.remote_url)[¶](#ferenda.DocumentRepository.remote_url)
Get the URL of the source document at its remote location,
unless the source document is fetched by other means or if it cannot be computed from basefile only. The default implementation uses
[`document_url_template`](#ferenda.DocumentRepository.document_url_template)
to calculate the url.
Example:
```
>>> d = DocumentRepository()
>>> d.remote_url("123/a")
'http://example.org/docs/123/a.html'
>>> d.document_url_template = "http://mysite.org/archive/%(basefile)s/"
>>> d.remote_url("123/a")
'http://mysite.org/archive/123/a/'
```
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile of the source document |
| Returns: | The remote url where the document can be fetched, or `None`. |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`generic_url`(*basefile*, *maindir*, *suffix*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.generic_url)[¶](#ferenda.DocumentRepository.generic_url)
Analogous to
[`ferenda.DocumentStore.path()`](index.html#ferenda.DocumentStore.path), calculate the full local url for the given basefile and stage of processing.
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the local url
* **maindir** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The processing stage directory (normally
`downloaded`, `parsed`, or `generated`)
* **suffix** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The file extension including period (i.e. `.txt`,
not `txt`)
|
| Returns: | The local url |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`downloaded_url`(*basefile*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.downloaded_url)[¶](#ferenda.DocumentRepository.downloaded_url)
Get the full local url for the downloaded file for the given basefile.
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the local url |
| Returns: | The local url |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
```
>>> d = DocumentRepository()
>>> d.downloaded_url("123/a")
'http://localhost:8000/base/downloaded/123/a.html'
```
*classmethod* `parse_all_setup`(*config*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.parse_all_setup)[¶](#ferenda.DocumentRepository.parse_all_setup)
Runs any action needed prior to parsing all documents in a docrepo. The default implementation does nothing.
Note
This is a classmethod for now (and that’s why a config object is passed as an argument), but might change to an instance method.
*classmethod* `parse_all_teardown`(*config*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.parse_all_teardown)[¶](#ferenda.DocumentRepository.parse_all_teardown)
Runs any cleanup action needed after parsing all documents in a docrepo. The default implementation does nothing.
Note
Like [`parse_all_setup()`](#ferenda.DocumentRepository.parse_all_setup)
this might change to an instance method.
`parseneeded`(*basefile*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.parseneeded)[¶](#ferenda.DocumentRepository.parseneeded)
Returns True iff there is a need to parse the given basefile. If the resulting parsed file exists and is newer than the downloaded file, there is typically no reason to parse the file.
`parse`(*doc*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.parse)[¶](#ferenda.DocumentRepository.parse)
Parse downloaded documents into structured XML and RDF.
It will also save the same RDF statements in a separate RDF/XML file.
You will need to provide your own parsing logic, but often it’s easier to just override parse_from_soup (assuming your input data is in an HTML format parseable by BeautifulSoup) and let the base class read and write the files.
If your data is not in an HTML format, or BeautifulSoup is not an appropriate parser to use, override this method.
| Parameters: | **doc** ([*ferenda.Document*](index.html#ferenda.Document)) – The document object to fill in. |
`parse_entry_update`(*doc*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.parse_entry_update)[¶](#ferenda.DocumentRepository.parse_entry_update)
`parse_entry_title`(*doc*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.parse_entry_title)[¶](#ferenda.DocumentRepository.parse_entry_title)
`soup_from_basefile`(*basefile*, *encoding='utf-8'*, *parser='lxml'*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.soup_from_basefile)[¶](#ferenda.DocumentRepository.soup_from_basefile)
Load the downloaded document for basefile into a BeautifulSoup object
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for the downloaded document to parse
* **encoding** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The encoding of the downloaded document
|
| Returns: | The parsed document as a `BeautifulSoup` object |
Note
Helper function. You probably don’t need to override it.
`parse_metadata_from_soup`(*soup*, *doc*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.parse_metadata_from_soup)[¶](#ferenda.DocumentRepository.parse_metadata_from_soup)
Given a BeautifulSoup document, retrieve all document-level metadata from it and put it into the given `doc` object’s
`meta` property.
Note
The default implementation sets `rdf:type`,
`dcterms:title`, `dcterms:identifier` and
`prov:wasGeneratedBy` properties in `doc.meta`, as well as setting the language of the document in `doc.lang`.
| Parameters: | * **soup** – A parsed document, as `BeautifulSoup` object
* **doc** ([*ferenda.Document*](index.html#ferenda.Document)) – Our document
|
| Returns: | None |
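An override might add extra metadata on top of what the default implementation sets, e.g. a title taken from an assumed `<h1>` element (a sketch, not part of the library; the `doc.uri` and `doc.lang` attributes are used as described elsewhere in this reference):

```
import rdflib
from rdflib.namespace import DCTERMS

from ferenda import DocumentRepository

class MyRepo(DocumentRepository):
    def parse_metadata_from_soup(self, soup, doc):
        super(MyRepo, self).parse_metadata_from_soup(soup, doc)
        heading = soup.find("h1")  # assumption: the source marks the title with <h1>
        if heading:
            doc.meta.add((rdflib.URIRef(doc.uri), DCTERMS.title,
                          rdflib.Literal(heading.get_text(), lang=doc.lang)))
```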
`parse_document_from_soup`(*soup*, *doc*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.parse_document_from_soup)[¶](#ferenda.DocumentRepository.parse_document_from_soup)
Given a BeautifulSoup document, convert it into the provided
`doc` object’s `body` property as suitable
[`ferenda.elements`](index.html#module-ferenda.elements) objects.
Note
The default implementation respects
[`parse_content_selector`](#ferenda.DocumentRepository.parse_content_selector)
and
[`parse_filter_selectors`](#ferenda.DocumentRepository.parse_filter_selectors).
| Parameters: | * **soup** – A parsed document as a `BeautifulSoup` object
* **doc** ([*ferenda.Document*](index.html#ferenda.Document)) – Our document
|
| Returns: | None |
`patch_if_needed`(*basefile*, *text*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.patch_if_needed)[¶](#ferenda.DocumentRepository.patch_if_needed)
Given *basefile* and the entire *text* of the downloaded or intermediate document, find if there exists a patch file under
`self.config.patchdir`, and if so, applies it. Returns
(patchedtext, patchdescription) if so, (text,None)
otherwise.
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile of the text
* **text** ([*bytes*](https://docs.python.org/3/library/stdtypes.html#bytes)) – The text to be patched
|
`make_document`(*basefile=None*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.make_document)[¶](#ferenda.DocumentRepository.make_document)
Create a [`Document`](index.html#ferenda.Document) objects with basic initialized fields.
Note
Helper method used by the
[`makedocument()`](index.html#ferenda.decorators.makedocument) decorator.
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for the document |
| Return type: | [ferenda.Document](index.html#ferenda.Document) |
`make_graph`()[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.make_graph)[¶](#ferenda.DocumentRepository.make_graph)
Initialize a rdflib Graph object with proper namespace prefix bindings (as determined by
[`namespaces`](#ferenda.DocumentRepository.namespaces))
| Return type: | rdflib.Graph |
`create_external_resources`(*doc*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.create_external_resources)[¶](#ferenda.DocumentRepository.create_external_resources)
Optionally create external files that go together with the parsed file (stylesheets, images, etc).
The default implementation does nothing.
| Parameters: | **doc** ([*ferenda.Document*](index.html#ferenda.Document)) – The document |
`render_xhtml`(*doc*, *outfile=None*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.render_xhtml)[¶](#ferenda.DocumentRepository.render_xhtml)
Renders the parsed object structure as a XHTML file with RDFa attributes (also returns the same XHTML as a string).
| Parameters: | * **doc** ([*ferenda.Document*](index.html#ferenda.Document)) – The document to render
* **outfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The file name for the XHTML document
|
| Returns: | The XHTML document |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`render_xhtml_tree`(*doc*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.render_xhtml_tree)[¶](#ferenda.DocumentRepository.render_xhtml_tree)
Renders the parsed object structure as a `lxml.etree._Element` object.
| Parameters: | **doc** ([*ferenda.Document*](index.html#ferenda.Document)) – The document to render |
| Returns: | The XHTML document as a lxml structure |
| Return type: | lxml.etree._Element |
`parsed_url`(*basefile*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.parsed_url)[¶](#ferenda.DocumentRepository.parsed_url)
Get the full local url for the parsed file for the given basefile.
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the local url |
| Returns: | The local url |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`distilled_url`(*basefile*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.distilled_url)[¶](#ferenda.DocumentRepository.distilled_url)
Get the full local url for the distilled RDF/XML file for the given basefile.
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the local url |
| Returns: | The local url |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
*classmethod* `relate_all_setup`(*config*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.relate_all_setup)[¶](#ferenda.DocumentRepository.relate_all_setup)
Runs any cleanup action needed prior to relating all documents in a docrepo. The default implementation clears the corresponding context (see [`dataset_uri()`](#ferenda.DocumentRepository.dataset_uri))
in the triple store.
Note
Like [`parse_all_setup()`](#ferenda.DocumentRepository.parse_all_setup)
this might change to an instance method.
Returns False if no relation needs to be done (as determined by the timestamp on the dump nt file)
*classmethod* `relate_all_teardown`(*config*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.relate_all_teardown)[¶](#ferenda.DocumentRepository.relate_all_teardown)
Runs any cleanup action needed after relating all documents in a docrepo. The default implementation dumps all RDF data loaded into the triplestore into one giant N-Triples file.
Note
Like [`parse_all_setup()`](#ferenda.DocumentRepository.parse_all_setup)
this might change to an instance method.
`relate`(*basefile*, *otherrepos=[]*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.relate)[¶](#ferenda.DocumentRepository.relate)
Runs various indexing operations for the document represented by
*basefile*: insert RDF statements into a triple store, add this document to the dependency list to all documents that it refers to,
and put the text of the document into a fulltext index.
`relate_triples`(*basefile*, *removesubjects=False*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.relate_triples)[¶](#ferenda.DocumentRepository.relate_triples)
Insert the (previously distilled) RDF statements into the triple store.
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for the document containing the RDF statements.
* **removesubjects** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Whether to remove all identified subjects
from the triplestore beforehand (to clear the previous version of this basefile’s
metadata). FIXME: not yet used
|
| Returns: | None |
`relate_dependencies`(*basefile*, *repos=[]*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.relate_dependencies)[¶](#ferenda.DocumentRepository.relate_dependencies)
For each document that the basefile document refers to, attempt to find this document in the current or any other docrepo, and add the parsed document path to that document’s dependency file.
`add_dependency`(*basefile*, *dependencyfile*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.add_dependency)[¶](#ferenda.DocumentRepository.add_dependency)
Add the *dependencyfile* to *basefile*’s dependency file. Returns True if anything new was added, False otherwise
`relate_fulltext`(*basefile*, *repos=None*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.relate_fulltext)[¶](#ferenda.DocumentRepository.relate_fulltext)
Index the text of the document into fulltext index. Also indexes all metadata that facets() indicate should be indexed.
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for the document to be indexed. |
| Returns: | None |
`facets`()[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.facets)[¶](#ferenda.DocumentRepository.facets)
Provides a list of [`Facet`](index.html#ferenda.Facet) objects that specify how documents in your docrepo should be grouped.
Override this if you want to specify your own way of grouping data in your docrepo.
`faceted_data`()[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.faceted_data)[¶](#ferenda.DocumentRepository.faceted_data)
Provides a list of dicts, each containing a row of information about a single document in the repository. The exact fields provided are controlled by the list of
[`Facet`](index.html#ferenda.Facet) objects returned by
[`facets()`](#ferenda.DocumentRepository.facets).
Note
The same document can occur multiple times if any of its facets have `multiple_values` set, once for each different value that that facet has.
`facet_query`(*context*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.facet_query)[¶](#ferenda.DocumentRepository.facet_query)
Constructs a SPARQL SELECT query that fetches all information needed to create faceted data.
| Parameters: | **context** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The context (named graph) to which to limit the query. |
| Returns: | The SPARQL query |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
Example:
```
>>> d = DocumentRepository()
>>> expected = """PREFIX dcterms: <http://purl.org/dc/terms/>
... PREFIX foaf: <http://xmlns.com/foaf/0.1/>
... PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
...
... SELECT DISTINCT ?uri ?rdf_type ?dcterms_title ?dcterms_publisher ?dcterms_identifier ?dcterms_issued
... FROM <http://example.org/ctx/base>
... WHERE {
... ?uri rdf:type foaf:Document .
... OPTIONAL { ?uri rdf:type ?rdf_type . }
... OPTIONAL { ?uri dcterms:title ?dcterms_title . }
... OPTIONAL { ?uri dcterms:publisher ?dcterms_publisher . }
... OPTIONAL { ?uri dcterms:identifier ?dcterms_identifier . }
... OPTIONAL { ?uri dcterms:issued ?dcterms_issued . }
...
... }"""
>>> d.facet_query("http://example.org/ctx/base") == expected
True
```
`facet_select`(*query*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.facet_select)[¶](#ferenda.DocumentRepository.facet_select)
Select all data from the triple store needed to create faceted data.
| Parameters: | **context** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The context (named graph) to restrict the query to.
If None, search entire triplestore. |
| Returns: | The results of the query, as python objects |
| Return type: | set of dicts |
*classmethod* `generate_all_setup`(*config*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.generate_all_setup)[¶](#ferenda.DocumentRepository.generate_all_setup)
Runs any action needed prior to generating all documents in a docrepo. The default implementation does nothing.
Note
Like [`parse_all_setup()`](#ferenda.DocumentRepository.parse_all_setup)
this might change to an instance method.
*classmethod* `generate_all_teardown`(*config*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.generate_all_teardown)[¶](#ferenda.DocumentRepository.generate_all_teardown)
Runs any cleanup action needed after generating all documents in a docrepo. The default implementation does nothing.
Note
Like [`parse_all_setup()`](#ferenda.DocumentRepository.parse_all_setup)
this might change to an instance method.
`generate`(*basefile*, *otherrepos=[]*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.generate)[¶](#ferenda.DocumentRepository.generate)
Generate a browser-ready HTML file from structured XML and RDF.
Uses the XML and RDF files constructed by
[`ferenda.DocumentRepository.parse()`](#ferenda.DocumentRepository.parse).
The generation is done by XSLT, and normally you won’t need to override this, but you might want to provide your own xslt file and set
[`ferenda.DocumentRepository.xslt_template`](#ferenda.DocumentRepository.xslt_template) to the name of that file.
If you want to generate your browser-ready HTML by any other means than XSLT, you should override this method.
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to generate HTML |
| Returns: | None |
`get_url_transform_func`(*repos*, *basedir*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.get_url_transform_func)[¶](#ferenda.DocumentRepository.get_url_transform_func)
Returns a function that, when called with a URI, transforms that URI to another suitable reference. This can be used to eg. map between canonical URIs and local URIs. The function is run on all URIs in a post-processing step after
[`generate()`](#ferenda.DocumentRepository.generate) runs. The default implementation maps URIs to local file paths, and is only run if `config.staticsite` is `True`.
`prep_annotation_file`(*basefile*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.prep_annotation_file)[¶](#ferenda.DocumentRepository.prep_annotation_file)
Helper function used by
[`generate()`](#ferenda.DocumentRepository.generate) – prepares a RDF/XML file containing statements that in some way annotates the information found in the document that generate handles,
like URI/title of other documents that refers to this one.
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to collect annotating statements. |
| Returns: | The full path to the prepared RDF/XML file |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`construct_annotations`(*uri*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.construct_annotations)[¶](#ferenda.DocumentRepository.construct_annotations)
Construct a RDF graph containing metadata by running the query provided by
[`construct_sparql_query()`](#ferenda.DocumentRepository.construct_sparql_query)
`construct_sparql_query`(*uri*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.construct_sparql_query)[¶](#ferenda.DocumentRepository.construct_sparql_query)
Construct a SPARQL query that will select metadata relating to
*uri* in some way, using the query template specified by
[`sparql_annotations`](#ferenda.DocumentRepository.sparql_annotations)
`graph_to_annotation_file`(*graph*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.graph_to_annotation_file)[¶](#ferenda.DocumentRepository.graph_to_annotation_file)
Converts a RDFLib graph into a XML file with the same statements, ordered using the Grit format
(<https://code.google.com/p/oort/wiki/Grit>) for easier XSLT inclusion.
| Parameters: | **graph** ([*rdflib.graph.Graph*](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph)) – The graph to convert |
| Returns: | A serialized XML document with the RDF statements |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`annotation_file_to_graph`(*annotation_file*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.annotation_file_to_graph)[¶](#ferenda.DocumentRepository.annotation_file_to_graph)
Converts an annotation file (using the Grit format) back into an RDFLib graph.
| Parameters: | **graph** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The filename of a serialized XML document with RDF statements |
| Returns: | The RDF statements as a regular graph |
| Return type: | rdflib.Graph |
`generated_url`(*basefile*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.generated_url)[¶](#ferenda.DocumentRepository.generated_url)
Get the full local url for the generated file for the given basefile.
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the local url |
| Returns: | The local url |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`toc`(*otherrepos=[]*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.toc)[¶](#ferenda.DocumentRepository.toc)
Creates a set of pages that together act as a table of contents for all documents in the repository. For smaller repositories a single page might be enough, but for repositories with a few hundred documents or more, there will usually be one page for all documents starting with A, another for those starting with B, and so on. There might be different ways of browsing/drilling down,
i.e. both by title, publication year, keyword and so on.
The default implementation calls
[`faceted_data()`](#ferenda.DocumentRepository.faceted_data) to get all data from the triple store,
[`facets()`](#ferenda.DocumentRepository.facets) to find out the facets for ordering,
[`toc_pagesets()`](#ferenda.DocumentRepository.toc_pagesets) to calculate the total set of TOC html files,
[`toc_select_for_pages()`](#ferenda.DocumentRepository.toc_select_for_pages) to create a list of documents for each TOC html file, and finally
[`toc_generate_pages()`](#ferenda.DocumentRepository.toc_generate_pages) to create the HTML files. The default implementation assumes that documents have a title (in the form of a `dcterms:title`
property) and a publication date (in the form of a
`dcterms:issued` property).
You can override any of these methods to customize any part of the toc generation process. Often overriding
[`facets()`](#ferenda.DocumentRepository.facets) to specify other document properties will be sufficient.
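For instance, a hedged sketch of overriding [`facets()`](#ferenda.DocumentRepository.facets) so that the TOC is ordered by title and publisher instead of title and publication date:

```
from ferenda import DocumentRepository, Facet
from rdflib.namespace import DCTERMS

class MyRepo(DocumentRepository):
    def facets(self):
        # group/order documents by title and by publisher
        return [Facet(DCTERMS.title), Facet(DCTERMS.publisher)]
```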
`toc_pagesets`(*data*, *facets*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.toc_pagesets)[¶](#ferenda.DocumentRepository.toc_pagesets)
Calculate the set of needed TOC pages based on the result rows
| Parameters: | * **data** – list of dicts, each dict containing metadata about a single document
* **facets** – list of Facet objects
|
| Returns: | A set of Pageset objects |
| Return type: | [list](https://docs.python.org/3/library/stdtypes.html#list) |
Example:
```
>>> d = DocumentRepository()
>>> from rdflib.namespace import DCTERMS
>>> rows = [{'uri':'http://ex.org/1','dcterms_title':'Abc','dcterms_issued':'2009-04-02'},
... {'uri':'http://ex.org/2','dcterms_title':'Abcd','dcterms_issued':'2010-06-30'},
... {'uri':'http://ex.org/3','dcterms_title':'Dfg','dcterms_issued':'2010-08-01'}]
>>> facets = [Facet(DCTERMS.title), Facet(DCTERMS.issued)]
>>> pagesets=d.toc_pagesets(rows,facets)
>>> pagesets[0].label
'Sorted by title'
>>> pagesets[0].pages[0]
<TocPage binding=dcterms_title linktext=a title=Documents starting with "a" value=a>
>>> pagesets[0].pages[0].linktext
'a'
>>> pagesets[0].pages[0].title
'Documents starting with "a"'
>>> pagesets[0].pages[0].binding
'dcterms_title'
>>> pagesets[0].pages[0].value
'a'
>>> pagesets[1].label
'Sorted by publication year'
>>> pagesets[1].pages[0]
<TocPage binding=dcterms_issued linktext=2009 title=Documents published in 2009 value=2009>
```
`toc_select_for_pages`(*data*, *pagesets*, *facets*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.toc_select_for_pages)[¶](#ferenda.DocumentRepository.toc_select_for_pages)
Go through all data rows (each row representing a document)
and, for each toc page, select those documents that are to appear in a particular page.
Example:
```
>>> d = DocumentRepository()
>>> rows = [{'uri':'http://ex.org/1','dcterms_title':'Abc','dcterms_issued':'2009-04-02'},
... {'uri':'http://ex.org/2','dcterms_title':'Abcd','dcterms_issued':'2010-06-30'},
... {'uri':'http://ex.org/3','dcterms_title':'Dfg','dcterms_issued':'2010-08-01'}]
>>> from rdflib.namespace import DCTERMS
>>> facets = [Facet(DCTERMS.title), Facet(DCTERMS.issued)]
>>> pagesets=d.toc_pagesets(rows,facets)
>>> expected={('dcterms_title','a'):[[Link('Abc',uri='http://ex.org/1')],
... [Link('Abcd',uri='http://ex.org/2')]],
... ('dcterms_title','d'):[[Link('Dfg',uri='http://ex.org/3')]],
... ('dcterms_issued','2009'):[[Link('Abc',uri='http://ex.org/1')]],
... ('dcterms_issued','2010'):[[Link('Abcd',uri='http://ex.org/2')],
... [Link('Dfg',uri='http://ex.org/3')]]}
>>> d.toc_select_for_pages(rows, pagesets, facets) == expected
True
```
| Parameters: | * **data** – List of dicts as returned by `toc_select()`
* **pagesets** – Result from [`toc_pagesets()`](#ferenda.DocumentRepository.toc_pagesets)
* **facets** – Result from [`facets()`](#ferenda.DocumentRepository.facets)
|
| Returns: | mapping between toc basefile and documentlist for that basefile |
| Return type: | [dict](https://docs.python.org/3/library/stdtypes.html#dict) |
`toc_item`(*binding*, *row*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.toc_item)[¶](#ferenda.DocumentRepository.toc_item)
Returns a formatted version of row, using Element objects
`toc_generate_pages`(*pagecontent*, *pagesets*, *otherrepos=[]*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.toc_generate_pages)[¶](#ferenda.DocumentRepository.toc_generate_pages)
Creates a set of TOC pages by calling
[`toc_generate_page()`](#ferenda.DocumentRepository.toc_generate_page).
| Parameters: | * **pagecontent** – Result from
[`toc_select_for_pages()`](#ferenda.DocumentRepository.toc_select_for_pages)
* **pagesets** – Result from
[`toc_pagesets()`](#ferenda.DocumentRepository.toc_pagesets)
* **otherrepos** – A list of document repository instances
|
`toc_generate_first_page`(*pagecontent*, *pagesets*, *otherrepos=[]*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.toc_generate_first_page)[¶](#ferenda.DocumentRepository.toc_generate_first_page)
Generate the main page of TOC pages.
`toc_generate_page`(*binding*, *value*, *documentlist*, *pagesets*, *effective_basefile=None*, *otherrepos=[]*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.toc_generate_page)[¶](#ferenda.DocumentRepository.toc_generate_page)
Generate a single TOC page.
| Parameters: | * **binding** – The binding used (eg. ‘title’ or ‘issued’)
* **value** – The value for the used binding (eg. ‘a’ or ‘2013’)
* **documentlist** – Result from
[`toc_select_for_pages()`](#ferenda.DocumentRepository.toc_select_for_pages)
* **pagesets** – Result from
[`toc_pagesets()`](#ferenda.DocumentRepository.toc_pagesets)
* **effective_basefile** – Place the resulting page somewhere else than `toc/*binding*/*value*.html`
* **otherrepos** – A list of document repository instances
|
`news`(*otherrepos=[]*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.news)[¶](#ferenda.DocumentRepository.news)
Create a set of Atom feeds and corresponding HTML pages for new/updated documents in different categories in the repository.
`news_facet_entries`(*keyfunc=None*, *reverse=True*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.news_facet_entries)[¶](#ferenda.DocumentRepository.news_facet_entries)
Returns a set of entries, decorated with information from
[`faceted_data()`](#ferenda.DocumentRepository.faceted_data), used for feed generation.
| Parameters: | * **keyfunc** (*callable*) – Function that given a dict, returns an element
from that dict, used for sorting entries.
* **reverse** – The direction of the sorting
|
| Returns: | entries, each represented as a dict |
| Return type: | [list](https://docs.python.org/3/library/stdtypes.html#list) |
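For example, entries could be sorted newest-first on their `updated` value; a minimal sketch, where `repo` being an instance of a `DocumentRepository` subclass and the presence of an `updated` key in each entry dict are assumptions:
```
# Sketch: sort feed entries by their "updated" value, newest first.
entries = repo.news_facet_entries(keyfunc=lambda entry: entry["updated"],
reverse=True)
```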
`news_feedsets`(*data*, *facets*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.news_feedsets)[¶](#ferenda.DocumentRepository.news_feedsets)
Calculate the set of needed feedsets based on facets and instance values in the data
| Parameters: | * **data** – list of dicts, each dict containing metadata about a single document
* **facets** – list of Facet objects
|
| Returns: | A list of Feedset objects |
`news_select_for_feeds`(*data*, *feedsets*, *facets*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.news_select_for_feeds)[¶](#ferenda.DocumentRepository.news_select_for_feeds)
Go through all data rows (each row representing a document)
and, for each newsfeed, select those document entries that are to appear in that feed
| Parameters: | * **data** – List of dicts as returned by
[`news_facet_entries()`](#ferenda.DocumentRepository.news_facet_entries)
* **feedsets** – List of feedset objects, the result from
[`news_feedsets()`](#ferenda.DocumentRepository.news_feedsets)
* **facets** – Result from [`facets()`](#ferenda.DocumentRepository.facets)
|
| Returns: | mapping between a (binding, value) tuple and entries for that tuple |
`news_item`(*binding*, *entry*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.news_item)[¶](#ferenda.DocumentRepository.news_item)
Returns a modified version of the news entry for use in a specific feed.
You can override this if you eg. want to customize title or summary of each entry in a particular feed. The default implementation does not change the entry in any way.
| Parameters: | * **binding** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – identifier for the feed being constructed, derived
from a facet object.
* **entry** ([*ferenda.DocumentEntry*](index.html#ferenda.DocumentEntry)) – The entry object to modify
|
| Returns: | The modified entry |
| Return type: | [ferenda.DocumentEntry](index.html#ferenda.DocumentEntry) |
`news_entries`()[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.news_entries)[¶](#ferenda.DocumentRepository.news_entries)
Return a generator of all available (and published) DocumentEntry objects.
`news_generate_feeds`(*feedsets*, *generate_html=True*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.news_generate_feeds)[¶](#ferenda.DocumentRepository.news_generate_feeds)
Creates a set of Atom feeds (and optionally HTML equivalents) by calling [`news_write_atom()`](#ferenda.DocumentRepository.news_write_atom)
for each feed in feedsets.
| Parameters: | * **feedsets** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – the result of [`news_feedsets()`](#ferenda.DocumentRepository.news_feedsets)
* **generate_html** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Whether to generate HTML equivalents of the atom feeds
|
`news_write_atom`(*entries*, *title*, *slug*, *archivesize=100*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.news_write_atom)[¶](#ferenda.DocumentRepository.news_write_atom)
Given a list of Atom entry-like objects, including links to RDF and PDF files (if applicable), create a rinfo-compatible Atom feed,
optionally splitting into archives.
| Parameters: | * **entries** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – [`DocumentEntry`](index.html#ferenda.DocumentEntry) objects
* **title** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – feed title
* **slug** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – used for constructing the path where the Atom files are stored and the URL where it’s published.
* **archivesize** ([*int*](https://docs.python.org/3/library/functions.html#int)) – The amount of entries in each archive file. The main file might contain up to 2 x this amount.
|
`frontpage_content`(*primary=False*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.frontpage_content)[¶](#ferenda.DocumentRepository.frontpage_content)
If the module wants to provide any particular content on the frontpage, it can do so by returning an XHTML fragment (in text form) here.
| Parameters: | **primary** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Whether the caller wants the module to take primary responsibility for the frontpage content. If `False`, the caller only expects a smaller amount of content (like a smaller presentation of the repository and the document it contains). |
| Returns: | the XHTML fragment |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
If *primary* is true, the caller wants the module to take primary responsibility for the frontpage content. If *primary* is false, the caller only expects a smaller amount of content (like a brief presentation of the repository and the documents it contains).
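A sketch of a repository providing its own frontpage blurb (the markup and wording are arbitrary examples):
```
from ferenda import DocumentRepository
class MyRepo(DocumentRepository):
def frontpage_content(self, primary=False):
if primary:
# take full responsibility for the frontpage
return ("<div><h2>Example documents</h2>"
"<p>Browse the collection via the tab above.</p></div>")
# otherwise, just a short teaser
return "<p>A small collection of example documents.</p>"
```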
`status`(*basefile=None*, *samplesize=3*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.status)[¶](#ferenda.DocumentRepository.status)
Prints out some basic status information about this repository.
`get_status`()[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.get_status)[¶](#ferenda.DocumentRepository.get_status)
Returns basic data about the state about this repository, used by
[`status()`](#ferenda.DocumentRepository.status). Returns a dict of dicts, one per state (‘download’, ‘parse’ and ‘generated’),
each containing lists under the ‘exists’ and ‘todo’ keys.
| Returns: | Status information |
| Return type: | [dict](https://docs.python.org/3/library/stdtypes.html#dict) |
`tabs`()[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.tabs)[¶](#ferenda.DocumentRepository.tabs)
Get the navigation menu segment(s) provided by this docrepo.
Returns a list of tuples, where each tuple will be rendered as a tab in the main UI. First element of the tuple is the link text, and the second is the link destination. Normally, a module will only return a single tab.
| Returns: | (link text, link destination) tuples |
| Return type: | [list](https://docs.python.org/3/library/stdtypes.html#list) |
Example:
```
>>> d = DocumentRepository()
>>> d.tabs()
[('base', 'http://localhost:8000/dataset/base')]
```
`footer`()[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.footer)[¶](#ferenda.DocumentRepository.footer)
Get a list of resources provided by this repo for publication in the site footer.
Works like [`tabs()`](#ferenda.DocumentRepository.tabs), but normally returns an empty list. The repo
[`ferenda.sources.general.Static`](index.html#ferenda.sources.general.Static) is an exception.
`http_handle`(*environ*)[[source]](_modules/ferenda/documentrepository.html#DocumentRepository.http_handle)[¶](#ferenda.DocumentRepository.http_handle)
Used by the WSGI support to indicate if this repo can provide a response to a particular request. If so, returns a tuple *(fp,
length, memtype)*, where *fp* is an open file of the document to be returned.
### The `Document` class[¶](#the-document-class)
*class* `ferenda.``Document`(*meta=None*, *body=None*, *uri=None*, *lang=None*, *basefile=None*)[[source]](_modules/ferenda/document.html#Document)[¶](#ferenda.Document)
A document represents the content of a document together with an RDF graph containing metadata about the document. Don’t create instances of [`Document`](#ferenda.Document) directly. Create them through [`make_document()`](index.html#ferenda.DocumentRepository.make_document) in order to properly initialize the `meta` property.
| Parameters: | * **meta** – An RDF graph containing metadata about the document
* **body** – A list of [`ferenda.elements`](index.html#module-ferenda.elements) based objects representing the content of the document
* **uri** – The canonical URI for this document
* **lang** – The main language of the document as an IETF language tag, e.g. “sv” or “en-GB”
* **basefile** – The basefile of the document
|
### The `DocumentEntry` class[¶](#the-documententry-class)
*class* `ferenda.``DocumentEntry`(*path=None*)[[source]](_modules/ferenda/documententry.html#DocumentEntry)[¶](#ferenda.DocumentEntry)
This class has two primary uses – it is used to represent and store aspects of the downloading of each document (when it was initially downloaded, optionally updated, and last checked, as well as the URL from which it was downloaded). It’s also used by the news_* methods to encapsulate various aspects of a document entry in an atom feed. Some properties and methods are used by both of these use cases, but not all.
| Parameters: | **path** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – If this file path is an existing JSON file, the object is initialized from that file. |
`orig_created` *= None*[¶](#ferenda.DocumentEntry.orig_created)
The first time we fetched the document from its original location.
`id` *= None*[¶](#ferenda.DocumentEntry.id)
The canonical uri for the document.
`basefile` *= None*[¶](#ferenda.DocumentEntry.basefile)
The basefile for the document.
`orig_updated` *= None*[¶](#ferenda.DocumentEntry.orig_updated)
The last time the content at the original location of the document was changed.
`orig_checked` *= None*[¶](#ferenda.DocumentEntry.orig_checked)
The last time we accessed the original location of this document, regardless of whether this led to an update.
`orig_url` *= None*[¶](#ferenda.DocumentEntry.orig_url)
The main url from where we fetched this document.
`indexed_ts` *= None*[¶](#ferenda.DocumentEntry.indexed_ts)
The last time the metadata was indexed in a triplestore
`indexed_dep` *= None*[¶](#ferenda.DocumentEntry.indexed_dep)
The last time the dependent files of the document were indexed
`indexed_ft` *= None*[¶](#ferenda.DocumentEntry.indexed_ft)
The last time the document was indexed in a fulltext index
`published` *= None*[¶](#ferenda.DocumentEntry.published)
The date our parsed/processed version of the document was published.
`updated` *= None*[¶](#ferenda.DocumentEntry.updated)
The last time our parsed/processed version changed in any way (due to the original content being updated, or due to changes in our parsing functionality).
`title` *= None*[¶](#ferenda.DocumentEntry.title)
A title/label for the document, as used in an Atom feed.
`summary` *= None*[¶](#ferenda.DocumentEntry.summary)
A summary of the document, as used in an Atom feed.
`url` *= None*[¶](#ferenda.DocumentEntry.url)
The URL to the browser-ready version of the page, equivalent to what
`generated_url()` returns.
`content` *= None*[¶](#ferenda.DocumentEntry.content)
A dict that represents metadata about the document file.
`link` *= None*[¶](#ferenda.DocumentEntry.link)
A dict that represents metadata about the document's RDF metadata (such as its URI, length, MIME-type and MD5 hash).
`save`(*path=None*)[[source]](_modules/ferenda/documententry.html#DocumentEntry.save)[¶](#ferenda.DocumentEntry.save)
Saves the state of the documententry to a JSON file at *path*. If
*path* is not provided, uses the path that the object was initialized with.
`set_content`(*filename*, *url*, *mimetype=None*, *inline=False*)[[source]](_modules/ferenda/documententry.html#DocumentEntry.set_content)[¶](#ferenda.DocumentEntry.set_content)
Sets the `content` property and calculates md5 hash for the file
| Parameters: | * **filename** – The full path to the document file
* **url** – The full external URL that will be used to get the same document file
* **mimetype** – The MIME-type used in the atom feed. If not provided,
guess from file extension.
* **inline** – whether to inline the document content in the file or refer to *url*
|
`set_link`(*filename*, *url*, *mimetype=None*)[[source]](_modules/ferenda/documententry.html#DocumentEntry.set_link)[¶](#ferenda.DocumentEntry.set_link)
Sets the `link` property and calculates the md5 hash for the RDF metadata.
| Parameters: | * **filename** – The full path to the RDF file for a document
* **url** – The full external URL that will be used to get the same RDF file
* **mimetype** – The MIME-type used in the atom feed. If not provided,
guess from file extension.
|
`calculate_md5`(*filename*)[[source]](_modules/ferenda/documententry.html#DocumentEntry.calculate_md5)[¶](#ferenda.DocumentEntry.calculate_md5)
Given a filename, return the md5 value for the file’s content.
`guess_type`(*filename*)[[source]](_modules/ferenda/documententry.html#DocumentEntry.guess_type)[¶](#ferenda.DocumentEntry.guess_type)
Given a filename, return a MIME-type based on the file extension.
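A sketch of how an entry might be updated during a download step; all paths and URLs below are made-up examples:
```
from datetime import datetime
from ferenda import DocumentEntry
# hypothetical entry path for basefile "123"
entry = DocumentEntry("data/myrepo/entries/123.json")
entry.orig_url = "http://example.org/docs/123.html"
entry.orig_checked = datetime.now()
entry.set_content("data/myrepo/downloaded/123.html",
"http://localhost:8000/myrepo/123",
mimetype="text/html")
entry.save() # persists the entry back to the JSON path given above
```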
### The `DocumentStore` class[¶](#the-documentstore-class)
*class* `ferenda.``DocumentStore`(*datadir*, *downloaded_suffix='.html'*, *storage_policy='file'*)[[source]](_modules/ferenda/documentstore.html#DocumentStore)[¶](#ferenda.DocumentStore)
Unifies handling of reading and writing of various data files during the `download`, `parse` and `generate` stages.
| Parameters: | * **datadir** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The root directory (including docrepo path segment) where files are stored.
* **downloaded_suffix** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – File suffix for the main source document format. Determines the suffix of downloaded files.
* **storage_policy** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Some repositories have documents in several formats, documents split amongst several files or embedded resources. If
`storage_policy` is set to `dir`, then each document gets its own directory (the default filename being `index` +suffix),
otherwise each doc gets stored as a file in a directory with other files. Affects
[`path()`](#ferenda.DocumentStore.path)
(and therefore all other `*_path`
methods)
|
`resourcepath`(*resourcename*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.resourcepath)[¶](#ferenda.DocumentStore.resourcepath)
`path`(*basefile*, *maindir*, *suffix*, *version=None*, *attachment=None*, *storage_policy=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.path)[¶](#ferenda.DocumentStore.path)
Calculate a full filesystem path for the given parameters.
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile of the resource we’re calculating a filename for
* **maindir** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The stage of processing, e.g. `downloaded` or `parsed`
* **suffix** – Appropriate file suffix, e.g. `.txt` or `.pdf`
* **version** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. The archived version id
* **attachment** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. Any associated file needed by the main file.
* **storage_policy** – Optional. Used to override storage_policy if needed
|
Note
This is a generic method with many parameters. In order to keep your code tidy and loosely coupled to the actual storage policy, you should use methods like
[`downloaded_path()`](#ferenda.DocumentStore.downloaded_path) or [`parsed_path()`](#ferenda.DocumentStore.parsed_path) when possible.
Example:
```
>>> d = DocumentStore(datadir="/tmp/base")
>>> realsep = os.sep
>>> os.sep = "/"
>>> d.path('123/a', 'parsed', '.xhtml') == '/tmp/base/parsed/123/a.xhtml'
True
>>> d.storage_policy = "dir"
>>> d.path('123/a', 'parsed', '.xhtml') == '/tmp/base/parsed/123/a/index.xhtml'
True
>>> d.path('123/a', 'downloaded', None, 'r4711', 'appendix.txt') == '/tmp/base/archive/downloaded/123/a/r4711/appendix.txt'
True
>>> os.sep = realsep
```
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the path
* **maindir** – The processing stage directory (normally `downloaded`, `parsed`, or `generated`)
* **suffix** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The file extension including period (i.e. `.txt`, not `txt`)
* **version** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional, the archived version id
* **attachment** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. Any associated file needed by the main file. Requires that `storage_policy` is set to `dir`. `suffix` is ignored if this parameter is used.
|
| Returns: | The full filesystem path |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`open`(*basefile*, *maindir*, *suffix*, *mode='r'*, *version=None*, *attachment=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.open)[¶](#ferenda.DocumentStore.open)
Context manager that opens files for reading or writing. The parameters are the same as for
[`path()`](#ferenda.DocumentStore.path), and the note is applicable here as well – use
[`open_downloaded()`](#ferenda.DocumentStore.open_downloaded),
[`open_parsed()`](#ferenda.DocumentStore.open_parsed) et al if possible.
Example:
```
>>> store = DocumentStore(datadir="/tmp/base")
>>> with store.open('123/a', 'parsed', '.xhtml', mode="w") as fp:
... res = fp.write("hello world")
>>> os.path.exists("/tmp/base/parsed/123/a.xhtml")
True
```
`list_basefiles_for`(*action*, *basedir=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.list_basefiles_for)[¶](#ferenda.DocumentStore.list_basefiles_for)
Get all available basefiles that can be used for the specified action.
| Parameters: | * **action** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The action for which to get available basefiles (`parse`, `relate`, `generate`
or `news`)
* **basedir** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The base directory in which to search for available files. If not provided, defaults to
`self.datadir`.
|
| Returns: | All available basefiles |
| Return type: | generator |
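A common pattern is to iterate over everything that is ready for a given stage; a minimal sketch, assuming a store rooted at a hypothetical data directory:
```
from ferenda import DocumentStore
store = DocumentStore(datadir="data/base") # hypothetical root
for basefile in store.list_basefiles_for("parse"):
# each basefile has a downloaded file and can now be parsed
print(basefile, store.downloaded_path(basefile))
```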
`list_versions`(*basefile*, *action=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.list_versions)[¶](#ferenda.DocumentStore.list_versions)
Get all archived versions of a given basefile.
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile to list archived versions for
* **action** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The type of file to look for (either `downloaded`, `parsed` or `generated`). If `None`, look for all types.
|
| Returns: | All available versions for that basefile |
| Return type: | generator |
`list_attachments`(*basefile*, *action*, *version=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.list_attachments)[¶](#ferenda.DocumentStore.list_attachments)
Get all attachments for a basefile in a specified state
| Parameters: | * **action** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The state (type of file) to look for (either `downloaded`, `parsed` or `generated`). If `None`, look for all types.
* **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile to list attachments for
* **version** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The version of the basefile to list attachments for. If None, list attachments for the current version.
|
| Returns: | All available attachments for the basefile |
| Return type: | generator |
`basefile_to_pathfrag`(*basefile*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.basefile_to_pathfrag)[¶](#ferenda.DocumentStore.basefile_to_pathfrag)
Given a basefile, returns a string that can safely be used as a fragment of the path for any representation of that file. The default implementation recognizes a number of characters that are unsafe to use in file names and replaces them with HTTP percent-style encoding.
Example:
```
>>> d = DocumentStore("/tmp")
>>> realsep = os.sep
>>> os.sep = "/"
>>> d.basefile_to_pathfrag('1998:204') == '1998/%3A204'
True
>>> os.sep = realsep
```
If you wish to override how document files are stored in directories, you can override this method, but you should make sure to also override
[`pathfrag_to_basefile()`](#ferenda.DocumentStore.pathfrag_to_basefile) to work as the inverse of this method.
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile to encode |
| Returns: | The encoded path fragment |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`pathfrag_to_basefile`(*pathfrag*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.pathfrag_to_basefile)[¶](#ferenda.DocumentStore.pathfrag_to_basefile)
Does the inverse of
[`basefile_to_pathfrag()`](#ferenda.DocumentStore.basefile_to_pathfrag),
that is, converts a fragment of a file path into the corresponding basefile.
| Parameters: | **pathfrag** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The path fragment to decode |
| Returns: | The resulting basefile |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`archive`(*basefile*, *version*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.archive)[¶](#ferenda.DocumentStore.archive)
Moves the current version of a document to an archive. All files related to the document are moved (downloaded, parsed,
generated files and any existing attachment files).
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile of the document to archive
* **version** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The version id to archive under
|
`downloaded_path`(*basefile*, *version=None*, *attachment=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.downloaded_path)[¶](#ferenda.DocumentStore.downloaded_path)
Get the full path for the downloaded file for the given basefile (and optionally archived version and/or attachment filename).
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the path
* **version** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. The archived version id
* **attachment** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. Any associated file needed by the main file.
|
| Returns: | The full filesystem path |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`open_downloaded`(*basefile*, *mode='r'*, *version=None*, *attachment=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.open_downloaded)[¶](#ferenda.DocumentStore.open_downloaded)
Opens files for reading and writing,
c.f. [`open()`](#ferenda.DocumentStore.open). The parameters are the same as for
[`downloaded_path()`](#ferenda.DocumentStore.downloaded_path).
`documententry_path`(*basefile*, *version=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.documententry_path)[¶](#ferenda.DocumentStore.documententry_path)
Get the full path for the documententry JSON file for the given basefile (and optionally archived version).
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the path
* **version** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. The archived version id
|
| Returns: | The full filesystem path |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`intermediate_path`(*basefile*, *version=None*, *attachment=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.intermediate_path)[¶](#ferenda.DocumentStore.intermediate_path)
Get the full path for the main intermediate file for the given basefile (and optionally archived version).
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the path
* **version** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. The archived version id
* **attachment** – Optional. Any associated file created or retained in the intermediate step
|
| Returns: | The full filesystem path |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`open_intermediate`(*basefile*, *mode='r'*, *version=None*, *attachment=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.open_intermediate)[¶](#ferenda.DocumentStore.open_intermediate)
Opens files for reading and writing,
c.f. [`open()`](#ferenda.DocumentStore.open). The parameters are the same as for
[`intermediate_path()`](#ferenda.DocumentStore.intermediate_path).
`parsed_path`(*basefile*, *version=None*, *attachment=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.parsed_path)[¶](#ferenda.DocumentStore.parsed_path)
Get the full path for the parsed XHTML file for the given basefile.
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the path
* **version** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. The archived version id
* **attachment** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. Any associated file needed by the main file (created by
[`parse()`](index.html#ferenda.DocumentRepository.parse))
|
| Returns: | The full filesystem path |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`open_parsed`(*basefile*, *mode='r'*, *version=None*, *attachment=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.open_parsed)[¶](#ferenda.DocumentStore.open_parsed)
Opens files for reading and writing,
c.f. [`open()`](#ferenda.DocumentStore.open). The parameters are the same as for
[`parsed_path()`](#ferenda.DocumentStore.parsed_path).
`serialized_path`(*basefile*, *version=None*, *attachment=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.serialized_path)[¶](#ferenda.DocumentStore.serialized_path)
Get the full path for the serialized JSON file for the given basefile.
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the path
* **version** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. The archived version id
|
| Returns: | The full filesystem path |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`open_serialized`(*basefile*, *mode='r'*, *version=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.open_serialized)[¶](#ferenda.DocumentStore.open_serialized)
Opens files for reading and writing,
c.f. [`open()`](#ferenda.DocumentStore.open). The parameters are the same as for
[`serialized_path()`](#ferenda.DocumentStore.serialized_path).
`distilled_path`(*basefile*, *version=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.distilled_path)[¶](#ferenda.DocumentStore.distilled_path)
Get the full path for the distilled RDF/XML file for the given basefile.
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the path
* **version** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. The archived version id
|
| Returns: | The full filesystem path |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`open_distilled`(*basefile*, *mode='r'*, *version=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.open_distilled)[¶](#ferenda.DocumentStore.open_distilled)
Opens files for reading and writing,
c.f. [`open()`](#ferenda.DocumentStore.open). The parameters are the same as for
[`distilled_path()`](#ferenda.DocumentStore.distilled_path).
`generated_path`(*basefile*, *version=None*, *attachment=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.generated_path)[¶](#ferenda.DocumentStore.generated_path)
Get the full path for the generated file for the given basefile (and optionally archived version and/or attachment filename).
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the path
* **version** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. The archived version id
* **attachment** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. Any associated file needed by the main file.
|
| Returns: | The full filesystem path |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`annotation_path`(*basefile*, *version=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.annotation_path)[¶](#ferenda.DocumentStore.annotation_path)
Get the full path for the annotation file for the given basefile (and optionally archived version).
| Parameters: | * **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the path
* **version** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Optional. The archived version id
|
| Returns: | The full filesystem path |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`open_annotation`(*basefile*, *mode='r'*, *version=None*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.open_annotation)[¶](#ferenda.DocumentStore.open_annotation)
Opens files for reading and writing,
c.f. [`open()`](#ferenda.DocumentStore.open). The parameters are the same as for
[`annotation_path()`](#ferenda.DocumentStore.annotation_path).
`dependencies_path`(*basefile*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.dependencies_path)[¶](#ferenda.DocumentStore.dependencies_path)
Get the full path for the dependency file for the given basefile
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the path |
| Returns: | The full filesystem path |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`open_dependencies`(*basefile*, *mode='r'*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.open_dependencies)[¶](#ferenda.DocumentStore.open_dependencies)
Opens files for reading and writing,
c.f. [`open()`](#ferenda.DocumentStore.open). The parameters are the same as for
[`dependencies_path()`](#ferenda.DocumentStore.dependencies_path).
`atom_path`(*basefile*)[[source]](_modules/ferenda/documentstore.html#DocumentStore.atom_path)[¶](#ferenda.DocumentStore.atom_path)
Get the full path for the atom file for the given basefile
Note
This is used by [`ferenda.DocumentRepository.news()`](index.html#ferenda.DocumentRepository.news) and does not really operate on “real” basefiles. It might be removed. You probably shouldn’t use it unless you override
[`news()`](index.html#ferenda.DocumentRepository.news)
| Parameters: | **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for which to calculate the path |
| Returns: | The full filesystem path |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
### The `Facet` class[¶](#the-facet-class)
*class* `ferenda.``Facet`(*rdftype=rdflib.term.URIRef('http://purl.org/dc/terms/title')*, *label=None*, *pagetitle=None*, *indexingtype=None*, *selector=None*, *key=None*, *identificator=None*, *toplevel_only=None*, *use_for_toc=None*, *use_for_feed=None*, *selector_descending=None*, *key_descending=None*, *multiple_values=None*, *dimension_type=None*, *dimension_label=None*)[[source]](_modules/ferenda/facet.html#Facet)[¶](#ferenda.Facet)
Create a facet from the given rdftype and some optional parameters.
| Parameters: | * **rdftype** ([*rdflib.term.URIRef*](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.URIRef)) – The type of facet being created
* **label** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – A template for the label property of TocPageset objects created from this facet
* **pagetitle** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – A template for the title property of TocPage objects created from this facet
* **indexingtype** (*ferenda.fulltext.IndexedType*) – Object specifying how to store the data selected by this facet in the fulltext index
* **selector** (*callable*) – A function that takes *(row, binding, resource_graph)*
and returns a string acting as a category of some kind
* **key** (*callable*) – A function that takes *(row, binding, resource_graph)* and returns a string usable for sorting
* **toplevel_only** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Whether this facet should be applied to documents only, or any named (ie. given a URI) fragment of a document.
* **use_for_toc** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Whether this facet should be used for TOC generation
* **use_for_feed** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Whether this facet should be used for newsfeed generation
* **selector_descending** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Whether the values returned by `selector`
should be presented in lexical descending order
* **key_descending** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Whether documents, when sorted through the `key`
function, should be presented in reverse order.
* **multiple_values** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Whether more than one instance of the `rdftype`
value should be processed (such as multiple keywords each specified by one `dcterms:subject`
triple).
* **dimension_type** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The general type of this facet – can be `"type"`
(values are `rdf:type`), `"ref"` (values are URIs), `"year"` (values are xsd:datetime or similar), or `"value"` (values are string literals).
* **dimension_label** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – An alternate label for this facet to be used if the `selector` logic is more transformative than selectional (ie. if it transforms dates to True or False values depending on whether they’re April 1st, you might set this to “aprilfirst”)
* **identificator** (*callable*) – A function that takes *(row, binding,
resource_graph)* and returns an identifier-like string usable as an id string or URL segment.
|
If optional parameters aren’t provided, then appropriate values are selected if rdftype is one of the following common RDF properties:
| facet | description |
| --- | --- |
| rdf:type | Grouped by [`qname()`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph.qname) of the `rdf:type` of the document, eg. `foaf:Document`. Not used for toc |
| dcterms:title | Grouped by first “sortable” letter, eg for a document titled “The Little Prince” returns “l”. Is used as a facet for the API, but it’s debatable if it’s useful |
| dcterms:identifier | Also grouped by first sortable letter. When indexing, the resulting fulltext index field has a high boost value, which increases the chances of this document ranking high when one searches for its identifier. |
| dcterms:abstract | Not used for toc |
| dc:creator | Should be a free-text (string literal) value |
| dcterms:publisher | Should be a URIRef |
| dcterms:references | |
| dcterms:issued | Used for grouping documents published/issued in the same year |
| dc:subject | A document can have multiple dc:subjects and all are indexed/processed |
| dcterms:subject | Works like dc:subject, but the value should be a URIRef |
| schema:free | A boolean value |
This module contains a number of classmethods that can be used as arguments to `selector` and `key`, eg
```
>>> from rdflib import Namespace
>>> MYVOCAB = Namespace("http://example.org/vocab/")
>>> f = Facet(MYVOCAB.enactmentDate, selector=Facet.year)
>>> f.selector({'myvocab_enactmentDate': '2014-07-06'},
... 'myvocab_enactmentDate')
'2014'
```
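A custom, transformative selector (like the hypothetical “aprilfirst” example mentioned under `dimension_label` above) follows the same calling convention; the function and the expected output below are illustrative assumptions:
```
>>> from rdflib.namespace import DCTERMS
>>> from ferenda import Facet
>>> def aprilfirst(row, binding, resource_graph=None):
...     return str(row[binding][5:10] == "04-01")
>>> f = Facet(DCTERMS.issued, selector=aprilfirst, dimension_label="aprilfirst")
>>> f.selector({'dcterms_issued': '2014-04-01'}, 'dcterms_issued')
'True'
```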
*classmethod* `defaultselector`(*row*, *binding*, *resource_graph=None*)[[source]](_modules/ferenda/facet.html#Facet.defaultselector)[¶](#ferenda.Facet.defaultselector)
This returns `row[binding]` without any transformation.
```
>>> row = {"rdf_type": "http://purl.org/ontology/bibo/Book",
... "dcterms_title": "A Tale of Two Cities",
... "dcterms_issued": "1859-04-30",
... "dcterms_publisher": "http://example.org/chapman_hall",
... "schema_free": "true"}
>>> Facet.defaultselector(row, "dcterms_title")
'A Tale of Two Cities'
```
*classmethod* `defaultidentificator`(*row*, *binding*, *resource_graph=None*)[[source]](_modules/ferenda/facet.html#Facet.defaultidentificator)[¶](#ferenda.Facet.defaultidentificator)
This returns `row[binding]` run through a simple slug-like transformation.
```
>>> row = {"rdf_type": "http://purl.org/ontology/bibo/Book",
... "dcterms_title": "A Tale of Two Cities",
... "dcterms_issued": "1859-04-30",
... "dcterms_publisher": "http://example.org/chapman_hall",
... "schema_free": "true"}
>>> Facet.defaultidentificator(row, "dcterms_title")
'a-tale-of-two-cities'
```
*classmethod* `year`(*row*, *binding='dcterms_issued'*, *resource_graph=None*)[[source]](_modules/ferenda/facet.html#Facet.year)[¶](#ferenda.Facet.year)
This returns the year part of `row[binding]`.
```
>>> row = {"rdf_type": "http://purl.org/ontology/bibo/Book",
... "dcterms_title": "A Tale of Two Cities",
... "dcterms_issued": "1859-04-30",
... "dcterms_publisher": "http://example.org/chapman_hall",
... "schema_free": "true"}
>>> Facet.year(row, "dcterms_issued")
'1859'
```
*classmethod* `booleanvalue`(*row*, *binding='schema_free'*, *resource_graph=None*)[[source]](_modules/ferenda/facet.html#Facet.booleanvalue)[¶](#ferenda.Facet.booleanvalue)
Returns True iff row[binding] == “true”, False otherwise.
```
>>> row = {"rdf_type": "http://purl.org/ontology/bibo/Book",
... "dcterms_title": "A Tale of Two Cities",
... "dcterms_issued": "1859-04-30",
... "dcterms_publisher": "http://example.org/chapman_hall",
... "schema_free": "true"}
>>> Facet.booleanvalue(row, "schema_free")
True
```
*classmethod* `titlesortkey`(*row*, *binding='dcterms_title'*, *resource_graph=None*)[[source]](_modules/ferenda/facet.html#Facet.titlesortkey)[¶](#ferenda.Facet.titlesortkey)
Returns a version of row[binding] suitable for sorting. The function [`title_sortkey()`](index.html#ferenda.util.title_sortkey) is used for string transformation.
```
>>> row = {"rdf_type": "http://purl.org/ontology/bibo/Book",
... "dcterms_title": "A Tale of Two Cities",
... "dcterms_issued": "1859-04-30",
... "dcterms_publisher": "http://example.org/chapman_hall",
... "schema_free": "true"}
>>> Facet.titlesortkey(row, "dcterms_title")
'ataleoftwocities'
```
*classmethod* `firstletter`(*row*, *binding='dcterms_title'*, *resource_graph=None*)[[source]](_modules/ferenda/facet.html#Facet.firstletter)[¶](#ferenda.Facet.firstletter)
Returns the first letter of row[binding], transformed into a sortable string.
```
>>> row = {"rdf_type": "http://purl.org/ontology/bibo/Book",
... "dcterms_title": "A Tale of Two Cities",
... "dcterms_issued": "1859-04-30",
... "dcterms_publisher": "http://example.org/chapman_hall",
... "schema_free": "true"}
>>> Facet.firstletter(row, "dcterms_title")
'a'
```
*classmethod* `resourcelabel`(*row*, *binding='dcterms_publisher'*, *resource_graph=None*)[[source]](_modules/ferenda/facet.html#Facet.resourcelabel)[¶](#ferenda.Facet.resourcelabel)
Lookup a suitable text label for row[binding] in resource_graph.
```
>>> row = {"rdf_type": "http://purl.org/ontology/bibo/Book",
... "dcterms_title": "A Tale of Two Cities",
... "dcterms_issued": "1859-04-30",
... "dcterms_publisher": "http://example.org/chapman_hall",
... "schema_free": "true"}
>>> import rdflib
>>> resources = rdflib.Graph().parse(format="turtle", data="""
... @prefix foaf: <http://xmlns.com/foaf/0.1/> .
...
... <http://example.org/chapman_hall> a foaf:Organization;
... foaf:name "Chapman & Hall" .
...
... """)
>>> Facet.resourcelabel(row, "dcterms_publisher", resources)
'Chapman & Hall'
```
*classmethod* `sortresource`(*row*, *binding='dcterms_publisher'*, *resource_graph=None*)[[source]](_modules/ferenda/facet.html#Facet.sortresource)[¶](#ferenda.Facet.sortresource)
Returns a sortable version of the resource label for
`row[binding]`.
```
>>> row = {"rdf_type": "http://purl.org/ontology/bibo/Book",
... "dcterms_title": "A Tale of Two Cities",
... "dcterms_issued": "1859-04-30",
... "dcterms_publisher": "http://example.org/chapman_hall",
... "schema_free": "true"}
>>> import rdflib
>>> resources = rdflib.Graph().parse(format="turtle", data="""
... @prefix foaf: <http://xmlns.com/foaf/0.1/> .
...
... <http://example.org/chapman_hall> a foaf:Organization;
... foaf:name "Chapman & Hall" .
...
... """)
>>> Facet.sortresource(row, "dcterms_publisher", resources)
'chapmanhall'
```
*classmethod* `term`(*row*, *binding='dcterms_publisher'*, *resource_graph=None*)[[source]](_modules/ferenda/facet.html#Facet.term)[¶](#ferenda.Facet.term)
Returns the leaf part of the URI found in `row[binding]`.
```
>>> row = {"rdf_type": "http://purl.org/ontology/bibo/Book",
... "dcterms_title": "A Tale of Two Cities",
... "dcterms_issued": "1859-04-30",
... "dcterms_publisher": "http://example.org/chapman_hall",
... "schema_free": "true"}
>>> Facet.term(row, "dcterms_publisher")
'chapman_hall'
```
*classmethod* `qname`(*row*, *binding='rdf_type'*, *resource_graph=None*)[[source]](_modules/ferenda/facet.html#Facet.qname)[¶](#ferenda.Facet.qname)
Returns the qname of the rdf URIref contained in row[binding], as determined by the namespace prefixes registered in resource_graph.
```
>>> row = {"rdf_type": "http://purl.org/ontology/bibo/Book",
... "dcterms_title": "A Tale of Two Cities",
... "dcterms_issued": "1859-04-30",
... "dcterms_publisher": "http://example.org/chapman_hall",
... "schema_free": "true"}
>>> import rdflib
>>> resources = rdflib.Graph()
>>> resources.bind("bibo", "http://purl.org/ontology/bibo/")
>>> Facet.qname(row, "rdf_type", resources)
'bibo:Book'
```
*classmethod* `resourcelabel_or_qname`(*row*, *binding='rdf_type'*, *resource_graph=None*)[[source]](_modules/ferenda/facet.html#Facet.resourcelabel_or_qname)[¶](#ferenda.Facet.resourcelabel_or_qname)
### The `TocPage` class[¶](#the-tocpage-class)
*class* `ferenda.``TocPage`(*linktext*, *title*, *binding*, *value*)[[source]](_modules/ferenda/tocpage.html#TocPage)[¶](#ferenda.TocPage)
Represents a particular TOC page.
| Parameters: | * **linktext** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The text used for TOC links *to* this page, like “a” or “2013”.
* **title** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – A description of this page, like “Documents starting with ‘a’”
* **binding** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The variable binding used for defining this TOC page, like “title” or “issued”
* **value** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The particular value of bound variable that corresponds to this TOC page, like “a” or “2013”. The `selector` function of a [`Facet`](index.html#ferenda.Facet) object is used to select this value out of the raw data.
|
### The `TocPageset` class[¶](#the-tocpageset-class)
*class* `ferenda.``TocPageset`(*label*, *pages*, *predicate=None*)[[source]](_modules/ferenda/tocpageset.html#TocPageset)[¶](#ferenda.TocPageset)
Represents a particular set of TOC pages, structured around some particular attribute(s) of documents, like title or publication date. [`toc_pagesets()`](index.html#ferenda.DocumentRepository.toc_pagesets) returns a list of these objects, override that method to provide custom TocPageset objects.
| Parameters: | * **label** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – A description of this set of TOC pages, like
“By publication year”
* **pages** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – The set of [`TocPage`](index.html#ferenda.TocPage) objects that makes up this page set.
* **predicate** ([*rdflib.term.URIRef*](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.URIRef)) – The RDFLib predicate (if any) that this pageset is keyed on.
|
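When overriding `toc_pagesets()`, the objects can be constructed directly; a minimal sketch using the constructor signatures documented above (the label and page values are arbitrary examples):
```
>>> from ferenda import TocPageset, TocPage
>>> pageset = TocPageset(label="By publication year",
...                      pages=[TocPage("2013", "Documents published in 2013",
...                                     "dcterms_issued", "2013")])
```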
### The `Feed` class[¶](#the-feed-class)
*class* `ferenda.``Feed`(*slug*, *title*, *binding*, *value*)[[source]](_modules/ferenda/feed.html#Feed)[¶](#ferenda.Feed)
Represents a particular Feed of new or updated items selected by some criteria.
| Parameters: | * **label** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – A description of this feed, like “Documents published by XYZ”
* **binding** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The variable binding used for defining this feed, like
“title” or “issued”
* **value** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The particular value of bound variable that corresponds to this feed, like “a” or “2013”. The `selector`
function of a [`Facet`](index.html#ferenda.Facet) object is used to select this value out of the raw data.
|
*classmethod* `all`(*row*, *entry*)[[source]](_modules/ferenda/feed.html#Feed.all)[¶](#ferenda.Feed.all)
### The `Feedset` class[¶](#the-feedset-class)
*class* `ferenda.``Feedset`(*label*, *feeds*, *predicate=None*)[[source]](_modules/ferenda/feedset.html#Feedset)[¶](#ferenda.Feedset)
Represents a particular set of feeds, structured around some particular attribute(s) of documents, like title or publication date.
| Parameters: | * **label** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – A description of this set of feeds, like “By publisher”
* **feeds** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – The set of [`Feed`](index.html#ferenda.Feed) objects that makes up this page set.
* **predicate** ([*rdflib.term.URIRef*](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.URIRef)) – The predicate (if any) that this feedset is keyed on.
|
### The `elements` classes[¶](#module-ferenda.elements)
### The `elements.html` classes[¶](#module-ferenda.elements.html)
The purpose of this module is to provide classes corresponding to most elements (except `<style>`, `<script>` and similar non-document content elements) and core attributes (except `@style`
and the `%events` attributes) of HTML4.01 and HTML5. It is not totally compliant with the HTML4.01 and HTML5 standards, but is enough to model most real-world HTML. It contains no provisions to ensure that elements of a particular kind only contain allowed sub-elements.
`ferenda.elements.html.``elements_from_soup`(*soup*, *remove_tags=('script'*, *'style'*, *'font'*, *'map'*, *'center')*, *keep_attributes=('class'*, *'id'*, *'dir'*, *'lang'*, *'src'*, *'href'*, *'name'*, *'alt')*)[[source]](_modules/ferenda/elements/html.html#elements_from_soup)[¶](#ferenda.elements.html.elements_from_soup)
Converts a BeautifulSoup tree into a tree of
[`ferenda.elements.html.HTMLElement`](#ferenda.elements.html.HTMLElement) objects. Some non-semantic attributes and tags are removed in the process.
| Parameters: | * **soup** – Soup object to convert
* **remove_tags** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – Tags that should not be included
* **keep_attributes** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – Attributes to keep
|
| Returns: | tree of element objects |
| Return type: | [ferenda.elements.html.HTMLElement](index.html#ferenda.elements.html.HTMLElement) |
*class* `ferenda.elements.html.``HTMLElement`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#HTMLElement)[¶](#ferenda.elements.html.HTMLElement)
Abstract base class for all elements.
*class* `ferenda.elements.html.``HTML`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#HTML)[¶](#ferenda.elements.html.HTML)
Element corresponding to the `<html>` tag
*class* `ferenda.elements.html.``Head`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Head)[¶](#ferenda.elements.html.Head)
Element corresponding to the `<head>` tag
*class* `ferenda.elements.html.``Title`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Title)[¶](#ferenda.elements.html.Title)
Element corresponding to the `<title>` tag
*class* `ferenda.elements.html.``Body`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Body)[¶](#ferenda.elements.html.Body)
Element corresponding to the `<body>` tag
`as_xhtml`(*uri*, *parent_uri=None*)[[source]](_modules/ferenda/elements/html.html#Body.as_xhtml)[¶](#ferenda.elements.html.Body.as_xhtml)
Converts this object to a `lxml.etree` object (with children)
| Parameters: | **uri** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – If provided, gets converted to an `@about` attribute in the resulting XHTML. |
*class* `ferenda.elements.html.``P`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#P)[¶](#ferenda.elements.html.P)
Element corresponding to the `<p>` tag
*class* `ferenda.elements.html.``H1`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#H1)[¶](#ferenda.elements.html.H1)
Element corresponding to the `<h1>` tag
*class* `ferenda.elements.html.``H2`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#H2)[¶](#ferenda.elements.html.H2)
Element corresponding to the `<h2>` tag
*class* `ferenda.elements.html.``H3`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#H3)[¶](#ferenda.elements.html.H3)
Element corresponding to the `<h3>` tag
*class* `ferenda.elements.html.``H4`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#H4)[¶](#ferenda.elements.html.H4)
Element corresponding to the `<h4>` tag
*class* `ferenda.elements.html.``H5`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#H5)[¶](#ferenda.elements.html.H5)
Element corresponding to the `<h5>` tag
*class* `ferenda.elements.html.``H6`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#H6)[¶](#ferenda.elements.html.H6)
Element corresponding to the `<h6>` tag
*class* `ferenda.elements.html.``UL`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#UL)[¶](#ferenda.elements.html.UL)
Element corresponding to the `<ul>` tag
*class* `ferenda.elements.html.``OL`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#OL)[¶](#ferenda.elements.html.OL)
Element corresponding to the `<ol>` tag
*class* `ferenda.elements.html.``LI`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#LI)[¶](#ferenda.elements.html.LI)
Element corresponding to the `<li>` tag
*class* `ferenda.elements.html.``Pre`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Pre)[¶](#ferenda.elements.html.Pre)
Element corresponding to the `<pre>` tag
*class* `ferenda.elements.html.``DL`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#DL)[¶](#ferenda.elements.html.DL)
Element corresponding to the `<dl>` tag
*class* `ferenda.elements.html.``DT`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#DT)[¶](#ferenda.elements.html.DT)
Element corresponding to the `<dt>` tag
*class* `ferenda.elements.html.``DD`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#DD)[¶](#ferenda.elements.html.DD)
Element corresponding to the `<dd>` tag
*class* `ferenda.elements.html.``Div`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Div)[¶](#ferenda.elements.html.Div)
Element corresponding to the `<div>` tag
*class* `ferenda.elements.html.``Blockquote`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Blockquote)[¶](#ferenda.elements.html.Blockquote)
Element corresponding to the `<blockquote>` tag
*class* `ferenda.elements.html.``Form`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Form)[¶](#ferenda.elements.html.Form)
Element corresponding to the `<form>` tag
*class* `ferenda.elements.html.``HR`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#HR)[¶](#ferenda.elements.html.HR)
Element corresponding to the `<hr>` tag
*class* `ferenda.elements.html.``Table`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Table)[¶](#ferenda.elements.html.Table)
Element corresponding to the `<table>` tag
*class* `ferenda.elements.html.``Fieldset`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Fieldset)[¶](#ferenda.elements.html.Fieldset)
Element corresponding to the `<fieldset>` tag
*class* `ferenda.elements.html.``Address`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Address)[¶](#ferenda.elements.html.Address)
Element corresponding to the `<address>` tag
*class* `ferenda.elements.html.``TT`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#TT)[¶](#ferenda.elements.html.TT)
Element corresponding to the `<tt>` tag
*class* `ferenda.elements.html.``I`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#I)[¶](#ferenda.elements.html.I)
Element corresponding to the `<i>` tag
*class* `ferenda.elements.html.``B`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#B)[¶](#ferenda.elements.html.B)
Element corresponding to the `<b>` tag
*class* `ferenda.elements.html.``U`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#U)[¶](#ferenda.elements.html.U)
Element corresponding to the `<u>` tag
*class* `ferenda.elements.html.``Big`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Big)[¶](#ferenda.elements.html.Big)
Element corresponding to the `<big>` tag
*class* `ferenda.elements.html.``Small`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Small)[¶](#ferenda.elements.html.Small)
Element corresponding to the `<small>` tag
*class* `ferenda.elements.html.``Em`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Em)[¶](#ferenda.elements.html.Em)
Element corresponding to the `<em>` tag
*class* `ferenda.elements.html.``Strong`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Strong)[¶](#ferenda.elements.html.Strong)
Element corresponding to the `<strong>` tag
*class* `ferenda.elements.html.``Dfn`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Dfn)[¶](#ferenda.elements.html.Dfn)
Element corresponding to the `<dfn>` tag
*class* `ferenda.elements.html.``Code`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Code)[¶](#ferenda.elements.html.Code)
Element corresponding to the `<code>` tag
*class* `ferenda.elements.html.``Samp`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Samp)[¶](#ferenda.elements.html.Samp)
Element corresponding to the `<samp>` tag
*class* `ferenda.elements.html.``Kbd`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Kbd)[¶](#ferenda.elements.html.Kbd)
Element corresponding to the `<kbd>` tag
*class* `ferenda.elements.html.``Var`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Var)[¶](#ferenda.elements.html.Var)
Element corresponding to the `<var>` tag
*class* `ferenda.elements.html.``Cite`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Cite)[¶](#ferenda.elements.html.Cite)
Element corresponding to the `<cite>` tag
*class* `ferenda.elements.html.``Abbr`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Abbr)[¶](#ferenda.elements.html.Abbr)
Element corresponding to the `<abbr>` tag
*class* `ferenda.elements.html.``Acronym`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Acronym)[¶](#ferenda.elements.html.Acronym)
Element corresponding to the `<acronym>` tag
*class* `ferenda.elements.html.``A`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#A)[¶](#ferenda.elements.html.A)
Element corresponding to the `<a>` tag
*class* `ferenda.elements.html.``Img`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Img)[¶](#ferenda.elements.html.Img)
Element corresponding to the `<img>` tag
*class* `ferenda.elements.html.``Object`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Object)[¶](#ferenda.elements.html.Object)
Element corresponding to the `<object>` tag
*class* `ferenda.elements.html.``Br`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Br)[¶](#ferenda.elements.html.Br)
Element corresponding to the `<br>` tag
*class* `ferenda.elements.html.``Q`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Q)[¶](#ferenda.elements.html.Q)
Element corresponding to the `<q>` tag
*class* `ferenda.elements.html.``Sub`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Sub)[¶](#ferenda.elements.html.Sub)
Element corresponding to the `<sub>` tag
*class* `ferenda.elements.html.``Sup`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Sup)[¶](#ferenda.elements.html.Sup)
Element corresponding to the `<sup>` tag
*class* `ferenda.elements.html.``Span`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Span)[¶](#ferenda.elements.html.Span)
Element corresponding to the `<span>` tag
*class* `ferenda.elements.html.``BDO`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#BDO)[¶](#ferenda.elements.html.BDO)
Element corresponding to the `<bdo>` tag
*class* `ferenda.elements.html.``Input`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Input)[¶](#ferenda.elements.html.Input)
Element corresponding to the `<input>` tag
*class* `ferenda.elements.html.``Select`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Select)[¶](#ferenda.elements.html.Select)
Element corresponding to the `<select>` tag
*class* `ferenda.elements.html.``Textarea`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Textarea)[¶](#ferenda.elements.html.Textarea)
Element corresponding to the `<textarea>` tag
*class* `ferenda.elements.html.``Label`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Label)[¶](#ferenda.elements.html.Label)
Element corresponding to the `<label>` tag
*class* `ferenda.elements.html.``Button`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Button)[¶](#ferenda.elements.html.Button)
Element corresponding to the `<button>` tag
*class* `ferenda.elements.html.``Caption`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Caption)[¶](#ferenda.elements.html.Caption)
Element corresponding to the `<caption>` tag
*class* `ferenda.elements.html.``Thead`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Thead)[¶](#ferenda.elements.html.Thead)
Element corresponding to the `<thead>` tag
*class* `ferenda.elements.html.``Tfoot`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Tfoot)[¶](#ferenda.elements.html.Tfoot)
Element corresponding to the `<tfoot>` tag
*class* `ferenda.elements.html.``Tbody`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Tbody)[¶](#ferenda.elements.html.Tbody)
Element corresponding to the `<tbody>` tag
*class* `ferenda.elements.html.``Colgroup`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Colgroup)[¶](#ferenda.elements.html.Colgroup)
Element corresponding to the `<colgroup>` tag
*class* `ferenda.elements.html.``Col`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Col)[¶](#ferenda.elements.html.Col)
Element corresponding to the `<col>` tag
*class* `ferenda.elements.html.``TR`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#TR)[¶](#ferenda.elements.html.TR)
Element corresponding to the `<tr>` tag
*class* `ferenda.elements.html.``TH`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#TH)[¶](#ferenda.elements.html.TH)
Element corresponding to the `<th>` tag
*class* `ferenda.elements.html.``TD`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#TD)[¶](#ferenda.elements.html.TD)
Element corresponding to the `<td>` tag
*class* `ferenda.elements.html.``Ins`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Ins)[¶](#ferenda.elements.html.Ins)
Element corresponding to the `<ins>` tag
*class* `ferenda.elements.html.``Del`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Del)[¶](#ferenda.elements.html.Del)
Element corresponding to the `<del>` tag
*class* `ferenda.elements.html.``HTML5Element`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#HTML5Element)[¶](#ferenda.elements.html.HTML5Element)
`tagname` *= 'div'*[¶](#ferenda.elements.html.HTML5Element.tagname)
`classname`[¶](#ferenda.elements.html.HTML5Element.classname)
*class* `ferenda.elements.html.``Article`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Article)[¶](#ferenda.elements.html.Article)
Element corresponding to the `<article>` tag
*class* `ferenda.elements.html.``Aside`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Aside)[¶](#ferenda.elements.html.Aside)
Element corresponding to the `<aside>` tag
*class* `ferenda.elements.html.``Bdi`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Bdi)[¶](#ferenda.elements.html.Bdi)
Element corresponding to the `<bdi>` tag
*class* `ferenda.elements.html.``Details`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Details)[¶](#ferenda.elements.html.Details)
Element corresponding to the `<details>` tag
*class* `ferenda.elements.html.``Dialog`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Dialog)[¶](#ferenda.elements.html.Dialog)
Element corresponding to the `<dialog>` tag
*class* `ferenda.elements.html.``Summary`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Summary)[¶](#ferenda.elements.html.Summary)
Element corresponding to the `<summary>` tag
*class* `ferenda.elements.html.``Figure`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Figure)[¶](#ferenda.elements.html.Figure)
Element corresponding to the `<figure>` tag
*class* `ferenda.elements.html.``Figcaption`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Figcaption)[¶](#ferenda.elements.html.Figcaption)
Element corresponding to the `<figcaption>` tag
*class* `ferenda.elements.html.``Footer`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Footer)[¶](#ferenda.elements.html.Footer)
Element corresponding to the `<footer>` tag
*class* `ferenda.elements.html.``Header`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Header)[¶](#ferenda.elements.html.Header)
Element corresponding to the `<header>` tag
*class* `ferenda.elements.html.``Hgroup`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Hgroup)[¶](#ferenda.elements.html.Hgroup)
Element corresponding to the `<hgroup>` tag
*class* `ferenda.elements.html.``Mark`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Mark)[¶](#ferenda.elements.html.Mark)
Element corresponding to the `<mark>` tag
*class* `ferenda.elements.html.``Meter`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Meter)[¶](#ferenda.elements.html.Meter)
Element corresponding to the `<meter>` tag
*class* `ferenda.elements.html.``Nav`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Nav)[¶](#ferenda.elements.html.Nav)
Element corresponding to the `<nav>` tag
*class* `ferenda.elements.html.``Progress`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Progress)[¶](#ferenda.elements.html.Progress)
Element corresponding to the `<progress>` tag
*class* `ferenda.elements.html.``Ruby`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Ruby)[¶](#ferenda.elements.html.Ruby)
Element corresponding to the `<ruby>` tag
*class* `ferenda.elements.html.``Rt`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Rt)[¶](#ferenda.elements.html.Rt)
Element corresponding to the `<rt>` tag
*class* `ferenda.elements.html.``Rp`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Rp)[¶](#ferenda.elements.html.Rp)
Element corresponding to the `<rp>` tag
*class* `ferenda.elements.html.``Section`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Section)[¶](#ferenda.elements.html.Section)
Element corresponding to the `<section>` tag
*class* `ferenda.elements.html.``Time`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Time)[¶](#ferenda.elements.html.Time)
Element corresponding to the `<time>` tag
*class* `ferenda.elements.html.``Wbr`(**args*, ***kwargs*)[[source]](_modules/ferenda/elements/html.html#Wbr)[¶](#ferenda.elements.html.Wbr)
Element corresponding to the `<wbr>` tag
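As a quick illustration of how these element classes are used, the sketch below builds a small element tree by hand (the content is made up, and `ferenda.elements.serialize()` is assumed to be available for rendering the tree as XML):
```
from ferenda.elements import serialize
from ferenda.elements.html import Div, H1, P, UL, LI
# Children are passed as a list to each constructor, since elements are list-like.
doc = Div([H1(["Example document"]),
P(["Introductory paragraph."]),
UL([LI(["first item"]),
LI(["second item"])])])
print(serialize(doc))
```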
### The `Describer` class[¶](#the-describer-class)
*class* `ferenda.``Describer`(*graph=None*, *about=None*, *base=None*)[[source]](_modules/ferenda/describer.html#Describer)[¶](#ferenda.Describer)
Extends the utility class
[`rdflib.extras.describer.Describer`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.extras.html#rdflib.extras.describer.Describer) so that it reads values and references as well as writes them.
| Parameters: | * **graph** ([`Graph`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph)) – The graph to read from and write to
* **about** (string or [`Identifier`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.Identifier)) – the current subject to use
* **base** (*string*) – Base URI for any relative URIs used with [`about()`](#ferenda.Describer.about), [`rel()`](#ferenda.Describer.rel) or [`rev()`](#ferenda.Describer.rev).
|
`getvalues`(*p*)[[source]](_modules/ferenda/describer.html#Describer.getvalues)[¶](#ferenda.Describer.getvalues)
Get a list (possibly empty) of all literal values for the given property and the current subject. Values will be converted to plain literals, i.e. not
[`rdflib.term.Literal`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.Literal) objects.
| Parameters: | **p** ([`rdflib.term.URIRef`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.URIRef)) – The property of the sought literal. |
| Returns: | a list of matching literals |
| Return type: | list of strings (or other appropriate python type if the literal has a datatype) |
`getrels`(*p*)[[source]](_modules/ferenda/describer.html#Describer.getrels)[¶](#ferenda.Describer.getrels)
Get a list (possibly empty) of all URIs for the given property and the current subject. Values will be converted to strings, i.e. not
[`rdflib.term.URIRef`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.URIRef) objects.
| Parameters: | **p** ([`rdflib.term.URIRef`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.URIRef)) – The property of the sought URI. |
| Returns: | The matching URIs |
| Return type: | list of strings |
`getrdftype`()[[source]](_modules/ferenda/describer.html#Describer.getrdftype)[¶](#ferenda.Describer.getrdftype)
Get the rdf:type of the current subject.
| Returns: | The URI of the current subject’s rdf:type. |
| Return type: | string |
`getvalue`(*p*)[[source]](_modules/ferenda/describer.html#Describer.getvalue)[¶](#ferenda.Describer.getvalue)
Get a single literal value for the given property and the current subject. If the graph contains zero or more than one such literal, a [`KeyError`](https://docs.python.org/3/library/exceptions.html#KeyError) will be raised.
Note
If this is all you use `Describer` for, you might want to use
[`rdflib.graph.Graph.value()`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph.value) instead – the main advantage that this method has is that it converts the return value to a plain python object instead of a
[`rdflib.term.Literal`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.Literal) object.
| Parameters: | **p** ([`rdflib.term.URIRef`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.URIRef)) – The property of the sought literal. |
| Returns: | The sought literal |
| Return type: | string (or other appropriate python type if the literal has a datatype) |
`getrel`(*p*)[[source]](_modules/ferenda/describer.html#Describer.getrel)[¶](#ferenda.Describer.getrel)
Get a single URI for the given property and the current subject. If the graph contains zero or more than one such URI,
a [`KeyError`](https://docs.python.org/3/library/exceptions.html#KeyError) will be raised.
| Parameters: | **p** ([`rdflib.term.URIRef`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.term.URIRef)) – The property of the sought literal. |
| Returns: | The sought URI |
| Return type: | string |
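A minimal sketch of the read methods above, complementing the write methods inherited from rdflib (the graph contents and URIs are made up):
```
from rdflib import Graph, URIRef, Literal, Namespace
from ferenda import Describer
DCTERMS = Namespace("http://purl.org/dc/terms/")
g = Graph()
g.add((URIRef("http://example.org/doc"), DCTERMS.title, Literal("An example document")))
d = Describer(g, about="http://example.org/doc")
d.getvalue(DCTERMS.title) # -> 'An example document' (a plain string, not a Literal)
d.getvalues(DCTERMS.title) # -> ['An example document']
```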
`about`(*subject*, ***kws*)[[source]](_modules/rdflib/extras/describer.html#Describer.about)[¶](#ferenda.Describer.about)
Sets the current subject. Will convert the given object into an
`URIRef` if it’s not an `Identifier`.
Usage:
```
>>> d = Describer()
>>> d._current()
rdflib.term.BNode(...)
>>> d.about("http://example.org/")
>>> d._current()
rdflib.term.URIRef('http://example.org/')
```
`rdftype`(*t*)[[source]](_modules/rdflib/extras/describer.html#Describer.rdftype)[¶](#ferenda.Describer.rdftype)
Shorthand for setting rdf:type of the current subject.
Usage:
```
>>> from rdflib import URIRef
>>> from rdflib.namespace import RDF, RDFS
>>> d = Describer(about="http://example.org/")
>>> d.rdftype(RDFS.Resource)
>>> (URIRef('http://example.org/'),
... RDF.type, RDFS.Resource) in d.graph
True
```
`rel`(*p*, *o=None*, ***kws*)[[source]](_modules/rdflib/extras/describer.html#Describer.rel)[¶](#ferenda.Describer.rel)
Set an object for the given property. Will convert the given object into an `URIRef` if it’s not an `Identifier`. If none is given, a new `BNode` is used.
Returns a context manager for use in a `with` block, within which the given object is used as current subject.
Usage:
```
>>> from rdflib import URIRef
>>> from rdflib.namespace import RDF, RDFS
>>> d = Describer(about="/", base="http://example.org/")
>>> _ctxt = d.rel(RDFS.seeAlso, "/about")
>>> d.graph.value(URIRef('http://example.org/'), RDFS.seeAlso)
rdflib.term.URIRef('http://example.org/about')
>>> with d.rel(RDFS.seeAlso, "/more"):
... d.value(RDFS.label, "More")
>>> (URIRef('http://example.org/'), RDFS.seeAlso,
... URIRef('http://example.org/more')) in d.graph
True
>>> d.graph.value(URIRef('http://example.org/more'), RDFS.label)
rdflib.term.Literal('More')
```
`rev`(*p*, *s=None*, ***kws*)[[source]](_modules/rdflib/extras/describer.html#Describer.rev)[¶](#ferenda.Describer.rev)
Same as `rel`, but uses current subject as *object* of the relation.
The given resource is still used as subject in the returned context manager.
Usage:
```
>>> from rdflib import URIRef
>>> from rdflib.namespace import RDF, RDFS
>>> d = Describer(about="http://example.org/")
>>> with d.rev(RDFS.seeAlso, "http://example.net/"):
... d.value(RDFS.label, "Net")
>>> (URIRef('http://example.net/'), RDFS.seeAlso,
... URIRef('http://example.org/')) in d.graph
True
>>> d.graph.value(URIRef('http://example.net/'), RDFS.label)
rdflib.term.Literal('Net')
```
`value`(*p*, *v*, ***kws*)[[source]](_modules/rdflib/extras/describer.html#Describer.value)[¶](#ferenda.Describer.value)
Set a literal value for the given property. Will cast the value to an
`Literal` if a plain literal is given.
Usage:
```
>>> from rdflib import URIRef
>>> from rdflib.namespace import RDF, RDFS
>>> d = Describer(about="http://example.org/")
>>> d.value(RDFS.label, "Example")
>>> d.graph.value(URIRef('http://example.org/'), RDFS.label)
rdflib.term.Literal('Example')
```
### The `Transformer` class[¶](#the-transformer-class)
*class* `ferenda.``Transformer`(*transformertype*, *template*, *templatedirs*, *documentroot=None*, *config=None*)[[source]](_modules/ferenda/transformer.html#Transformer)[¶](#ferenda.Transformer)
Transforms parsed “pure content” documents into “browser-ready”
HTML5 files with site branding and navigation, using a template of some kind.
| Parameters: | * **transformertype** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The engine to be used for transforming. Right now only `"XSLT"` is supported.
* **template** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The main template file.
* **templatedirs** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Directories that may contain supporting templates used by the main template.
* **documentroot** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The base directory for all generated files – used to make relative references to CSS/JS files correct.
* **config** – Any configuration information used by the transforming engine. Can be a path to a config file, a python data structure, or anything else compatible with the engine selected by
`transformertype`.
|
Note
An initialized Transformer object only transforms using the template file provided at initialization. If you need to use another template file, create another Transformer object.
`transform`(*indata*, *depth*, *parameters=None*, *uritransform=None*)[[source]](_modules/ferenda/transformer.html#Transformer.transform)[¶](#ferenda.Transformer.transform)
Perform the transformation. This method always operates on the
“native” data structure – this might be different depending on the transformer engine. For XSLT, which is implemented through lxml, its in- and outdata are lxml trees.
If you need an engine-independent API, use
[`transform_stream()`](#ferenda.Transformer.transform_stream) or
[`transform_file()`](#ferenda.Transformer.transform_file) instead
| Parameters: | * **indata** – The document to be transformed
* **depth** ([*int*](https://docs.python.org/3/library/functions.html#int)) – The directory nesting level, compared to `documentroot`
* **parameters** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – Any parameters that should be provided to the template
* **uritransform** (*callable*) – A function, when called with an URI,
returns a transformed URI/URL (such as the relative path to a static file) –
used when transforming to files used for static offline use.
|
| Returns: | The transformed document |
`transform_stream`(*instream*, *depth*, *parameters=None*, *uritransform=None*)[[source]](_modules/ferenda/transformer.html#Transformer.transform_stream)[¶](#ferenda.Transformer.transform_stream)
Accepts a file-like object, returns a file-like object.
`transform_file`(*infile*, *outfile*, *parameters=None*, *uritransform=None*)[[source]](_modules/ferenda/transformer.html#Transformer.transform_file)[¶](#ferenda.Transformer.transform_file)
Accepts two filenames, reads from *infile*, writes to *outfile*.
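A minimal sketch of typical use (the template and file paths below are hypothetical, and the `title` parameter is only an example of something a template might accept):
```
from ferenda import Transformer
t = Transformer("XSLT", "res/xsl/generic.xsl", ["res/xsl"], documentroot="data")
t.transform_file("data/example/parsed/1.xhtml",
"data/example/generated/1.html",
parameters={"title": "Example"})
```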
### The `FSMParser` class[¶](#the-fsmparser-class)
*class* `ferenda.``FSMParser`[[source]](_modules/ferenda/fsmparser.html#FSMParser)[¶](#ferenda.FSMParser)
A configurable finite state machine (FSM) for parsing documents with nested structure. You provide a set of *recognizers*, a set of *constructors*, a *transition table* and a *stream* of document text chunks, and it returns a hierarchical document object structure.
See [Parsing document structure](index.html#document-fsmparser).
`set_recognizers`(**args*)[[source]](_modules/ferenda/fsmparser.html#FSMParser.set_recognizers)[¶](#ferenda.FSMParser.set_recognizers)
Set the list of functions (or other callables) used in order to recognize symbols from the stream of text chunks. Recognizers are tried in the order specified here.
`set_transitions`(*transitions*)[[source]](_modules/ferenda/fsmparser.html#FSMParser.set_transitions)[¶](#ferenda.FSMParser.set_transitions)
Set the transition table for the state machine.
| Parameters: | **transitions** – The transition table, in the form of a mapping between two tuples. The first tuple should be the current state (or a list of possible current states) and a callable function that determines if a particular symbol is recognized `(currentstate, recognizer)`. The second tuple should be a constructor function (or `False`) and the new state to transition into. |
`parse`(*chunks*)[[source]](_modules/ferenda/fsmparser.html#FSMParser.parse)[¶](#ferenda.FSMParser.parse)
Parse a document in the form of an iterable of suitable chunks – often lines or elements. Each chunk should be a string or a string-like object. Some examples:
```
p = FSMParser()
reader = TextReader("foo.txt")
body = p.parse(reader.getiterator(reader.readparagraph),"body", make_body)
body = p.parse(BeautifulSoup("foo.html").find_all("#main p"), "body", make_body)
body = p.parse(ElementTree.parse("foo.xml").find(".//paragraph"), "body", make_body)
```
| Parameters: | * **chunks** – The document to be parsed, as a list or any other iterable of text-like objects.
* **initialstate** – The initial state for the machine. The state must be present in the transition table. This could be any object, but strings are preferable as they make error messages easier to understand.
* **initialconstructor** (*callable*) – A function that creates a document root object, and then fills it with child objects using
.make_children()
|
| Returns: | A document object tree. |
`analyze_symbol`()[[source]](_modules/ferenda/fsmparser.html#FSMParser.analyze_symbol)[¶](#ferenda.FSMParser.analyze_symbol)
Internal function used by make_children()
`transition`(*currentstate*, *symbol*)[[source]](_modules/ferenda/fsmparser.html#FSMParser.transition)[¶](#ferenda.FSMParser.transition)
Internal function used by make_children()
`make_child`(*constructor*, *childstate*)[[source]](_modules/ferenda/fsmparser.html#FSMParser.make_child)[¶](#ferenda.FSMParser.make_child)
Internal function used by make_children(), which calls one of the constructors defined in the transition table.
`make_children`(*parent*)[[source]](_modules/ferenda/fsmparser.html#FSMParser.make_children)[¶](#ferenda.FSMParser.make_children)
Creates child nodes for the current (parent) document node.
| Parameters: | **parent** – The parent document node, as any list-like object
(preferably a subclass of
`ferenda.elements.CompoundElement`) |
| Returns: | The same `parent` object. |
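Putting the pieces together, here is a minimal sketch of a parser that treats every chunk as a paragraph. The recognizer/constructor signatures follow the conventions described in Parsing document structure (both receive the parser object); the input chunks are made up:
```
from ferenda import FSMParser
from ferenda.elements import Body, Paragraph
def is_paragraph(parser):
    return True  # in this toy example, every chunk counts as a paragraph
def make_body(parser):
    return parser.make_children(Body())
def make_paragraph(parser):
    return Paragraph([parser.reader.next()])
p = FSMParser()
p.set_recognizers(is_paragraph)
p.set_transitions({("body", is_paragraph): (make_paragraph, "body")})
body = p.parse(["First paragraph.", "Second paragraph."], "body", make_body)
```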
### The `CitationParser` class[¶](#the-citationparser-class)
*class* `ferenda.``CitationParser`(**grammars*)[[source]](_modules/ferenda/citationparser.html#CitationParser)[¶](#ferenda.CitationParser)
Finds citations to documents and other resources in text strings. Each type of citation is specified by a
[pyparsing](http://pyparsing.wikispaces.com/Documentation)
grammar, and for each found citation a URI can be constructed using a [`URIFormatter`](index.html#ferenda.URIFormatter) object.
| Parameters: | **grammars** (list of `pyparsing.ParserElement` objects) – The grammar(s) for the citations that this parser should find, in order of priority. |
Usage:
```
>>> from pyparsing import Word,nums
>>> rfc_grammar = ("RFC " + Word(nums).setResultsName("rfcnumber")).setResultsName("rfccite")
>>> pep_grammar = ("PEP" + Word(nums).setResultsName("pepnumber")).setResultsName("pepcite")
>>> citparser = CitationParser(rfc_grammar, pep_grammar)
>>> res = citparser.parse_string("The WSGI spec (PEP 333) references RFC 2616 (The HTTP spec)")
>>> # res is a list of strings and/or pyparsing.ParseResult objects
>>> from ferenda import URIFormatter
>>> from ferenda.elements import Link
>>> f = URIFormatter(('rfccite',
... lambda p: "http://www.rfc-editor.org/rfc/rfc%(rfcnumber)s" % p),
... ('pepcite',
... lambda p: "http://www.python.org/dev/peps/pep-0%(pepnumber)s/" % p))
>>> citparser.set_formatter(f)
>>> res = citparser.parse_recursive(["The WSGI spec (PEP 333) references RFC 2616 (The HTTP spec)"])
>>> res == ['The WSGI spec (', Link('PEP 333',uri='http://www.python.org/dev/peps/pep-0333/'), ') references ', Link('RFC 2616',uri='http://www.rfc-editor.org/rfc/rfc2616'), ' (The HTTP spec)']
True
```
`set_formatter`(*formatter*)[[source]](_modules/ferenda/citationparser.html#CitationParser.set_formatter)[¶](#ferenda.CitationParser.set_formatter)
Specify how found citations are to be formatted when using
[`parse_recursive()`](#ferenda.CitationParser.parse_recursive)
| Parameters: | **formatter** ([`URIFormatter`](index.html#ferenda.URIFormatter)) – The formatter object to use for all citations |
`add_grammar`(*grammar*)[[source]](_modules/ferenda/citationparser.html#CitationParser.add_grammar)[¶](#ferenda.CitationParser.add_grammar)
Add another grammar.
| Parameters: | **grammar** (`pyparsing.ParserElement`) – The grammar to add |
`parse_string`(*string*, *predicate='dcterms:references'*)[[source]](_modules/ferenda/citationparser.html#CitationParser.parse_string)[¶](#ferenda.CitationParser.parse_string)
Find any citations in a text string, using the configured grammars.
| Parameters: | **string** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Text to parse for citations |
| Returns: | strings (for parts of the input text that do not contain any citation) and/or tuples (for found citations) consisting of (string, `pyparsing.ParseResult`) |
| Return type: | [list](https://docs.python.org/3/library/stdtypes.html#list) |
`parse_recursive`(*part*, *predicate='dcterms:references'*)[[source]](_modules/ferenda/citationparser.html#CitationParser.parse_recursive)[¶](#ferenda.CitationParser.parse_recursive)
Traverse a nested tree of elements, finding citations in any strings contained in the tree. Found citations are marked up as `Link` elements with the uri constructed by the [`URIFormatter`](index.html#ferenda.URIFormatter) set by
[`set_formatter()`](#ferenda.CitationParser.set_formatter).
| Parameters: | **part** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – The root element of the structure to parse |
| Returns: | a correspondingly nested structure. |
| Return type: | [list](https://docs.python.org/3/library/stdtypes.html#list) |
### The `URIFormatter` class[¶](#the-uriformatter-class)
*class* `ferenda.``URIFormatter`(**formatters*)[[source]](_modules/ferenda/uriformatter.html#URIFormatter)[¶](#ferenda.URIFormatter)
Companion class to [`ferenda.CitationParser`](index.html#ferenda.CitationParser), that handles the work of formatting the dicts or dict-like objects that CitationParser creates.
The class is initialized with a list of formatters, where each formatter is a tuple (key, callable). When
[`format()`](#ferenda.URIFormatter.format) is passed a citation reference in the form of a `pyparsing.ParseResult` object (which has a `.getName` method), the name of that reference is matched against the key of all formatters. If there is a match, the corresponding callable is called with the parseresult object as a single parameter, and the resulting string is returned.
An initialized `URIFormatter` object is not used directly. Instead, call
[`ferenda.CitationParser.set_formatter()`](index.html#ferenda.CitationParser.set_formatter) with the object as parameter. See [Citation parsing](index.html#document-citationparsing).
| Parameters: | ***formatters** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – Formatters, each provided as a *(name, callable)* tuple. |
`format`(*parseresult*)[[source]](_modules/ferenda/uriformatter.html#URIFormatter.format)[¶](#ferenda.URIFormatter.format)
Given a pyparsing.ParseResult object, finds an appropriate formatter for that
result, and formats the result into a URI using that formatter.
`addformatter`(*key*, *func*)[[source]](_modules/ferenda/uriformatter.html#URIFormatter.addformatter)[¶](#ferenda.URIFormatter.addformatter)
Add a single formatter to the list of registered formatters after initialization.
`formatterfor`(*key*)[[source]](_modules/ferenda/uriformatter.html#URIFormatter.formatterfor)[¶](#ferenda.URIFormatter.formatterfor)
Returns an appropriate formatting callable for the given key, or None if not found.
### The `TripleStore` class[¶](#the-triplestore-class)
*class* `ferenda.``TripleStore`(*location*, *repository*, ***kwargs*)[[source]](_modules/ferenda/triplestore.html#TripleStore)[¶](#ferenda.TripleStore)
Presents a limited but uniform interface to different triple stores. It supports both standalone servers accessed over HTTP (Fuseki and Sesame, right now) and RDFLib-based persistent stores (the SQLite and Sleepycat/BerkeleyDB backends are supported).
> Note
> This class does not implement the [RDFlib store interface](http://rdflib.readthedocs.org/en/latest/univrdfstore.html). Instead,
> it provides a small list of operations that is generally useful
> for the kinds of things that ferenda-based applications need to
> do.
> This class is an abstract base class, and is not directly
> instantiated. Instead, call
> [`connect()`](#ferenda.TripleStore.connect), which returns an
> initialized object of the appropriate subclass. All subclasses
> implement the following API.
*static* `connect`(*storetype*, *location*, *repository*, ***kwargs*)[[source]](_modules/ferenda/triplestore.html#TripleStore.connect)[¶](#ferenda.TripleStore.connect)
Returns an initialized object, the exact type depending on the
`storetype` parameter.
| Parameters: | * **storetype** – The type of store to connect to (`"FUSEKI"`, `"SESAME"`, `"SLEEPYCAT"` or `"SQLITE"`)
* **location** – The URL or file path where the main repository is stored
* **repository** – The name of the repository to use with the main repository storage
* ****kwargs** – Any other named parameters are passed to the appropriate class constructor (see “Store-specific parameters” below).
|
Example:
```
>>> # creates a new SQLite db at /tmp/test.sqlite if not already present
>>> sqlitestore = TripleStore.connect("SQLITE", "/tmp/test.sqlite", "myrepo")
>>> sqlitestore.triple_count()
0
>>> sqlitestore.close()
>>> # connect to same db, but store all triples in memory (read-only)
>>> sqlitestore = TripleStore.connect("SQLITE", "/tmp/test.sqlite", "myrepo", inmemory=True)
>>> # connect to a remote Fuseki store over HTTP, using the command-line
>>> # tool curl for faster batch up/downloads
>>> fusekistore = TripleStore.connect("FUSEKI", "http://localhost:3030/", "ds", curl=True)
```
**Store-specific parameters:**
When using storetypes `SQLITE` or `SLEEPYCAT`, the
[`select()`](#ferenda.TripleStore.select) and
[`construct()`](#ferenda.TripleStore.construct) methods can be sped up
(around 150%) by loading the entire content of the triple store into memory, by setting the `inmemory` parameter to
`True`
When using storetypes `FUSEKI` or `SESAME`, storage and retrieval of a large number of triples (particularly the
[`add_serialized_file()`](#ferenda.TripleStore.add_serialized_file) and
[`get_serialized_file()`](#ferenda.TripleStore.get_serialized_file) methods) can be sped up by setting the `curl` parameter to `True`, if the command-line tool [curl](http://curl.haxx.se/) is available.
`re_fromgraph` *= re.compile('\\sFROM <(?P<graphuri>[^>]+)>\\s')*[¶](#ferenda.TripleStore.re_fromgraph)
Internal utility regex to determine whether a query specifies a particular graph to select against.
`add_serialized`(*data*, *format*, *context=None*)[[source]](_modules/ferenda/triplestore.html#TripleStore.add_serialized)[¶](#ferenda.TripleStore.add_serialized)
Add the serialized RDF statements in the string *data* directly to the repository.
`add_serialized_file`(*filename*, *format*, *context=None*)[[source]](_modules/ferenda/triplestore.html#TripleStore.add_serialized_file)[¶](#ferenda.TripleStore.add_serialized_file)
Add the serialized RDF statements contained in the file *filename* directly to the repository.
`get_serialized`(*format='nt'*, *context=None*)[[source]](_modules/ferenda/triplestore.html#TripleStore.get_serialized)[¶](#ferenda.TripleStore.get_serialized)
Returns a string containing all statements in the store,
serialized in the selected format. Returns a byte string, not a unicode string.
`get_serialized_file`(*filename*, *format='nt'*, *context=None*)[[source]](_modules/ferenda/triplestore.html#TripleStore.get_serialized_file)[¶](#ferenda.TripleStore.get_serialized_file)
Saves all statements in the store to *filename*.
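A small sketch of adding and retrieving serialized data, assuming `store` is an object returned by `connect()` as in the example above (the triple and the `"turtle"` format name are illustrative):
```
turtle_data = """
@prefix dcterms: <http://purl.org/dc/terms/> .
<http://example.org/doc/1> dcterms:title "An example document" .
"""
store.add_serialized(turtle_data, format="turtle")
store.get_serialized(format="nt")  # -> all statements as an N-Triples byte string
```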
`select`(*query*, *format='sparql'*)[[source]](_modules/ferenda/triplestore.html#TripleStore.select)[¶](#ferenda.TripleStore.select)
Run a SPARQL SELECT query against the triple store and returns the results.
| Parameters: | * **query** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – A SPARQL query with all necessary prefixes defined.
* **format** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Either one of the standard formats for queries
(`"sparql"`, `"json"` or `"binary"`) –
returns whatever `requests.get().content`
returns – or the special value `"python"`
which returns a python list of dicts representing rows and columns.
|
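For example, a sketch of selecting rows as Python dicts, again assuming `store` from the `connect()` example above (the query and property are made up):
```
rows = store.select("""
    PREFIX dcterms: <http://purl.org/dc/terms/>
    SELECT ?uri ?title WHERE { ?uri dcterms:title ?title }
""", format="python")
for row in rows:
    print(row["uri"], row["title"])
```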
`construct`(*query*)[[source]](_modules/ferenda/triplestore.html#TripleStore.construct)[¶](#ferenda.TripleStore.construct)
Run a SPARQL CONSTRUCT query against the triple store and returns the results as a RDFLib graph
| Parameters: | **query** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – A SPARQL query with all necessary prefixes defined. |
`update`(*query*)[[source]](_modules/ferenda/triplestore.html#TripleStore.update)[¶](#ferenda.TripleStore.update)
Run a SPARQL UPDATE (or DELETE/DROP/CLEAR) against the triplestore. Returns nothing but may raise an exception if something went wrong.
| Parameters: | **query** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – A SPARQL query with all necessary prefixes defined. |
`triple_count`(*context=None*)[[source]](_modules/ferenda/triplestore.html#TripleStore.triple_count)[¶](#ferenda.TripleStore.triple_count)
Returns the number of triples in the repository.
`clear`(*context=None*)[[source]](_modules/ferenda/triplestore.html#TripleStore.clear)[¶](#ferenda.TripleStore.clear)
Removes all statements from the repository (without removing the repository as such).
`close`()[[source]](_modules/ferenda/triplestore.html#TripleStore.close)[¶](#ferenda.TripleStore.close)
Close all connections to the triplestore. Needed when using an RDFLib-based triple store; a no-op when using HTTP-based stores.
### The `FulltextIndex` class[¶](#the-fulltextindex-class)
Abstracts access to full text indexes (right now only [Whoosh](https://pypi.python.org/pypi/Whoosh) and [ElasticSearch](http://www.elasticsearch.org/) are supported, but [Solr](http://lucene.apache.org/solr/), [Xapian](http://xapian.org/)
and/or [Sphinx](http://sphinxsearch.com/) may be supported later).
*class* `ferenda.``FulltextIndex`(*location*, *repos*)[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex)[¶](#ferenda.FulltextIndex)
This is the abstract base class for a fulltext index. You use it by calling the static method FulltextIndex.connect, passing a string representing the underlying fulltext engine you wish to use. It returns an instance of the appropriate subclass, on which you then call further methods.
*static* `connect`(*indextype*, *location*, *repos*)[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.connect)[¶](#ferenda.FulltextIndex.connect)
Open a fulltext index (creating it if it doesn’t already exist).
| Parameters: | * **indextype** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Type of fulltext index (“WHOOSH” or “ELASTICSEARCH”)
* **location** – The file path of the fulltext index.
|
`make_schema`(*repos*)[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.make_schema)[¶](#ferenda.FulltextIndex.make_schema)
`get_default_schema`()[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.get_default_schema)[¶](#ferenda.FulltextIndex.get_default_schema)
`exists`()[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.exists)[¶](#ferenda.FulltextIndex.exists)
Whether the fulltext index exists.
`create`(*repos*)[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.create)[¶](#ferenda.FulltextIndex.create)
Creates a fulltext index using the provided schema.
`destroy`()[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.destroy)[¶](#ferenda.FulltextIndex.destroy)
Destroys the index, if created.
`open`()[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.open)[¶](#ferenda.FulltextIndex.open)
Opens the index so that it can be queried.
`schema`()[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.schema)[¶](#ferenda.FulltextIndex.schema)
Returns the schema that actually is in use. A schema is a dict where the keys are field names and the values are any subclass of
[`ferenda.fulltextindex.IndexedType`](#ferenda.fulltextindex.IndexedType)
`update`(*uri*, *repo*, *basefile*, *text*, ***kwargs*)[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.update)[¶](#ferenda.FulltextIndex.update)
Insert (or update) a resource in the fulltext index. A resource may be an entire document, but it can also be any part of a document that is referenceable (i.e. a document node that has
`@typeof` and `@about` attributes). A document with 100 sections can be stored as 100 independent resources, as long as each section has a unique key in the form of a URI.
| Parameters: | * **uri** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – URI for the resource
* **repo** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The alias for the document repository that the resource is part of
* **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile which contains the resource
* **title** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – User-displayable title of resource (if applicable).
Should not contain the same information as
`identifier`.
* **identifier** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – User-displayable short identifier for resource (if applicable)
|
Note
Calling this method may not directly update the fulltext index – you need to call
[`commit()`](#ferenda.FulltextIndex.commit) or
[`close()`](#ferenda.FulltextIndex.close) for that.
`commit`()[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.commit)[¶](#ferenda.FulltextIndex.commit)
Commit all pending updates to the fulltext index.
`close`()[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.close)[¶](#ferenda.FulltextIndex.close)
Commits all pending updates and closes the index.
`doccount`()[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.doccount)[¶](#ferenda.FulltextIndex.doccount)
Returns the number of currently indexed (non-deleted) documents.
`query`(*q=None*, *pagenum=1*, *pagelen=10*, ***kwargs*)[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.query)[¶](#ferenda.FulltextIndex.query)
Perform a free text query against the full text index, optionally restricted with parameters for individual fields.
| Parameters: | * **q** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Free text query, using the selected full text index’s preferred query syntax
* ****kwargs** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – any parameter will be used to match a similarly-named field
|
| Returns: | matching documents, each document as a dict of fields |
| Return type: | [list](https://docs.python.org/3/library/stdtypes.html#list) |
Note
The *kwargs* parameters do not yet do anything – only simple full text queries are possible.
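A minimal sketch of indexing and querying (paths, aliases and text are made up; passing `repos=[]` is a simplification – normally you would pass your DocumentRepository instances):
```
from ferenda import FulltextIndex
index = FulltextIndex.connect("WHOOSH", "data/whooshindex", repos=[])
index.update(uri="http://example.org/doc/1",
             repo="example",
             basefile="1",
             title="An example document",
             identifier="Doc 1",
             text="The full searchable text of the document.")
index.commit()
for hit in index.query("searchable"):
    print(hit["identifier"], hit["uri"])
```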
`fieldmapping` *= ()*[¶](#ferenda.FulltextIndex.fieldmapping)
A tuple of `(abstractfield, nativefield)` tuples. Each
`abstractfield` should be an instance of an IndexedType-derived class. Each `nativefield` should be whatever kind of object that is used with the native fulltextindex API.
The methods [`to_native_field()`](#ferenda.FulltextIndex.to_native_field) and
[`from_native_field()`](#ferenda.FulltextIndex.from_native_field) use this tuple of tuples to convert fields.
`to_native_field`(*fieldobject*)[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.to_native_field)[¶](#ferenda.FulltextIndex.to_native_field)
Given an abstract field (an instance of an IndexedType-derived class), convert it to the corresponding native type for the fulltextindex in use.
`from_native_field`(*fieldobject*)[[source]](_modules/ferenda/fulltextindex.html#FulltextIndex.from_native_field)[¶](#ferenda.FulltextIndex.from_native_field)
Given a fulltextindex native type, convert it to the corresponding IndexedType object.
#### Datatype field classes[¶](#datatype-field-classes)
*class* `ferenda.fulltextindex.``IndexedType`(***kwargs*)[[source]](_modules/ferenda/fulltextindex.html#IndexedType)[¶](#ferenda.fulltextindex.IndexedType)
Base class for a fulltext searchengine-independent representation of indexed data. By using IndexedType-derived classes to represent the schema, it becomes possible to switch out search engines without affecting the rest of the code.
*class* `ferenda.fulltextindex.``Identifier`(***kwargs*)[[source]](_modules/ferenda/fulltextindex.html#Identifier)[¶](#ferenda.fulltextindex.Identifier)
An identifier is a string, normally in the form of a URI, which uniquely identifies an indexed document.
*class* `ferenda.fulltextindex.``Datetime`(***kwargs*)[[source]](_modules/ferenda/fulltextindex.html#Datetime)[¶](#ferenda.fulltextindex.Datetime)
*class* `ferenda.fulltextindex.``Text`(***kwargs*)[[source]](_modules/ferenda/fulltextindex.html#Text)[¶](#ferenda.fulltextindex.Text)
*class* `ferenda.fulltextindex.``Label`(***kwargs*)[[source]](_modules/ferenda/fulltextindex.html#Label)[¶](#ferenda.fulltextindex.Label)
*class* `ferenda.fulltextindex.``Keyword`(***kwargs*)[[source]](_modules/ferenda/fulltextindex.html#Keyword)[¶](#ferenda.fulltextindex.Keyword)
A keyword is a single string from a controlled vocabulary.
*class* `ferenda.fulltextindex.``Boolean`(***kwargs*)[[source]](_modules/ferenda/fulltextindex.html#Boolean)[¶](#ferenda.fulltextindex.Boolean)
*class* `ferenda.fulltextindex.``URI`(***kwargs*)[[source]](_modules/ferenda/fulltextindex.html#URI)[¶](#ferenda.fulltextindex.URI)
Any URI (except the URI that identifies an indexed document – use Identifier for that).
*class* `ferenda.fulltextindex.``Resource`(***kwargs*)[[source]](_modules/ferenda/fulltextindex.html#Resource)[¶](#ferenda.fulltextindex.Resource)
A fulltextindex.Resource is a URI that also has a human-readable label.
#### Search field classes[¶](#search-field-classes)
*class* `ferenda.fulltextindex.``SearchModifier`(**values*)[[source]](_modules/ferenda/fulltextindex.html#SearchModifier)[¶](#ferenda.fulltextindex.SearchModifier)
*class* `ferenda.fulltextindex.``Less`(*max*)[[source]](_modules/ferenda/fulltextindex.html#Less)[¶](#ferenda.fulltextindex.Less)
*class* `ferenda.fulltextindex.``More`(*min*)[[source]](_modules/ferenda/fulltextindex.html#More)[¶](#ferenda.fulltextindex.More)
*class* `ferenda.fulltextindex.``Between`(*min*, *max*)[[source]](_modules/ferenda/fulltextindex.html#Between)[¶](#ferenda.fulltextindex.Between)
### The `TextReader` class[¶](#the-textreader-class)
*class* `ferenda.``TextReader`(*filename=None*, *encoding=None*, *string=None*, *linesep=None*)[[source]](_modules/ferenda/textreader.html#TextReader)[¶](#ferenda.TextReader)
Fancy file-like class for reading (not writing) text files by line,
paragraph, page or any other user-defined unit of text, with support for peeking ahead and looking backwards. It can read files with byte streams using different encodings, but converts/handles everything to real strings (unicode in python 2). Alternatively, it can be initialized from an existing string.
| Parameters: | * **filename** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The file to read
* **encoding** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The encoding used by the file (default `ascii`)
* **string** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Alternatively, a string used for initialization
* **linesep** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The line separators used in the file/string
|
`UNIX` *= '\n'*[¶](#ferenda.TextReader.UNIX)
Unix line endings, for use with the `linesep` parameter.
`DOS` *= '\r\n'*[¶](#ferenda.TextReader.DOS)
Dos/Windows line endings, for use with the `linesep` parameter.
`MAC` *= '\r'*[¶](#ferenda.TextReader.MAC)
Old-style Mac line endings, for use with the `linesep` parameter.
`eof`()[[source]](_modules/ferenda/textreader.html#TextReader.eof)[¶](#ferenda.TextReader.eof)
Returns True iff current seek position is at end of file.
`bof`()[[source]](_modules/ferenda/textreader.html#TextReader.bof)[¶](#ferenda.TextReader.bof)
Returns True iff current seek position is at beginning of file.
`cue`(*string*)[[source]](_modules/ferenda/textreader.html#TextReader.cue)[¶](#ferenda.TextReader.cue)
Set seek position at the beginning of *string*, starting at current seek position. Raises IOError if *string* not found.
`cuepast`(*string*)[[source]](_modules/ferenda/textreader.html#TextReader.cuepast)[¶](#ferenda.TextReader.cuepast)
Set seek position at the beginning of *string*, starting at current seek position. Raises IOError if *string* not found.
`readto`(*string*)[[source]](_modules/ferenda/textreader.html#TextReader.readto)[¶](#ferenda.TextReader.readto)
Read and return all text between current seek position and *string*. Sets new seek position at the start of *string*. Raises IOError if *string* not found.
`readparagraph`()[[source]](_modules/ferenda/textreader.html#TextReader.readparagraph)[¶](#ferenda.TextReader.readparagraph)
Reads and returns the next paragraph (all text up to two or more consecutive line separators).
`readpage`()[[source]](_modules/ferenda/textreader.html#TextReader.readpage)[¶](#ferenda.TextReader.readpage)
Reads and returns the next page (all text up to next form feed, `"\f"`)
`readchunk`(*delimiter*)[[source]](_modules/ferenda/textreader.html#TextReader.readchunk)[¶](#ferenda.TextReader.readchunk)
Reads and returns the next chunk of text up to *delimiter*
`lastread`()[[source]](_modules/ferenda/textreader.html#TextReader.lastread)[¶](#ferenda.TextReader.lastread)
Returns the last chunk of data that was actually read (i.e. the `peek*` and `prev*` methods do not affect this)
`peek`(*size=0*)[[source]](_modules/ferenda/textreader.html#TextReader.peek)[¶](#ferenda.TextReader.peek)
Works like [`read()`](#ferenda.TextReader.read), but does not affect current seek position.
`peekline`(*times=1*)[[source]](_modules/ferenda/textreader.html#TextReader.peekline)[¶](#ferenda.TextReader.peekline)
Works like [`readline()`](#ferenda.TextReader.readline), but does not affect current seek position. If *times* is specified, peeks that many lines ahead.
`peekparagraph`(*times=1*)[[source]](_modules/ferenda/textreader.html#TextReader.peekparagraph)[¶](#ferenda.TextReader.peekparagraph)
Works like [`readparagraph()`](#ferenda.TextReader.readparagraph), but does not affect current seek position. If *times* is specified, peeks that many paragraphs ahead.
`peekchunk`(*delimiter*, *times=1*)[[source]](_modules/ferenda/textreader.html#TextReader.peekchunk)[¶](#ferenda.TextReader.peekchunk)
Works like [`readchunk()`](#ferenda.TextReader.readchunk), but does not affect current seek position. If *times* is specified, peeks that many chunks ahead.
`prev`(*size=0*)[[source]](_modules/ferenda/textreader.html#TextReader.prev)[¶](#ferenda.TextReader.prev)
Works like [`read()`](#ferenda.TextReader.read), but reads backwards from current seek position, and does not affect it.
`prevline`(*times=1*)[[source]](_modules/ferenda/textreader.html#TextReader.prevline)[¶](#ferenda.TextReader.prevline)
Works like [`readline()`](#ferenda.TextReader.readline), but reads backwards from current seek position, and does not affect it. If *times* is specified, reads the line that many times back.
`prevparagraph`(*times=1*)[[source]](_modules/ferenda/textreader.html#TextReader.prevparagraph)[¶](#ferenda.TextReader.prevparagraph)
Works like [`readparagraph()`](#ferenda.TextReader.readparagraph), but reads backwards from current seek position, and does not affect it. If *times* is specified, reads the paragraph that many times back.
`prevchunk`(*delimiter*, *times=1*)[[source]](_modules/ferenda/textreader.html#TextReader.prevchunk)[¶](#ferenda.TextReader.prevchunk)
Works like [`readchunk()`](#ferenda.TextReader.readchunk), but reads backwards from current seek position, and does not affect it. If *times* is specified, reads the chunk that many times back.
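A short sketch of the read/peek/prev family in use (the input string is made up):
```
from ferenda import TextReader
reader = TextReader(string="first para\n\nsecond para\n\nthird para\n")
first = reader.readparagraph()     # reads the first paragraph and advances the seek position
upcoming = reader.peekparagraph()  # looks at the next paragraph without advancing
previous = reader.prevparagraph()  # reads backwards from the current position, without moving it
```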
`getreader`(*callableObj*, **args*, ***kwargs*)[[source]](_modules/ferenda/textreader.html#TextReader.getreader)[¶](#ferenda.TextReader.getreader)
Enables you to treat the result of any single `read*`, `peek*`
or `prev*` method as a new TextReader. Particularly useful to process individual pages in page-oriented documents:
```
filereader = TextReader("rfc822.txt")
firstpagereader = filereader.getreader(filereader.readpage)
# firstpagereader is now a standalone TextReader that only
# contains the first page of text from rfc822.txt
filereader.seek(0)  # reset current seek position
page5reader = filereader.getreader(filereader.peekpage, times=5)
# page5reader now contains the 5th page of text from rfc822.txt
```
`getiterator`(*callableObj*, **args*, ***kwargs*)[[source]](_modules/ferenda/textreader.html#TextReader.getiterator)[¶](#ferenda.TextReader.getiterator)
Returns an iterator:
```
filereader = TextReader("dashed.txt")
# dashed.txt contains paragraphs separated by "----"
for para in filereader.getiterator(filereader.readchunk, "----"):
    print(para)
```
`flush`()[[source]](_modules/ferenda/textreader.html#TextReader.flush)[¶](#ferenda.TextReader.flush)
See [`io.IOBase.flush()`](https://docs.python.org/3/library/io.html#io.IOBase.flush). This is a no-op.
`read`(*size=0*)[[source]](_modules/ferenda/textreader.html#TextReader.read)[¶](#ferenda.TextReader.read)
See [`io.TextIOBase.read()`](https://docs.python.org/3/library/io.html#io.TextIOBase.read).
`readline`(*size=None*)[[source]](_modules/ferenda/textreader.html#TextReader.readline)[¶](#ferenda.TextReader.readline)
See [`io.TextIOBase.readline()`](https://docs.python.org/3/library/io.html#io.TextIOBase.readline).
Note
The `size` parameter is not supported.
`seek`(*offset*, *whence=0*)[[source]](_modules/ferenda/textreader.html#TextReader.seek)[¶](#ferenda.TextReader.seek)
See [`io.TextIOBase.seek()`](https://docs.python.org/3/library/io.html#io.TextIOBase.seek).
Note
The `whence` parameter is not supported.
`tell`()[[source]](_modules/ferenda/textreader.html#TextReader.tell)[¶](#ferenda.TextReader.tell)
See [`io.TextIOBase.tell()`](https://docs.python.org/3/library/io.html#io.TextIOBase.tell).
`write`()[[source]](_modules/ferenda/textreader.html#TextReader.write)[¶](#ferenda.TextReader.write)
See [`io.TextIOBase.write()`](https://docs.python.org/3/library/io.html#io.TextIOBase.write).
Note
Always raises IOError, as TextReader is a read-only object.
`writelines`()[[source]](_modules/ferenda/textreader.html#TextReader.writelines)[¶](#ferenda.TextReader.writelines)
See [`io.IOBase.writelines()`](https://docs.python.org/3/library/io.html#io.IOBase.writelines).
Note
Always raises IOError, as TextReader is a read-only object.
`next`()[¶](#ferenda.TextReader.next)
Backwards-compatibility alias for iterating over a file in python 2. Use [`getiterator()`](#ferenda.TextReader.getiterator) to make iteration work over anything other than lines (e.g. paragraphs, pages, etc.).
### The `PDFReader` class[¶](#the-pdfreader-class)
*class* `ferenda.``PDFReader`(*pages=None*, *filename=None*, *workdir=None*, *images=True*, *convert_to_pdf=False*, *keep_xml=True*, *ocr_lang=None*, *fontspec=None*)[[source]](_modules/ferenda/pdfreader.html#PDFReader)[¶](#ferenda.PDFReader)
Parses PDF files and makes the content available as an object hierarchy. Calling the `read()` method returns a `ferenda.pdfreader.PDFFile` object, which is a list of [`ferenda.pdfreader.Page`](#ferenda.pdfreader.Page) objects, each of which is a list of [`ferenda.pdfreader.Textbox`](#ferenda.pdfreader.Textbox) objects, each of which is a list of [`ferenda.pdfreader.Textelement`](#ferenda.pdfreader.Textelement)
objects.
Note
This class depends on the command line tool pdftohtml from
[poppler](http://poppler.freedesktop.org/).
The class can also handle any other type of document (such as Word/OOXML/WordPerfect/RTF) that OpenOffice or LibreOffice handles by first converting it to PDF using the `soffice`
command line tool (which then must be in your `$PATH`).
If the PDF contains only scanned pages (without any OCR information), the pages can be run through the `tesseract`
command line tool (which, again, needs to be in your
`$PATH`). You need to provide the main language of the document as the `ocr_lang` parameter, and you need to have installed the tesseract language files for that language.
`dims` *= 'bbox (?P<left>\\d+) (?P<top>\\d+) (?P<right>\\d+) (?P<bottom>\\d+)'*[¶](#ferenda.PDFReader.dims)
`re_dimensions`()[¶](#ferenda.PDFReader.re_dimensions)
Scan through string looking for a match, and return a corresponding match object instance.
Return None if no position in the string matches.
`tagname` *= 'div'*[¶](#ferenda.PDFReader.tagname)
`classname` *= 'pdfreader'*[¶](#ferenda.PDFReader.classname)
`is_empty`()[[source]](_modules/ferenda/pdfreader.html#PDFReader.is_empty)[¶](#ferenda.PDFReader.is_empty)
`textboxes`(*gluefunc=None*, *pageobjects=False*, *keepempty=False*)[[source]](_modules/ferenda/pdfreader.html#PDFReader.textboxes)[¶](#ferenda.PDFReader.textboxes)
Return an iterator of the textboxes available.
`gluefunc` should be a callable that is called with
(textbox, nextbox, prevbox), and returns True iff nextbox should be appended to textbox.
If `pageobjects`, the iterator can return Page objects to signal that a pagebreak has occurred (these Page objects may or may not have Textbox elements).
If `keepempty`, process and return textboxes that have no text content (these are filtered out by default).
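An illustrative `gluefunc` sketch (a hypothetical heuristic, not part of the library): it appends the following box when both boxes use the same font and the vertical gap between them is small.
```
def glue(textbox, nextbox, prevbox):
# Join nextbox to textbox if the fonts match and nextbox starts close
# below the previous box; the 5-point threshold is an arbitrary choice.
samefont = textbox.font == nextbox.font
close_below = (nextbox.top - (prevbox.top + prevbox.height)) < 5
return samefont and close_below
for box in pdf.textboxes(gluefunc=glue):
print(str(box))
```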
`median_box_width`(*threshold=0*)[[source]](_modules/ferenda/pdfreader.html#PDFReader.median_box_width)[¶](#ferenda.PDFReader.median_box_width)
Returns the median box width of all pages.
*class* `ferenda.pdfreader.``Page`(**args*, ***kwargs*)[[source]](_modules/ferenda/pdfreader.html#Page)[¶](#ferenda.pdfreader.Page)
Represents a Page in a PDF file. Has *width* and *height* properties.
`tagname` *= 'div'*[¶](#ferenda.pdfreader.Page.tagname)
`classname` *= 'pdfpage'*[¶](#ferenda.pdfreader.Page.classname)
`margins` *= None*[¶](#ferenda.pdfreader.Page.margins)
`id`[¶](#ferenda.pdfreader.Page.id)
`boundingbox`(*top=0*, *left=0*, *bottom=None*, *right=None*)[[source]](_modules/ferenda/pdfreader.html#Page.boundingbox)[¶](#ferenda.pdfreader.Page.boundingbox)
A generator of [`ferenda.pdfreader.Textbox`](#ferenda.pdfreader.Textbox) objects that fit into the bounding box specified by the parameters.
`crop`(*top=0*, *left=0*, *bottom=None*, *right=None*)[[source]](_modules/ferenda/pdfreader.html#Page.crop)[¶](#ferenda.pdfreader.Page.crop)
Removes any [`ferenda.pdfreader.Textbox`](#ferenda.pdfreader.Textbox) objects that do not fit within the bounding box specified by the parameters.
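A short sketch of both methods (the page index and coordinates are assumptions): `boundingbox()` yields matching textboxes without altering the page, while `crop()` permanently discards everything outside the given box.
```
page = pdf[0]  # assuming the parsed PDF behaves as a list of Page objects
# All textboxes in the top half of the page
for box in page.boundingbox(top=0, left=0, bottom=page.height / 2, right=page.width):
print(str(box))
# Drop everything outside the left-hand two thirds of the page
page.crop(left=0, right=page.width * 2 // 3)
```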
*class* `ferenda.pdfreader.``Textbox`(**args*, ***kwargs*)[[source]](_modules/ferenda/pdfreader.html#Textbox)[¶](#ferenda.pdfreader.Textbox)
A textbox is an amount of text on a PDF page, with *top*, *left*,
*width* and *height* properties that specifies the bounding box of the text. The *fontid* property specifies the id of font used (use
`getfont()` to get a dict of all font properties). A textbox consists of a list of Textelements which may differ in basic formatting (bold and or italics), but otherwise all text in a Textbox has the same font and size.
`tagname` *= 'p'*[¶](#ferenda.pdfreader.Textbox.tagname)
`classname` *= 'textbox'*[¶](#ferenda.pdfreader.Textbox.classname)
`as_xhtml`(*uri*, *parent_uri=None*)[[source]](_modules/ferenda/pdfreader.html#Textbox.as_xhtml)[¶](#ferenda.pdfreader.Textbox.as_xhtml)
Converts this object to a `lxml.etree` object (with children)
| Parameters: | **uri** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – If provided, gets converted to an `@about` attribute in the resulting XHTML. |
`font`[¶](#ferenda.pdfreader.Textbox.font)
*class* `ferenda.pdfreader.``Textelement`(**args*, ***kwargs*)[[source]](_modules/ferenda/pdfreader.html#Textelement)[¶](#ferenda.pdfreader.Textelement)
Represent a single part of text where each letter has the exact same formatting. The `tag` property specifies whether the text as a whole is bold (`'b'`), italic (`'i'`), bold + italic (`'bi'`) or regular (`None`).
`as_xhtml`(*uri*, *parent_uri=None*)[[source]](_modules/ferenda/pdfreader.html#Textelement.as_xhtml)[¶](#ferenda.pdfreader.Textelement.as_xhtml)
Converts this object to a `lxml.etree` object (with children)
| Parameters: | **uri** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – If provided, gets converted to an `@about` attribute in the resulting XHTML. |
`tagname`[¶](#ferenda.pdfreader.Textelement.tagname)
### The `PDFAnalyzer` class[¶](#the-pdfanalyzer-class)
*class* `ferenda.``PDFAnalyzer`(*pdf*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer)[¶](#ferenda.PDFAnalyzer)
Create an analyzer for the given pdf file.
The primary purpose of an analyzer is to determine margins and other spatial metrics of a document, and identify common typographic styles for default text, title and headings. This is done by calling the [`metrics()`](#ferenda.PDFAnalyzer.metrics)
method.
The analysis is done in several steps. The properties of all textboxes on each page are collected in several
[`collections.Counter`](https://docs.python.org/3/library/collections.html#collections.Counter) objects. These counters are then statistically analyzed in a series of functions to yield these metrics.
If different analysis logic, or additional metrics, are desired,
this class should be inherited and some methods/properties overridden.
| Parameters: | **pdf** ([*ferenda.PDFReader*](index.html#ferenda.PDFReader)) – The pdf file to analyze. |
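A sketch of such a subclass, overriding only documented class-level defaults (the chosen values are illustrative assumptions):
```
from ferenda import PDFAnalyzer
class MyAnalyzer(PDFAnalyzer):
# The document uses the same margins on every page
twopage = False
# Treat the first two pages as front matter with its own typography
frontmatter = 2
# Require a style to cover at least 1% of the text to count as significant
style_significance_threshold = 0.01
```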
`twopage` *= True*[¶](#ferenda.PDFAnalyzer.twopage)
Whether or not the document is expected to have different margins depending on whether it’s an even or odd page.
`style_significance_threshold` *= 0.005*[¶](#ferenda.PDFAnalyzer.style_significance_threshold)
The amount of use (as compared to the rest of the document) that a style must have to be considered significant.
`header_significance_threshold` *= 0.002*[¶](#ferenda.PDFAnalyzer.header_significance_threshold)
The maximum amount (expressed as part of the entire text amount) of text that can occur on the top of the page for it to be considered part of the header.
`footer_significance_threshold` *= 0.002*[¶](#ferenda.PDFAnalyzer.footer_significance_threshold)
The maximum amount (expressed as part of the entire text amount) of text that can occur on the bottom of the page for it to be considered part of the footer.
`frontmatter` *= 1*[¶](#ferenda.PDFAnalyzer.frontmatter)
The amount of pages to be considered frontmatter, which might have different typography, special title font etc.
`documents`()[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.documents)[¶](#ferenda.PDFAnalyzer.documents)
Attempts to distinguish different logical documents (eg parts with differing pagesizes/margins/styles etc) within this PDF.
You should override this method if you want to provide your own document segmentation logic.
| Returns: | Tuples (startpage, pagecount) for the different identified
documents |
| Return type: | [list](https://docs.python.org/3/library/stdtypes.html#list) |
`metrics`(*metricspath=None*, *plotpath=None*, *startpage=0*, *pagecount=None*, *force=False*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.metrics)[¶](#ferenda.PDFAnalyzer.metrics)
Calculate and return the metrics for this analyzer.
metrics is a set of named properties in the form of a dict. The keys of the dict can represent margins or other measurements of the document (left/right margins,
header/footer etc) or font styles used in the document (eg.
default, title, h1 – h3). Style values are in turn dicts themselves, with the keys ‘family’ and ‘size’.
| Parameters: | * **metricspath** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The path of a JSON file used as cache for the
calculated metrics
* **plotpath** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The path to write a PNG file with histograms for
different values (for debugging).
* **startpage** ([*int*](https://docs.python.org/3/library/functions.html#int)) – starting page for the analysis
* **pagecount** ([*int*](https://docs.python.org/3/library/functions.html#int)) – number of pages to analyze (default: all available)
* **force** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Perform analysis even if cached JSON metrics exists.
|
| Returns: | calculated metrics |
| Return type: | [dict](https://docs.python.org/3/library/stdtypes.html#dict) |
The default implementation will try to find out values for the following metrics:
| key | description |
| --- | --- |
| leftmargin | position of left margin (for odd pages if twopage = True) |
| rightmargin | position of right margin (for odd pages if twopage = True) |
| leftmargin_even | position of left margin for even pages |
| rightmargin_even | position of right margin for even pages |
| topmargin | position of header zone |
| bottommargin | position of footer zone |
| default | style used for default text |
| title | style used for main document title (on front page) |
| h1 | style used for level 1 headings |
| h2 | style used for level 2 headings |
| h3 | style used for level 3 headings |
Subclasses might add (or remove) from the above.
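A usage sketch tying the pieces together (paths are assumptions): parse the PDF, analyze it, and inspect the cached metrics dict.
```
from ferenda import PDFReader, PDFAnalyzer
pdf = PDFReader(filename="document.pdf", workdir="intermediate")
analyzer = PDFAnalyzer(pdf)
# Results are cached in the JSON file; pass force=True to recalculate.
metrics = analyzer.metrics(metricspath="document.metrics.json",
plotpath="document.metrics.png")
print(metrics["leftmargin"], metrics["default"])
```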
`textboxes`(*startpage*, *pagecount*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.textboxes)[¶](#ferenda.PDFAnalyzer.textboxes)
Generate a stream of (pagenumber, textbox) tuples consisting of all pages/textboxes from startpage to pagecount.
`count_horizontal_margins`(*startpage*, *pagecount*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.count_horizontal_margins)[¶](#ferenda.PDFAnalyzer.count_horizontal_margins)
Return a dict of Counter objects for all the horizontally oriented textbox properties (number of textboxes starting/ending at different positions).
The set of counters is determined by setup_horizontal_counters.
`setup_horizontal_counters`()[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.setup_horizontal_counters)[¶](#ferenda.PDFAnalyzer.setup_horizontal_counters)
Create initial set of horizontal counters.
`count_horizontal_textbox`(*pagenumber*, *textbox*, *counters*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.count_horizontal_textbox)[¶](#ferenda.PDFAnalyzer.count_horizontal_textbox)
Add a single textbox to the set of horizontal counters.
`count_vertical_margins`(*startpage*, *pagecount*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.count_vertical_margins)[¶](#ferenda.PDFAnalyzer.count_vertical_margins)
`setup_vertical_counters`()[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.setup_vertical_counters)[¶](#ferenda.PDFAnalyzer.setup_vertical_counters)
`count_vertical_textbox`(*pagenumber*, *textbox*, *counters*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.count_vertical_textbox)[¶](#ferenda.PDFAnalyzer.count_vertical_textbox)
`count_styles`(*startpage*, *pagecount*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.count_styles)[¶](#ferenda.PDFAnalyzer.count_styles)
`count_styles_textbox`(*pagenumber*, *textbox*, *counters*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.count_styles_textbox)[¶](#ferenda.PDFAnalyzer.count_styles_textbox)
`analyze_vertical_margins`(*vcounters*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.analyze_vertical_margins)[¶](#ferenda.PDFAnalyzer.analyze_vertical_margins)
`analyze_horizontal_margins`(*vcounters*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.analyze_horizontal_margins)[¶](#ferenda.PDFAnalyzer.analyze_horizontal_margins)
`fontsize_key`(*fonttuple*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.fontsize_key)[¶](#ferenda.PDFAnalyzer.fontsize_key)
`fontdict`(*fonttuple*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.fontdict)[¶](#ferenda.PDFAnalyzer.fontdict)
`analyze_styles`(*frontmatter_styles*, *rest_styles*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.analyze_styles)[¶](#ferenda.PDFAnalyzer.analyze_styles)
`drawboxes`(*outfilename*, *gluefunc=None*, *startpage=0*, *pagecount=None*, *counters=None*, *metrics=None*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.drawboxes)[¶](#ferenda.PDFAnalyzer.drawboxes)
Create a copy of the parsed PDF file, but with the textboxes created by `gluefunc` clearly marked, and metrics shown on the page.
Note
This requires PyPDF2 and reportlab, which aren’t installed by default. Reportlab (3.*) only works on py27+ and py33+
`plot`(*filename*, *margincounters*, *stylecounters*, *metrics*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.plot)[¶](#ferenda.PDFAnalyzer.plot)
`plot_margins`(*subplots*, *margin_counters*, *metrics*, *pagewidth*, *pageheight*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.plot_margins)[¶](#ferenda.PDFAnalyzer.plot_margins)
`plot_styles`(*plot*, *stylecounters*, *metrics*)[[source]](_modules/ferenda/pdfanalyze.html#PDFAnalyzer.plot_styles)[¶](#ferenda.PDFAnalyzer.plot_styles)
### The `WordReader` class[¶](#the-wordreader-class)
*class* `ferenda.``WordReader`[[source]](_modules/ferenda/wordreader.html#WordReader)[¶](#ferenda.WordReader)
Reads .docx and .doc-files (the latter with support from [antiword](http://www.winfield.demon.nl/)) and converts them to an XML form that is slightly easier to deal with.
`log` *= <logging.Logger object>*[¶](#ferenda.WordReader.log)
`read`(*wordfile*, *intermediatefile*)[[source]](_modules/ferenda/wordreader.html#WordReader.read)[¶](#ferenda.WordReader.read)
Converts the word file to a more easily parsed format.
| Parameters: | * **wordfile** – Path to original docfile
* **intermediatefile** – Where to store the more parseable file
|
| Returns: | name of parseable file, filetype (either “doc” or “docx”) |
| Return type: | [tuple](https://docs.python.org/3/library/stdtypes.html#tuple) |
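A minimal sketch (paths are assumptions; `.doc` input additionally requires antiword on your `$PATH`):
```
from ferenda import WordReader
reader = WordReader()
# Convert the source document and report which flavour it was.
path, filetype = reader.read("report.docx", "intermediate/report.xml")
print(filetype)  # "doc" or "docx"
```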
`word_to_docbook`(*indoc*, *outdoc*)[[source]](_modules/ferenda/wordreader.html#WordReader.word_to_docbook)[¶](#ferenda.WordReader.word_to_docbook)
Convert an old Word document (.doc) to a pseudo-docbook file through antiword.
`word_to_ooxml`(*indoc*, *outdoc*)[[source]](_modules/ferenda/wordreader.html#WordReader.word_to_ooxml)[¶](#ferenda.WordReader.word_to_ooxml)
Extracts the raw OOXML file from a modern Word document (.docx).
Modules[¶](#modules)
---
### The `util` module[¶](#module-ferenda.util)
General library of small utility functions.
*class* `ferenda.util.``gYearMonth`[[source]](_modules/ferenda/util.html#gYearMonth)[¶](#ferenda.util.gYearMonth)
*class* `ferenda.util.``gYear`[[source]](_modules/ferenda/util.html#gYear)[¶](#ferenda.util.gYear)
`ferenda.util.``ns`[¶](#ferenda.util.ns)
A mapping of well-known prefixes and their corresponding namespaces. Includes `dc`, `dcterms`, `rdfs`, `rdf`, `skos`, `xsd`, `foaf`, `owl`, `xhv`, `prov` and `bibo`.
`ferenda.util.``mkdir`(*newdir*)[[source]](_modules/ferenda/util.html#mkdir)[¶](#ferenda.util.mkdir)
Like [`os.makedirs()`](https://docs.python.org/3/library/os.html#os.makedirs), but doesn’t raise an exception if the directory already exists.
`ferenda.util.``ensure_dir`(*filename*)[[source]](_modules/ferenda/util.html#ensure_dir)[¶](#ferenda.util.ensure_dir)
Given a filename (typically one that you wish to create), ensures that the directory the file is in actually exists.
`ferenda.util.``robust_rename`(*old*, *new*)[[source]](_modules/ferenda/util.html#robust_rename)[¶](#ferenda.util.robust_rename)
Rename old to new no matter what (if the file exists, it’s removed, if the target dir doesn’t exist, it’s created)
`ferenda.util.``robust_remove`(*filename*)[[source]](_modules/ferenda/util.html#robust_remove)[¶](#ferenda.util.robust_remove)
Removes a filename no matter what (unlike [`os.unlink()`](https://docs.python.org/3/library/os.html#os.unlink), does not raise an error if the file does not exist).
`ferenda.util.``relurl`(*url*, *starturl*)[[source]](_modules/ferenda/util.html#relurl)[¶](#ferenda.util.relurl)
Works like [`os.path.relpath()`](https://docs.python.org/3/library/os.path.html#os.path.relpath), but for urls
```
>>> relurl("http://example.org/other/index.html", "http://example.org/main/index.html") == '../other/index.html'
True
>>> relurl("http://other.org/foo.html", "http://example.org/bar.html") == 'http://other.org/foo.html'
True
```
`ferenda.util.``numcmp`(*x*, *y*)[[source]](_modules/ferenda/util.html#numcmp)[¶](#ferenda.util.numcmp)
Works like `cmp` in python 2, but compares two strings using a
‘natural sort’ order, ie “2” < “10”. Also handles strings that contain a mixture of numbers and letters, ie “2” < “2 a”.
Return negative if x<y, zero if x==y, positive if x>y.
```
>>> numcmp("10", "2")
1
>>> numcmp("2", "2 a")
-1
>>> numcmp("3", "2 a")
1
```
`ferenda.util.``split_numalpha`(*s*)[[source]](_modules/ferenda/util.html#split_numalpha)[¶](#ferenda.util.split_numalpha)
Converts a string into a list of alternating strings and integers. This makes it possible to sort a list of strings numerically even though they might not be fully convertible to integers
```
>>> split_numalpha('10 a §') == ['', 10, ' a §']
True
>>> sorted(['2 §', '10 §', '1 §'], key=split_numalpha) == ['1 §', '2 §', '10 §']
True
```
`ferenda.util.``runcmd`(*cmdline*, *require_success=False*, *cwd=None*, *cmdline_encoding=None*, *output_encoding='utf-8'*)[[source]](_modules/ferenda/util.html#runcmd)[¶](#ferenda.util.runcmd)
Run a shell command, wait for it to finish and return the results.
| Parameters: | * **cmdline** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The full command line (will be passed through a shell)
* **require_success** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – If the command fails (non-zero exit code), raise [`ExternalCommandError`](index.html#ferenda.errors.ExternalCommandError)
* **cwd** – The working directory for the process to run
|
| Returns: | The returncode, all stdout output, all stderr output |
| Return type: | [tuple](https://docs.python.org/3/library/stdtypes.html#tuple) |
`ferenda.util.``normalize_space`(*string*)[[source]](_modules/ferenda/util.html#normalize_space)[¶](#ferenda.util.normalize_space)
Normalize all whitespace in string so that only a single space between words is ever used, and that the string neither starts with nor ends with whitespace.
```
>>> normalize_space(" This is a long \n string\n") == 'This is a long string'
True
```
`ferenda.util.``list_dirs`(*d*, *suffix=None*, *reverse=False*)[[source]](_modules/ferenda/util.html#list_dirs)[¶](#ferenda.util.list_dirs)
A generator that works much like [`os.listdir()`](https://docs.python.org/3/library/os.html#os.listdir), only recursively (and only returns files, not directories).
| Parameters: | * **d** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The directory to start in
* **suffix** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Only return files with the given suffix
* **reverse** – Returns result sorted in reverse alphabetic order
|
| Returns: | the full path (starting from d) of each matching file |
| Return type: | generator |
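For example (the directory and suffix are assumptions):
```
from ferenda import util
# Recursively find every .txt file under "data"
for path in util.list_dirs("data", suffix=".txt"):
print(path)
```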
`ferenda.util.``replace_if_different`(*src*, *dst*, *archivefile=None*)[[source]](_modules/ferenda/util.html#replace_if_different)[¶](#ferenda.util.replace_if_different)
Like [`shutil.move()`](https://docs.python.org/3/library/shutil.html#shutil.move), except the *src* file isn’t moved if the
*dst* file already exists and is identical to *src*. Also doesn’t require that the directory of *dst* exists beforehand.
**Note**: regardless of whether it was moved or not, *src* is always deleted.
| Parameters: | * **src** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The source file to move
* **dst** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The destination file
|
| Returns: | True if src was copied to dst, False otherwise |
| Return type: | [bool](https://docs.python.org/3/library/functions.html#bool) |
`ferenda.util.``copy_if_different`(*src*, *dest*)[[source]](_modules/ferenda/util.html#copy_if_different)[¶](#ferenda.util.copy_if_different)
Like [`shutil.copyfile()`](https://docs.python.org/3/library/shutil.html#shutil.copyfile), except the *src* file isn’t copied if the *dst* file already exists and is identical to *src*. Also doesn’t require that the directory of *dst* exists beforehand.
| Parameters: | * **src** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The source file to copy
* **dst** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The destination file
 |
| Returns: | True if src was copied to dst, False otherwise |
| Return type: | [bool](https://docs.python.org/3/library/functions.html#bool) |
`ferenda.util.``outfile_is_newer`(*infiles*, *outfile*)[[source]](_modules/ferenda/util.html#outfile_is_newer)[¶](#ferenda.util.outfile_is_newer)
Check if a given *outfile* is newer (has a more recent modification time) than a list of *infiles*. Returns True if so, False otherwise (including if outfile doesn’t exist).
`ferenda.util.``link_or_copy`(*src*, *dst*)[[source]](_modules/ferenda/util.html#link_or_copy)[¶](#ferenda.util.link_or_copy)
Create a symlink at *dst* pointing back to *src* on systems that support it. On other systems (i.e. Windows), copy *src* to *dst* (using [`copy_if_different()`](#ferenda.util.copy_if_different))
`ferenda.util.``ucfirst`(*string*)[[source]](_modules/ferenda/util.html#ucfirst)[¶](#ferenda.util.ucfirst)
Returns string with first character uppercased but otherwise unchanged.
```
>>> ucfirst("iPhone") == 'IPhone'
True
```
`ferenda.util.``rfc_3339_timestamp`(*dt*)[[source]](_modules/ferenda/util.html#rfc_3339_timestamp)[¶](#ferenda.util.rfc_3339_timestamp)
Converts a datetime object to a RFC 3339-style date
```
>>> rfc_3339_timestamp(datetime.datetime(2013, 7, 2, 21, 20, 25)) == '2013-07-02T21:20:25-00:00'
True
```
`ferenda.util.``parse_rfc822_date`(*httpdate*)[[source]](_modules/ferenda/util.html#parse_rfc822_date)[¶](#ferenda.util.parse_rfc822_date)
Converts a RFC 822-type date string (more-or-less the same as a HTTP-date) to an UTC-localized (naive) datetime.
```
>>> parse_rfc822_date("Mon, 4 Aug 1997 02:14:00 EST")
datetime.datetime(1997, 8, 4, 7, 14)
```
`ferenda.util.``strptime`(*datestr*, *format*)[[source]](_modules/ferenda/util.html#strptime)[¶](#ferenda.util.strptime)
Like datetime.strptime, but guaranteed to not be affected by current system locale – all datetime parsing is done using the C locale.
```
>>> strptime("Mon, 4 Aug 1997 02:14:05", "%a, %d %b %Y %H:%M:%S")
datetime.datetime(1997, 8, 4, 2, 14, 5)
```
`ferenda.util.``readfile`(*filename*, *mode='r'*, *encoding='utf-8'*)[[source]](_modules/ferenda/util.html#readfile)[¶](#ferenda.util.readfile)
Opens *filename*, reads its contents and returns them as a string.
`ferenda.util.``writefile`(*filename*, *contents*, *encoding='utf-8'*)[[source]](_modules/ferenda/util.html#writefile)[¶](#ferenda.util.writefile)
Create *filename* and write *contents* to it.
`ferenda.util.``extract_text`(*html*, *start*, *end*, *decode_entities=True*, *strip_tags=True*)[[source]](_modules/ferenda/util.html#extract_text)[¶](#ferenda.util.extract_text)
Given *html*, a string of HTML content, and two substrings (*start* and *end*) present in this string, return all text between the substrings, optionally decoding any HTML entities and removing HTML tags.
```
>>> extract_text("<body><div><b>Hello</b> <i>World</i>&trade;</div></body>",
... "<div>", "</div>") == 'Hello World™'
True
>>> extract_text("<body><div><b>Hello</b> <i>World</i>&trade;</div></body>",
... "<div>", "</div>", decode_entities=False) == 'Hello World&trade;'
True
>>> extract_text("<body><div><b>Hello</b> <i>World</i>&trade;</div></body>",
... "<div>", "</div>", strip_tags=False) == '<b>Hello</b> <i>World</i>™'
True
```
`ferenda.util.``merge_dict_recursive`(*base*, *other*)[[source]](_modules/ferenda/util.html#merge_dict_recursive)[¶](#ferenda.util.merge_dict_recursive)
Merges the *other* dict into the *base* dict. If any value in other is itself a dict and the base also has a dict for the same key, merge these sub-dicts (and so on, recursively).
```
>>> base = {'a': 1, 'b': {'c': 3}}
>>> other = {'x': 4, 'b': {'y': 5}}
>>> want = {'a': 1, 'x': 4, 'b': {'c': 3, 'y': 5}}
>>> got = merge_dict_recursive(base, other)
>>> got == want
True
>>> base == want
True
```
`ferenda.util.``resource_extract`(*resource_name*, *outfile*, *params={}*)[[source]](_modules/ferenda/util.html#resource_extract)[¶](#ferenda.util.resource_extract)
Copy a file from the ferenda package resources to a specified path, optionally performing variable substitutions on the contents of the file.
| Parameters: | * **resource_name** – The named resource (eg ‘res/sparql/annotations.rq’)
* **outfile** – Path to extract the resource to
* **params** – A dict of parameters, to be used with regular string substitutions in the resource file.
|
`ferenda.util.``uri_leaf`(*uri*)[[source]](_modules/ferenda/util.html#uri_leaf)[¶](#ferenda.util.uri_leaf)
Get the “leaf” - fragment id or last segment - of a URI. Useful e.g. for getting a term from a “namespace like” URI.
```
>>> uri_leaf("http://purl.org/dc/terms/title") == 'title'
True
>>> uri_leaf("http://www.w3.org/2004/02/skos/core#Concept") == 'Concept'
True
>>> uri_leaf("http://www.w3.org/2004/02/skos/core#") # returns None
```
`ferenda.util.``logtime`(*method*, *format='The operation took %(elapsed).3f sec'*, *values={}*)[[source]](_modules/ferenda/util.html#logtime)[¶](#ferenda.util.logtime)
A context manager that uses the supplied method and format string to log the elapsed time:
```
with util.logtime(log.debug,
"Basefile %(basefile)s took %(elapsed).3f s",
{'basefile':'foo'}):
do_stuff_that_takes_some_time()
```
This results in a call like log.debug(“Basefile foo took 1.324 s”).
`ferenda.util.``c_locale`(*category=2*)[[source]](_modules/ferenda/util.html#c_locale)[¶](#ferenda.util.c_locale)
Temporarily change process locale to the C locale, for use when eg parsing English dates on a system that may have a non-English locale.
```
>>> with c_locale():
... datetime.datetime.strptime("August 2013", "%B %Y")
datetime.datetime(2013, 8, 1, 0, 0)
```
`ferenda.util.``from_roman`(*s*)[[source]](_modules/ferenda/util.html#from_roman)[¶](#ferenda.util.from_roman)
Convert a Roman numeral to an integer.
```
>>> from_roman("MCMLXXXIV")
1984
```
`ferenda.util.``title_sortkey`(*s*)[[source]](_modules/ferenda/util.html#title_sortkey)[¶](#ferenda.util.title_sortkey)
Transform a document title into a key useful for sorting and partitioning documents.
```
>>> title_sortkey("The 'viewstate' property") == 'viewstateproperty'
True
```
`ferenda.util.``parseresults_as_xml`(*parseres*, *depth=0*)[[source]](_modules/ferenda/util.html#parseresults_as_xml)[¶](#ferenda.util.parseresults_as_xml)
`ferenda.util.``json_default_date`(*obj*)[[source]](_modules/ferenda/util.html#json_default_date)[¶](#ferenda.util.json_default_date)
`ferenda.util.``make_json_date_object_hook`(**fields*)[[source]](_modules/ferenda/util.html#make_json_date_object_hook)[¶](#ferenda.util.make_json_date_object_hook)
### The `citationpatterns` module[¶](#module-ferenda.citationpatterns)
General ready-made grammars for use with
[`CitationParser`](index.html#ferenda.CitationParser). See [Citation parsing](index.html#document-citationparsing) for examples.
`ferenda.citationpatterns.``url`[¶](#ferenda.citationpatterns.url)
Matches any URL like ‘<http://example.com/>’ or
‘<https://example.org/?key=value#fragment>’ (note: only the schemes/protocols ‘http’, ‘https’ and ‘ftp’ are supported)
`ferenda.citationpatterns.``eulaw`[¶](#ferenda.citationpatterns.eulaw)
Matches EU Legislation references like ‘direktiv 2007/42/EU’.
### The `uriformats` module[¶](#module-ferenda.uriformats)
A small set of generic functions to convert (dicts or dict-like objects) to URIs. They are usually matched with a corresponding citationpattern like the ones found in
[`ferenda.citationpatterns`](index.html#module-ferenda.citationpatterns). See [Citation parsing](index.html#document-citationparsing) for examples.
`ferenda.uriformats.``generic`(*d*)[[source]](_modules/ferenda/uriformats.html#generic)[¶](#ferenda.uriformats.generic)
Converts any dict into a URL. The domain (netloc) is always example.org, and all keys/values of the dict are turned into a querystring.
```
>>> generic({'foo':'1', 'bar':'2'})
"http://example.org/?foo=1&bar=2"
```
`ferenda.uriformats.``url`(*d*)[[source]](_modules/ferenda/uriformats.html#url)[¶](#ferenda.uriformats.url)
Converts a dict with keys `scheme`, `netloc`, `path` (and optionally query and/or fragment) into the corresponding URL.
```
>>> url({'scheme':'https', 'netloc':'example.org', 'path':'test'})
"https://example.org/test
```
`ferenda.uriformats.``eulaw`(*d*)[[source]](_modules/ferenda/uriformats.html#eulaw)[¶](#ferenda.uriformats.eulaw)
Converts a dict with keys like LegalactType, Directive, ArticleId
(produced by [`ferenda.citationpatterns.eulaw`](index.html#ferenda.citationpatterns.eulaw)) into a CELEX-based URI.
> Note
> This is not yet implemented.
### The `manager` module[¶](#module-ferenda.manager)
Utility functions for running various ferenda tasks from the command line, including registering classes in the configuration file. If you’re using the [`DocumentRepository`](index.html#ferenda.DocumentRepository) API directly in your code, you’ll probably only need
[`makeresources()`](#ferenda.manager.makeresources), [`frontpage()`](#ferenda.manager.frontpage) and possibly
[`setup_logger()`](#ferenda.manager.setup_logger). If you’re using the `ferenda-build.py`
tool, you don’t need to directly call any of these methods –
`ferenda-build.py` calls [`run()`](#ferenda.manager.run), which calls everything else, for you.
`ferenda.manager.``makeresources`(*repos*, *resourcedir='data/rsrc'*, *combine=False*, *cssfiles=[]*, *jsfiles=[]*, *imgfiles=[]*, *staticsite=False*, *legacyapi=False*, *sitename='MySite'*, *sitedescription='Just another Ferenda site'*, *url='http://localhost:8000/'*)[[source]](_modules/ferenda/manager.html#makeresources)[¶](#ferenda.manager.makeresources)
Creates the web assets/resources needed for the web app
(concatenated and minified js/css files, resources.xml used by most XSLT stylesheets, etc).
| Parameters: | * **repos** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – The repositories to create resources for, as instantiated and configured docrepo objects
* **combine** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – whether to combine and compact/minify CSS and JS files
* **resourcedir** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – where to put generated/copied resources
|
| Returns: | All created/copied css, js and resources.xml files |
| Return type: | dict of lists |
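A minimal sketch (the repository list and site name are assumptions, and inspecting the result only assumes it is the documented dict of lists):
```
from ferenda import DocumentRepository
from ferenda.manager import makeresources
repos = [DocumentRepository()]
created = makeresources(repos,
resourcedir="data/rsrc",
combine=True,
sitename="My site")
# created maps resource types to lists of generated/copied files
print(created)
```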
`ferenda.manager.``frontpage`(*repos*, *path='data/index.html'*, *stylesheet='res/xsl/frontpage.xsl'*, *sitename='MySite'*, *staticsite=False*)[[source]](_modules/ferenda/manager.html#frontpage)[¶](#ferenda.manager.frontpage)
Create a suitable frontpage.
| Parameters: | * **repos** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – The repositories to list on the frontpage, as instantiated and configured docrepo objects
* **path** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – the filename to create.
|
`ferenda.manager.``runserver`(*repos*, *port=8000*, *documentroot='data'*, *apiendpoint='/api/'*, *searchendpoint='/search/'*, *url='http://localhost:8000/'*, *indextype='WHOOSH'*, *indexlocation='data/whooshindex'*, *legacyapi=False*)[[source]](_modules/ferenda/manager.html#runserver)[¶](#ferenda.manager.runserver)
Starts up an internal webserver and runs the WSGI app (see
[`make_wsgi_app()`](#ferenda.manager.make_wsgi_app)) using all the specified document repositories. Runs forever (or until interrupted by keyboard).
| Parameters: | * **repos** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – Object instances for the repositories that should be served over HTTP
* **port** ([*int*](https://docs.python.org/3/library/functions.html#int)) – The port to use
* **documentroot** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The root document, used to locate files not directly handled by any repository
* **apiendpoint** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The part of the URI space handled by the API functionality
* **searchendpoint** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The part of the URI space handled by the search functionality
|
`ferenda.manager.``make_wsgi_app`(*inifile=None*, ***kwargs*)[[source]](_modules/ferenda/manager.html#make_wsgi_app)[¶](#ferenda.manager.make_wsgi_app)
Creates a callable object that can be used as a WSGI application by mod_wsgi, gunicorn, the built-in webserver, or any other WSGI-compliant webserver.
| Parameters: | * **inifile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The full path to a `ferenda.ini` configuration file
* ****kwargs** – Configuration values for the wsgi app (must include `documentroot`, `apiendpoint` and
`searchendpoint`). Only used if `inifile`
is not provided.
|
| Returns: | A WSGI application |
| Return type: | callable |
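A sketch of a `wsgi.py` entry point for an external WSGI server such as gunicorn (the ini path is an assumption):
```
# wsgi.py -- point gunicorn or mod_wsgi at the module-level "application" object
from ferenda.manager import make_wsgi_app
application = make_wsgi_app(inifile="/srv/mysite/ferenda.ini")
```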
`ferenda.manager.``setup_logger`(*level='INFO'*, *filename=None*, *logformat='%(asctime)s %(name)s %(levelname)s %(message)s'*, *datefmt='%H:%M:%S'*)[[source]](_modules/ferenda/manager.html#setup_logger)[¶](#ferenda.manager.setup_logger)
Sets up the logging facilities and creates the module-global log object as a root logger.
| Parameters: | * **name** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The name of the logger (used in log messages)
* **level** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – ‘DEBUG’,’INFO’,’WARNING’,’ERROR’ or ‘CRITICAL’
* **filename** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The name of the file to log to. If None, log to stdout
|
`ferenda.manager.``shutdown_logger`()[[source]](_modules/ferenda/manager.html#shutdown_logger)[¶](#ferenda.manager.shutdown_logger)
Shuts down the configured logger. In particular, closes any FileHandlers, which is needed on win32.
`ferenda.manager.``run`(*argv*, *subcall=False*)[[source]](_modules/ferenda/manager.html#run)[¶](#ferenda.manager.run)
Runs a particular action for either a particular class or all enabled classes.
| Parameters: | **argv** – a `sys.argv`-style list of strings specifying the class to load, the action to run, and additional parameters. The first parameter is either the name of the class-or-alias, or the special value “all”,
meaning all registered classes in turn. The second parameter is the action to run, or the special value
“all” to run all actions in correct order. Remaining parameters are either configuration parameters (if prefixed with `--`, e.g. `--loglevel=INFO`) or positional arguments to the specified action. |
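For example, the following calls mirror what `ferenda-build.py` would do for the command lines `./ferenda-build.py all generate` and `./ferenda-build.py base parse --loglevel=DEBUG` (the alias `base` is whatever [`enable()`](#ferenda.manager.enable) registered):
```
from ferenda import manager
# run the "generate" action for every enabled repository
manager.run(["all", "generate"])
# run "parse" for the repo registered under the alias "base",
# overriding a configuration parameter on the command line
manager.run(["base", "parse", "--loglevel=DEBUG"])
```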
`ferenda.manager.``enable`(*classname*)[[source]](_modules/ferenda/manager.html#enable)[¶](#ferenda.manager.enable)
Registers a class by creating a section for it in the configuration file (`ferenda.ini`). Returns the short-form alias for the class.
```
>>> enable("ferenda.DocumentRepository")
'base'
>>> os.unlink("ferenda.ini")
```
| Parameters: | **classname** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The fully qualified name of the class |
| Returns: | The short-form alias for the class |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`ferenda.manager.``runsetup`()[[source]](_modules/ferenda/manager.html#runsetup)[¶](#ferenda.manager.runsetup)
Runs [`setup()`](#ferenda.manager.setup) and exits with a non-zero status if setup failed in any way
Note
The `ferenda-setup` script that gets installed with ferenda is a tiny wrapper around this function.
`ferenda.manager.``setup`(*argv=None*, *force=False*, *verbose=False*, *unattended=False*)[[source]](_modules/ferenda/manager.html#setup)[¶](#ferenda.manager.setup)
Creates a project, complete with configuration file and ferenda-build tool.
Checks to see that all required python modules and command line utilities are present. Also checks which triple store(s) are available and selects the best one (in order of preference:
Sesame, Fuseki, RDFLib+Sleepycat, RDFLib+SQLite), and checks which fulltextindex(es) are available and selects the best one (in order of preference: ElasticSearch, Whoosh)
| Parameters: | * **argv** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – a sys.argv style command line
* **force** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) –
* **verbose** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) –
* **unattended** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) –
|
`ferenda.manager.``runbuildclient`(*clientname*, *serverhost*, *serverport*, *authkey*, *processes*)[[source]](_modules/ferenda/manager.html#runbuildclient)[¶](#ferenda.manager.runbuildclient)
`ferenda.manager.``runbuildqueue`(*serverport*, *authkey*)[[source]](_modules/ferenda/manager.html#runbuildqueue)[¶](#ferenda.manager.runbuildqueue)
### The `testutil` module[¶](#module-ferenda.testutil)
[`unittest`](https://docs.python.org/3/library/unittest.html#module-unittest)-based classes and accompanying functions that make it easier to create some types of ferenda-specific tests.
*class* `ferenda.testutil.``FerendaTestCase`[[source]](_modules/ferenda/testutil.html#FerendaTestCase)[¶](#ferenda.testutil.FerendaTestCase)
Convenience class with extra AssertEqual methods. Note that even though this class provides [`unittest.TestCase`](https://docs.python.org/3/library/unittest.html#unittest.TestCase)-like assert methods, it does not derive from
[`TestCase`](https://docs.python.org/3/library/unittest.html#unittest.TestCase). When creating a test case that makes use of these methods, you need to inherit from both
[`TestCase`](https://docs.python.org/3/library/unittest.html#unittest.TestCase) and this class, ie:
```
class MyTestcase(unittest.TestCase, ferenda.testutil.FerendaTestCase):
    def test_simple(self):
        self.assertEqualXML("<foo arg1='x' arg2='y'/>",
                            "<foo arg2='y' arg1='x'></foo>")
```
`assertEqualGraphs`(*want*, *got*, *exact=True*)[[source]](_modules/ferenda/testutil.html#FerendaTestCase.assertEqualGraphs)[¶](#ferenda.testutil.FerendaTestCase.assertEqualGraphs)
Assert that two RDF graphs are identical (isomorphic).
| Parameters: | * **want** – The graph as expected, as an [`Graph`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph) object or the filename of a serialized graph
* **got** – The actual graph, as an [`Graph`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph) object or the filename of a serialized graph
* **exact** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Whether to require that the graphs are exactly alike (True) or only that all triples in want exist in got (False)
|
`assertAlmostEqualDatetime`(*datetime1*, *datetime2*, *delta=1*)[[source]](_modules/ferenda/testutil.html#FerendaTestCase.assertAlmostEqualDatetime)[¶](#ferenda.testutil.FerendaTestCase.assertAlmostEqualDatetime)
Assert that two datetime objects are reasonably equal.
| Parameters: | * **datetime1** (*datetime*) – The first datetime to compare
* **datetime2** (*datetime*) – The second datetime to compare
* **delta** ([*int*](https://docs.python.org/3/library/functions.html#int)) – How much the datetimes are allowed to differ, in seconds.
|
`assertEqualXML`(*want*, *got*, *namespace_aware=True*, *tidy_xhtml=False*)[[source]](_modules/ferenda/testutil.html#FerendaTestCase.assertEqualXML)[¶](#ferenda.testutil.FerendaTestCase.assertEqualXML)
Assert that two xml trees are canonically identical.
| Parameters: | * **want** – The XML document as expected, as a string, byte string or ElementTree element
* **got** – The actual XML document, as a string, byte string or ElementTree element
|
`assertEqualDirs`(*want*, *got*, *suffix=None*, *subset=False*, *filterdir='entries'*)[[source]](_modules/ferenda/testutil.html#FerendaTestCase.assertEqualDirs)[¶](#ferenda.testutil.FerendaTestCase.assertEqualDirs)
Assert that two directory trees contain identical files
| Parameters: | * **want** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The expected directory tree
* **got** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The actual directory tree
* **suffix** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – If given, only check files ending in suffix (otherwise check all the files)
* **subset** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – If True, require only that the files in want are a subset of the files in got (otherwise require that the sets are identical)
* **filterdir** – If given, don’t compare the parts of the tree that starts with filterdir
|
*class* `ferenda.testutil.``RepoTesterStore`(*origstore*, *downloaded_file=None*)[[source]](_modules/ferenda/testutil.html#RepoTesterStore)[¶](#ferenda.testutil.RepoTesterStore)
This is an internal class used by RepoTester in order to control where source documents are read from.
*class* `ferenda.testutil.``RepoTester`(*methodName='runTest'*)[[source]](_modules/ferenda/testutil.html#RepoTester)[¶](#ferenda.testutil.RepoTester)
A unittest.TestCase-based convenience class for creating file-based integration tests for an entire docrepo. To use this, you only need a very small amount of boilerplate code, and some files containing data to be downloaded or parsed. The actual tests are dynamically created from these files. The boilerplate can look something like this:
```
class TestRFC(RepoTester):
    repoclass = RFC  # the docrepo class to test
    docroot = os.path.dirname(__file__) + "/files/repo/rfc"

parametrize_repotester(TestRFC)
```
`repoclass`[¶](#ferenda.testutil.RepoTester.repoclass)
alias of `ferenda.documentrepository.DocumentRepository`
`docroot` *= '/tmp'*[¶](#ferenda.testutil.RepoTester.docroot)
The location of test files to create tests from. Must be overridden when creating a testcase class
`setUp`()[[source]](_modules/ferenda/testutil.html#RepoTester.setUp)[¶](#ferenda.testutil.RepoTester.setUp)
Hook method for setting up the test fixture before exercising it.
`tearDown`()[[source]](_modules/ferenda/testutil.html#RepoTester.tearDown)[¶](#ferenda.testutil.RepoTester.tearDown)
Hook method for deconstructing the test fixture after testing it.
`filename_to_basefile`(*filename*)[[source]](_modules/ferenda/testutil.html#RepoTester.filename_to_basefile)[¶](#ferenda.testutil.RepoTester.filename_to_basefile)
Converts a test filename to a basefile. Default implementation attempts to find out basefile from the repoclass being tested
(or rather its documentstore), but returns a hard-coded basefile if it fails.
| Parameters: | **filename** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The test file |
| Returns: | Corresponding basefile |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`download_test`(*specfile*, *basefile=None*)[[source]](_modules/ferenda/testutil.html#RepoTester.download_test)[¶](#ferenda.testutil.RepoTester.download_test)
This test is run for each json file found in docroot/source.
`distill_test`(*downloaded_file*, *rdf_file*, *docroot*)[[source]](_modules/ferenda/testutil.html#RepoTester.distill_test)[¶](#ferenda.testutil.RepoTester.distill_test)
This test is run once for each basefile found in docroot/downloaded. It performs a full parse, and verifies that the distilled RDF metadata is equal to the TTL files placed in docroot/distilled/.
`parse_test`(*downloaded_file*, *xhtml_file*, *docroot*)[[source]](_modules/ferenda/testutil.html#RepoTester.parse_test)[¶](#ferenda.testutil.RepoTester.parse_test)
This test is run once for each basefile found in docroot/downloaded. It performs a full parse, and verifies that the resulting XHTML document is equal to the XHTML file placed in docroot/parsed/.
*class* `ferenda.testutil.``Py23DocChecker`[[source]](_modules/ferenda/testutil.html#Py23DocChecker)[¶](#ferenda.testutil.Py23DocChecker)
Checker to use in conjunction with `doctest.DocTestSuite`.
`check_output`(*want*, *got*, *optionflags*)[[source]](_modules/ferenda/testutil.html#Py23DocChecker.check_output)[¶](#ferenda.testutil.Py23DocChecker.check_output)
Return True iff the actual output from an example (got)
matches the expected output (want). These strings are always considered to match if they are identical; but depending on what option flags the test runner is using,
several non-exact match types are also possible. See the documentation for TestRunner for more information about option flags.
`ferenda.testutil.``parametrize`(*cls*, *template_method*, *name*, *params*, *wrapper=None*)[[source]](_modules/ferenda/testutil.html#parametrize)[¶](#ferenda.testutil.parametrize)
Creates a new test method on a TestCase class, which calls a specific template method with the given parameters (ie. a parametrized test). Given a testcase like this:
```
class MyTest(unittest.TestCase):
    def my_general_test(self, parameter):
        self.assertEqual(parameter, "hello")
```
and the following top-level initialization code:
```
parametrize(MyTest,MyTest.my_general_test, "test_one", ["hello"])
parametrize(MyTest,MyTest.my_general_test, "test_two", ["world"])
```
you end up with a test case class with two methods. Using e.g. `unittest discover` (or any other unittest-compatible test runner), the following should be the result:
```
test_one (test_parametric.MyTest) ... ok
test_two (test_parametric.MyTest) ... FAIL

======================================================================
FAIL: test_two (test_parametric.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "./ferenda/testutil.py", line 365, in test_method
template_method(self, *params)
File "./test_parametric.py", line 6, in my_general_test
self.assertEqual(parameter, "hello")
AssertionError: 'world' != 'hello'
- world
+ hello
```
| Parameters: | * **cls** – The `TestCase` class to add the parametrized test to.
* **template_method** – The method to use for parametrization
* **name** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The name for the new test method
* **params** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – The parameter list (Note: keyword parameters are not supported)
* **wrapper** (*callable*) – A unittest decorator like [`unittest.skip()`](https://docs.python.org/3/library/unittest.html#unittest.skip) or [`unittest.expectedFailure()`](https://docs.python.org/3/library/unittest.html#unittest.expectedFailure).
|
`ferenda.testutil.``file_parametrize`(*cls*, *directory*, *suffix*, *filter=None*, *wrapper=None*)[[source]](_modules/ferenda/testutil.html#file_parametrize)[¶](#ferenda.testutil.file_parametrize)
Creates a test for each file in a given directory. Call with any class that subclasses unittest.TestCase and which has a method called `parametric_test`, eg:
```
class MyTest(unittest.TestCase):
    def parametric_test(self, filename):
        self.assertTrue(os.path.exists(filename))

from ferenda.testutil import file_parametrize

file_parametrize(MyTest, "test/files/legaluri", ".txt")
```
For each .txt file in the directory `test/files/legaluri`, a corresponding test is created, which calls `parametric_test`
with the full path to the .txt file as parameter.
| Parameters: | * **cls** (*class*) – TestCase to add the parametrized test to.
* **directory** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The path to the files to turn into tests
* **suffix** – Suffix of the files that should be turned into tests (other files in the directory are ignored)
* **filter** – A function to be called with the name of each found file. If this function returns True, no test is created
* **wrapper** (*callable*) – A unittest decorator like [`unittest.skip()`](https://docs.python.org/3/library/unittest.html#unittest.skip) or [`unittest.expectedFailure()`](https://docs.python.org/3/library/unittest.html#unittest.expectedFailure).
|
`ferenda.testutil.``parametrize_repotester`(*cls*, *include_failures=True*)[[source]](_modules/ferenda/testutil.html#parametrize_repotester)[¶](#ferenda.testutil.parametrize_repotester)
Helper function to activate a
[`ferenda.testutil.RepoTester`](#ferenda.testutil.RepoTester) based class (see the documentation for that class).
| Parameters: | * **cls** – The RepoTester-based class to create tests on.
* **include_failures** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – Create parse/distill tests even if the corresponding xhtml/ttl files don’t exist
|
`ferenda.testutil.``testparser`(*testcase*, *parser*, *filename*)[[source]](_modules/ferenda/testutil.html#testparser)[¶](#ferenda.testutil.testparser)
Helper function to test [`FSMParser`](index.html#ferenda.FSMParser) based parsers.
Decorators[¶](#decorators)
---
### Decorators[¶](#module-ferenda.decorators)
Most of these decorators are intended to handle various aspects of a complete [`parse()`](index.html#ferenda.DocumentRepository.parse)
implementation. Normally you should only use the
[`managedparsing()`](#ferenda.decorators.managedparsing) decorator (if you even override the basic implementation). If you create separate actions aside from the standard ones (`download`, `parse`, `generate` et al), you should also use [`action()`](#ferenda.decorators.action) so that manage.py will be able to call it.
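A sketch of how the two most commonly used decorators fit into a docrepo (the repository class and its method bodies are placeholders):
```
from ferenda import DocumentRepository
from ferenda.decorators import action, managedparsing
class MyRepo(DocumentRepository):
    @managedparsing
    def parse(self, doc):
        # managedparsing (via makedocument) hands us a Document object
        # rather than a basefile string; build the document contents here.
        ...
        return True  # assumption: signal that parsing succeeded
    @action
    def report(self):
        # a custom action that manager.run() / ferenda-build.py can invoke
        ...
```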
`ferenda.decorators.``timed`(*f*)[[source]](_modules/ferenda/decorators.html#timed)[¶](#ferenda.decorators.timed)
Automatically log a statement of how long the function call takes
`ferenda.decorators.``recordlastdownload`(*f*)[[source]](_modules/ferenda/decorators.html#recordlastdownload)[¶](#ferenda.decorators.recordlastdownload)
Automatically stores current time in `self.config.lastdownload`
`ferenda.decorators.``parseifneeded`(*f*)[[source]](_modules/ferenda/decorators.html#parseifneeded)[¶](#ferenda.decorators.parseifneeded)
Makes sure the parse function is only called if needed, i.e. if the outfile is nonexistent or older than the infile(s), or if the user has specified in the config file or on the command line that it should be re-generated.
`ferenda.decorators.``render`(*f*)[[source]](_modules/ferenda/decorators.html#render)[¶](#ferenda.decorators.render)
Handles the serialization of the [`Document`](index.html#ferenda.Document)
object to XHTML+RDFa and RDF/XML files. Must be used in conjunction with [`makedocument()`](#ferenda.decorators.makedocument).
`ferenda.decorators.``handleerror`(*f*)[[source]](_modules/ferenda/decorators.html#handleerror)[¶](#ferenda.decorators.handleerror)
Make sure any errors in [`ferenda.DocumentRepository.parse()`](index.html#ferenda.DocumentRepository.parse)
are handled appropriately and do not stop the parsing of all documents.
`ferenda.decorators.``makedocument`(*f*)[[source]](_modules/ferenda/decorators.html#makedocument)[¶](#ferenda.decorators.makedocument)
Changes the signature of the parse method to expect a Document object instead of a basefile string, and creates the object.
`ferenda.decorators.``managedparsing`(*f*)[[source]](_modules/ferenda/decorators.html#managedparsing)[¶](#ferenda.decorators.managedparsing)
Use all standard decorators for parse() in the correct order
([`makedocument()`](#ferenda.decorators.makedocument), [`parseifneeded()`](#ferenda.decorators.parseifneeded), [`timed()`](#ferenda.decorators.timed), [`render()`](#ferenda.decorators.render))
`ferenda.decorators.``action`(*f*)[[source]](_modules/ferenda/decorators.html#action)[¶](#ferenda.decorators.action)
Decorator that marks a class or instance method as runnable by
[`ferenda.manager.run()`](index.html#ferenda.manager.run)
`ferenda.decorators.``downloadmax`(*f*)[[source]](_modules/ferenda/decorators.html#downloadmax)[¶](#ferenda.decorators.downloadmax)
Makes any generator respect the `downloadmax` config parameter.
`ferenda.decorators.``newstate`(*state*)[[source]](_modules/ferenda/decorators.html#newstate)[¶](#ferenda.decorators.newstate)
Errors[¶](#errors)
---
### Errors[¶](#module-ferenda.errors)
These are the exceptions thrown by Ferenda. Any of the python built-in exceptions may be thrown as well, but exceptions raised by third-party libraries should be wrapped in one of these.
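A sketch of that convention, wrapping a failure from an external tool in the corresponding ferenda exception (the command invocation is a placeholder):
```
import subprocess
from ferenda.errors import ExternalCommandError
try:
    subprocess.check_call(["pdftohtml", "-xml", "document.pdf"])
except (OSError, subprocess.CalledProcessError) as e:
    # re-raise as a ferenda error so callers only need to handle these
    raise ExternalCommandError(str(e))
```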
*exception* `ferenda.errors.``ParseError`[[source]](_modules/ferenda/errors.html#ParseError)[¶](#ferenda.errors.ParseError)
Raised when [`parse()`](index.html#ferenda.DocumentRepository.parse) fails in any way.
*exception* `ferenda.errors.``FSMStateError`[[source]](_modules/ferenda/errors.html#FSMStateError)[¶](#ferenda.errors.FSMStateError)
Raised whenever the current state and the current symbol in a
[`FSMParser`](index.html#ferenda.FSMParser) configuration does not have a defined transition.
*exception* `ferenda.errors.``DocumentRemovedError`[[source]](_modules/ferenda/errors.html#DocumentRemovedError)[¶](#ferenda.errors.DocumentRemovedError)
Raised whenever a particular document has been found to be removed
– this can happen either during
[`download()`](index.html#ferenda.DocumentRepository.download) or
[`parse()`](index.html#ferenda.DocumentRepository.parse) (which may be the case if there exists a physical document, but whose contents are essentially a placeholder saying that the document has been removed).
You can set the attribute `dummyfile` on this exception when raising it, preferably to the parsed_path that would have been created
had this exception not occurred. If present,
`ferenda-build.py` (or rather [`ferenda.manager.run()`](index.html#ferenda.manager.run)) will use this to create a dummy file at the indicated path. This prevents endless re-parsing of expired documents.
*exception* `ferenda.errors.``PatchError`[[source]](_modules/ferenda/errors.html#PatchError)[¶](#ferenda.errors.PatchError)
Raised if a patch cannot be applied by
[`patch_if_needed()`](index.html#ferenda.DocumentRepository.patch_if_needed).
*exception* `ferenda.errors.``NoDownloadedFileError`[[source]](_modules/ferenda/errors.html#NoDownloadedFileError)[¶](#ferenda.errors.NoDownloadedFileError)
Raised on an attempt to parse a basefile for which there doesn’t exist a downloaded file.
*exception* `ferenda.errors.``AttachmentNameError`[[source]](_modules/ferenda/errors.html#AttachmentNameError)[¶](#ferenda.errors.AttachmentNameError)
Raised whenever an invalid attachment name is used with any method of [`DocumentStore`](index.html#ferenda.DocumentStore).
*exception* `ferenda.errors.``AttachmentPolicyError`[[source]](_modules/ferenda/errors.html#AttachmentPolicyError)[¶](#ferenda.errors.AttachmentPolicyError)
Raised on any attempt to store an attachment using
[`DocumentStore`](index.html#ferenda.DocumentStore) when `storage_policy` is not set to `dir`.
*exception* `ferenda.errors.``ArchivingError`[[source]](_modules/ferenda/errors.html#ArchivingError)[¶](#ferenda.errors.ArchivingError)
Raised whenever an attempt to archive a document version using [`archive()`](index.html#ferenda.DocumentStore.archive) fails (for example, because the archive version
already exists).
*exception* `ferenda.errors.``ValidationError`[[source]](_modules/ferenda/errors.html#ValidationError)[¶](#ferenda.errors.ValidationError)
Raised whenever a created document doesn’t validate using the appropriate schema.
*exception* `ferenda.errors.``TransformError`[[source]](_modules/ferenda/errors.html#TransformError)[¶](#ferenda.errors.TransformError)
Raised whenever a XSLT transformation fails for any reason.
*exception* `ferenda.errors.``ExternalCommandError`[[source]](_modules/ferenda/errors.html#ExternalCommandError)[¶](#ferenda.errors.ExternalCommandError)
Raised whenever any invocation of an external command fails for any reason.
*exception* `ferenda.errors.``ExternalCommandNotFound`[[source]](_modules/ferenda/errors.html#ExternalCommandNotFound)[¶](#ferenda.errors.ExternalCommandNotFound)
Raised whenever an invocation of an external command fails because the command could not be found.
*exception* `ferenda.errors.``ConfigurationError`[[source]](_modules/ferenda/errors.html#ConfigurationError)[¶](#ferenda.errors.ConfigurationError)
Raised when a configuration file cannot be found in its expected location or when it cannot be used due to corruption, file permissions or other reasons
*exception* `ferenda.errors.``TriplestoreError`[[source]](_modules/ferenda/errors.html#TriplestoreError)[¶](#ferenda.errors.TriplestoreError)
Raised whenever communications with the triple store fails, for whatever reason.
*exception* `ferenda.errors.``SparqlError`[[source]](_modules/ferenda/errors.html#SparqlError)[¶](#ferenda.errors.SparqlError)
Raised whenever a SPARQL query fails. The Exception should contain whatever error message that the Triple store returned, so the exact formatting may be dependent on which store is used.
*exception* `ferenda.errors.``IndexingError`[[source]](_modules/ferenda/errors.html#IndexingError)[¶](#ferenda.errors.IndexingError)
Raised whenever an attempt to put text into the fulltext index fails.
*exception* `ferenda.errors.``SearchingError`[[source]](_modules/ferenda/errors.html#SearchingError)[¶](#ferenda.errors.SearchingError)
Raised whenever an attempt to do a full-text search fails.
*exception* `ferenda.errors.``SchemaConflictError`[[source]](_modules/ferenda/errors.html#SchemaConflictError)[¶](#ferenda.errors.SchemaConflictError)
Raised whenever a fulltext index is opened with repo arguments that result in a different schema than what’s currently in use. Work around this by removing the fulltext index and recreating it.
*exception* `ferenda.errors.``SchemaMappingError`[[source]](_modules/ferenda/errors.html#SchemaMappingError)[¶](#ferenda.errors.SchemaMappingError)
Raised whenever a given field in a schema cannot be mapped to or from the underlying native field object in an actual fulltextindex store.
*exception* `ferenda.errors.``MaxDownloadsReached`[[source]](_modules/ferenda/errors.html#MaxDownloadsReached)[¶](#ferenda.errors.MaxDownloadsReached)
Raised whenever a recursive download operation has reached a globally set maximum number of requests.
Document repositories[¶](#document-repositories)
---
### `ferenda.sources.general.Static` – generate documents from your own `.rst` files[¶](#ferenda-sources-general-static-generate-documents-from-your-own-rst-files)
*class* `ferenda.sources.general.``Static`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/general/static.html#Static)[¶](#ferenda.sources.general.Static)
Generates documents from your own `.rst` files
The primary purpose of this docrepo is to provide a small set of static pages for a complete ferenda-based web site, like “About us”, “Contact information”, “Terms of service” or whatever else you need. The `download` step of this docrepo does not do anything, and its `parse` step reads ReStructuredText
(`.rst`) files from a local directory and converts them into XHTML+RDFa. From that point on, it works just like any other docrepo.
After enabling this, you should set the configuration parameter
`staticdir` to the path of a directory where you keep your
`.rst` files:
```
[static]
class = ferenda.sources.general.Static
staticdir = /var/www/mysite/static/rst
```
Note
If this configuration parameter is not set, this docrepo will use a small set of generic static pages, stored under
`ferenda/res/static-pages` in the distribution. To get started, you can just copy this directory and set `staticdir`
to point at your copy.
Every file present in `staticdir` results in a link in the site footer. The link text will be the title of the document, i.e. the first header in the `.rst` file.
### `ferenda.sources.general.Keyword` – generate documents for keywords used by document in other docrepos[¶](#ferenda-sources-general-keyword-generate-documents-for-keywords-used-by-document-in-other-docrepos)
*class* `ferenda.sources.general.``Keyword`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/general/keyword.html#Keyword)[¶](#ferenda.sources.general.Keyword)
Implements support for ‘keyword hubs’, conceptual resources which themselves aren’t related to any document, but to which other documents are related. As an example, if a docrepo has documents that each contains a set of keywords, and the docrepo parse implementation extracts these keywords as `dcterms:subject`
resources, this docrepo creates a document resource for each of those keywords. The main content for the keyword may come from the [`MediaWiki`](index.html#ferenda.sources.general.MediaWiki) docrepo, and all other documents in any of the repos that refer to this concept resource are automatically listed.
### `ferenda.sources.general.MediaWiki` – pull in commentary on documents and keywords from a MediaWiki instance[¶](#ferenda-sources-general-mediawiki-pull-in-commentary-on-documents-and-keywords-from-a-mediawiki-instance)
*class* `ferenda.sources.general.``MediaWiki`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/general/wiki.html#MediaWiki)[¶](#ferenda.sources.general.MediaWiki)
Downloads content from a Mediawiki system and converts it to annotations on other documents.
For efficient downloads, this docrepo requires that there exists an XML dump (created by [dumpBackup.php](http://www.mediawiki.org/wiki/Manual:DumpBackup.php)) of the mediawiki contents that can be fetched over HTTP/HTTPS. Configure the location of this dump using the `mediawikiexport`
parameter:
```
[mediawiki]
class = ferenda.sources.general.MediaWiki
mediawikiexport = http://localhost/wiki/allpages-dump.xml
```
Note
This docrepo relies on the smc.mw module, which doesn’t work on python 2.6, only 2.7 and newer.
### `ferenda.sources.general.Skeleton` – generate skeleton documents for references from other documents[¶](#ferenda-sources-general-skeleton-generate-skeleton-documents-for-references-from-other-documents)
*class* `ferenda.sources.general.``Skeleton`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/general/skeleton.html#Skeleton)[¶](#ferenda.sources.general.Skeleton)
Utility docrepo to fetch all RDF data from a triplestore (either our triple store, or a remote one, fetched through the combined ferenda atom feed), find out those resources that are referred to but not present in the data (usually older documents that are not available in electronic form), and create “skeleton entries” for those resources.
### `ferenda.sources.tech` – repositories for technical standards[¶](#ferenda-sources-tech-repositories-for-technical-standards)
#### `W3Standards`[¶](#w3standards)
*class* `ferenda.sources.tech.``W3Standards`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/tech/w3c.html#W3Standards)[¶](#ferenda.sources.tech.W3Standards)
#### `RFC`[¶](#rfc)
*class* `ferenda.sources.tech.``RFC`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/tech/rfc.html#RFC)[¶](#ferenda.sources.tech.RFC)
### `ferenda.sources.legal.eu` – repositories for EU law[¶](#ferenda-sources-legal-eu-repositories-for-eu-law)
#### `EurlexTreaties`[¶](#eurlextreaties)
*class* `ferenda.sources.legal.eu.``EurlexTreaties`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/eu/eurlextreaties.html#EurlexTreaties)[¶](#ferenda.sources.legal.eu.EurlexTreaties)
Handles the foundation treaties of the European Union.
#### `EurlexCaselaw`[¶](#eurlexcaselaw)
### `ferenda.sources.legal.se` – repositories for Swedish law[¶](#ferenda-sources-legal-se-repositories-for-swedish-law)
#### `ARN`[¶](#arn)
#### `Direktiv`[¶](#direktiv)
##### `direktiv.DirTrips`[¶](#direktiv-dirtrips)
*class* `ferenda.sources.legal.se.direktiv.``DirTrips`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/direktiv.html#DirTrips)[¶](#ferenda.sources.legal.se.direktiv.DirTrips)
Downloads Direktiv in plain text format from <http://rkrattsbaser.gov.se/dir/>
##### `direktiv.DirAsp`[¶](#direktiv-dirasp)
*class* `ferenda.sources.legal.se.direktiv.``DirAsp`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/direktiv.html#DirAsp)[¶](#ferenda.sources.legal.se.direktiv.DirAsp)
Downloads Direktiv in PDF format from <http://rkrattsdb.gov.se/kompdf/>
##### `direktiv.DirRegeringen`[¶](#direktiv-dirregeringen)
*class* `ferenda.sources.legal.se.direktiv.``DirRegeringen`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/direktiv.html#DirRegeringen)[¶](#ferenda.sources.legal.se.direktiv.DirRegeringen)
Downloads Direktiv in PDF format from <http://www.regeringen.se/>
#### `Ds`[¶](#ds)
#### `DV`[¶](#dv)
#### `JK`[¶](#jk)
#### `JO`[¶](#jo)
#### `Kommitte`[¶](#kommitte)
#### `MyndFskr`[¶](#myndfskr)
##### `myndfskr.SJVFS`[¶](#myndfskr-sjvfs)
*class* `ferenda.sources.legal.se.myndfskr.``SJVFS`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/myndfskr.html#SJVFS)[¶](#ferenda.sources.legal.se.myndfskr.SJVFS)
##### `myndfskr.DVFS`[¶](#myndfskr-dvfs)
*class* `ferenda.sources.legal.se.myndfskr.``DVFS`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/myndfskr.html#DVFS)[¶](#ferenda.sources.legal.se.myndfskr.DVFS)
##### `myndfskr.FFFS`[¶](#myndfskr-fffs)
*class* `ferenda.sources.legal.se.myndfskr.``FFFS`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/myndfskr.html#FFFS)[¶](#ferenda.sources.legal.se.myndfskr.FFFS)
##### `myndfskr.ELSAKFS`[¶](#myndfskr-elsakfs)
*class* `ferenda.sources.legal.se.myndfskr.``ELSAKFS`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/myndfskr.html#ELSAKFS)[¶](#ferenda.sources.legal.se.myndfskr.ELSAKFS)
##### `myndfskr.NFS`[¶](#myndfskr-nfs)
*class* `ferenda.sources.legal.se.myndfskr.``NFS`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/myndfskr.html#NFS)[¶](#ferenda.sources.legal.se.myndfskr.NFS)
##### `myndfskr.STAFS`[¶](#myndfskr-stafs)
*class* `ferenda.sources.legal.se.myndfskr.``STAFS`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/myndfskr.html#STAFS)[¶](#ferenda.sources.legal.se.myndfskr.STAFS)
##### `myndfskr.SKVFS`[¶](#myndfskr-skvfs)
*class* `ferenda.sources.legal.se.myndfskr.``SKVFS`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/myndfskr.html#SKVFS)[¶](#ferenda.sources.legal.se.myndfskr.SKVFS)
#### `Propositioner`[¶](#propositioner)
##### `propositioner.PropRegeringen`[¶](#propositioner-propregeringen)
*class* `ferenda.sources.legal.se.propositioner.``PropRegeringen`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/propositioner.html#PropRegeringen)[¶](#ferenda.sources.legal.se.propositioner.PropRegeringen)
##### `propositioner.PropTrips`[¶](#propositioner-proptrips)
*class* `ferenda.sources.legal.se.propositioner.``PropTrips`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/propositioner.html#PropTrips)[¶](#ferenda.sources.legal.se.propositioner.PropTrips)
##### `propositioner.PropRiksdagen`[¶](#propositioner-propriksdagen)
*class* `ferenda.sources.legal.se.propositioner.``PropRiksdagen`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/sources/legal/se/propositioner.html#PropRiksdagen)[¶](#ferenda.sources.legal.se.propositioner.PropRiksdagen)
#### `SFS`[¶](#sfs)
### The `Devel` class[¶](#the-devel-class)
*class* `ferenda.``Devel`(*config=None*, ***kwargs*)[[source]](_modules/ferenda/devel.html#Devel)[¶](#ferenda.Devel)
Collection of utility commands for developing docrepos.
This module acts as a docrepo (and as such is easily callable from
`ferenda-manager.py`), but instead of `download`, `parse`,
`generate` et al, contains various tool commands that are useful for developing and debugging your own docrepo classes.
Use it by first enabling it:
```
./ferenda-build.py ferenda.Devel enable
```
And then run individual tools like:
```
./ferenda-build.py devel dumprdf path/to/xhtml/rdfa.xhtml
```
`alias` *= 'devel'*[¶](#ferenda.Devel.alias)
`dumprdf`(*filename*, *format='turtle'*)[[source]](_modules/ferenda/devel.html#Devel.dumprdf)[¶](#ferenda.Devel.dumprdf)
Extract all RDF data from a parsed file and dump it to stdout.
| Parameters: | * **filename** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Full path of the parsed XHTML+RDFa file.
* **format** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The serialization format for RDF data (same as for [`rdflib.graph.Graph.serialize()`](https://rdflib.readthedocs.io/en/latest/apidocs/rdflib.html#rdflib.graph.Graph.serialize))
|
Example:
```
./ferenda-build.py devel dumprdf path/to/xhtml/rdfa.xhtml nt
```
`dumpstore`(*format='turtle'*)[[source]](_modules/ferenda/devel.html#Devel.dumpstore)[¶](#ferenda.Devel.dumpstore)
Extract all RDF data from the system triplestore and dump it to stdout using the specified format.
| Parameters: | **format** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The serialization format for RDF data (same as for [`ferenda.TripleStore.get_serialized()`](index.html#ferenda.TripleStore.get_serialized)). |
Example:
```
./ferenda-build.py devel dumpstore nt > alltriples.nt
```
`csvinventory`(*alias*)[[source]](_modules/ferenda/devel.html#Devel.csvinventory)[¶](#ferenda.Devel.csvinventory)
Create an inventory of documents, as a CSV file. Only documents that have been parsed and yielded some minimum amount of RDF metadata will be included.
| Parameters: | **alias** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Docrepo alias |
`mkpatch`(*alias*, *basefile*, *description*)[[source]](_modules/ferenda/devel.html#Devel.mkpatch)[¶](#ferenda.Devel.mkpatch)
Create a patch file from downloaded or intermediate files. Before running this tool, you should hand-edit the intermediate file. If your docrepo doesn’t use intermediate files, you should hand-edit the downloaded file instead. The tool will first stash away the intermediate (or downloaded) file, then re-run [`parse()`](index.html#ferenda.DocumentRepository.parse) (or
[`download_single()`](index.html#ferenda.DocumentRepository.download_single)) in order to get a new intermediate (or downloaded) file. It will then calculate the diff between these two versions and save it as a patch file in its proper place (as determined by
`config.patchdir`), where it will be picked up automatically by [`patch_if_needed()`](index.html#ferenda.DocumentRepository.patch_if_needed).
| Parameters: | * **alias** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Docrepo alias
* **basefile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The basefile for the document to patch
|
Example:
```
./ferenda-build.py devel mkpatch myrepo basefile1 "Removed sensitive personal information"
```
`parsestring`(*string*, *citationpattern*, *uriformatter=None*)[[source]](_modules/ferenda/devel.html#Devel.parsestring)[¶](#ferenda.Devel.parsestring)
Parse a string using a named citationpattern and print parse tree and optionally formatted uri(s) on stdout.
| Parameters: | * **string** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The text to parse
* **citationpattern** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The fully qualified name of a citationpattern
* **uriformatter** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The fully qualified name of a uriformatter
|
Note
This is not implemented yet
Example:
```
./ferenda-build.py devel parsestring \
"According to direktiv 2007/42/EU, ..." \
ferenda.citationpatterns.eulaw
```
`fsmparse`(*functionname*, *source*)[[source]](_modules/ferenda/devel.html#Devel.fsmparse)[¶](#ferenda.Devel.fsmparse)
Parse a list of text chunks using a named fsm parser and output the parse tree and final result to stdout.
| Parameters: | * **functionname** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – A function that returns a configured
[`FSMParser`](index.html#ferenda.FSMParser)
* **source** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – A file containing the text chunks, separated by double newlines
|
`queryindex`(*querystring*)[[source]](_modules/ferenda/devel.html#Devel.queryindex)[¶](#ferenda.Devel.queryindex)
Query the system fulltext index and return the IDs/URIs for matching documents.
| Parameters: | **querystring** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The query |
`construct`(*template*, *uri*, *format='turtle'*)[[source]](_modules/ferenda/devel.html#Devel.construct)[¶](#ferenda.Devel.construct)
`select`(*template*, *uri*, *format='json'*)[[source]](_modules/ferenda/devel.html#Devel.select)[¶](#ferenda.Devel.select)
`destroyindex`()[[source]](_modules/ferenda/devel.html#Devel.destroyindex)[¶](#ferenda.Devel.destroyindex)
`documentstore_class`[¶](#ferenda.Devel.documentstore_class)
alias of `DummyStore`
`downloaded_suffix` *= '.html'*[¶](#ferenda.Devel.downloaded_suffix)
`storage_policy` *= 'file'*[¶](#ferenda.Devel.storage_policy)
`get_default_options`()[[source]](_modules/ferenda/devel.html#Devel.get_default_options)[¶](#ferenda.Devel.get_default_options)
`download`()[[source]](_modules/ferenda/devel.html#Devel.download)[¶](#ferenda.Devel.download)
`parse`(*basefile*)[[source]](_modules/ferenda/devel.html#Devel.parse)[¶](#ferenda.Devel.parse)
`relate`(*basefile*)[[source]](_modules/ferenda/devel.html#Devel.relate)[¶](#ferenda.Devel.relate)
`generate`(*basefile*)[[source]](_modules/ferenda/devel.html#Devel.generate)[¶](#ferenda.Devel.generate)
`toc`(*otherrepos*)[[source]](_modules/ferenda/devel.html#Devel.toc)[¶](#ferenda.Devel.toc)
`news`(*otherrepos*)[[source]](_modules/ferenda/devel.html#Devel.news)[¶](#ferenda.Devel.news)
`status`()[[source]](_modules/ferenda/devel.html#Devel.status)[¶](#ferenda.Devel.status)
*classmethod* `setup`(*action*, *config*)[[source]](_modules/ferenda/devel.html#Devel.setup)[¶](#ferenda.Devel.setup)
*classmethod* `teardown`(*action*, *config*)[[source]](_modules/ferenda/devel.html#Devel.teardown)[¶](#ferenda.Devel.teardown)
Changes[¶](#changes)
---
### 0.3.0 (released 2015-02-18)[¶](#released-2015-02-18)
This release adds support for processing things in parallel, both by using multiple processes on a single machine, and also by running
“build clients” on any number of machines, which run jobs managed by a central queue.
Parsing of PDF files has been improved by the [`PDFReader`](index.html#ferenda.PDFReader)
and [`PDFAnalyzer`](index.html#ferenda.PDFAnalyzer) (new) classes. See [PDF documents](index.html#pdfreader).
In addition, a lot of the included repositories have been overhauled. The general repos [`MediaWiki`](index.html#ferenda.sources.general.MediaWiki) and
[`Keyword`](index.html#ferenda.sources.general.Keyword) should be usable for most projects by creating a subclass and configuring it.
#### Backwards-incompatible changes:[¶](#backwards-incompatible-changes)
* DocumentRepository and all derived classes now takes an optional first config argument. If present, this should be a LayeredConfig object that contains the repo configuration. If not provided, a blank LayeredConfig object is created. All other optional keyword arguments are then added to the config object. If you have overridden __init__ for your docrepo, you’ll need to make sure to handle this first argument.
* The Newscriteria class has been removed, and DocumentRepository.news_criteria with it. The Facet framework is now used to define news feeds (as well as TOC pages, the ReST API and fulltext indexing)
* The PDFReader constructor now takes, as first argument, a list of pdfreader.Page objects. Normally, a client won’t have these but must instead provide a filename of a PDF file through the filename argument (which used to be the first argument, but must now be specified as a named argument).
* the getfont() method of pdfreader.Textbox objects used to return a straight dict of strings, but has now been replaced with a font property that is now a LayeredConfig object with proper typing. Code like “int(textbox.getfont()[‘size’])” should now be written like
“textbox.font.size”.
#### New features:[¶](#new-features)
* The default serialization of Element objects to XHTML now inserts appropriate dcterms:isPartOf statements when one element with a URI is contained within another element with another URI. Custom element classes can change this by changing the partrelation property of the included document.
* Serialization of Element documents to XHTML now omits namespaces defined in self.namespaces, but which never actually occur in the data.
* CitationParser.parse_string and .parse_recursive now has an optional predicate argument that determines the RDF predicate between the refering and the referred resources (by default, this is dcterms:references)
* manager (and by extension ./ferenda-build.py) has new commands that allow processing jobs in parallel (see Advanced > Parallel processing)
* The ferenda.sources.general.wiki can now transform mediawiki markup to Element objects.
* The ferenda.sources.general.keyword can be used to build keyword hubs from all concepts that your documents point to through a dcterms:subject property (as well as things in a wiki docrepo, and configurable other sources).
* The ferenda.sources.legal.se docrepos have been updated generally and are now close to being able to replicate the function set of
<https://lagen.nu/> (which was the main motivation with this codebase all along).
* ferenda.testutil.assertEqualXML now has a tidy_xhtml argument which runs the XML documents to be compared through HTML tidy (in XML mode) in order to produce easier-to-read diffs.
* Transformer now outputs the equivalent xsltproc command if the environment variable FERENDA_TRANSFORMDEBUG is set.
* The relate() action now uses dependency management to avoid costly re-indexing if no changes have been made to a document.
* TOC and newsfeed generation now uses dependency management to avoid re-generating if no changes in the underlying data has occurred.
* Documentation in general has been improved (readers, testing).
#### Infrastructural changes:[¶](#infrastructural-changes)
* Ferenda now uses the CI service Appveyor to automatically run the entire test suite under Windows on every commit.
* LayeredConfig is now a separate package and not included with Ferenda. It has been generalized and can take any number of configuration sources (in the form of object instances) as initialization arguments. Classes that provide configuration sources from code defaults, INI files, command line arguments, environment variables and more are included. It also has two new class methods,
.set and .get.
### 0.2.0 (released 2014-07-23)[¶](#released-2014-07-23)
This release adds a REST-based HTTP API and includes a lot of infrastructure to support repo-defined querying and aggregation of arbitrary document properties. This also led to a generalization of the TocCriteria class and associated methods, which are now replaced by the Facet class.
The REST API should be considered an alpha version and is definitely not stable.
#### Backwards-incompatible changes:[¶](#id1)
* The class TocCriteria and the DocumentRepository methods toc_predicates, toc_criteria et al have been removed and replaced with the Facet class and similar methods.
* ferenda.sources.legal.se.direktiv.DirPolopoly and ferenda.sources.legal.se.propositioner.PropPolo has been renamed to
…DirRegeringen and …PropRegeringen, respectively.
#### New features:[¶](#id2)
* A REST API enables clients to do faceted querying (ie document whose properties have specified values), full-text search or combinations.
* Several popular RDF ontologies are included and exposed using the REST API. A docrepo can include custom RDF ontologies that are used in the same way. All ontologies used by a docrepo are available as an RDFLib graph from the .ontologies property
* Docrepos can include extra/common data that describes things which your documents refer to, like companies, publishing entities, print series and abstract things like the topic/keyword of a document. This information is provided in the form of an RDF graph, which is also exposed using the REST API. All common data defined for a docrepo is available as the .commondata property.
* New method DocumentRepository.lookup_resource lookup resource URIs from the common data using foaf:name labels (or any other RDF predicate that you might want to use)
* New class Facet and new methods DocumentRepository.facets,
.faceted_data, facet_query and facet_select to go with that class. These replace the TocCriteria class and the methods DocumentRepository.toc_select, .toc_query, .toc_criteria and
.toc_predicates.
* The WSGI app now provides content negotiation using file extensions as well as the HTTP Accept header, i.e. requesting
“<http://localhost:8000/res/base/123.ttl>” gives the same result as requesting the resource “<http://localhost:8000/res/base/123>” using the
“Accept: text/turtle” header.
* New exceptions ferenda.errors.SchemaConflictError and .SchemaMappingError.
* The FulltextIndex class now creates a schema in the underlying fulltext engine based upon the used docrepos, and the facets that those repos define. The FulltextIndex.update method now takes arbitrary arguments that are stored as separate fields in the fulltext index. Similarly, the FulltextIndex.query method now takes arbitrary arguments that are used to limit the search to only those documents whose properties match the arguments.
* ferenda.Devel has a new 'destroyindex' action which completely removes the fulltext index, which might be needed whenever its schema changes. If you add any new facets, you’ll need to run
“./ferenda-build.py devel destroyindex” followed by
“./ferenda-build.py all relate --all --force”
* The docrepos ferenda.sources.tech.RFC and W3Standards have been updated with their own ontologies and commondata. The result of parse now creates better RDF, in particular things like dcterms:creator and dcterms:subject now point to URIs (defined in commondata) instead of plain string literals.
#### Infrastructural changes:[¶](#id3)
* cssmin is no longer bundled within ferenda. Instead it’s marked as a dependency so that pip/easy_install automatically downloads it from pypi.
* The prefix for DCMI Metadata Terms has been changed from “dct” to
“dcterms” in all code and documentation.
* testutil now has a Py23DocChecker that can be used with doctest.DocTestSuite() to enable single-source doctests that work with both python 2 and 3.
* New method ferenda.util.json_default_date, usable as the default argument of json.dump to serialize datetime object into JSON strings.
### 0.1.7 (released 2014-04-22)[¶](#released-2014-04-22)
This release mainly updates the Swedish legal sources, which now do a decent job of downloading and parsing a variety of legal information. During the course of that work, a number of changes needed to be made to the core of ferenda. The release is still a part of the 0.1 series because the REST API isn’t done yet (once it’s in,
that will be release 0.2)
#### Backwards-incompatible changes:[¶](#id4)
* CompositeRepository.parse now raises ParseError if no subrepository is able to parse the given basefile.
#### New features:[¶](#id5)
* ferenda.CompositeRepository.parse no longer requires that all subrepos have storage_policy == “dir”.
* Setting ferenda.DocumentStore.config now updates the associated DocumentStore object with the config.datadir parameter
* New method ferenda.DocumentRepository.construct_sparql_query()
allows for more complex overrides than just setting the sparql_annotations class attribute.
* New method DocumentRepository.download_is_different() is used to control whether a newly downloaded resource is semantically different from a previously downloaded resource (to avoid having each ASP.Net VIEWSTATE change result in an archived document).
* New method DocumentRepository.parseneeded(): returns True iff parsing of the document is needed (logic moved from ferenda.decorators.parseifneeded)
* New class variable ferenda.DocumentRepository.required_predicates:
Controls which predicates are expected to be in the output data from .parse()
* The method ferenda.DocumentRepository.download_if_needed() now sets both the If-None-match and If-modified-since HTTP headers.
* The method ferenda.DocumentRepository.render_xhtml() now creates RDFa 1.1
* New ‘compress’ parameter (Can either be empty or “bz2”) controls whether intermediate files are compressed to save space.
* The method ferenda.DocumentStore.path() now takes an extra storage_policy parameter.
* The class ferenda.DocumentStore now stores multiple basefiles in a single directory even when storage_policy == “dir” for all methods that cannot handle attachments (like distilled_path,
documententry_path etc)
* New methods ferenda.DocumentStore.open_intermediate(), .serialized_path() and open_serialized()
* The decorator @ferenda.decorators.render (by default called when calling DocumentRepository.parse()) now serialize the entire document to JSON, which later can be loaded to recreate the entire document object tree. Controlled by config parameter serializejson.
* The decorator @ferenda.decorators.render now validates that required triples (as determined by .required_predicates) are present in the output.
* New decorator @ferenda.decorators.newstate, used in ferenda.FSMParser
* The docrepo ferenda.Devel now has a new csvinventory action
* The functions ferenda.Elements.serialize() and .deserialize() now take a format parameter,
which can be either “xml” (default) or “json”. The “json” format allows for full roundtripping of all documents.
* New exception ferenda.errors.NoDownloadedFileError.
* The class ferenda.PDFReader now handles any word processing format that OpenOffice/LibreOffice can handle, by first using soffice to convert it to a PDF. It also handles PDFs that consist entirely of scanned pages without text information, by first running the images through the tesseract OCR engine. Finally, a new keep_xml parameter allows for either removing the intermediate XML files or compressing them using bz2 to save space.
* New method ferenda.PDFReader.is_empty()
* New method ferenda.PDFReader.textboxes() iterates through all textboxes on all pages. The user can provide a glue function to automatically concatenate textboxes that should be considered part of the same paragraph (or other meaningful unit of text).
* New debug method ferenda.PDFReader.drawboxes() can use the same glue function, and creates a new pdf with all the resulting textboxes marked up. (Requires PyPDF2 and reportlab, which makes this particular feature Python 2-only).
* ferenda.PDFReader.Textbox objects can now be added to each other to form larger Textbox objects.
* ferenda.Transformer now optionally logs the equivalent xsltproc command line when transforming using XSLT.
* new method ferenda.TripleStore.update(), performs SPARQL UPDATE/DELETE/DROP/CLEAR queries.
* ferenda.util has new gYearMonth and gYear classes that subclass datetime.date, but are useful when handling RDF literals that should have the datatype xsd:gYearMonth (or xsd:gYear)
### 0.1.6.1 (released 2013-11-13)[¶](#released-2013-11-13)
This hotfix release corrected an error in setup.py that prevented installs when using python 3.
### 0.1.6 (released 2013-11-13)[¶](#id6)
This release mainly contains bug fixes and development infrastructure changes. 95 % of the main code base is covered through the unit test suite, and the examples featured in the documentation are now automatically tested as well. Whenever discrepancies between the map
(documentation) and reality (code) have been found, reality has been adjusted to be in accordance with the map.
The default HTML5 theme has also been updated, and should scale nicely from screen widths ranging from mobile phones in portrait mode to wide-screen desktops. The various bundled css and js files have been upgraded to their most recent versions.
#### Backwards-incompatible changes:[¶](#id7)
* The DocumentStore.open_generated method was removed as no one was using it.
* The (non-documented) modules legalref and legaluri, which were specific to swedish legal references, have been moved into the ferenda.sources.legal.se namespace
* The (non-documented) feature where CSS files specified in the configuration could be in SCSS format, and automatically compiled/transformed, has been removed, since the library used
(pyScss) currently has problems on the Python 3 platform.
#### New features:[¶](#id8)
* The [`ferenda.Devel.mkpatch()`](index.html#ferenda.Devel.mkpatch) command now actually works.
* The republishsource configuration parameter is now available, and controls whether your Atom feeds link to the original document file as it was fetched from the source, or to the parsed version. See
[Configuration](index.html#configuration).
* The entire RDF dataset for a particular docrepo is now available through the ReST API in various formats using the same content negotiation mechanisms as the documents themselves. See [The WSGI app](index.html#document-wsgi).
* ferenda-setup now auto-configures `indextype` (and checks whether ElasticSearch is available, before falling back to Whoosh) in addition to `storetype`.
### 0.1.5 (released 2013-09-29)[¶](#released-2013-09-29)
Documentation, particularly code examples, has been updated to better fit reality. They have also been added to the test suite, so they’re almost guaranteed to be updated when the API changes.
#### Backwards-incompatible changes[¶](#id9)
* Transformation of XHTML1.1+RDFa files to HTML5 is now done using the new Transformer class, instead of the DocumentRepository.transform_to_html method, which has been removed
* DocumentRepository.list_basefiles_for (which was a shortcut for calling list_basefiles_for on the docrepos’ store object) has been removed. Typical change needed:
```
- for basefile in self.list_basefiles_for("parse"):
+ for basefile in self.store.list_basefiles_for("parse"):
```
#### New features:[¶](#id10)
* New ferenda.Transformer class (see above)
* A new decorator, ferenda.decorators.downloadmax, can be used to limit the maximum number of documents that a docrepo will download. It looks for either the “FERENDA_DOWNLOADMAX” environment variable or the downloadmax configuration parameter. This is primarily useful for testing and trying out new docrepos.
* DocumentRepository.render_xhtml will now include RDFa statements for all (non-BNode) subjects in doc.meta, not just the doc.uri subject. This makes it possible to state that a document is written by some person or published by some entity, and then include metadata on that person/entity. It also makes it possible to describe documents related to the main document, using the information gleaned from the main document
* DocumentStore now has a intermediate_path method – previously some derived subclasses implemented their own, but now it’s part of the base class.
* ferenda.errors.DocumentRemovedError now has a dummyfile attribute,
which is used by ferenda.manager.run to avoid endless re-parsing of downloaded files that do not contain an actual document.
* A new shim module, ferenda.compat (modelled after six.moves),
simplifies imports of modules that may or may not be present in the stdlib depending on python version. So far, this includes OrderedDict, unittest and mock.
#### Infrastructural changes:[¶](#id11)
* Most of the bundled document repository classes in ferenda.sources have been overhauled and adapted to the changes that have occurred to the API since the old days.
* Continuous integration and coverage is now set up with Travis-CI
(<https://travis-ci.org/staffanm/ferenda/>) and Coveralls
(<https://coveralls.io/r/staffanm/ferenda>)
### 0.1.4 (released 2013-08-26)[¶](#released-2013-08-26)
* ElasticSearch is now supported as an alternate backend to Whoosh for fulltext indexing and searching.
* Documentation, particularly “Creating your own document repositories”, has been substantially overhauled, and in the process various bugs that prevented the usage of custom SPARQL queries and XSLT transforms were fixed.
* The example RFC docrepo parser has been improved.
### 0.1.3 (released 2013-08-11)[¶](#released-2013-08-11)
* Search functionality when running under WSGI is now implemented. Still a bit basic and not really customizable
(everything is done by manager._wsgi_search), but seems to actually work.
* New docrepo: ferenda.sources.general.Static, for publishing static content (such as “About”, “Contact”, “Legal info”) that goes into the site footer.
* The FulltextIndex class has been split up similarly to TripleStore and the road has been paved to get alternative implementations that connect to other fulltext index servers. ElasticSearch is next up to be implemented, but is not done yet.
* General improvement of documentation
### 0.1.2 (released 2013-08-02)[¶](#released-2013-08-02)
* If using an RDFLib-based triple store (storetype=”SQLITE” or
“SLEEPYCAT”), when generating all documents, all triples are read into memory, which speeds up the SPARQL querying considerably
* The TripleStore class has been overhauled and split into subclasses. Also gained the above inmemory functionality + the possibility of using command-line curl instead of requests when up/downloading large datasets.
* Content-negotiation when using the WSGI app (as described in doc/wsgi.rst) is supported
### 0.1.1 (released 2013-07-27)[¶](#released-2013-07-27)
This release fixes a bug with TOC generation on python 2, creates a correct long_description for pypi and adds some uncommitted CSS improvements. Running the finished site under WSGI is now tested and works ok-ish (although search is still unimplemented).
### 0.1.0 (released 2013-07-26)[¶](#released-2013-07-26)
This is just a test release to test out pypi uploading as well as git branching and tagging. Nevertheless, this code is approaching feature completeness, except that running a finished site under WSGI hasn’t been tested. Generating a static HTML site should work OK-ish.
Indices and tables[¶](#indices-and-tables)
===
* [Index](genindex.html)
* [Module Index](py-modindex.html)
* [Search Page](search.html)
Package ‘abc.data’
October 12, 2022
Type Package
Title Data Only: Tools for Approximate Bayesian Computation (ABC)
Version 1.0
Date 2015-05-04
Depends R (>= 2.10)
Description Contains data which are used by functions of the 'abc' package.
Repository CRAN
License GPL (>= 3)
NeedsCompilation no
Author <NAME> [aut],
<NAME> [aut],
<NAME> [aut],
<NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Date/Publication 2015-05-05 11:34:13
R topics documented:
human
musigma2
ppc
human A set of R objects containing observed data from three human populations, and simulated
data under three different demographic models. The data set is used to illustrate model
selection and parameter inference in an ABC framework (see the vignette of the abc package
for more details).
Description
data(human) loads in four R objects: stat.voight is a data frame with 3 rows and 3 columns
and contains the observed summary statistics for three human populations, stat.3pops.sim is
also a data frame with 150,000 rows and 3 columns and contains the simulated summary statistics,
models is a vector of character strings of length 150,000 and contains the model indices,
par.italy.sim is a data frame with 50,000 rows and 4 columns and contains the parameter values
that were used to simulate data under a population bottleneck model. The corresponding summary
statistics can be subsetted from the stat.3pops.sim object as subset(stat.3pops.sim,
subset=models=="bott").
Usage
data(human)
Format
The stat.voight data frame contains the following columns:
pi The mean nucleotide diversity over 50 loci in 3 human populations, Hausa, Italian, and Chinese.
TajD.m The mean of Tajima’s D statistic over 50 loci in 3 human populations, Hausa, Italian, and
Chinese.
TajD.v The variance of Tajima’s D statistic over 50 loci in 3 human populations, Hausa, Italian,
and Chinese.
Each row corresponds to one of the three observed populations; row names indicate the population.
The stat.3pops.sim data frame contains the following columns:
pi The mean of nucleotide diversity over 50 simulated loci under 3 demographic scenarios:
constant size population, population bottleneck, and population expansion.
TajD.m The mean of Tajima’s D statistic over 50 simulated loci under 3 demographic scenarios:
constant size population, population bottleneck, and population expansion.
TajD.v The variance of Tajima’s D statistic over 50 simulated loci under 3 demographic scenarios:
constant size population, population bottleneck, and population expansion.
Each row represents a simulation. Under each model 50,000 simulations were performed. Row
names indicate the type of demographic model.
The par.italy.sim data frame contains the following columns:
Ne The effective population size.
a The intensity of the bottleneck (i.e. the ratio of the population sizes before and during the
bottleneck).
duration The duration of the bottleneck.
start The start of the bottleneck.
Each row represents a simulation.
models contains the names of the demographic models.
Details
Data is provided to estimate the posterior probabilities of classical demographic scenarios in
three human populations: Hausa, Italian, and Chinese. These three populations represent the three
continents: Africa, Europe, Asia, respectively. par.italy.sim may then be used to estimate the
ancestral population size of the European population assuming a bottleneck model.
It is generally believed that African human populations are expanding, while human populations
from outside of Africa have gone through a population bottleneck. Tajima’s D statistic has been
classically used to detect changes in historical population size. A negative Tajima’s D signifies
an excess of low frequency polymorphisms, indicating population size expansion, while a positive
Tajima’s D indicates low levels of both low and high frequency polymorphisms, thus a sign of a
population bottleneck. In constant size populations, Tajima’s D is expected to be zero.
With the help of the human data one can reach these expected conclusions for the three human
population samples, in accordance with the conclusions of Voight et al. (2005) (from which the
observed statistics were taken), but using ABC.
Source
The observed statistics were taken from Voight et al. 2005 (Table 1.). Also, the same input
parameters were used as in Voight et al. 2005 to simulate data under the three demographic models.
Simulations were performed using the software ms and the summary statistics were calculated using
sample_stats (Hudson 1983).
References
<NAME>, <NAME>, <NAME>, <NAME>, <NAME> and <NAME> (2005) Interrogating multiple aspects of
variation in a full resequencing data set to infer human population size changes. PNAS 102,
18508-18513.
Hudson, <NAME>. (2002) Generating samples under a Wright-Fisher neutral model of genetic variation.
Bioinformatics 18, 337-338.
musigma2 A set of objects used to estimate the population mean and variance in
a Gaussian model with ABC (see the vignette of the abc package for
more details).
Description
musigma2 loads in five R objects: par.sim is a data frame and contains the parameter values of the
simulated data sets, stat is a data frame and contains the simulated summary statistics, stat.obs
is a data frame and contains the observed summary statistics, post.mu and post.sigma2 are data
frames and contain the true posterior distributions for the two parameters of interest, µ and σ²,
respectively.
Usage
data(musigma2)
Format
The par.sim data frame contains the following columns:
mu The population mean.
sigma2 The population variance.
The stat.sim and stat.obs data frames contain the following columns:
mean The sample mean.
var The logarithm of the sample variance.
The post.mu and post.sigma2 data frames contain the following columns:
x the coordinates of the points where the density is estimated.
y the posterior density values.
Details
The prior of σ² is an inverse χ² distribution with one degree of freedom. The prior of µ is a
normal distribution with variance of σ². For this simple example, the closed form of the posterior
distribution is available.
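A minimal sketch of the corresponding ABC estimation, assuming the abc package and its abc() interface are available; the component name unadj.values for the accepted rejection sample is an assumption:
library(abc)
data(musigma2)
post <- abc(target = stat.obs, param = par.sim, sumstat = stat,
            tol = 0.05, method = "rejection")
summary(post)
## compare the ABC sample for mu with the closed-form posterior (post.mu)
hist(post$unadj.values[, "mu"], freq = FALSE, xlab = "mu", main = "")
lines(post.mu$x, post.mu$y)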
Source
The observed statistics are the mean and variance of the sepal of Iris setosa, estimated from part of
the iris data.
The data were collected by <NAME>.
References
<NAME>. (1935). The irises of the Gaspe Peninsula, Bulletin of the American Iris Society, 59,
2-5.
ppc Data to illustrate the posterior predictive checks for the data human.
ppc and human are used to illustrate model selection and parameter
inference in an ABC framework (see the vignette of the abc package
for more details).
Description
data(ppc) loads in the data frame post.bott, which contains the summary statistics calculated
from data simulated a posteriori under the bottleneck model (see data(human) and the package’s
vignette for more details).
Usage
data(ppc)
Format
The post.bott data frame contains the following columns:
pi The mean nucleotide diversity over 50 loci.
TajD.m The mean of Tajima’s D statistic over 50 loci.
TajD.v The variance of Tajima’s D statistic over 50 loci.
Each row represents a simulation. 1000 simulations were performed under the bottleneck model.
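A minimal sketch of a posterior predictive check based on these simulations; the row name "italian" used to pick the observed value from stat.voight is an assumption:
data(human)
data(ppc)
## distribution of pi simulated a posteriori under the bottleneck model,
## with the observed Italian value marked
hist(post.bott$pi, freq = FALSE, xlab = "mean nucleotide diversity", main = "")
abline(v = stat.voight["italian", "pi"], lwd = 2)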
Package ‘rospca’
October 14, 2022
Version 1.0.4
Date 2018-02-26
Title Robust Sparse PCA using the ROSPCA Algorithm
Description Implementation of robust sparse PCA using the ROSPCA algorithm
of Hubert et al. (2016) <DOI:10.1080/00401706.2015.1093962>.
Maintainer <NAME> <<EMAIL>>
Depends R (>= 2.14.0)
Imports stats, graphics, parallel, mrfDepth (>= 1.0.5), robustbase (>=
0.92-6), pcaPP, rrcov, rrcovHD (>= 0.2-3), elasticnet, mvtnorm,
pracma
License GPL (>= 2)
URL https://github.com/TReynkens/rospca
BugReports https://github.com/TReynkens/rospca/issues
ByteCompile yes
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-5516-5107>),
<NAME> [ctb] (Original R code for PcaHubert and diagnostic
plot in rrcov package),
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb]
Repository CRAN
Date/Publication 2018-02-26 08:21:16 UTC
R topics documented:
angle
dataGen
diagPlot
Glass
robpca
rospca
selectLambda
selectPlot
zeroMeasure
angle Standardised last principal angle
Description
Standardised last principal angle between the subspaces generated by the columns of A and B.
Usage
angle(A, B)
Arguments
A Numeric matrix of size p by k.
B Numeric matrix of size q by l.
Details
We compute the last principal angle between the subspaces generated by the columns of A and B
using the algorithm in Bjorck and Golub (1973). This angle takes values between 0 and π/2. We
divide it by π/2 to make it take values between 0 and 1, where 0 indicates that the subspaces are
close.
Value
Standardised last principal angle between A and B.
Author(s)
<NAME>
References
<NAME>. and <NAME>. (1973), “Numerical Methods for Computing Angles Between Linear
Subspaces," Mathematics of Computation, 27, 579–594.
Examples
tmp <- dataGen(m=1)
P <- eigen(tmp$R)$vectors[,1:2]
PP <- rospca(tmp$data[[1]], k=2)$loadings
angle(P, PP)
dataGen Generate sparse data with outliers
Description
Generate sparse data with outliers using simulation scheme detailed in Hubert et al. (2016).
Usage
dataGen(m = 100, n = 100, p = 10, a = c(0.9,0.5,0), bLength = 4, SD = c(10,5,2),
eps = 0, seed = TRUE)
Arguments
m Number of datasets to generate, default is 100.
n Number of observations, default is 100.
p Number of dimensions, default is 10.
a Numeric vector containing the inner group correlations for each block. The
number of useful blocks is thus given by k = length(a) − 1 which should be at
least 2. By default, the correlations are equal to 0.9, 0.5 and 0, respectively.
bLength Length of the blocks of useful variables, default is 4.
SD Numeric vector containing the standard deviations of the blocks of variables,
default is c(10,5,2). Note that SD and a should have the same length.
eps Proportion of contamination, should be between 0 and 0.5. Default is 0 (no
contamination).
seed Logical indicating if a seed is used when generating the datasets, default is TRUE.
Details
Firstly, we generate a correlation matrix such that it has sparse eigenvectors. We design the
correlation matrix to have length(a) = k + 1 groups of variables with no correlation between
variables from different groups. The first k groups consist of bLength variables each. The
correlation between the different variables of the group is equal to a[1] for group 1, a[2] for
group 2, and so on. The (k+1)th group contains the remaining p − k × bLength variables, which we
specify to have correlation a[k+1].
Secondly, the correlation matrix R is transformed into the covariance matrix Σ = V^0.5 · R · V^0.5,
where V = diag(SD^2).
Thirdly, the n observations are generated from a p-variate normal distribution with mean the
p-variate zero-vector and covariance matrix Σ. Standard normally distributed noise terms are also
added to each of the p variables to make the sparse structure of the data harder to detect.
Finally, (100 × eps)% of the data points are randomly replaced by outliers. These outliers are
generated from a p-variate normal distribution as in Croux et al. (2013).
The ith eigenvector of R, for i = 1, ..., k, is given by a (sparse) vector with the
(bLength × (i − 1) + 1)th till the (bLength × i)th elements equal to 1/√bLength and all other
elements equal to zero.
See Hubert et al. (2016) for more details.
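A small check of this sparse eigenvector structure, under the default settings bLength = 4 and a = c(0.9, 0.5, 0):
tmp <- dataGen(m = 1, n = 100, p = 10)
P <- eigen(tmp$R)$vectors[, 1:2]
round(P, 3)
## up to sign, the first eigenvector is 0.5 on variables 1-4 and zero elsewhere,
## the second is 0.5 on variables 5-8 and zero elsewhere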
Value
A list with components:
data List of length m containing all data matrices.
ind List of length m containing the numeric vectors with the indices of the contam-
inated observations.
R Correlation matrix of the data, a numeric matrix of size p by p.
Sigma Covariance matrix of the data (Σ), a numeric matrix of size p by p.
Author(s)
<NAME>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016). “Sparse PCA for High-Dimensional
Data with Outliers,” Technometrics, 58, 424–434.
<NAME>., <NAME>., and <NAME>. (2013), “Robust Sparse Principal Component Analysis,”
Technometrics, 55, 202–214.
Examples
X <- dataGen(m=1, n=100, p=10, eps=0.2, bLength=4)$data[[1]]
resR <- robpca(X, k=2, skew=FALSE)
diagPlot(resR)
diagPlot Diagnostic plot for PCA
Description
Make diagnostic plot using the output from robpca or rospca.
Usage
diagPlot(res, title = "Robust PCA", col = "black", pch = 16, labelOut = TRUE, id = 3)
Arguments
res A list containing the orthogonal distances (od), the score distances (sd) and
their respective cut-offs (cutoff.od and cutoff.sd). Output from robpca or
rospca can for example be used.
title Title of the plot, default is "Robust PCA".
col Colour of the points in the plot, this can be a single colour for all points or a
vector specifying the colour for each point. The default is "black".
pch Plotting characters or symbol used in the plot, see points for more details. The
default is 16 which corresponds to filled circles.
labelOut Logical indicating if outliers should be labelled on the plot, default is TRUE.
id Number of OD outliers and number of SD outliers to label on the plot, default
is 3.
Details
The diagnostic plot contains the score distances on the x-axis and the orthogonal distances on the
y-axis. To detect outliers, cut-offs for both distances are added, see Hubert et al. (2005).
Author(s)
<NAME>, based on R code from <NAME> for the diagnostic plot in rrcov (released
under GPL-3).
References
<NAME>., <NAME>., and <NAME>. (2005), “ROBPCA: A New Approach to
Robust Principal Component Analysis,” Technometrics, 47, 64–79.
Examples
X <- dataGen(m=1, n=100, p=10, eps=0.2, bLength=4)$data[[1]]
resR <- robpca(X, k=2, skew=FALSE)
diagPlot(resR)
Glass Glass data
Description
Glass data of Lemberge et al. (2000) containing Electron Probe X-ray Microanalysis (EPXMA)
intensities for different wavelengths of 16–17th century archaeological glass vessels. This dataset
was also used in Hubert et al. (2005).
Usage
data(Glass)
Format
A data frame with 180 observations and 750 variables. These variables correspond to EPXMA
intensities for different wavelengths and are indicated by V1, V2, ..., V750.
Source
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2000), “Quantitative
Z-Analysis of the 16–17th Century Archaeological Glass Vessels using PLS Regression of EPXMA
and µ-XRF Data," Journal of Chemometrics, 14, 751–763.
References
<NAME>., <NAME>., and <NAME>. (2005), “ROBPCA: A New Approach to
Robust Principal Component Analysis,” Technometrics, 47, 64–79.
Examples
data(Glass)
res <- robpca(Glass, k=4, alpha=0.5)
matplot(res$loadings, type="l", lty=1)
robpca ROBust PCA algorithm
Description
ROBPCA algorithm of Hubert et al. (2005) including reweighting (Engelen et al., 2005) and possible
extension to skewed data (Hubert et al., 2009).
Usage
robpca (x, k = 0, kmax = 10, alpha = 0.75, h = NULL, mcd = FALSE,
ndir = "all", skew = FALSE, ...)
Arguments
x An n by p matrix or data matrix with observations in the rows and variables in
the columns.
k Number of principal components that will be used. When k=0 (default), the
number of components is selected using the criterion in Hubert et al. (2005).
kmax Maximal number of principal components that will be computed, default is 10.
alpha Robustness parameter, default is 0.75.
h The number of outliers the algorithm should resist is given by n − h. Any
value for h between n/2 and n may be specified. Default is NULL which uses
h=ceiling(alpha*n)+1. Do not specify alpha and h at the same time.
mcd Logical indicating if the MCD adaptation of ROBPCA may be applied when the
number of variables is sufficiently small (see Details). If mcd=FALSE (default),
the full ROBPCA algorithm is always applied.
ndir Number of directions used when computing the outlyingness (or the adjusted
outlyingness when skew=TRUE), see outlyingness and adjOutl for more details.
skew Logical indicating if the version for skewed data (Hubert et al., 2009) is applied,
default is FALSE.
... Other arguments to pass to methods.
Details
This function is based extensively on PcaHubert from rrcov and there are two main differences:
The outlyingness measure that is used for non-skewed data (skew=FALSE) is the Stahel-Donoho
measure as described in Hubert et al. (2005) which is also used in PcaHubert. The implementation
in mrfDepth (which is used here) is however much faster than the one in PcaHubert and hence
more, or even all, directions can be considered when computing the outlyingness measure.
Moreover, the extension for skewed data of Hubert et al. (2009) (skew=TRUE) is also implemented
here, but this is not included in PcaHubert.
For an extensive description of the ROBPCA algorithm we refer to Hubert et al. (2005) and to
PcaHubert.
When mcd=TRUE and n < 5 × p, we do not apply the full ROBPCA algorithm. The loadings and
eigenvalues are then computed as the eigenvectors and eigenvalues of the MCD estimator applied
to the data set after the SVD step.
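A minimal sketch of the skew-adjusted variant and of specifying h directly, following the data generation used in the examples below:
X <- dataGen(m = 1, n = 100, p = 10, eps = 0.2, bLength = 4)$data[[1]]
resSkew <- robpca(X, k = 2, skew = TRUE)   # adjusted outlyingness for skewed data
resH    <- robpca(X, k = 2, h = 80)        # resist up to n - h = 20 outliers
diagPlot(resSkew, title = "Skew-adjusted ROBPCA")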
Value
A list with components:
loadings Loadings matrix containing the robust loadings (eigenvectors), a numeric matrix
of size p by k.
eigenvalues Numeric vector of length k containing the robust eigenvalues.
scores Scores matrix (computed as (X − center) · loadings), a numeric matrix of size
n by k.
center Numeric vector of length k containing the centre of the data.
k Number of (chosen) principal components.
H0 Logical vector of size n indicating if an observation is in the initial h-subset.
H1 Logical vector of size n indicating if an observation is kept in the reweighting
step.
alpha The robustness parameter α used throughout the algorithm.
h The h-parameter used throughout the algorithm.
sd Numeric vector of size n containing the robust score distances within the robust
PCA subspace.
od Numeric vector of size n containing the orthogonal distances to the robust PCA
subspace.
cutoff.sd Cut-off value for the robust score distances.
cutoff.od Cut-off value for the orthogonal distances.
flag.sd Numeric vector of size n containing the SD-flags of the observations. The observations
whose score distance is larger than cutoff.sd receive an SD-flag equal to zero. The other
observations receive an SD-flag equal to 1.
flag.od Numeric vector of size n containing the OD-flags of the observations. The observations
whose orthogonal distance is larger than cutoff.od receive an OD-flag equal to zero. The other
observations receive an OD-flag equal to 1.
flag.all Numeric vector of size n containing the flags of the observations. The observations
whose score distance is larger than cutoff.sd or whose orthogonal distance is larger than
cutoff.od can be considered as outliers and receive a flag equal to zero. The regular
observations receive flag 1.
Author(s)
<NAME>, based on R code from <NAME> for PcaHubert in rrcov (released under
GPL-3) and Matlab code from Katrien Van Driessen (for the univariate MCD).
References
<NAME>., <NAME>., and <NAME>. (2005), “ROBPCA: A New Approach to
Robust Principal Component Analysis,” Technometrics, 47, 64–79.
<NAME>., <NAME>. and <NAME>. (2005), “A Comparison of Three Procedures for
Robust PCA in High Dimensions", Austrian Journal of Statistics, 34, 117–126.
<NAME>., <NAME>., and <NAME>. (2009), “Robust PCA for Skewed Data and Its
Outlier Map," Computational Statistics & Data Analysis, 53, 2264–2274.
See Also
PcaHubert, outlyingness, adjOutl
Examples
X <- dataGen(m=1, n=100, p=10, eps=0.2, bLength=4)$data[[1]]
resR <- robpca(X, k=2)
diagPlot(resR)
rospca RObust Sparse PCA algorithm
Description
Sparse robust PCA algorithm based on the ROBPCA algorithm of Hubert et al. (2005).
Usage
rospca(X, k, kmax = 10, alpha = 0.75, h = NULL, ndir = "all", grid = TRUE,
lambda = 10^(-6), sparse = "varnum", para, stand = TRUE, skew = FALSE)
Arguments
X An n by p matrix or data matrix with observations in the rows and variables in
the columns.
k Number of principal components that will be used.
kmax Maximal number of principal components that will be computed, default is 10.
alpha Robustness parameter, default is 0.75.
h The number of outliers the algorithm should resist is given by n − h. Any
value for h between n/2 and n may be specified. Default is NULL which uses
h=ceiling(alpha*n)+1. Do not specify alpha and h at the same time.
ndir Number of directions used when computing the outlyingness (or the adjusted
outlyingness when skew=TRUE), see outlyingness and adjOutl for more details.
grid Logical indicating if the grid version of sparse PCA should be used (SPcaGrid
with method="sd" from rrcovHD). Otherwise, the version of Zou et al. (2006)
is used (spca from elasticnet). Default is TRUE.
lambda Sparsity parameter of SPcaGrid (when grid=TRUE) or ridge parameter of spca
(when grid=FALSE), default is 10^(-6).
sparse Parameter for spca (only used when grid=FALSE), see spca for more details.
para Parameter for spca (only used when grid=FALSE), see spca for more details.
stand If TRUE, the data are standardised robustly in the beginning and classically before
applying sparse PCA. If FALSE, the data are only mean-centred before applying
sparse PCA. Default is TRUE.
skew Logical indicating if the version for skewed data should be applied, default is
FALSE.
Details
The ROSPCA algorithm consists of an outlier detection part (step 1), and a sparsification part (steps
2 and 3). We give an overview of these steps here and refer to Hubert et al. (2016) for more details.
Step 1: This is a robustness step similar to ROBPCA. When a standardisation is appropriate, the
variables are first robustly standardised by means of the componentwise median and the Qn . Using
the singular value decomposition (SVD) of the resulting data matrix, the p-dimensional data space is
reduced to the affine subspace spanned by the n observations. Then, the subset of the h observations
with smallest outlyingness is selected (H0 ). Thereafter, a reweighting step is applied: given the
orthogonal distances to the preliminary PCA subspace determined by the observations in H0 , all
observations with orthogonal distances (ODs) smaller than the corresponding cut-off are kept (H1 ).
Step 2: First, the data points with indices in H1 are standardised using the componentwise median
and the Qn and sparse PCA is applied to them. Then, an additional reweighting step is performed
which incorporates information about the sparse structure of the data. Variables with zero loadings
on all k PCs are discarded and then the orthogonal distances to the estimated sparse PCA subspace
are computed. This yields an index set H2 of observations with orthogonal distance smaller than
the cut-off corresponding to these new orthogonal distances. Thereafter, the subset of observations
with indices in H2 is standardised using the componentwise median and the Qn of the observations
in H1 (the same standardisation as in the first time sparse PCA is applied) and sparse PCA is applied
to them which gives sparse loadings. Adding the discarded zero loadings again gives the loadings
matrix P2 .
Step 3: In the last step, the eigenvalues are estimated robustly by applying the squared Qn estimator (Qn^2) on
the scores of the observations with indices in H2 . In order to robustly estimate the centre, the
score distances are computed and all observations of H2 with a score distance smaller than the
corresponding cut-off are considered, this is the set H3 . Then, the centre is estimated by the mean of
these observations. Finally, the estimates of the eigenvalues are recomputed as the sample variance
of the (new) scores of the observations with indices in H3 . The eigenvalues are sorted in descending
order, so the order of the PCs may change. The columns of the loadings and scores matrices are
changed accordingly.
Note that when it is not necessary to standardise the data, they are only centred as in the scheme
above, but not scaled.
In contrast to Hubert et al. (2016), we allow for SPCA (Zou et al., 2006) to be used as the sparse
PCA method inside ROSPCA (grid=FALSE). Moreover, we also include a skew-adjusted version
of ROSPCA (skew=TRUE) similar to the skew-adjusted version of ROBPCA (Hubert et al., 2009).
This adjusted version is not detailed in Hubert et al. (2016).
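To make the standardisation used in steps 1 and 2 concrete, the following small R sketch centres each variable by its componentwise median and scales it by the Qn estimator. This is only an illustration and assumes the Qn implementation from the robustbase package; it is not part of the rospca interface.
robStand <- function(X) {
  centres <- apply(X, 2, median)          # componentwise median
  scales <- apply(X, 2, robustbase::Qn)   # componentwise Qn scale estimate
  sweep(sweep(X, 2, centres, "-"), 2, scales, "/")
}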
Value
A list with components:
loadings Loadings matrix containing the sparse robust loadings (eigenvectors), a numeric
matrix of size p by k.
eigenvalues Numeric vector of length k containing the robust eigenvalues.
scores Scores matrix (computed as (X − center) · loadings), a numeric matrix of size
n by k.
center Numeric vector of length p containing the centre of the data.
D Matrix used to standardise the data before applying sparse PCA (identity matrix
if stand=FALSE), a numeric matrix of size p by p.
k Number of (chosen) principal components.
H0 Logical vector of size n indicating if an observation is in the initial h-subset.
H1 Logical vector of size n indicating if an observation is kept in the non-sparse
reweighting step (in robust part).
P1 Loadings matrix before applying sparse reweighting step, a numeric matrix of
size p by k.
index Numeric vector containing the indices of the variables that are used in the sparse
reweighting step.
H2 Logical vector of size n indicating if an observation is kept in the sparse reweight-
ing step.
P2 Loadings matrix before estimating eigenvalues, a numeric matrix of size p by k.
H3 Logical vector of size n indicating if an observation is kept in the final SD
reweighting step.
alpha The robustness parameter α used throughout the algorithm.
h The h-parameter used throughout the algorithm.
sd Numeric vector of size n containing the robust score distances within the robust
PCA subspace.
od Numeric vector of size n containing the orthogonal distances to the robust PCA
subspace.
cutoff.sd Cut-off value for the robust score distances.
cutoff.od Cut-off value for the orthogonal distances.
flag.sd Numeric vector of size n containing the SD-flags of the observations. The obser-
vations whose score distance is larger than cutoff.sd receive an SD-flag equal
to zero. The other observations receive an SD-flag equal to 1.
flag.od Numeric vector of size n containing the OD-flags of the observations. The ob-
servations whose orthogonal distance is larger than cutoff.od receive an OD-
flag equal to zero. The other observations receive an OD-flag equal to 1.
flag.all Numeric vector of size n containing the flags of the observations. The observa-
tions whose score distance is larger than cutoff.sd or whose orthogonal dis-
tance is larger than cutoff.od can be considered as outliers and receive a flag
equal to zero. The regular observations receive flag 1.
Author(s)
<NAME>, based on R code from <NAME> for PcaHubert in rrcov (released under
GPL-3) and Matlab code from <NAME> (for the univariate MCD).
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016). “Sparse PCA for High-Dimensional
Data with Outliers,” Technometrics, 58, 424–434.
<NAME>., <NAME>., and <NAME>. (2005), “ROBPCA: A New Approach to
Robust Principal Component Analysis,” Technometrics, 47, 64–79.
<NAME>., <NAME>., and <NAME>. (2009), “Robust PCA for Skewed Data and Its
Outlier Map," Computational Statistics & Data Analysis, 53, 2264–2274.
<NAME>., <NAME>., and <NAME>. (2013), “Robust Sparse Principal Component Analysis,”
Technometrics, 55, 202–214.
<NAME>., <NAME>., and <NAME>. (2006), “Sparse Principal Component Analysis,” Journal of
Computational and Graphical Statistics, 15, 265–286.
See Also
PcaHubert, robpca, outlyingness, adjOutl, SPcaGrid, spca
Examples
X <- dataGen(m=1, n=100, p=10, eps=0.2, bLength=4)$data[[1]]
resRS <- rospca(X, k=2, lambda=0.4, stand=TRUE)
diagPlot(resRS)
selectLambda Selection of sparsity parameter using IC
Description
Selection of the sparsity parameter for ROSPCA and SCoTLASS using BIC of Hubert et al. (2016),
and for SRPCA using BIC of Croux et al. (2013).
Usage
selectLambda(X, k, kmax = 10, method = "ROSPCA", lmin = 0, lmax = 2, lstep = 0.02,
alpha = 0.75, stand = TRUE, skew = FALSE, multicore = FALSE,
mc.cores = NULL, P = NULL, ndir = "all")
Arguments
X An n by p matrix or data matrix with observations in the rows and variables in
the columns.
k Number of Principal Components (PCs).
kmax Maximal number of PCs to be computed, only used when method = "ROSPCA"
or method = "ROSPCAg". Default is 10.
method PCA method to use: ROSPCA ("ROSPCA" or "ROSPCAg"), SCoTLASS ("SCoTLASS"
or "SPCAg") or SRPCA ("SRPCA"). Default is "ROSPCA".
lmin Minimal value of λ to look at, default is 0.
lmax Maximal value of λ to look at, default is 2.
lstep Difference between two consecutive values of λ, i.e. the step size, default is
0.02.
alpha Robustness parameter for ROSPCA, default is 0.75.
stand Logical indicating if the data should be standardised, default is TRUE.
skew Logical indicating if the skewed version of ROSPCA should be applied, default
is FALSE.
multicore Logical indicating if multiple cores can be used, default is FALSE. Note that this
is not possible on the Windows platform, so multicore is always FALSE there.
mc.cores Number of cores to use if multicore=TRUE, default is NULL which corresponds
to the number of cores minus 1.
P True loadings matrix, a numeric matrix of size p by k. The default is NULL which
means that no true loadings matrix is specified.
ndir Number of directions used when computing the outlyingness (or the adjusted
outlyingness when skew=TRUE) in rospca, see outlyingness and adjOutl for
more details.
Details
We select an optimal value of λ for a certain method on a certain dataset by looking at an equidistant
grid of λ values. For each value of λ, we apply the method on the dataset using this sparsity
parameter, and compute an Information Criterion (IC). The optimal value of λ is then the one
corresponding to the minimal IC. The ICs we consider are the BIC of Hubert et al. (2016) for
ROSPCA and SCoTLASS, and the BIC of Croux et al. (2013) for SRPCA. The BIC of Hubert et
al. (2016) is defined as
BIC(λ) = ln( (1/(h1 p)) Σ_{i=1}^{h1} OD_{(i)}^2(λ) ) + df(λ) ln(h1 p)/(h1 p),
where h1 is the size of H1 (the subset of observations that are kept in the non-sparse reweighting
step) and OD(i) (λ) is the ith smallest orthogonal distance for the model when using λ as the sparsity
parameter. The degrees of freedom df (λ) are the number of non-zero loadings when λ is used as
the sparsity parameter.
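In practice, the selected value is then passed back to rospca; a short sketch of this workflow, reusing the data generator from the examples:
X <- dataGen(m=1, n=100, p=10, eps=0.2, bLength=4)$data[[1]]
sl <- selectLambda(X, k=2, method="ROSPCA", lstep=0.1)
fit <- rospca(X, k=2, lambda=sl$opt.lambda, stand=TRUE)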
Value
A list with components:
opt.lambda Value of λ corresponding to minimal IC.
min.IC Minimal value of IC.
Lambda Numeric vector containing the used values of λ.
IC Numeric vector containing the IC values corresponding to all values of λ in
Lambda.
loadings Loadings obtained using method with sparsity parameter opt.lambda, a nu-
meric matrix of size p by k.
fit Fit obtained using method with sparsity parameter opt.lambda. This is a list
containing the loadings (loadings), the eigenvalues (eigenvalues), the stan-
dardised data matrix used as input (Xst), the scores matrix (scores), the orthog-
onal distances (od) and the score distances (sd).
type Type of IC used: BICod (BIC of Hubert et al. (2016)) or BIC (BIC of Croux et
al. (2013)).
measure A numeric vector containing the standardised angles between the true and the
estimated loadings matrix for each value of λ if a loadings matrix is given. When
no loadings matrix is given as input (P=NULL), measure is equal to NULL.
Author(s)
<NAME>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016). “Sparse PCA for High-Dimensional
Data with Outliers,” Technometrics, 58, 424–434.
<NAME>., <NAME>., and <NAME>. (2013), “Robust Sparse Principal Component Analysis,”
Technometrics, 55, 202–214.
See Also
selectPlot, mclapply, angle
Examples
X <- dataGen(m=1, n=100, p=10, eps=0.2, bLength=4)$data[[1]]
sl <- selectLambda(X, k=2, method="ROSPCA", lstep=0.1)
selectPlot(sl)
selectPlot Selection plot
Description
Plot Information Criterion (IC) versus values of the sparsity parameter λ.
Usage
selectPlot(sl, indicate = TRUE, main = NULL)
Arguments
sl Output from selectLambda function.
indicate Logical indicating if the value of λ corresponding to the minimal IC is indicated
on the plot, default is TRUE.
main Title for the plot, default is NULL (no title).
Author(s)
<NAME>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016). “Sparse PCA for High-Dimensional
Data with Outliers,” Technometrics, 58, 424–434.
See Also
selectLambda
Examples
X <- dataGen(m=1, n=100, p=10, eps=0.2, bLength=4)$data[[1]]
sl <- selectLambda(X, k=2, method="ROSPCA", lstep=0.1)
selectPlot(sl)
zeroMeasure Zero measure
Description
Compute the average zero measures and total zero measure for a list of matrices.
Usage
zeroMeasure(Plist, P, prec = 10^(-5))
Arguments
Plist List of estimated loadings matrices or a single estimated loadings matrix. All
these matrices should be numeric matrices of size p by k.
P True loadings matrix, a numeric matrix of size p by k.
prec Precision used when determining if an element is non-zero, default is 10−5 . We
say that all elements with an absolute value smaller than prec are “equal to
zero”.
Details
The zero measure is a way to compare how correctly a PCA method estimates the sparse loadings
matrix P. For each element of an estimated loadings matrix, it is equal to one if the estimated
and true value are both zero or both non-zero, and zero otherwise. We then take the average zero
measure over all elements of an estimated loadings matrix and over all estimated loadings matrices
which we call the total zero measure.
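For a single estimated loadings matrix, the elementwise zero measure described above can be computed directly; a minimal sketch (zm_single is an illustrative helper, not part of the package):
zm_single <- function(Pest, P, prec = 10^(-5)) {
  # 1 if the estimated and true loadings are both zero or both non-zero, 0 otherwise
  (abs(Pest) < prec) == (abs(P) < prec)
}
mean(zm_single(matrix(1, 2, 2), cbind(c(1, 1), c(0, 1))))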
Value
A list with components:
measure Numeric matrix of size p by k containing the average zero measure over all
length(Plist) simulations for each element of P.
index Numeric vector containing the indices of all data sets where the estimate was
wrong (at least one of the zero measures for the elements of an estimated load-
ings matrix is equal to 0).
total Total zero measure, i.e. the average zero measure over all elements of an esti-
mated loadings matrix and over all estimated loadings matrices.
Author(s)
<NAME>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (2016). “Sparse PCA for High-Dimensional
Data with Outliers,” Technometrics, 58, 424–434.
Examples
P <- cbind(c(1,1), c(0,1))
Plist <- list(matrix(1,2,2), P)
zeroMeasure(Plist, P)
github.com/aws/aws-sdk-go-v2/service/appconfig
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package appconfig provides the API client, operations, and parameter types for Amazon AppConfig.
Use AppConfig, a capability of Amazon Web Services Systems Manager, to create,
manage, and quickly deploy application configurations. AppConfig supports controlled deployments to applications of any size and includes built-in validation checks and monitoring. You can use AppConfig with applications hosted on Amazon EC2 instances, Lambda, containers, mobile applications, or IoT devices. To prevent errors when deploying application configurations, especially for production systems where a simple typo could cause an unexpected outage,
AppConfig includes validators. A validator provides a syntactic or semantic check to ensure that the configuration you want to deploy works as intended. To validate your application configuration data, you provide a schema or an Amazon Web Services Lambda function that runs against the configuration. The configuration deployment or update can only proceed when the configuration data is valid. During a configuration deployment, AppConfig monitors the application to ensure that the deployment is successful. If the system encounters an error,
AppConfig rolls back the change to minimize impact for your application users.
You can configure a deployment strategy for each application or environment that includes deployment criteria, including velocity, bake time, and alarms to monitor. Similar to error monitoring, if a deployment triggers an alarm,
AppConfig automatically rolls back to the previous version. AppConfig supports multiple use cases. Here are some examples:
* Feature flags: Use AppConfig to turn on new features that require a timely deployment, such as a product launch or announcement.
* Application tuning: Use AppConfig to carefully introduce changes to your application that can only be tested with production traffic.
* Allow list: Use AppConfig to allow premium subscribers to access paid content.
* Operational issues: Use AppConfig to reduce stress on your application when a dependency or other external factor impacts the system.
This reference is intended to be used with the AppConfig User Guide (<http://docs.aws.amazon.com/appconfig/latest/userguide/what-is-appconfig.html>).
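A minimal usage sketch (not part of the generated documentation): it assumes default credentials and a region are available to config.LoadDefaultConfig, and lists the applications in the account using the ListApplications paginator described further below.
```
package main
import (
"context"
"fmt"
"log"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/service/appconfig"
)
func main() {
// Load the shared AWS configuration (credentials, region, ...).
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
log.Fatal(err)
}
// Create the AppConfig client from the loaded configuration.
client := appconfig.NewFromConfig(cfg)
// Page through all applications in the account.
p := appconfig.NewListApplicationsPaginator(client, &appconfig.ListApplicationsInput{})
for p.HasMorePages() {
page, err := p.NextPage(context.TODO())
if err != nil {
log.Fatal(err)
}
for _, app := range page.Items {
fmt.Println(aws.ToString(app.Name))
}
}
}
```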
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [func NewDefaultEndpointResolver() *internalendpoints.Resolver](#NewDefaultEndpointResolver)
* [func WithAPIOptions(optFns ...func(*middleware.Stack) error) func(*Options)](#WithAPIOptions)
* [func WithEndpointResolver(v EndpointResolver) func(*Options)](#WithEndpointResolver)deprecated
* [func WithEndpointResolverV2(v EndpointResolverV2) func(*Options)](#WithEndpointResolverV2)
* [type Client](#Client)
* + [func New(options Options, optFns ...func(*Options)) *Client](#New)
+ [func NewFromConfig(cfg aws.Config, optFns ...func(*Options)) *Client](#NewFromConfig)
* + [func (c *Client) CreateApplication(ctx context.Context, params *CreateApplicationInput, optFns ...func(*Options)) (*CreateApplicationOutput, error)](#Client.CreateApplication)
+ [func (c *Client) CreateConfigurationProfile(ctx context.Context, params *CreateConfigurationProfileInput, ...) (*CreateConfigurationProfileOutput, error)](#Client.CreateConfigurationProfile)
+ [func (c *Client) CreateDeploymentStrategy(ctx context.Context, params *CreateDeploymentStrategyInput, ...) (*CreateDeploymentStrategyOutput, error)](#Client.CreateDeploymentStrategy)
+ [func (c *Client) CreateEnvironment(ctx context.Context, params *CreateEnvironmentInput, optFns ...func(*Options)) (*CreateEnvironmentOutput, error)](#Client.CreateEnvironment)
+ [func (c *Client) CreateExtension(ctx context.Context, params *CreateExtensionInput, optFns ...func(*Options)) (*CreateExtensionOutput, error)](#Client.CreateExtension)
+ [func (c *Client) CreateExtensionAssociation(ctx context.Context, params *CreateExtensionAssociationInput, ...) (*CreateExtensionAssociationOutput, error)](#Client.CreateExtensionAssociation)
+ [func (c *Client) CreateHostedConfigurationVersion(ctx context.Context, params *CreateHostedConfigurationVersionInput, ...) (*CreateHostedConfigurationVersionOutput, error)](#Client.CreateHostedConfigurationVersion)
+ [func (c *Client) DeleteApplication(ctx context.Context, params *DeleteApplicationInput, optFns ...func(*Options)) (*DeleteApplicationOutput, error)](#Client.DeleteApplication)
+ [func (c *Client) DeleteConfigurationProfile(ctx context.Context, params *DeleteConfigurationProfileInput, ...) (*DeleteConfigurationProfileOutput, error)](#Client.DeleteConfigurationProfile)
+ [func (c *Client) DeleteDeploymentStrategy(ctx context.Context, params *DeleteDeploymentStrategyInput, ...) (*DeleteDeploymentStrategyOutput, error)](#Client.DeleteDeploymentStrategy)
+ [func (c *Client) DeleteEnvironment(ctx context.Context, params *DeleteEnvironmentInput, optFns ...func(*Options)) (*DeleteEnvironmentOutput, error)](#Client.DeleteEnvironment)
+ [func (c *Client) DeleteExtension(ctx context.Context, params *DeleteExtensionInput, optFns ...func(*Options)) (*DeleteExtensionOutput, error)](#Client.DeleteExtension)
+ [func (c *Client) DeleteExtensionAssociation(ctx context.Context, params *DeleteExtensionAssociationInput, ...) (*DeleteExtensionAssociationOutput, error)](#Client.DeleteExtensionAssociation)
+ [func (c *Client) DeleteHostedConfigurationVersion(ctx context.Context, params *DeleteHostedConfigurationVersionInput, ...) (*DeleteHostedConfigurationVersionOutput, error)](#Client.DeleteHostedConfigurationVersion)
+ [func (c *Client) GetApplication(ctx context.Context, params *GetApplicationInput, optFns ...func(*Options)) (*GetApplicationOutput, error)](#Client.GetApplication)
+ [func (c *Client) GetConfiguration(ctx context.Context, params *GetConfigurationInput, optFns ...func(*Options)) (*GetConfigurationOutput, error)](#Client.GetConfiguration)deprecated
+ [func (c *Client) GetConfigurationProfile(ctx context.Context, params *GetConfigurationProfileInput, ...) (*GetConfigurationProfileOutput, error)](#Client.GetConfigurationProfile)
+ [func (c *Client) GetDeployment(ctx context.Context, params *GetDeploymentInput, optFns ...func(*Options)) (*GetDeploymentOutput, error)](#Client.GetDeployment)
+ [func (c *Client) GetDeploymentStrategy(ctx context.Context, params *GetDeploymentStrategyInput, ...) (*GetDeploymentStrategyOutput, error)](#Client.GetDeploymentStrategy)
+ [func (c *Client) GetEnvironment(ctx context.Context, params *GetEnvironmentInput, optFns ...func(*Options)) (*GetEnvironmentOutput, error)](#Client.GetEnvironment)
+ [func (c *Client) GetExtension(ctx context.Context, params *GetExtensionInput, optFns ...func(*Options)) (*GetExtensionOutput, error)](#Client.GetExtension)
+ [func (c *Client) GetExtensionAssociation(ctx context.Context, params *GetExtensionAssociationInput, ...) (*GetExtensionAssociationOutput, error)](#Client.GetExtensionAssociation)
+ [func (c *Client) GetHostedConfigurationVersion(ctx context.Context, params *GetHostedConfigurationVersionInput, ...) (*GetHostedConfigurationVersionOutput, error)](#Client.GetHostedConfigurationVersion)
+ [func (c *Client) ListApplications(ctx context.Context, params *ListApplicationsInput, optFns ...func(*Options)) (*ListApplicationsOutput, error)](#Client.ListApplications)
+ [func (c *Client) ListConfigurationProfiles(ctx context.Context, params *ListConfigurationProfilesInput, ...) (*ListConfigurationProfilesOutput, error)](#Client.ListConfigurationProfiles)
+ [func (c *Client) ListDeploymentStrategies(ctx context.Context, params *ListDeploymentStrategiesInput, ...) (*ListDeploymentStrategiesOutput, error)](#Client.ListDeploymentStrategies)
+ [func (c *Client) ListDeployments(ctx context.Context, params *ListDeploymentsInput, optFns ...func(*Options)) (*ListDeploymentsOutput, error)](#Client.ListDeployments)
+ [func (c *Client) ListEnvironments(ctx context.Context, params *ListEnvironmentsInput, optFns ...func(*Options)) (*ListEnvironmentsOutput, error)](#Client.ListEnvironments)
+ [func (c *Client) ListExtensionAssociations(ctx context.Context, params *ListExtensionAssociationsInput, ...) (*ListExtensionAssociationsOutput, error)](#Client.ListExtensionAssociations)
+ [func (c *Client) ListExtensions(ctx context.Context, params *ListExtensionsInput, optFns ...func(*Options)) (*ListExtensionsOutput, error)](#Client.ListExtensions)
+ [func (c *Client) ListHostedConfigurationVersions(ctx context.Context, params *ListHostedConfigurationVersionsInput, ...) (*ListHostedConfigurationVersionsOutput, error)](#Client.ListHostedConfigurationVersions)
+ [func (c *Client) ListTagsForResource(ctx context.Context, params *ListTagsForResourceInput, ...) (*ListTagsForResourceOutput, error)](#Client.ListTagsForResource)
+ [func (c *Client) StartDeployment(ctx context.Context, params *StartDeploymentInput, optFns ...func(*Options)) (*StartDeploymentOutput, error)](#Client.StartDeployment)
+ [func (c *Client) StopDeployment(ctx context.Context, params *StopDeploymentInput, optFns ...func(*Options)) (*StopDeploymentOutput, error)](#Client.StopDeployment)
+ [func (c *Client) TagResource(ctx context.Context, params *TagResourceInput, optFns ...func(*Options)) (*TagResourceOutput, error)](#Client.TagResource)
+ [func (c *Client) UntagResource(ctx context.Context, params *UntagResourceInput, optFns ...func(*Options)) (*UntagResourceOutput, error)](#Client.UntagResource)
+ [func (c *Client) UpdateApplication(ctx context.Context, params *UpdateApplicationInput, optFns ...func(*Options)) (*UpdateApplicationOutput, error)](#Client.UpdateApplication)
+ [func (c *Client) UpdateConfigurationProfile(ctx context.Context, params *UpdateConfigurationProfileInput, ...) (*UpdateConfigurationProfileOutput, error)](#Client.UpdateConfigurationProfile)
+ [func (c *Client) UpdateDeploymentStrategy(ctx context.Context, params *UpdateDeploymentStrategyInput, ...) (*UpdateDeploymentStrategyOutput, error)](#Client.UpdateDeploymentStrategy)
+ [func (c *Client) UpdateEnvironment(ctx context.Context, params *UpdateEnvironmentInput, optFns ...func(*Options)) (*UpdateEnvironmentOutput, error)](#Client.UpdateEnvironment)
+ [func (c *Client) UpdateExtension(ctx context.Context, params *UpdateExtensionInput, optFns ...func(*Options)) (*UpdateExtensionOutput, error)](#Client.UpdateExtension)
+ [func (c *Client) UpdateExtensionAssociation(ctx context.Context, params *UpdateExtensionAssociationInput, ...) (*UpdateExtensionAssociationOutput, error)](#Client.UpdateExtensionAssociation)
+ [func (c *Client) ValidateConfiguration(ctx context.Context, params *ValidateConfigurationInput, ...) (*ValidateConfigurationOutput, error)](#Client.ValidateConfiguration)
* [type CreateApplicationInput](#CreateApplicationInput)
* [type CreateApplicationOutput](#CreateApplicationOutput)
* [type CreateConfigurationProfileInput](#CreateConfigurationProfileInput)
* [type CreateConfigurationProfileOutput](#CreateConfigurationProfileOutput)
* [type CreateDeploymentStrategyInput](#CreateDeploymentStrategyInput)
* [type CreateDeploymentStrategyOutput](#CreateDeploymentStrategyOutput)
* [type CreateEnvironmentInput](#CreateEnvironmentInput)
* [type CreateEnvironmentOutput](#CreateEnvironmentOutput)
* [type CreateExtensionAssociationInput](#CreateExtensionAssociationInput)
* [type CreateExtensionAssociationOutput](#CreateExtensionAssociationOutput)
* [type CreateExtensionInput](#CreateExtensionInput)
* [type CreateExtensionOutput](#CreateExtensionOutput)
* [type CreateHostedConfigurationVersionInput](#CreateHostedConfigurationVersionInput)
* [type CreateHostedConfigurationVersionOutput](#CreateHostedConfigurationVersionOutput)
* [type DeleteApplicationInput](#DeleteApplicationInput)
* [type DeleteApplicationOutput](#DeleteApplicationOutput)
* [type DeleteConfigurationProfileInput](#DeleteConfigurationProfileInput)
* [type DeleteConfigurationProfileOutput](#DeleteConfigurationProfileOutput)
* [type DeleteDeploymentStrategyInput](#DeleteDeploymentStrategyInput)
* [type DeleteDeploymentStrategyOutput](#DeleteDeploymentStrategyOutput)
* [type DeleteEnvironmentInput](#DeleteEnvironmentInput)
* [type DeleteEnvironmentOutput](#DeleteEnvironmentOutput)
* [type DeleteExtensionAssociationInput](#DeleteExtensionAssociationInput)
* [type DeleteExtensionAssociationOutput](#DeleteExtensionAssociationOutput)
* [type DeleteExtensionInput](#DeleteExtensionInput)
* [type DeleteExtensionOutput](#DeleteExtensionOutput)
* [type DeleteHostedConfigurationVersionInput](#DeleteHostedConfigurationVersionInput)
* [type DeleteHostedConfigurationVersionOutput](#DeleteHostedConfigurationVersionOutput)
* [type EndpointParameters](#EndpointParameters)
* + [func (p EndpointParameters) ValidateRequired() error](#EndpointParameters.ValidateRequired)
+ [func (p EndpointParameters) WithDefaults() EndpointParameters](#EndpointParameters.WithDefaults)
* [type EndpointResolver](#EndpointResolver)
* + [func EndpointResolverFromURL(url string, optFns ...func(*aws.Endpoint)) EndpointResolver](#EndpointResolverFromURL)
* [type EndpointResolverFunc](#EndpointResolverFunc)
* + [func (fn EndpointResolverFunc) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error)](#EndpointResolverFunc.ResolveEndpoint)
* [type EndpointResolverOptions](#EndpointResolverOptions)
* [type EndpointResolverV2](#EndpointResolverV2)
* + [func NewDefaultEndpointResolverV2() EndpointResolverV2](#NewDefaultEndpointResolverV2)
* [type GetApplicationInput](#GetApplicationInput)
* [type GetApplicationOutput](#GetApplicationOutput)
* [type GetConfigurationInput](#GetConfigurationInput)
* [type GetConfigurationOutput](#GetConfigurationOutput)
* [type GetConfigurationProfileInput](#GetConfigurationProfileInput)
* [type GetConfigurationProfileOutput](#GetConfigurationProfileOutput)
* [type GetDeploymentInput](#GetDeploymentInput)
* [type GetDeploymentOutput](#GetDeploymentOutput)
* [type GetDeploymentStrategyInput](#GetDeploymentStrategyInput)
* [type GetDeploymentStrategyOutput](#GetDeploymentStrategyOutput)
* [type GetEnvironmentInput](#GetEnvironmentInput)
* [type GetEnvironmentOutput](#GetEnvironmentOutput)
* [type GetExtensionAssociationInput](#GetExtensionAssociationInput)
* [type GetExtensionAssociationOutput](#GetExtensionAssociationOutput)
* [type GetExtensionInput](#GetExtensionInput)
* [type GetExtensionOutput](#GetExtensionOutput)
* [type GetHostedConfigurationVersionInput](#GetHostedConfigurationVersionInput)
* [type GetHostedConfigurationVersionOutput](#GetHostedConfigurationVersionOutput)
* [type HTTPClient](#HTTPClient)
* [type HTTPSignerV4](#HTTPSignerV4)
* [type ListApplicationsAPIClient](#ListApplicationsAPIClient)
* [type ListApplicationsInput](#ListApplicationsInput)
* [type ListApplicationsOutput](#ListApplicationsOutput)
* [type ListApplicationsPaginator](#ListApplicationsPaginator)
* + [func NewListApplicationsPaginator(client ListApplicationsAPIClient, params *ListApplicationsInput, ...) *ListApplicationsPaginator](#NewListApplicationsPaginator)
* + [func (p *ListApplicationsPaginator) HasMorePages() bool](#ListApplicationsPaginator.HasMorePages)
+ [func (p *ListApplicationsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListApplicationsOutput, error)](#ListApplicationsPaginator.NextPage)
* [type ListApplicationsPaginatorOptions](#ListApplicationsPaginatorOptions)
* [type ListConfigurationProfilesAPIClient](#ListConfigurationProfilesAPIClient)
* [type ListConfigurationProfilesInput](#ListConfigurationProfilesInput)
* [type ListConfigurationProfilesOutput](#ListConfigurationProfilesOutput)
* [type ListConfigurationProfilesPaginator](#ListConfigurationProfilesPaginator)
* + [func NewListConfigurationProfilesPaginator(client ListConfigurationProfilesAPIClient, ...) *ListConfigurationProfilesPaginator](#NewListConfigurationProfilesPaginator)
* + [func (p *ListConfigurationProfilesPaginator) HasMorePages() bool](#ListConfigurationProfilesPaginator.HasMorePages)
+ [func (p *ListConfigurationProfilesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListConfigurationProfilesOutput, error)](#ListConfigurationProfilesPaginator.NextPage)
* [type ListConfigurationProfilesPaginatorOptions](#ListConfigurationProfilesPaginatorOptions)
* [type ListDeploymentStrategiesAPIClient](#ListDeploymentStrategiesAPIClient)
* [type ListDeploymentStrategiesInput](#ListDeploymentStrategiesInput)
* [type ListDeploymentStrategiesOutput](#ListDeploymentStrategiesOutput)
* [type ListDeploymentStrategiesPaginator](#ListDeploymentStrategiesPaginator)
* + [func NewListDeploymentStrategiesPaginator(client ListDeploymentStrategiesAPIClient, ...) *ListDeploymentStrategiesPaginator](#NewListDeploymentStrategiesPaginator)
* + [func (p *ListDeploymentStrategiesPaginator) HasMorePages() bool](#ListDeploymentStrategiesPaginator.HasMorePages)
+ [func (p *ListDeploymentStrategiesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListDeploymentStrategiesOutput, error)](#ListDeploymentStrategiesPaginator.NextPage)
* [type ListDeploymentStrategiesPaginatorOptions](#ListDeploymentStrategiesPaginatorOptions)
* [type ListDeploymentsAPIClient](#ListDeploymentsAPIClient)
* [type ListDeploymentsInput](#ListDeploymentsInput)
* [type ListDeploymentsOutput](#ListDeploymentsOutput)
* [type ListDeploymentsPaginator](#ListDeploymentsPaginator)
* + [func NewListDeploymentsPaginator(client ListDeploymentsAPIClient, params *ListDeploymentsInput, ...) *ListDeploymentsPaginator](#NewListDeploymentsPaginator)
* + [func (p *ListDeploymentsPaginator) HasMorePages() bool](#ListDeploymentsPaginator.HasMorePages)
+ [func (p *ListDeploymentsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListDeploymentsOutput, error)](#ListDeploymentsPaginator.NextPage)
* [type ListDeploymentsPaginatorOptions](#ListDeploymentsPaginatorOptions)
* [type ListEnvironmentsAPIClient](#ListEnvironmentsAPIClient)
* [type ListEnvironmentsInput](#ListEnvironmentsInput)
* [type ListEnvironmentsOutput](#ListEnvironmentsOutput)
* [type ListEnvironmentsPaginator](#ListEnvironmentsPaginator)
* + [func NewListEnvironmentsPaginator(client ListEnvironmentsAPIClient, params *ListEnvironmentsInput, ...) *ListEnvironmentsPaginator](#NewListEnvironmentsPaginator)
* + [func (p *ListEnvironmentsPaginator) HasMorePages() bool](#ListEnvironmentsPaginator.HasMorePages)
+ [func (p *ListEnvironmentsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListEnvironmentsOutput, error)](#ListEnvironmentsPaginator.NextPage)
* [type ListEnvironmentsPaginatorOptions](#ListEnvironmentsPaginatorOptions)
* [type ListExtensionAssociationsAPIClient](#ListExtensionAssociationsAPIClient)
* [type ListExtensionAssociationsInput](#ListExtensionAssociationsInput)
* [type ListExtensionAssociationsOutput](#ListExtensionAssociationsOutput)
* [type ListExtensionAssociationsPaginator](#ListExtensionAssociationsPaginator)
* + [func NewListExtensionAssociationsPaginator(client ListExtensionAssociationsAPIClient, ...) *ListExtensionAssociationsPaginator](#NewListExtensionAssociationsPaginator)
* + [func (p *ListExtensionAssociationsPaginator) HasMorePages() bool](#ListExtensionAssociationsPaginator.HasMorePages)
+ [func (p *ListExtensionAssociationsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListExtensionAssociationsOutput, error)](#ListExtensionAssociationsPaginator.NextPage)
* [type ListExtensionAssociationsPaginatorOptions](#ListExtensionAssociationsPaginatorOptions)
* [type ListExtensionsAPIClient](#ListExtensionsAPIClient)
* [type ListExtensionsInput](#ListExtensionsInput)
* [type ListExtensionsOutput](#ListExtensionsOutput)
* [type ListExtensionsPaginator](#ListExtensionsPaginator)
* + [func NewListExtensionsPaginator(client ListExtensionsAPIClient, params *ListExtensionsInput, ...) *ListExtensionsPaginator](#NewListExtensionsPaginator)
* + [func (p *ListExtensionsPaginator) HasMorePages() bool](#ListExtensionsPaginator.HasMorePages)
+ [func (p *ListExtensionsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListExtensionsOutput, error)](#ListExtensionsPaginator.NextPage)
* [type ListExtensionsPaginatorOptions](#ListExtensionsPaginatorOptions)
* [type ListHostedConfigurationVersionsAPIClient](#ListHostedConfigurationVersionsAPIClient)
* [type ListHostedConfigurationVersionsInput](#ListHostedConfigurationVersionsInput)
* [type ListHostedConfigurationVersionsOutput](#ListHostedConfigurationVersionsOutput)
* [type ListHostedConfigurationVersionsPaginator](#ListHostedConfigurationVersionsPaginator)
* + [func NewListHostedConfigurationVersionsPaginator(client ListHostedConfigurationVersionsAPIClient, ...) *ListHostedConfigurationVersionsPaginator](#NewListHostedConfigurationVersionsPaginator)
* + [func (p *ListHostedConfigurationVersionsPaginator) HasMorePages() bool](#ListHostedConfigurationVersionsPaginator.HasMorePages)
+ [func (p *ListHostedConfigurationVersionsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListHostedConfigurationVersionsOutput, error)](#ListHostedConfigurationVersionsPaginator.NextPage)
* [type ListHostedConfigurationVersionsPaginatorOptions](#ListHostedConfigurationVersionsPaginatorOptions)
* [type ListTagsForResourceInput](#ListTagsForResourceInput)
* [type ListTagsForResourceOutput](#ListTagsForResourceOutput)
* [type Options](#Options)
* + [func (o Options) Copy() Options](#Options.Copy)
* [type ResolveEndpoint](#ResolveEndpoint)
* + [func (m *ResolveEndpoint) HandleSerialize(ctx context.Context, in middleware.SerializeInput, ...) (out middleware.SerializeOutput, metadata middleware.Metadata, err error)](#ResolveEndpoint.HandleSerialize)
+ [func (*ResolveEndpoint) ID() string](#ResolveEndpoint.ID)
* [type StartDeploymentInput](#StartDeploymentInput)
* [type StartDeploymentOutput](#StartDeploymentOutput)
* [type StopDeploymentInput](#StopDeploymentInput)
* [type StopDeploymentOutput](#StopDeploymentOutput)
* [type TagResourceInput](#TagResourceInput)
* [type TagResourceOutput](#TagResourceOutput)
* [type UntagResourceInput](#UntagResourceInput)
* [type UntagResourceOutput](#UntagResourceOutput)
* [type UpdateApplicationInput](#UpdateApplicationInput)
* [type UpdateApplicationOutput](#UpdateApplicationOutput)
* [type UpdateConfigurationProfileInput](#UpdateConfigurationProfileInput)
* [type UpdateConfigurationProfileOutput](#UpdateConfigurationProfileOutput)
* [type UpdateDeploymentStrategyInput](#UpdateDeploymentStrategyInput)
* [type UpdateDeploymentStrategyOutput](#UpdateDeploymentStrategyOutput)
* [type UpdateEnvironmentInput](#UpdateEnvironmentInput)
* [type UpdateEnvironmentOutput](#UpdateEnvironmentOutput)
* [type UpdateExtensionAssociationInput](#UpdateExtensionAssociationInput)
* [type UpdateExtensionAssociationOutput](#UpdateExtensionAssociationOutput)
* [type UpdateExtensionInput](#UpdateExtensionInput)
* [type UpdateExtensionOutput](#UpdateExtensionOutput)
* [type ValidateConfigurationInput](#ValidateConfigurationInput)
* [type ValidateConfigurationOutput](#ValidateConfigurationOutput)
### Constants [¶](#pkg-constants)
```
const ServiceAPIVersion = "2019-10-09"
```
```
const ServiceID = "AppConfig"
```
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [NewDefaultEndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L33) [¶](#NewDefaultEndpointResolver)
```
func NewDefaultEndpointResolver() *[internalendpoints](/github.com/aws/aws-sdk-go-v2/service/[email protected]/internal/endpoints).[Resolver](/github.com/aws/aws-sdk-go-v2/service/[email protected]/internal/endpoints#Resolver)
```
NewDefaultEndpointResolver constructs a new service endpoint resolver
####
func [WithAPIOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_client.go#L152) [¶](#WithAPIOptions)
added in v1.0.0
```
func WithAPIOptions(optFns ...func(*[middleware](/github.com/aws/smithy-go/middleware).[Stack](/github.com/aws/smithy-go/middleware#Stack)) [error](/builtin#error)) func(*[Options](#Options))
```
WithAPIOptions returns a functional option for setting the Client's APIOptions option.
####
func [WithEndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_client.go#L163)
deprecated
```
func WithEndpointResolver(v [EndpointResolver](#EndpointResolver)) func(*[Options](#Options))
```
Deprecated: EndpointResolver and WithEndpointResolver. Providing a value for this field will likely prevent you from using any endpoint-related service features released after the introduction of EndpointResolverV2 and BaseEndpoint.
To migrate an EndpointResolver implementation that uses a custom endpoint, set the client option BaseEndpoint instead.
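A short sketch of the suggested migration, overriding the endpoint through the BaseEndpoint client option (the URL below is a placeholder):
```
package main
import (
"context"
"log"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/service/appconfig"
)
func main() {
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
log.Fatal(err)
}
// Instead of a custom EndpointResolver, point the client at the custom
// endpoint via BaseEndpoint. The URL is a placeholder.
client := appconfig.NewFromConfig(cfg, func(o *appconfig.Options) {
o.BaseEndpoint = aws.String("https://appconfig.example.com")
})
_ = client
}
```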
####
func [WithEndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_client.go#L171) [¶](#WithEndpointResolverV2)
added in v1.18.0
```
func WithEndpointResolverV2(v [EndpointResolverV2](#EndpointResolverV2)) func(*[Options](#Options))
```
WithEndpointResolverV2 returns a functional option for setting the Client's EndpointResolverV2 option.
### Types [¶](#pkg-types)
####
type [Client](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_client.go#L29) [¶](#Client)
```
type Client struct {
// contains filtered or unexported fields
}
```
Client provides the API client to make operations call for Amazon AppConfig.
####
func [New](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_client.go#L36) [¶](#New)
```
func New(options [Options](#Options), optFns ...func(*[Options](#Options))) *[Client](#Client)
```
New returns an initialized Client based on the functional options. Provide additional functional options to further configure the behavior of the client,
such as changing the client's endpoint or adding custom middleware behavior.
####
func [NewFromConfig](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_client.go#L280) [¶](#NewFromConfig)
```
func NewFromConfig(cfg [aws](/github.com/aws/aws-sdk-go-v2/aws).[Config](/github.com/aws/aws-sdk-go-v2/aws#Config), optFns ...func(*[Options](#Options))) *[Client](#Client)
```
NewFromConfig returns a new client from the provided config.
####
func (*Client) [CreateApplication](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateApplication.go#L23) [¶](#Client.CreateApplication)
```
func (c *[Client](#Client)) CreateApplication(ctx [context](/context).[Context](/context#Context), params *[CreateApplicationInput](#CreateApplicationInput), optFns ...func(*[Options](#Options))) (*[CreateApplicationOutput](#CreateApplicationOutput), [error](/builtin#error))
```
Creates an application. In AppConfig, an application is simply an organizational construct like a folder. This organizational construct has a relationship with some unit of executable code. For example, you could create an application called MyMobileApp to organize and manage configuration data for a mobile application installed by your users.
####
func (*Client) [CreateConfigurationProfile](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateConfigurationProfile.go#L42) [¶](#Client.CreateConfigurationProfile)
```
func (c *[Client](#Client)) CreateConfigurationProfile(ctx [context](/context).[Context](/context#Context), params *[CreateConfigurationProfileInput](#CreateConfigurationProfileInput), optFns ...func(*[Options](#Options))) (*[CreateConfigurationProfileOutput](#CreateConfigurationProfileOutput), [error](/builtin#error))
```
Creates a configuration profile, which is information that enables AppConfig to access the configuration source. Valid configuration sources include the following:
* Configuration data in YAML, JSON, and other formats stored in the AppConfig hosted configuration store
* Configuration data stored as objects in an Amazon Simple Storage Service
(Amazon S3) bucket
* Pipelines stored in CodePipeline
* Secrets stored in Secrets Manager
* Standard and secure string parameters stored in Amazon Web Services Systems Manager Parameter Store
* Configuration data in SSM documents stored in the Systems Manager document store
A configuration profile includes the following information:
* The URI location of the configuration data.
* The Identity and Access Management (IAM) role that provides access to the configuration data.
* A validator for the configuration data. Available validators include either a JSON Schema or an Amazon Web Services Lambda function.
For more information, see Create a Configuration and a Configuration Profile (<http://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-creating-configuration-and-profile.html>)
in the AppConfig User Guide.
####
func (*Client) [CreateDeploymentStrategy](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateDeploymentStrategy.go#L24) [¶](#Client.CreateDeploymentStrategy)
```
func (c *[Client](#Client)) CreateDeploymentStrategy(ctx [context](/context).[Context](/context#Context), params *[CreateDeploymentStrategyInput](#CreateDeploymentStrategyInput), optFns ...func(*[Options](#Options))) (*[CreateDeploymentStrategyOutput](#CreateDeploymentStrategyOutput), [error](/builtin#error))
```
Creates a deployment strategy that defines important criteria for rolling out your configuration to the designated targets. A deployment strategy includes the overall duration required, a percentage of targets to receive the deployment during each interval, an algorithm that defines how percentage grows, and bake time.
####
func (*Client) [CreateEnvironment](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateEnvironment.go#L26) [¶](#Client.CreateEnvironment)
```
func (c *[Client](#Client)) CreateEnvironment(ctx [context](/context).[Context](/context#Context), params *[CreateEnvironmentInput](#CreateEnvironmentInput), optFns ...func(*[Options](#Options))) (*[CreateEnvironmentOutput](#CreateEnvironmentOutput), [error](/builtin#error))
```
Creates an environment. For each application, you define one or more environments. An environment is a deployment group of AppConfig targets, such as applications in a Beta or Production environment. You can also define environments for application subcomponents such as the Web, Mobile, and Back-end components for your application. You can configure Amazon CloudWatch alarms for each environment. The system monitors alarms during a configuration deployment.
If an alarm is triggered, the system rolls back the configuration.
####
func (*Client) [CreateExtension](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateExtension.go#L37) [¶](#Client.CreateExtension)
added in v1.13.0
```
func (c *[Client](#Client)) CreateExtension(ctx [context](/context).[Context](/context#Context), params *[CreateExtensionInput](#CreateExtensionInput), optFns ...func(*[Options](#Options))) (*[CreateExtensionOutput](#CreateExtensionOutput), [error](/builtin#error))
```
Creates an AppConfig extension. An extension augments your ability to inject logic or behavior at different points during the AppConfig workflow of creating or deploying a configuration. You can create your own extensions or use the Amazon Web Services authored extensions provided by AppConfig. For an AppConfig extension that uses Lambda, you must create a Lambda function to perform any computation and processing defined in the extension. If you plan to create custom versions of the Amazon Web Services authored notification extensions, you only need to specify an Amazon Resource Name (ARN) in the Uri field for the new extension version.
* For a custom EventBridge notification extension, enter the ARN of the EventBridge default events in the Uri field.
* For a custom Amazon SNS notification extension, enter the ARN of an Amazon SNS topic in the Uri field.
* For a custom Amazon SQS notification extension, enter the ARN of an Amazon SQS message queue in the Uri field.
For more information about extensions, see Working with AppConfig extensions (<https://docs.aws.amazon.com/appconfig/latest/userguide/working-with-appconfig-extensions.html>)
in the AppConfig User Guide.
####
func (*Client) [CreateExtensionAssociation](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateExtensionAssociation.go#L30) [¶](#Client.CreateExtensionAssociation)
added in v1.13.0
```
func (c *[Client](#Client)) CreateExtensionAssociation(ctx [context](/context).[Context](/context#Context), params *[CreateExtensionAssociationInput](#CreateExtensionAssociationInput), optFns ...func(*[Options](#Options))) (*[CreateExtensionAssociationOutput](#CreateExtensionAssociationOutput), [error](/builtin#error))
```
When you create an extension or configure an Amazon Web Services authored extension, you associate the extension with an AppConfig application,
environment, or configuration profile. For example, you can choose to run the AppConfig deployment events to Amazon SNS Amazon Web Services authored extension and receive notifications on an Amazon SNS topic anytime a configuration deployment is started for a specific application. Defining which extension to associate with an AppConfig resource is called an extension association. An extension association is a specified relationship between an extension and an AppConfig resource, such as an application or a configuration profile. For more information about extensions and associations, see Working with AppConfig extensions (<https://docs.aws.amazon.com/appconfig/latest/userguide/working-with-appconfig-extensions.html>)
in the AppConfig User Guide.
####
func (*Client) [CreateHostedConfigurationVersion](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateHostedConfigurationVersion.go#L19) [¶](#Client.CreateHostedConfigurationVersion)
```
func (c *[Client](#Client)) CreateHostedConfigurationVersion(ctx [context](/context).[Context](/context#Context), params *[CreateHostedConfigurationVersionInput](#CreateHostedConfigurationVersionInput), optFns ...func(*[Options](#Options))) (*[CreateHostedConfigurationVersionOutput](#CreateHostedConfigurationVersionOutput), [error](/builtin#error))
```
Creates a new configuration in the AppConfig hosted configuration store.
####
func (*Client) [DeleteApplication](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteApplication.go#L20) [¶](#Client.DeleteApplication)
```
func (c *[Client](#Client)) DeleteApplication(ctx [context](/context).[Context](/context#Context), params *[DeleteApplicationInput](#DeleteApplicationInput), optFns ...func(*[Options](#Options))) (*[DeleteApplicationOutput](#DeleteApplicationOutput), [error](/builtin#error))
```
Deletes an application. Deleting an application does not delete a configuration from a host.
####
func (*Client) [DeleteConfigurationProfile](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteConfigurationProfile.go#L20) [¶](#Client.DeleteConfigurationProfile)
```
func (c *[Client](#Client)) DeleteConfigurationProfile(ctx [context](/context).[Context](/context#Context), params *[DeleteConfigurationProfileInput](#DeleteConfigurationProfileInput), optFns ...func(*[Options](#Options))) (*[DeleteConfigurationProfileOutput](#DeleteConfigurationProfileOutput), [error](/builtin#error))
```
Deletes a configuration profile. Deleting a configuration profile does not delete a configuration from a host.
####
func (*Client) [DeleteDeploymentStrategy](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteDeploymentStrategy.go#L20) [¶](#Client.DeleteDeploymentStrategy)
```
func (c *[Client](#Client)) DeleteDeploymentStrategy(ctx [context](/context).[Context](/context#Context), params *[DeleteDeploymentStrategyInput](#DeleteDeploymentStrategyInput), optFns ...func(*[Options](#Options))) (*[DeleteDeploymentStrategyOutput](#DeleteDeploymentStrategyOutput), [error](/builtin#error))
```
Deletes a deployment strategy. Deleting a deployment strategy does not delete a configuration from a host.
####
func (*Client) [DeleteEnvironment](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteEnvironment.go#L20) [¶](#Client.DeleteEnvironment)
```
func (c *[Client](#Client)) DeleteEnvironment(ctx [context](/context).[Context](/context#Context), params *[DeleteEnvironmentInput](#DeleteEnvironmentInput), optFns ...func(*[Options](#Options))) (*[DeleteEnvironmentOutput](#DeleteEnvironmentOutput), [error](/builtin#error))
```
Deletes an environment. Deleting an environment does not delete a configuration from a host.
####
func (*Client) [DeleteExtension](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteExtension.go#L20) [¶](#Client.DeleteExtension)
added in v1.13.0
```
func (c *[Client](#Client)) DeleteExtension(ctx [context](/context).[Context](/context#Context), params *[DeleteExtensionInput](#DeleteExtensionInput), optFns ...func(*[Options](#Options))) (*[DeleteExtensionOutput](#DeleteExtensionOutput), [error](/builtin#error))
```
Deletes an AppConfig extension. You must delete all associations to an extension before you delete the extension.
####
func (*Client) [DeleteExtensionAssociation](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteExtensionAssociation.go#L20) [¶](#Client.DeleteExtensionAssociation)
added in v1.13.0
```
func (c *[Client](#Client)) DeleteExtensionAssociation(ctx [context](/context).[Context](/context#Context), params *[DeleteExtensionAssociationInput](#DeleteExtensionAssociationInput), optFns ...func(*[Options](#Options))) (*[DeleteExtensionAssociationOutput](#DeleteExtensionAssociationOutput), [error](/builtin#error))
```
Deletes an extension association. This action doesn't delete extensions defined in the association.
####
func (*Client) [DeleteHostedConfigurationVersion](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteHostedConfigurationVersion.go#L20) [¶](#Client.DeleteHostedConfigurationVersion)
```
func (c *[Client](#Client)) DeleteHostedConfigurationVersion(ctx [context](/context).[Context](/context#Context), params *[DeleteHostedConfigurationVersionInput](#DeleteHostedConfigurationVersionInput), optFns ...func(*[Options](#Options))) (*[DeleteHostedConfigurationVersionOutput](#DeleteHostedConfigurationVersionOutput), [error](/builtin#error))
```
Deletes a version of a configuration from the AppConfig hosted configuration store.
####
func (*Client) [GetApplication](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetApplication.go#L19) [¶](#Client.GetApplication)
```
func (c *[Client](#Client)) GetApplication(ctx [context](/context).[Context](/context#Context), params *[GetApplicationInput](#GetApplicationInput), optFns ...func(*[Options](#Options))) (*[GetApplicationOutput](#GetApplicationOutput), [error](/builtin#error))
```
Retrieves information about an application.
####
func (*Client) [GetConfiguration](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetConfiguration.go#L29)
deprecated
```
func (c *[Client](#Client)) GetConfiguration(ctx [context](/context).[Context](/context#Context), params *[GetConfigurationInput](#GetConfigurationInput), optFns ...func(*[Options](#Options))) (*[GetConfigurationOutput](#GetConfigurationOutput), [error](/builtin#error))
```
(Deprecated) Retrieves the latest deployed configuration. Note the following important information.
* This API action is deprecated. Calls to receive configuration data should use the StartConfigurationSession (<https://docs.aws.amazon.com/appconfig/2019-10-09/APIReference/API_appconfigdata_StartConfigurationSession.html>)
and GetLatestConfiguration (<https://docs.aws.amazon.com/appconfig/2019-10-09/APIReference/API_appconfigdata_GetLatestConfiguration.html>)
APIs instead.
* GetConfiguration is a priced call. For more information, see Pricing (<https://aws.amazon.com/systems-manager/pricing/>).
Deprecated: This API has been deprecated in favor of the GetLatestConfiguration API used in conjunction with StartConfigurationSession.
####
func (*Client) [GetConfigurationProfile](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetConfigurationProfile.go#L20) [¶](#Client.GetConfigurationProfile)
```
func (c *[Client](#Client)) GetConfigurationProfile(ctx [context](/context).[Context](/context#Context), params *[GetConfigurationProfileInput](#GetConfigurationProfileInput), optFns ...func(*[Options](#Options))) (*[GetConfigurationProfileOutput](#GetConfigurationProfileOutput), [error](/builtin#error))
```
Retrieves information about a configuration profile.
####
func (*Client) [GetDeployment](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetDeployment.go#L21) [¶](#Client.GetDeployment)
```
func (c *[Client](#Client)) GetDeployment(ctx [context](/context).[Context](/context#Context), params *[GetDeploymentInput](#GetDeploymentInput), optFns ...func(*[Options](#Options))) (*[GetDeploymentOutput](#GetDeploymentOutput), [error](/builtin#error))
```
Retrieves information about a configuration deployment.
####
func (*Client) [GetDeploymentStrategy](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetDeploymentStrategy.go#L24) [¶](#Client.GetDeploymentStrategy)
```
func (c *[Client](#Client)) GetDeploymentStrategy(ctx [context](/context).[Context](/context#Context), params *[GetDeploymentStrategyInput](#GetDeploymentStrategyInput), optFns ...func(*[Options](#Options))) (*[GetDeploymentStrategyOutput](#GetDeploymentStrategyOutput), [error](/builtin#error))
```
Retrieves information about a deployment strategy. A deployment strategy defines important criteria for rolling out your configuration to the designated targets. A deployment strategy includes the overall duration required, a percentage of targets to receive the deployment during each interval, an algorithm that defines how percentage grows, and bake time.
####
func (*Client) [GetEnvironment](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetEnvironment.go#L25) [¶](#Client.GetEnvironment)
```
func (c *[Client](#Client)) GetEnvironment(ctx [context](/context).[Context](/context#Context), params *[GetEnvironmentInput](#GetEnvironmentInput), optFns ...func(*[Options](#Options))) (*[GetEnvironmentOutput](#GetEnvironmentOutput), [error](/builtin#error))
```
Retrieves information about an environment. An environment is a deployment group of AppConfig applications, such as applications in a Production environment or in an EU_Region environment. Each configuration deployment targets an environment. You can enable one or more Amazon CloudWatch alarms for an environment. If an alarm is triggered during a deployment, AppConfig rolls back the configuration.
####
func (*Client) [GetExtension](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetExtension.go#L20) [¶](#Client.GetExtension)
added in v1.13.0
```
func (c *[Client](#Client)) GetExtension(ctx [context](/context).[Context](/context#Context), params *[GetExtensionInput](#GetExtensionInput), optFns ...func(*[Options](#Options))) (*[GetExtensionOutput](#GetExtensionOutput), [error](/builtin#error))
```
Returns information about an AppConfig extension.
####
func (*Client) [GetExtensionAssociation](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetExtensionAssociation.go#L22) [¶](#Client.GetExtensionAssociation)
added in v1.13.0
```
func (c *[Client](#Client)) GetExtensionAssociation(ctx [context](/context).[Context](/context#Context), params *[GetExtensionAssociationInput](#GetExtensionAssociationInput), optFns ...func(*[Options](#Options))) (*[GetExtensionAssociationOutput](#GetExtensionAssociationOutput), [error](/builtin#error))
```
Returns information about an AppConfig extension association. For more information about extensions and associations, see Working with AppConfig extensions (<https://docs.aws.amazon.com/appconfig/latest/userguide/working-with-appconfig-extensions.html>)
in the AppConfig User Guide.
####
func (*Client) [GetHostedConfigurationVersion](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetHostedConfigurationVersion.go#L19) [¶](#Client.GetHostedConfigurationVersion)
```
func (c *[Client](#Client)) GetHostedConfigurationVersion(ctx [context](/context).[Context](/context#Context), params *[GetHostedConfigurationVersionInput](#GetHostedConfigurationVersionInput), optFns ...func(*[Options](#Options))) (*[GetHostedConfigurationVersionOutput](#GetHostedConfigurationVersionOutput), [error](/builtin#error))
```
Retrieves information about a specific configuration version.
####
func (*Client) [ListApplications](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListApplications.go#L20) [¶](#Client.ListApplications)
```
func (c *[Client](#Client)) ListApplications(ctx [context](/context).[Context](/context#Context), params *[ListApplicationsInput](#ListApplicationsInput), optFns ...func(*[Options](#Options))) (*[ListApplicationsOutput](#ListApplicationsOutput), [error](/builtin#error))
```
Lists all applications in your Amazon Web Services account.
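A sketch of paging through every application in the account. The NewListApplicationsPaginator helper and the Items field on the output are assumptions based on the SDK's usual generated paginators; neither is shown in this excerpt:
```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := appconfig.NewFromConfig(cfg)

	// Page through all applications rather than handling NextToken by hand.
	p := appconfig.NewListApplicationsPaginator(client, &appconfig.ListApplicationsInput{})
	for p.HasMorePages() {
		page, err := p.NextPage(ctx)
		if err != nil {
			log.Fatal(err)
		}
		for _, app := range page.Items {
			fmt.Printf("%s\t%s\n", aws.ToString(app.Id), aws.ToString(app.Name))
		}
	}
}
```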
####
func (*Client) [ListConfigurationProfiles](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListConfigurationProfiles.go#L20) [¶](#Client.ListConfigurationProfiles)
```
func (c *[Client](#Client)) ListConfigurationProfiles(ctx [context](/context).[Context](/context#Context), params *[ListConfigurationProfilesInput](#ListConfigurationProfilesInput), optFns ...func(*[Options](#Options))) (*[ListConfigurationProfilesOutput](#ListConfigurationProfilesOutput), [error](/builtin#error))
```
Lists the configuration profiles for an application.
####
func (*Client) [ListDeploymentStrategies](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeploymentStrategies.go#L20) [¶](#Client.ListDeploymentStrategies)
```
func (c *[Client](#Client)) ListDeploymentStrategies(ctx [context](/context).[Context](/context#Context), params *[ListDeploymentStrategiesInput](#ListDeploymentStrategiesInput), optFns ...func(*[Options](#Options))) (*[ListDeploymentStrategiesOutput](#ListDeploymentStrategiesOutput), [error](/builtin#error))
```
Lists deployment strategies.
####
func (*Client) [ListDeployments](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeployments.go#L20) [¶](#Client.ListDeployments)
```
func (c *[Client](#Client)) ListDeployments(ctx [context](/context).[Context](/context#Context), params *[ListDeploymentsInput](#ListDeploymentsInput), optFns ...func(*[Options](#Options))) (*[ListDeploymentsOutput](#ListDeploymentsOutput), [error](/builtin#error))
```
Lists the deployments for an environment in descending deployment number order.
####
func (*Client) [ListEnvironments](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListEnvironments.go#L20) [¶](#Client.ListEnvironments)
```
func (c *[Client](#Client)) ListEnvironments(ctx [context](/context).[Context](/context#Context), params *[ListEnvironmentsInput](#ListEnvironmentsInput), optFns ...func(*[Options](#Options))) (*[ListEnvironmentsOutput](#ListEnvironmentsOutput), [error](/builtin#error))
```
Lists the environments for an application.
####
func (*Client) [ListExtensionAssociations](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensionAssociations.go#L22) [¶](#Client.ListExtensionAssociations)
added in v1.13.0
```
func (c *[Client](#Client)) ListExtensionAssociations(ctx [context](/context).[Context](/context#Context), params *[ListExtensionAssociationsInput](#ListExtensionAssociationsInput), optFns ...func(*[Options](#Options))) (*[ListExtensionAssociationsOutput](#ListExtensionAssociationsOutput), [error](/builtin#error))
```
Lists all AppConfig extension associations in the account. For more information about extensions and associations, see Working with AppConfig extensions (<https://docs.aws.amazon.com/appconfig/latest/userguide/working-with-appconfig-extensions.html>)
in the AppConfig User Guide.
####
func (*Client) [ListExtensions](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensions.go#L23) [¶](#Client.ListExtensions)
added in v1.13.0
```
func (c *[Client](#Client)) ListExtensions(ctx [context](/context).[Context](/context#Context), params *[ListExtensionsInput](#ListExtensionsInput), optFns ...func(*[Options](#Options))) (*[ListExtensionsOutput](#ListExtensionsOutput), [error](/builtin#error))
```
Lists all custom and Amazon Web Services authored AppConfig extensions in the account. For more information about extensions, see Working with AppConfig extensions (<https://docs.aws.amazon.com/appconfig/latest/userguide/working-with-appconfig-extensions.html>)
in the AppConfig User Guide.
####
func (*Client) [ListHostedConfigurationVersions](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListHostedConfigurationVersions.go#L21) [¶](#Client.ListHostedConfigurationVersions)
```
func (c *[Client](#Client)) ListHostedConfigurationVersions(ctx [context](/context).[Context](/context#Context), params *[ListHostedConfigurationVersionsInput](#ListHostedConfigurationVersionsInput), optFns ...func(*[Options](#Options))) (*[ListHostedConfigurationVersionsOutput](#ListHostedConfigurationVersionsOutput), [error](/builtin#error))
```
Lists configurations stored in the AppConfig hosted configuration store by version.
####
func (*Client) [ListTagsForResource](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListTagsForResource.go#L19) [¶](#Client.ListTagsForResource)
```
func (c *[Client](#Client)) ListTagsForResource(ctx [context](/context).[Context](/context#Context), params *[ListTagsForResourceInput](#ListTagsForResourceInput), optFns ...func(*[Options](#Options))) (*[ListTagsForResourceOutput](#ListTagsForResourceOutput), [error](/builtin#error))
```
Retrieves the list of key-value tags assigned to the resource.
####
func (*Client) [StartDeployment](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_StartDeployment.go#L21) [¶](#Client.StartDeployment)
```
func (c *[Client](#Client)) StartDeployment(ctx [context](/context).[Context](/context#Context), params *[StartDeploymentInput](#StartDeploymentInput), optFns ...func(*[Options](#Options))) (*[StartDeploymentOutput](#StartDeploymentOutput), [error](/builtin#error))
```
Starts a deployment.
####
func (*Client) [StopDeployment](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_StopDeployment.go#L23) [¶](#Client.StopDeployment)
```
func (c *[Client](#Client)) StopDeployment(ctx [context](/context).[Context](/context#Context), params *[StopDeploymentInput](#StopDeploymentInput), optFns ...func(*[Options](#Options))) (*[StopDeploymentOutput](#StopDeploymentOutput), [error](/builtin#error))
```
Stops a deployment. This API action works only on deployments that have a status of DEPLOYING. This action moves the deployment to a status of ROLLED_BACK.
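A short sketch of rolling back an in-flight deployment with StopDeployment, assuming an already-constructed client; the IDs and deployment number are placeholders, and the input field names are assumed from the operation shape:
```
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)

// rollBack stops a deployment that is still DEPLOYING, moving it to ROLLED_BACK.
func rollBack(ctx context.Context, client *appconfig.Client) error {
	_, err := client.StopDeployment(ctx, &appconfig.StopDeploymentInput{
		ApplicationId:    aws.String("abc1234"), // placeholder application ID
		EnvironmentId:    aws.String("def5678"), // placeholder environment ID
		DeploymentNumber: aws.Int32(3),          // placeholder deployment number
	})
	return err
}
```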
####
func (*Client) [TagResource](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_TagResource.go#L21) [¶](#Client.TagResource)
```
func (c *[Client](#Client)) TagResource(ctx [context](/context).[Context](/context#Context), params *[TagResourceInput](#TagResourceInput), optFns ...func(*[Options](#Options))) (*[TagResourceOutput](#TagResourceOutput), [error](/builtin#error))
```
Assigns metadata to an AppConfig resource. Tags help organize and categorize your AppConfig resources. Each tag consists of a key and an optional value, both of which you define. You can specify a maximum of 50 tags for a resource.
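A sketch of tagging an existing resource, assuming an already-constructed client; the ResourceArn and Tags field names are assumed from the operation shape, and the tag values are illustrative:
```
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)

// tagResource applies a small set of organizational tags to an AppConfig resource.
func tagResource(ctx context.Context, client *appconfig.Client, resourceArn string) error {
	_, err := client.TagResource(ctx, &appconfig.TagResourceInput{
		ResourceArn: aws.String(resourceArn),
		Tags: map[string]string{
			"team": "platform",
			"env":  "production",
		},
	})
	return err
}
```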
####
func (*Client) [UntagResource](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UntagResource.go#L19) [¶](#Client.UntagResource)
```
func (c *[Client](#Client)) UntagResource(ctx [context](/context).[Context](/context#Context), params *[UntagResourceInput](#UntagResourceInput), optFns ...func(*[Options](#Options))) (*[UntagResourceOutput](#UntagResourceOutput), [error](/builtin#error))
```
Deletes a tag key and value from an AppConfig resource.
####
func (*Client) [UpdateApplication](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateApplication.go#L19) [¶](#Client.UpdateApplication)
```
func (c *[Client](#Client)) UpdateApplication(ctx [context](/context).[Context](/context#Context), params *[UpdateApplicationInput](#UpdateApplicationInput), optFns ...func(*[Options](#Options))) (*[UpdateApplicationOutput](#UpdateApplicationOutput), [error](/builtin#error))
```
Updates an application.
####
func (*Client) [UpdateConfigurationProfile](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateConfigurationProfile.go#L20) [¶](#Client.UpdateConfigurationProfile)
```
func (c *[Client](#Client)) UpdateConfigurationProfile(ctx [context](/context).[Context](/context#Context), params *[UpdateConfigurationProfileInput](#UpdateConfigurationProfileInput), optFns ...func(*[Options](#Options))) (*[UpdateConfigurationProfileOutput](#UpdateConfigurationProfileOutput), [error](/builtin#error))
```
Updates a configuration profile.
####
func (*Client) [UpdateDeploymentStrategy](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateDeploymentStrategy.go#L20) [¶](#Client.UpdateDeploymentStrategy)
```
func (c *[Client](#Client)) UpdateDeploymentStrategy(ctx [context](/context).[Context](/context#Context), params *[UpdateDeploymentStrategyInput](#UpdateDeploymentStrategyInput), optFns ...func(*[Options](#Options))) (*[UpdateDeploymentStrategyOutput](#UpdateDeploymentStrategyOutput), [error](/builtin#error))
```
Updates a deployment strategy.
####
func (*Client) [UpdateEnvironment](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateEnvironment.go#L20) [¶](#Client.UpdateEnvironment)
```
func (c *[Client](#Client)) UpdateEnvironment(ctx [context](/context).[Context](/context#Context), params *[UpdateEnvironmentInput](#UpdateEnvironmentInput), optFns ...func(*[Options](#Options))) (*[UpdateEnvironmentOutput](#UpdateEnvironmentOutput), [error](/builtin#error))
```
Updates an environment.
####
func (*Client) [UpdateExtension](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateExtension.go#L22) [¶](#Client.UpdateExtension)
added in v1.13.0
```
func (c *[Client](#Client)) UpdateExtension(ctx [context](/context).[Context](/context#Context), params *[UpdateExtensionInput](#UpdateExtensionInput), optFns ...func(*[Options](#Options))) (*[UpdateExtensionOutput](#UpdateExtensionOutput), [error](/builtin#error))
```
Updates an AppConfig extension. For more information about extensions, see Working with AppConfig extensions (<https://docs.aws.amazon.com/appconfig/latest/userguide/working-with-appconfig-extensions.html>)
in the AppConfig User Guide.
####
func (*Client) [UpdateExtensionAssociation](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateExtensionAssociation.go#L21) [¶](#Client.UpdateExtensionAssociation)
added in v1.13.0
```
func (c *[Client](#Client)) UpdateExtensionAssociation(ctx [context](/context).[Context](/context#Context), params *[UpdateExtensionAssociationInput](#UpdateExtensionAssociationInput), optFns ...func(*[Options](#Options))) (*[UpdateExtensionAssociationOutput](#UpdateExtensionAssociationOutput), [error](/builtin#error))
```
Updates an association. For more information about extensions and associations,
see Working with AppConfig extensions (<https://docs.aws.amazon.com/appconfig/latest/userguide/working-with-appconfig-extensions.html>)
in the AppConfig User Guide.
####
func (*Client) [ValidateConfiguration](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ValidateConfiguration.go#L19) [¶](#Client.ValidateConfiguration)
```
func (c *[Client](#Client)) ValidateConfiguration(ctx [context](/context).[Context](/context#Context), params *[ValidateConfigurationInput](#ValidateConfigurationInput), optFns ...func(*[Options](#Options))) (*[ValidateConfigurationOutput](#ValidateConfigurationOutput), [error](/builtin#error))
```
Uses the validators in a configuration profile to validate a configuration.
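A sketch of running a profile's validators against a specific configuration version before deploying it, assuming an already-constructed client; the IDs and version are placeholders, and the input field names are assumed from the operation shape:
```
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)

// validate returns a nil error only if every validator accepts the configuration version.
func validate(ctx context.Context, client *appconfig.Client) error {
	_, err := client.ValidateConfiguration(ctx, &appconfig.ValidateConfigurationInput{
		ApplicationId:          aws.String("abc1234"), // placeholder
		ConfigurationProfileId: aws.String("def5678"), // placeholder
		ConfigurationVersion:   aws.String("1"),       // placeholder
	})
	return err
}
```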
####
type [CreateApplicationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateApplication.go#L38) [¶](#CreateApplicationInput)
```
type CreateApplicationInput struct {
// A name for the application.
//
// This member is required.
Name *[string](/builtin#string)
// A description of the application.
Description *[string](/builtin#string)
// Metadata to assign to the application. Tags help organize and categorize your
// AppConfig resources. Each tag consists of a key and an optional value, both of
// which you define.
Tags map[[string](/builtin#string)][string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [CreateApplicationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateApplication.go#L56) [¶](#CreateApplicationOutput)
```
type CreateApplicationOutput struct {
// The description of the application.
Description *[string](/builtin#string)
// The application ID.
Id *[string](/builtin#string)
// The application name.
Name *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
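A sketch of creating an application from the input shape above and reading the generated ID from the output, assuming an already-constructed client; the name, description, and tag values are illustrative only:
```
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)

// createApplication creates an application and returns its system-generated ID.
func createApplication(ctx context.Context, client *appconfig.Client) (string, error) {
	out, err := client.CreateApplication(ctx, &appconfig.CreateApplicationInput{
		Name:        aws.String("inventory-service"),
		Description: aws.String("Feature flags and runtime settings for the inventory service"),
		Tags:        map[string]string{"team": "platform"},
	})
	if err != nil {
		return "", err
	}
	return aws.ToString(out.Id), nil
}
```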
####
type [CreateConfigurationProfileInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateConfigurationProfile.go#L57) [¶](#CreateConfigurationProfileInput)
```
type CreateConfigurationProfileInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// A URI to locate the configuration. You can specify the following:
// - For the AppConfig hosted configuration store and for feature flags, specify
// hosted.
// - For an Amazon Web Services Systems Manager Parameter Store parameter,
// specify either the parameter name in the format ssm-parameter://<parameter name>
// or the ARN.
// - For an Amazon Web Services CodePipeline pipeline, specify the URI in the
// following format: codepipeline://<pipeline name>.
// - For a Secrets Manager secret, specify the URI in the following format:
// secretsmanager://<secret name>.
// - For an Amazon S3 object, specify the URI in the following format:
// s3://<bucket>/<objectKey>. Here is an example:
// s3://my-bucket/my-app/us-east-1/my-config.json
// - For an SSM document, specify either the document name in the format
// ssm-document://<document name> or the Amazon Resource Name (ARN).
//
// This member is required.
LocationUri *[string](/builtin#string)
// A name for the configuration profile.
//
// This member is required.
Name *[string](/builtin#string)
// A description of the configuration profile.
Description *[string](/builtin#string)
// The identifier for an Key Management Service key to encrypt new configuration
// data versions in the AppConfig hosted configuration store. This attribute is
// only used for hosted configuration types. The identifier can be an KMS key ID,
// alias, or the Amazon Resource Name (ARN) of the key ID or alias. To encrypt data
// managed in other configuration stores, see the documentation for how to specify
// an KMS key for that particular service.
KmsKeyIdentifier *[string](/builtin#string)
// The ARN of an IAM role with permission to access the configuration at the
// specified LocationUri . A retrieval role ARN is not required for configurations
// stored in the AppConfig hosted configuration store. It is required for all other
// sources that store your configuration.
RetrievalRoleArn *[string](/builtin#string)
// Metadata to assign to the configuration profile. Tags help organize and
// categorize your AppConfig resources. Each tag consists of a key and an optional
// value, both of which you define.
Tags map[[string](/builtin#string)][string](/builtin#string)
// The type of configurations contained in the profile. AppConfig supports feature
// flags and freeform configurations. We recommend you create feature flag
// configurations to enable or disable new features and freeform configurations to
// distribute configurations to an application. When calling this API, enter one of
// the following values for Type:
// - AWS.AppConfig.FeatureFlags
// - AWS.Freeform
Type *[string](/builtin#string)
// A list of methods for validating the configuration.
Validators [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Validator](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Validator)
// contains filtered or unexported fields
}
```
####
type [CreateConfigurationProfileOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateConfigurationProfile.go#L122) [¶](#CreateConfigurationProfileOutput)
```
type CreateConfigurationProfileOutput struct {
// The application ID.
ApplicationId *[string](/builtin#string)
// The configuration profile description.
Description *[string](/builtin#string)
// The configuration profile ID.
Id *[string](/builtin#string)
// The Amazon Resource Name of the Key Management Service key to encrypt new
// configuration data versions in the AppConfig hosted configuration store. This
// attribute is only used for hosted configuration types. To encrypt data managed
// in other configuration stores, see the documentation for how to specify an KMS
// key for that particular service.
KmsKeyArn *[string](/builtin#string)
// The Key Management Service key identifier (key ID, key alias, or key ARN)
// provided when the resource was created or updated.
KmsKeyIdentifier *[string](/builtin#string)
// The URI location of the configuration.
LocationUri *[string](/builtin#string)
// The name of the configuration profile.
Name *[string](/builtin#string)
// The ARN of an IAM role with permission to access the configuration at the
// specified LocationUri .
RetrievalRoleArn *[string](/builtin#string)
// The type of configurations contained in the profile. AppConfig supports feature
// flags and freeform configurations. We recommend you create feature flag
// configurations to enable or disable new features and freeform configurations to
// distribute configurations to an application. When calling this API, enter one of
// the following values for Type:
// - AWS.AppConfig.FeatureFlags
// - AWS.Freeform
Type *[string](/builtin#string)
// A list of methods for validating the configuration.
Validators [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Validator](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Validator)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
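A sketch of creating a freeform configuration profile backed by the AppConfig hosted configuration store, per the LocationUri options above; the client is assumed to exist already and the profile name is illustrative:
```
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)

// createHostedProfile creates a freeform profile in the hosted configuration store,
// which needs no RetrievalRoleArn, and returns the profile ID.
func createHostedProfile(ctx context.Context, client *appconfig.Client, appID string) (string, error) {
	out, err := client.CreateConfigurationProfile(ctx, &appconfig.CreateConfigurationProfileInput{
		ApplicationId: aws.String(appID),
		Name:          aws.String("runtime-settings"),
		LocationUri:   aws.String("hosted"),
		Type:          aws.String("AWS.Freeform"),
	})
	if err != nil {
		return "", err
	}
	return aws.ToString(out.Id), nil
}
```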
####
type [CreateDeploymentStrategyInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateDeploymentStrategy.go#L39) [¶](#CreateDeploymentStrategyInput)
```
type CreateDeploymentStrategyInput struct {
// Total amount of time for a deployment to last.
//
// This member is required.
DeploymentDurationInMinutes *[int32](/builtin#int32)
// The percentage of targets to receive a deployed configuration during each
// interval.
//
// This member is required.
GrowthFactor *[float32](/builtin#float32)
// A name for the deployment strategy.
//
// This member is required.
Name *[string](/builtin#string)
// A description of the deployment strategy.
Description *[string](/builtin#string)
// Specifies the amount of time AppConfig monitors for Amazon CloudWatch alarms
// after the configuration has been deployed to 100% of its targets, before
// considering the deployment to be complete. If an alarm is triggered during this
// time, AppConfig rolls back the deployment. You must configure permissions for
// AppConfig to roll back based on CloudWatch alarms. For more information, see
// Configuring permissions for rollback based on Amazon CloudWatch alarms (<https://docs.aws.amazon.com/appconfig/latest/userguide/getting-started-with-appconfig-cloudwatch-alarms-permissions.html>)
// in the AppConfig User Guide.
FinalBakeTimeInMinutes [int32](/builtin#int32)
// The algorithm used to define how percentage grows over time. AppConfig supports
// the following growth types:
// - Linear: For this type, AppConfig processes the deployment by dividing the
// total number of targets by the value specified for Step percentage. For
// example, a linear deployment that uses a Step percentage of 10 deploys the
// configuration to 10 percent of the hosts. After those deployments are
// complete, the system deploys the configuration to the next 10 percent. This
// continues until 100% of the targets have successfully received the
// configuration.
// - Exponential: For this type, AppConfig processes the deployment exponentially
// using the formula G*(2^N), where G is the growth factor specified by the user
// and N is the number of steps until the configuration is deployed to all
// targets. For example, if you specify a growth factor of 2, the system rolls
// out the configuration as 2*(2^0), then 2*(2^1), then 2*(2^2). Expressed
// numerically, the deployment rolls out to 2% of the targets, 4% of the
// targets, 8% of the targets, and continues until the configuration has been
// deployed to all targets.
GrowthType [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[GrowthType](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#GrowthType)
// Save the deployment strategy to a Systems Manager (SSM) document.
ReplicateTo [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ReplicateTo](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ReplicateTo)
// Metadata to assign to the deployment strategy. Tags help organize and
// categorize your AppConfig resources. Each tag consists of a key and an optional
// value, both of which you define.
Tags map[[string](/builtin#string)][string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [CreateDeploymentStrategyOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateDeploymentStrategy.go#L98) [¶](#CreateDeploymentStrategyOutput)
```
type CreateDeploymentStrategyOutput struct {
// Total amount of time the deployment lasted.
DeploymentDurationInMinutes [int32](/builtin#int32)
// The description of the deployment strategy.
Description *[string](/builtin#string)
// The amount of time that AppConfig monitored for alarms before considering the
// deployment to be complete and no longer eligible for automatic rollback.
FinalBakeTimeInMinutes [int32](/builtin#int32)
// The percentage of targets that received a deployed configuration during each
// interval.
GrowthFactor [float32](/builtin#float32)
// The algorithm used to define how percentage grew over time.
GrowthType [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[GrowthType](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#GrowthType)
// The deployment strategy ID.
Id *[string](/builtin#string)
// The name of the deployment strategy.
Name *[string](/builtin#string)
// Save the deployment strategy to a Systems Manager (SSM) document.
ReplicateTo [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ReplicateTo](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ReplicateTo)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
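A sketch of creating a linear deployment strategy from the input shape above (20 percent of targets per interval over 30 minutes, with a 10-minute bake time); the client is assumed to exist and the strategy name is illustrative:
```
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
	"github.com/aws/aws-sdk-go-v2/service/appconfig/types"
)

// createLinearStrategy creates a reusable deployment strategy and returns its ID.
func createLinearStrategy(ctx context.Context, client *appconfig.Client) (string, error) {
	out, err := client.CreateDeploymentStrategy(ctx, &appconfig.CreateDeploymentStrategyInput{
		Name:                        aws.String("linear-20pct-30min"),
		DeploymentDurationInMinutes: aws.Int32(30),
		GrowthFactor:                aws.Float32(20),
		GrowthType:                  types.GrowthTypeLinear,
		FinalBakeTimeInMinutes:      10,
		ReplicateTo:                 types.ReplicateToNone,
	})
	if err != nil {
		return "", err
	}
	return aws.ToString(out.Id), nil
}
```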
####
type [CreateEnvironmentInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateEnvironment.go#L41) [¶](#CreateEnvironmentInput)
```
type CreateEnvironmentInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// A name for the environment.
//
// This member is required.
Name *[string](/builtin#string)
// A description of the environment.
Description *[string](/builtin#string)
// Amazon CloudWatch alarms to monitor during the deployment process.
Monitors [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Monitor](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Monitor)
// Metadata to assign to the environment. Tags help organize and categorize your
// AppConfig resources. Each tag consists of a key and an optional value, both of
// which you define.
Tags map[[string](/builtin#string)][string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [CreateEnvironmentOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateEnvironment.go#L67) [¶](#CreateEnvironmentOutput)
```
type CreateEnvironmentOutput struct {
// The application ID.
ApplicationId *[string](/builtin#string)
// The description of the environment.
Description *[string](/builtin#string)
// The environment ID.
Id *[string](/builtin#string)
// Amazon CloudWatch alarms monitored during the deployment.
Monitors [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Monitor](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Monitor)
// The name of the environment.
Name *[string](/builtin#string)
// The state of the environment. An environment can be in one of the following
// states: READY_FOR_DEPLOYMENT , DEPLOYING , ROLLING_BACK , or ROLLED_BACK
State [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[EnvironmentState](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#EnvironmentState)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [CreateExtensionAssociationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateExtensionAssociation.go#L45) [¶](#CreateExtensionAssociationInput)
added in v1.13.0
```
type CreateExtensionAssociationInput struct {
// The name, the ID, or the Amazon Resource Name (ARN) of the extension.
//
// This member is required.
ExtensionIdentifier *[string](/builtin#string)
// The ARN of an application, configuration profile, or environment.
//
// This member is required.
ResourceIdentifier *[string](/builtin#string)
// The version number of the extension. If not specified, AppConfig uses the
// maximum version of the extension.
ExtensionVersionNumber *[int32](/builtin#int32)
// The parameter names and values defined in the extensions. Extension parameters
// marked Required must be entered for this field.
Parameters map[[string](/builtin#string)][string](/builtin#string)
// Adds one or more tags for the specified extension association. Tags are
// metadata that help you categorize resources in different ways, for example, by
// purpose, owner, or environment. Each tag consists of a key and an optional
// value, both of which you define.
Tags map[[string](/builtin#string)][string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [CreateExtensionAssociationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateExtensionAssociation.go#L74) [¶](#CreateExtensionAssociationOutput)
added in v1.13.0
```
type CreateExtensionAssociationOutput struct {
// The system-generated Amazon Resource Name (ARN) for the extension.
Arn *[string](/builtin#string)
// The ARN of the extension defined in the association.
ExtensionArn *[string](/builtin#string)
// The version number for the extension defined in the association.
ExtensionVersionNumber [int32](/builtin#int32)
// The system-generated ID for the association.
Id *[string](/builtin#string)
// The parameter names and values defined in the association.
Parameters map[[string](/builtin#string)][string](/builtin#string)
// The ARNs of applications, configuration profiles, or environments defined in
// the association.
ResourceArn *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [CreateExtensionInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateExtension.go#L52) [¶](#CreateExtensionInput)
added in v1.13.0
```
type CreateExtensionInput struct {
// The actions defined in the extension.
//
// This member is required.
Actions map[[string](/builtin#string)][][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Action](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Action)
// A name for the extension. Each extension name in your account must be unique.
// Extension versions use the same name.
//
// This member is required.
Name *[string](/builtin#string)
// Information about the extension.
Description *[string](/builtin#string)
// You can omit this field when you create an extension. When you create a new
// version, specify the most recent current version number. For example, if you
// are creating version 3, enter 2 for this field.
LatestVersionNumber *[int32](/builtin#int32)
// The parameters accepted by the extension. You specify parameter values when you
// associate the extension to an AppConfig resource by using the
// CreateExtensionAssociation API action. For Lambda extension actions, these
// parameters are included in the Lambda request object.
Parameters map[[string](/builtin#string)][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Parameter](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Parameter)
// Adds one or more tags for the specified extension. Tags are metadata that help
// you categorize resources in different ways, for example, by purpose, owner, or
// environment. Each tag consists of a key and an optional value, both of which you
// define.
Tags map[[string](/builtin#string)][string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [CreateExtensionOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateExtension.go#L88) [¶](#CreateExtensionOutput)
added in v1.13.0
```
type CreateExtensionOutput struct {
// The actions defined in the extension.
Actions map[[string](/builtin#string)][][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Action](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Action)
// The system-generated Amazon Resource Name (ARN) for the extension.
Arn *[string](/builtin#string)
// Information about the extension.
Description *[string](/builtin#string)
// The system-generated ID of the extension.
Id *[string](/builtin#string)
// The extension name.
Name *[string](/builtin#string)
// The parameters accepted by the extension. You specify parameter values when you
// associate the extension to an AppConfig resource by using the
// CreateExtensionAssociation API action. For Lambda extension actions, these
// parameters are included in the Lambda request object.
Parameters map[[string](/builtin#string)][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Parameter](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Parameter)
// The extension version number.
VersionNumber [int32](/builtin#int32)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [CreateHostedConfigurationVersionInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateHostedConfigurationVersion.go#L34) [¶](#CreateHostedConfigurationVersionInput)
```
type CreateHostedConfigurationVersionInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The configuration profile ID.
//
// This member is required.
ConfigurationProfileId *[string](/builtin#string)
// The content of the configuration or the configuration data.
//
// This member is required.
Content [][byte](/builtin#byte)
// A standard MIME type describing the format of the configuration content. For
// more information, see Content-Type (<https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17>).
//
// This member is required.
ContentType *[string](/builtin#string)
// A description of the configuration.
Description *[string](/builtin#string)
// An optional locking token used to prevent race conditions from overwriting
// configuration updates when creating a new version. To ensure your data is not
// overwritten when creating multiple hosted configuration versions in rapid
// succession, specify the version number of the latest hosted configuration
// version.
LatestVersionNumber *[int32](/builtin#int32)
// An optional, user-defined label for the AppConfig hosted configuration version.
// This value must contain at least one non-numeric character. For example,
// "v2.2.0".
VersionLabel *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [CreateHostedConfigurationVersionOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_CreateHostedConfigurationVersion.go#L76) [¶](#CreateHostedConfigurationVersionOutput)
```
type CreateHostedConfigurationVersionOutput struct {
// The application ID.
ApplicationId *[string](/builtin#string)
// The configuration profile ID.
ConfigurationProfileId *[string](/builtin#string)
// The content of the configuration or the configuration data.
Content [][byte](/builtin#byte)
// A standard MIME type describing the format of the configuration content. For
// more information, see Content-Type (<https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17>).
ContentType *[string](/builtin#string)
// A description of the configuration.
Description *[string](/builtin#string)
// The Amazon Resource Name of the Key Management Service key that was used to
// encrypt this specific version of the configuration data in the AppConfig hosted
// configuration store.
KmsKeyArn *[string](/builtin#string)
// A user-defined label for an AppConfig hosted configuration version.
VersionLabel *[string](/builtin#string)
// The configuration version.
VersionNumber [int32](/builtin#int32)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
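A sketch of writing a new version of JSON configuration data to the hosted configuration store, using LatestVersionNumber as the optimistic-locking token described above; the client is assumed to exist and the content, label, and version number are illustrative:
```
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)

// createHostedVersion stores a new hosted configuration version and returns its number.
func createHostedVersion(ctx context.Context, client *appconfig.Client, appID, profileID string) (int32, error) {
	out, err := client.CreateHostedConfigurationVersion(ctx, &appconfig.CreateHostedConfigurationVersionInput{
		ApplicationId:          aws.String(appID),
		ConfigurationProfileId: aws.String(profileID),
		Content:                []byte(`{"maxItemsPerPage": 50}`),
		ContentType:            aws.String("application/json"),
		LatestVersionNumber:    aws.Int32(3), // only succeed if version 3 is still the latest
		VersionLabel:           aws.String("v1.2.0"),
	})
	if err != nil {
		return 0, err
	}
	return out.VersionNumber, nil
}
```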
####
type [DeleteApplicationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteApplication.go#L35) [¶](#DeleteApplicationInput)
```
type DeleteApplicationInput struct {
// The ID of the application to delete.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DeleteApplicationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteApplication.go#L45) [¶](#DeleteApplicationOutput)
```
type DeleteApplicationOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeleteConfigurationProfileInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteConfigurationProfile.go#L35) [¶](#DeleteConfigurationProfileInput)
```
type DeleteConfigurationProfileInput struct {
// The application ID that includes the configuration profile you want to delete.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The ID of the configuration profile you want to delete.
//
// This member is required.
ConfigurationProfileId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DeleteConfigurationProfileOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteConfigurationProfile.go#L50) [¶](#DeleteConfigurationProfileOutput)
```
type DeleteConfigurationProfileOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeleteDeploymentStrategyInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteDeploymentStrategy.go#L35) [¶](#DeleteDeploymentStrategyInput)
```
type DeleteDeploymentStrategyInput struct {
// The ID of the deployment strategy you want to delete.
//
// This member is required.
DeploymentStrategyId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DeleteDeploymentStrategyOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteDeploymentStrategy.go#L45) [¶](#DeleteDeploymentStrategyOutput)
```
type DeleteDeploymentStrategyOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeleteEnvironmentInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteEnvironment.go#L35) [¶](#DeleteEnvironmentInput)
```
type DeleteEnvironmentInput struct {
// The application ID that includes the environment that you want to delete.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The ID of the environment that you want to delete.
//
// This member is required.
EnvironmentId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DeleteEnvironmentOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteEnvironment.go#L50) [¶](#DeleteEnvironmentOutput)
```
type DeleteEnvironmentOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeleteExtensionAssociationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteExtensionAssociation.go#L35) [¶](#DeleteExtensionAssociationInput)
added in v1.13.0
```
type DeleteExtensionAssociationInput struct {
// The ID of the extension association to delete.
//
// This member is required.
ExtensionAssociationId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DeleteExtensionAssociationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteExtensionAssociation.go#L45) [¶](#DeleteExtensionAssociationOutput)
added in v1.13.0
```
type DeleteExtensionAssociationOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeleteExtensionInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteExtension.go#L35) [¶](#DeleteExtensionInput)
added in v1.13.0
```
type DeleteExtensionInput struct {
// The name, ID, or Amazon Resource Name (ARN) of the extension you want to delete.
//
// This member is required.
ExtensionIdentifier *[string](/builtin#string)
// A specific version of an extension to delete. If omitted, the highest version
// is deleted.
VersionNumber *[int32](/builtin#int32)
// contains filtered or unexported fields
}
```
####
type [DeleteExtensionOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteExtension.go#L49) [¶](#DeleteExtensionOutput)
added in v1.13.0
```
type DeleteExtensionOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeleteHostedConfigurationVersionInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteHostedConfigurationVersion.go#L35) [¶](#DeleteHostedConfigurationVersionInput)
```
type DeleteHostedConfigurationVersionInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The configuration profile ID.
//
// This member is required.
ConfigurationProfileId *[string](/builtin#string)
// The version number to delete.
//
// This member is required.
VersionNumber [int32](/builtin#int32)
// contains filtered or unexported fields
}
```
####
type [DeleteHostedConfigurationVersionOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_DeleteHostedConfigurationVersion.go#L55) [¶](#DeleteHostedConfigurationVersionOutput)
```
type DeleteHostedConfigurationVersionOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [EndpointParameters](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L265) [¶](#EndpointParameters)
added in v1.18.0
```
type EndpointParameters struct {
// The AWS region used to dispatch the request.
//
// Parameter is required.
//
// AWS::Region
Region *[string](/builtin#string)
// When true, use the dual-stack endpoint. If the configured endpoint does not
// support dual-stack, dispatching the request MAY return an error.
//
// Defaults to false if no value is provided.
//
// AWS::UseDualStack
UseDualStack *[bool](/builtin#bool)
// When true, send this request to the FIPS-compliant regional endpoint. If the
// configured endpoint does not have a FIPS compliant endpoint, dispatching the
// request will return an error.
//
// Defaults to false if no value is provided.
//
// AWS::UseFIPS
UseFIPS *[bool](/builtin#bool)
// Override the endpoint used to send this request.
//
// Parameter is required.
//
// SDK::Endpoint
Endpoint *[string](/builtin#string)
}
```
EndpointParameters provides the parameters that influence how endpoints are resolved.
####
func (EndpointParameters) [ValidateRequired](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L303) [¶](#EndpointParameters.ValidateRequired)
added in v1.18.0
```
func (p [EndpointParameters](#EndpointParameters)) ValidateRequired() [error](/builtin#error)
```
ValidateRequired validates required parameters are set.
####
func (EndpointParameters) [WithDefaults](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L317) [¶](#EndpointParameters.WithDefaults)
added in v1.18.0
```
func (p [EndpointParameters](#EndpointParameters)) WithDefaults() [EndpointParameters](#EndpointParameters)
```
WithDefaults returns a shallow copy of EndpointParameters with default values applied to members where applicable.
####
type [EndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L26) [¶](#EndpointResolver)
```
type EndpointResolver interface {
ResolveEndpoint(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) ([aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), [error](/builtin#error))
}
```
EndpointResolver interface for resolving service endpoints.
####
func [EndpointResolverFromURL](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L51) [¶](#EndpointResolverFromURL)
added in v1.1.0
```
func EndpointResolverFromURL(url [string](/builtin#string), optFns ...func(*[aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint))) [EndpointResolver](#EndpointResolver)
```
EndpointResolverFromURL returns an EndpointResolver configured using the provided endpoint URL. By default, the resolved endpoint uses the client region as the signing region, and the endpoint source is set to EndpointSourceCustom. You can provide functional options to configure endpoint values for the resolved endpoint.
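A sketch of pointing the client at a custom endpoint (for example, a VPC interface endpoint) with EndpointResolverFromURL; the URL is a placeholder, and assigning the result to Options.EndpointResolver is assumed to be supported by this package version:
```
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// Route all requests through a custom endpoint instead of the default regional one.
	client := appconfig.NewFromConfig(cfg, func(o *appconfig.Options) {
		o.EndpointResolver = appconfig.EndpointResolverFromURL("https://appconfig.custom.example.com")
	})
	_ = client // use the client as usual
}
```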
####
type [EndpointResolverFunc](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L40) [¶](#EndpointResolverFunc)
```
type EndpointResolverFunc func(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) ([aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), [error](/builtin#error))
```
EndpointResolverFunc is a helper utility that wraps a function so it satisfies the EndpointResolver interface. This is useful when you want to add additional endpoint resolving logic, or stub out specific endpoints with custom values.
####
func (EndpointResolverFunc) [ResolveEndpoint](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L42) [¶](#EndpointResolverFunc.ResolveEndpoint)
```
func (fn [EndpointResolverFunc](#EndpointResolverFunc)) ResolveEndpoint(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) (endpoint [aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), err [error](/builtin#error))
```
####
type [EndpointResolverOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L23) [¶](#EndpointResolverOptions)
added in v0.29.0
```
type EndpointResolverOptions = [internalendpoints](/github.com/aws/aws-sdk-go-v2/service/[email protected]/internal/endpoints).[Options](/github.com/aws/aws-sdk-go-v2/service/[email protected]/internal/endpoints#Options)
```
EndpointResolverOptions is the service endpoint resolver options
####
type [EndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L329) [¶](#EndpointResolverV2)
added in v1.18.0
```
type EndpointResolverV2 interface {
// ResolveEndpoint attempts to resolve the endpoint with the provided options,
// returning the endpoint if found. Otherwise an error is returned.
ResolveEndpoint(ctx [context](/context).[Context](/context#Context), params [EndpointParameters](#EndpointParameters)) (
[smithyendpoints](/github.com/aws/smithy-go/endpoints).[Endpoint](/github.com/aws/smithy-go/endpoints#Endpoint), [error](/builtin#error),
)
}
```
EndpointResolverV2 provides the interface for resolving service endpoints.
####
func [NewDefaultEndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L340) [¶](#NewDefaultEndpointResolverV2)
added in v1.18.0
```
func NewDefaultEndpointResolverV2() [EndpointResolverV2](#EndpointResolverV2)
```
####
type [GetApplicationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetApplication.go#L34) [¶](#GetApplicationInput)
```
type GetApplicationInput struct {
// The ID of the application you want to get.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetApplicationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetApplication.go#L44) [¶](#GetApplicationOutput)
```
type GetApplicationOutput struct {
// The description of the application.
Description *[string](/builtin#string)
// The application ID.
Id *[string](/builtin#string)
// The application name.
Name *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetConfigurationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetConfiguration.go#L44) [¶](#GetConfigurationInput)
```
type GetConfigurationInput struct {
// The application to get. Specify either the application name or the application
// ID.
//
// This member is required.
Application *[string](/builtin#string)
// The clientId parameter in the following command is a unique, user-specified ID
// to identify the client for the configuration. This ID enables AppConfig to
// deploy the configuration in intervals, as defined in the deployment strategy.
//
// This member is required.
ClientId *[string](/builtin#string)
// The configuration to get. Specify either the configuration name or the
// configuration ID.
//
// This member is required.
Configuration *[string](/builtin#string)
// The environment to get. Specify either the environment name or the environment
// ID.
//
// This member is required.
Environment *[string](/builtin#string)
// The configuration version returned in the most recent GetConfiguration
// response. AppConfig uses the value of the ClientConfigurationVersion parameter
// to identify the configuration version on your clients. If you don’t send
// ClientConfigurationVersion with each call to GetConfiguration , your clients
// receive the current configuration. You are charged each time your clients
// receive a configuration. To avoid excess charges, we recommend you use the
// StartConfigurationSession (<https://docs.aws.amazon.com/appconfig/2019-10-09/APIReference/StartConfigurationSession.html>)
// and GetLatestConfiguration (<https://docs.aws.amazon.com/appconfig/2019-10-09/APIReference/GetLatestConfiguration.html>)
// APIs, which track the client configuration version on your behalf. If you choose
// to continue using GetConfiguration , we recommend that you include the
// ClientConfigurationVersion value with every call to GetConfiguration . The value
// to use for ClientConfigurationVersion comes from the ConfigurationVersion
// attribute returned by GetConfiguration when there is new or updated data, and
// should be saved for subsequent calls to GetConfiguration . For more information
// about working with configurations, see Retrieving the Configuration (<http://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-retrieving-the-configuration.html>)
// in the AppConfig User Guide.
ClientConfigurationVersion *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetConfigurationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetConfiguration.go#L92) [¶](#GetConfigurationOutput)
```
type GetConfigurationOutput struct {
// The configuration version.
ConfigurationVersion *[string](/builtin#string)
// The content of the configuration or the configuration data. The Content
// attribute only contains data if the system finds new or updated configuration
// data. If there is no new or updated data and ClientConfigurationVersion matches
// the version of the current configuration, AppConfig returns a 204 No Content
// HTTP response code and the Content value will be empty.
Content [][byte](/builtin#byte)
// A standard MIME type describing the format of the configuration content. For
// more information, see Content-Type (<http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17>)
// .
ContentType *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetConfigurationProfileInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetConfigurationProfile.go#L35) [¶](#GetConfigurationProfileInput)
```
type GetConfigurationProfileInput struct {
// The ID of the application that includes the configuration profile you want to
// get.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The ID of the configuration profile that you want to get.
//
// This member is required.
ConfigurationProfileId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetConfigurationProfileOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetConfigurationProfile.go#L51) [¶](#GetConfigurationProfileOutput)
```
type GetConfigurationProfileOutput struct {
// The application ID.
ApplicationId *[string](/builtin#string)
// The configuration profile description.
Description *[string](/builtin#string)
// The configuration profile ID.
Id *[string](/builtin#string)
// The Amazon Resource Name of the Key Management Service key to encrypt new
// configuration data versions in the AppConfig hosted configuration store. This
// attribute is only used for hosted configuration types. To encrypt data managed
// in other configuration stores, see the documentation for how to specify an KMS
// key for that particular service.
KmsKeyArn *[string](/builtin#string)
// The Key Management Service key identifier (key ID, key alias, or key ARN)
// provided when the resource was created or updated.
KmsKeyIdentifier *[string](/builtin#string)
// The URI location of the configuration.
LocationUri *[string](/builtin#string)
// The name of the configuration profile.
Name *[string](/builtin#string)
// The ARN of an IAM role with permission to access the configuration at the
// specified LocationUri .
RetrievalRoleArn *[string](/builtin#string)
// The type of configurations contained in the profile. AppConfig supports feature
// flags and freeform configurations. We recommend you create feature flag
// configurations to enable or disable new features and freeform configurations to
// distribute configurations to an application. When calling this API, enter one of
// the following values for Type:
// - AWS.AppConfig.FeatureFlags
// - AWS.Freeform
Type *[string](/builtin#string)
// A list of methods for validating the configuration.
Validators [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Validator](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Validator)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetDeploymentInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetDeployment.go#L36) [¶](#GetDeploymentInput)
```
type GetDeploymentInput struct {
// The ID of the application that includes the deployment you want to get.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The sequence number of the deployment.
//
// This member is required.
DeploymentNumber *[int32](/builtin#int32)
// The ID of the environment that includes the deployment you want to get.
//
// This member is required.
EnvironmentId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetDeploymentOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetDeployment.go#L56) [¶](#GetDeploymentOutput)
```
type GetDeploymentOutput struct {
// The ID of the application that was deployed.
ApplicationId *[string](/builtin#string)
// A list of extensions that were processed as part of the deployment. The list
// includes the extensions that were previously associated with the configuration
// profile, environment, or application when StartDeployment was called.
AppliedExtensions [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[AppliedExtension](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#AppliedExtension)
// The time the deployment completed.
CompletedAt *[time](/time).[Time](/time#Time)
// Information about the source location of the configuration.
ConfigurationLocationUri *[string](/builtin#string)
// The name of the configuration.
ConfigurationName *[string](/builtin#string)
// The ID of the configuration profile that was deployed.
ConfigurationProfileId *[string](/builtin#string)
// The configuration version that was deployed.
ConfigurationVersion *[string](/builtin#string)
// Total amount of time the deployment lasted.
DeploymentDurationInMinutes [int32](/builtin#int32)
// The sequence number of the deployment.
DeploymentNumber [int32](/builtin#int32)
// The ID of the deployment strategy that was deployed.
DeploymentStrategyId *[string](/builtin#string)
// The description of the deployment.
Description *[string](/builtin#string)
// The ID of the environment that was deployed.
EnvironmentId *[string](/builtin#string)
// A list containing all events related to a deployment. The most recent events
// are displayed first.
EventLog [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DeploymentEvent](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DeploymentEvent)
// The amount of time that AppConfig monitored for alarms before considering the
// deployment to be complete and no longer eligible for automatic rollback.
FinalBakeTimeInMinutes [int32](/builtin#int32)
// The percentage of targets to receive a deployed configuration during each
// interval.
GrowthFactor [float32](/builtin#float32)
// The algorithm used to define how the percentage grew over time.
GrowthType [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[GrowthType](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#GrowthType)
// The Amazon Resource Name of the Key Management Service key used to encrypt
// configuration data. You can encrypt secrets stored in Secrets Manager, Amazon
// Simple Storage Service (Amazon S3) objects encrypted with SSE-KMS, or secure
// string parameters stored in Amazon Web Services Systems Manager Parameter Store.
KmsKeyArn *[string](/builtin#string)
// The Key Management Service key identifier (key ID, key alias, or key ARN)
// provided when the resource was created or updated.
KmsKeyIdentifier *[string](/builtin#string)
// The percentage of targets for which the deployment is available.
PercentageComplete [float32](/builtin#float32)
// The time the deployment started.
StartedAt *[time](/time).[Time](/time#Time)
// The state of the deployment.
State [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DeploymentState](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DeploymentState)
// A user-defined label for an AppConfig hosted configuration version.
VersionLabel *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
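Along the same lines, a hedged sketch that fetches one deployment by its sequence number, reusing a client constructed as in the earlier example. printDeployment and the IDs are illustrative names only.
```
package example
import (
	"context"
	"fmt"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)
// printDeployment looks up a single deployment and prints a few of the
// output fields listed above (sequence number, state, progress).
func printDeployment(ctx context.Context, client *appconfig.Client) error {
	out, err := client.GetDeployment(ctx, &appconfig.GetDeploymentInput{
		ApplicationId:    aws.String("abc1234"), // placeholder application ID
		EnvironmentId:    aws.String("def5678"), // placeholder environment ID
		DeploymentNumber: aws.Int32(1),          // deployment sequence number
	})
	if err != nil {
		return err
	}
	fmt.Printf("deployment %d: state=%s, %.1f%% complete\n",
		out.DeploymentNumber, out.State, out.PercentageComplete)
	return nil
}
```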
####
type [GetDeploymentStrategyInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetDeploymentStrategy.go#L39) [¶](#GetDeploymentStrategyInput)
```
type GetDeploymentStrategyInput struct {
// The ID of the deployment strategy to get.
//
// This member is required.
DeploymentStrategyId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetDeploymentStrategyOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetDeploymentStrategy.go#L49) [¶](#GetDeploymentStrategyOutput)
```
type GetDeploymentStrategyOutput struct {
// Total amount of time the deployment lasted.
DeploymentDurationInMinutes [int32](/builtin#int32)
// The description of the deployment strategy.
Description *[string](/builtin#string)
// The amount of time that AppConfig monitored for alarms before considering the
// deployment to be complete and no longer eligible for automatic rollback.
FinalBakeTimeInMinutes [int32](/builtin#int32)
// The percentage of targets that received a deployed configuration during each
// interval.
GrowthFactor [float32](/builtin#float32)
// The algorithm used to define how the percentage grew over time.
GrowthType [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[GrowthType](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#GrowthType)
// The deployment strategy ID.
Id *[string](/builtin#string)
// The name of the deployment strategy.
Name *[string](/builtin#string)
// Save the deployment strategy to a Systems Manager (SSM) document.
ReplicateTo [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ReplicateTo](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ReplicateTo)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetEnvironmentInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetEnvironment.go#L40) [¶](#GetEnvironmentInput)
```
type GetEnvironmentInput struct {
// The ID of the application that includes the environment you want to get.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The ID of the environment that you want to get.
//
// This member is required.
EnvironmentId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetEnvironmentOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetEnvironment.go#L55) [¶](#GetEnvironmentOutput)
```
type GetEnvironmentOutput struct {
// The application ID.
ApplicationId *[string](/builtin#string)
// The description of the environment.
Description *[string](/builtin#string)
// The environment ID.
Id *[string](/builtin#string)
// Amazon CloudWatch alarms monitored during the deployment.
Monitors [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Monitor](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Monitor)
// The name of the environment.
Name *[string](/builtin#string)
// The state of the environment. An environment can be in one of the following
// states: READY_FOR_DEPLOYMENT, DEPLOYING, ROLLING_BACK, or ROLLED_BACK.
State [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[EnvironmentState](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#EnvironmentState)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
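A small illustrative helper that reads the environment state before starting a deployment. The IDs are placeholders, and the READY_FOR_DEPLOYMENT comparison assumes the types package exposes its usual EnvironmentStateReadyForDeployment constant.
```
package example
import (
	"context"
	"fmt"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
	"github.com/aws/aws-sdk-go-v2/service/appconfig/types"
)
// environmentReady reports whether the environment can accept a new deployment.
func environmentReady(ctx context.Context, client *appconfig.Client) (bool, error) {
	out, err := client.GetEnvironment(ctx, &appconfig.GetEnvironmentInput{
		ApplicationId: aws.String("abc1234"), // placeholder application ID
		EnvironmentId: aws.String("def5678"), // placeholder environment ID
	})
	if err != nil {
		return false, err
	}
	fmt.Printf("environment %s is %s\n", aws.ToString(out.Name), out.State)
	return out.State == types.EnvironmentStateReadyForDeployment, nil
}
```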
####
type [GetExtensionAssociationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetExtensionAssociation.go#L37) [¶](#GetExtensionAssociationInput)
added in v1.13.0
```
type GetExtensionAssociationInput struct {
// The extension association ID to get.
//
// This member is required.
ExtensionAssociationId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetExtensionAssociationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetExtensionAssociation.go#L47) [¶](#GetExtensionAssociationOutput)
added in v1.13.0
```
type GetExtensionAssociationOutput struct {
// The system-generated Amazon Resource Name (ARN) for the extension.
Arn *[string](/builtin#string)
// The ARN of the extension defined in the association.
ExtensionArn *[string](/builtin#string)
// The version number for the extension defined in the association.
ExtensionVersionNumber [int32](/builtin#int32)
// The system-generated ID for the association.
Id *[string](/builtin#string)
// The parameter names and values defined in the association.
Parameters map[[string](/builtin#string)][string](/builtin#string)
// The ARNs of applications, configuration profiles, or environments defined in
// the association.
ResourceArn *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetExtensionInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetExtension.go#L35) [¶](#GetExtensionInput)
added in v1.13.0
```
type GetExtensionInput struct {
// The name, the ID, or the Amazon Resource Name (ARN) of the extension.
//
// This member is required.
ExtensionIdentifier *[string](/builtin#string)
// The extension version number. If no version number was defined, AppConfig uses
// the highest version.
VersionNumber *[int32](/builtin#int32)
// contains filtered or unexported fields
}
```
####
type [GetExtensionOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetExtension.go#L49) [¶](#GetExtensionOutput)
added in v1.13.0
```
type GetExtensionOutput struct {
// The actions defined in the extension.
Actions map[[string](/builtin#string)][][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Action](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Action)
// The system-generated Amazon Resource Name (ARN) for the extension.
Arn *[string](/builtin#string)
// Information about the extension.
Description *[string](/builtin#string)
// The system-generated ID of the extension.
Id *[string](/builtin#string)
// The extension name.
Name *[string](/builtin#string)
// The parameters accepted by the extension. You specify parameter values when you
// associate the extension to an AppConfig resource by using the
// CreateExtensionAssociation API action. For Lambda extension actions, these
// parameters are included in the Lambda request object.
Parameters map[[string](/builtin#string)][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Parameter](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Parameter)
// The extension version number.
VersionNumber [int32](/builtin#int32)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
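A sketch of looking up an extension by name, ID, or ARN. VersionNumber is left unset so AppConfig resolves the highest version; describeExtension is an illustrative helper name.
```
package example
import (
	"context"
	"fmt"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)
// describeExtension fetches an extension and summarizes its action points.
func describeExtension(ctx context.Context, client *appconfig.Client, identifier string) error {
	out, err := client.GetExtension(ctx, &appconfig.GetExtensionInput{
		ExtensionIdentifier: aws.String(identifier),
		// VersionNumber omitted: AppConfig uses the highest version.
	})
	if err != nil {
		return err
	}
	fmt.Printf("extension %s (version %d) defines actions for %d action point(s)\n",
		aws.ToString(out.Name), out.VersionNumber, len(out.Actions))
	return nil
}
```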
####
type [GetHostedConfigurationVersionInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetHostedConfigurationVersion.go#L34) [¶](#GetHostedConfigurationVersionInput)
```
type GetHostedConfigurationVersionInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The configuration profile ID.
//
// This member is required.
ConfigurationProfileId *[string](/builtin#string)
// The version.
//
// This member is required.
VersionNumber [int32](/builtin#int32)
// contains filtered or unexported fields
}
```
####
type [GetHostedConfigurationVersionOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_GetHostedConfigurationVersion.go#L54) [¶](#GetHostedConfigurationVersionOutput)
```
type GetHostedConfigurationVersionOutput struct {
// The application ID.
ApplicationId *[string](/builtin#string)
// The configuration profile ID.
ConfigurationProfileId *[string](/builtin#string)
// The content of the configuration or the configuration data.
Content [][byte](/builtin#byte)
// A standard MIME type describing the format of the configuration content. For
// more information, see Content-Type
// (<https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17>).
ContentType *[string](/builtin#string)
// A description of the configuration.
Description *[string](/builtin#string)
// The Amazon Resource Name of the Key Management Service key that was used to
// encrypt this specific version of the configuration data in the AppConfig hosted
// configuration store.
KmsKeyArn *[string](/builtin#string)
// A user-defined label for an AppConfig hosted configuration version.
VersionLabel *[string](/builtin#string)
// The configuration version.
VersionNumber [int32](/builtin#int32)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
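A sketch that pulls one hosted configuration version and returns the raw Content bytes; how the bytes are parsed depends on ContentType. The IDs are placeholders.
```
package example
import (
	"context"
	"fmt"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)
// fetchHostedVersion retrieves one version from the hosted configuration store.
func fetchHostedVersion(ctx context.Context, client *appconfig.Client, version int32) ([]byte, error) {
	out, err := client.GetHostedConfigurationVersion(ctx, &appconfig.GetHostedConfigurationVersionInput{
		ApplicationId:          aws.String("abc1234"), // placeholder application ID
		ConfigurationProfileId: aws.String("def5678"), // placeholder profile ID
		VersionNumber:          version,
	})
	if err != nil {
		return nil, err
	}
	fmt.Printf("version %d, %s, %d bytes\n",
		out.VersionNumber, aws.ToString(out.ContentType), len(out.Content))
	return out.Content, nil
}
```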
####
type [HTTPClient](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_client.go#L177) [¶](#HTTPClient)
```
type HTTPClient interface {
Do(*[http](/net/http).[Request](/net/http#Request)) (*[http](/net/http).[Response](/net/http#Response), [error](/builtin#error))
}
```
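Because the interface is a single Do method, any wrapper around *http.Client satisfies it. The sketch below clones each request and stamps an illustrative tracing header on it; the type and header name are hypothetical.
```
package example
import (
	"net/http"
	"time"
)
// headerClient is an illustrative HTTPClient implementation: it clones each
// request, adds a header, and delegates to a plain *http.Client.
type headerClient struct {
	inner *http.Client
}
func (c *headerClient) Do(req *http.Request) (*http.Response, error) {
	r := req.Clone(req.Context())
	r.Header.Set("X-Example-Trace", "appconfig-docs") // hypothetical header
	return c.inner.Do(r)
}
func newHeaderClient() *headerClient {
	return &headerClient{inner: &http.Client{Timeout: 30 * time.Second}}
}
```
If the Options struct exposes the SDK's usual HTTPClient field, such a wrapper can be wired in with appconfig.NewFromConfig(cfg, func(o *appconfig.Options) { o.HTTPClient = newHeaderClient() }).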
####
type [HTTPSignerV4](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_client.go#L425) [¶](#HTTPSignerV4)
```
type HTTPSignerV4 interface {
SignHTTP(ctx [context](/context).[Context](/context#Context), credentials [aws](/github.com/aws/aws-sdk-go-v2/aws).[Credentials](/github.com/aws/aws-sdk-go-v2/aws#Credentials), r *[http](/net/http).[Request](/net/http#Request), payloadHash [string](/builtin#string), service [string](/builtin#string), region [string](/builtin#string), signingTime [time](/time).[Time](/time#Time), optFns ...func(*[v4](/github.com/aws/aws-sdk-go-v2/aws/signer/v4).[SignerOptions](/github.com/aws/aws-sdk-go-v2/aws/signer/v4#SignerOptions))) [error](/builtin#error)
}
```
####
type [ListApplicationsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListApplications.go#L140) [¶](#ListApplicationsAPIClient)
added in v0.30.0
```
type ListApplicationsAPIClient interface {
ListApplications([context](/context).[Context](/context#Context), *[ListApplicationsInput](#ListApplicationsInput), ...func(*[Options](#Options))) (*[ListApplicationsOutput](#ListApplicationsOutput), [error](/builtin#error))
}
```
ListApplicationsAPIClient is a client that implements the ListApplications operation.
####
type [ListApplicationsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListApplications.go#L35) [¶](#ListApplicationsInput)
```
type ListApplicationsInput struct {
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
MaxResults *[int32](/builtin#int32)
// A token to start the list. NextToken is a pagination token generated by
// AppConfig to describe what page the previous List call ended on. For the first
// List request, the nextToken should not be set. On subsequent calls, the
// nextToken parameter should be set to the previous response's nextToken value.
// Use this token to get the next set of results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListApplicationsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListApplications.go#L51) [¶](#ListApplicationsOutput)
```
type ListApplicationsOutput struct {
// The elements from this collection.
Items [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Application](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Application)
// The token for the next set of items to return. Use this token to get the next
// set of results.
NextToken *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListApplicationsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListApplications.go#L158) [¶](#ListApplicationsPaginator)
added in v0.30.0
```
type ListApplicationsPaginator struct {
// contains filtered or unexported fields
}
```
ListApplicationsPaginator is a paginator for ListApplications
####
func [NewListApplicationsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListApplications.go#L167) [¶](#NewListApplicationsPaginator)
added in v0.30.0
```
func NewListApplicationsPaginator(client [ListApplicationsAPIClient](#ListApplicationsAPIClient), params *[ListApplicationsInput](#ListApplicationsInput), optFns ...func(*[ListApplicationsPaginatorOptions](#ListApplicationsPaginatorOptions))) *[ListApplicationsPaginator](#ListApplicationsPaginator)
```
NewListApplicationsPaginator returns a new ListApplicationsPaginator
####
func (*ListApplicationsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListApplications.go#L191) [¶](#ListApplicationsPaginator.HasMorePages)
added in v0.30.0
```
func (p *[ListApplicationsPaginator](#ListApplicationsPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListApplicationsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListApplications.go#L196) [¶](#ListApplicationsPaginator.NextPage)
added in v0.30.0
```
func (p *[ListApplicationsPaginator](#ListApplicationsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListApplicationsOutput](#ListApplicationsOutput), [error](/builtin#error))
```
NextPage retrieves the next ListApplications page.
####
type [ListApplicationsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListApplications.go#L147) [¶](#ListApplicationsPaginatorOptions)
added in v0.30.0
```
type ListApplicationsPaginatorOptions struct {
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListApplicationsPaginatorOptions is the paginator options for ListApplications
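Putting the paginator pieces together, a hedged sketch of the standard loop: build the paginator, optionally cap the page size via the options, then call NextPage until HasMorePages reports false. listAllApplications is an illustrative helper name.
```
package example
import (
	"context"
	"fmt"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)
// listAllApplications walks every page of ListApplications and prints each
// application's ID and name.
func listAllApplications(ctx context.Context, client *appconfig.Client) error {
	p := appconfig.NewListApplicationsPaginator(client, &appconfig.ListApplicationsInput{},
		func(o *appconfig.ListApplicationsPaginatorOptions) {
			o.Limit = 25 // at most 25 items per page
		})
	for p.HasMorePages() {
		page, err := p.NextPage(ctx)
		if err != nil {
			return err
		}
		for _, app := range page.Items {
			fmt.Println(aws.ToString(app.Id), aws.ToString(app.Name))
		}
	}
	return nil
}
```
The same loop shape applies to every other paginator in this package; only the input type and the Items element type change.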
####
type [ListConfigurationProfilesAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListConfigurationProfiles.go#L148) [¶](#ListConfigurationProfilesAPIClient)
added in v0.30.0
```
type ListConfigurationProfilesAPIClient interface {
ListConfigurationProfiles([context](/context).[Context](/context#Context), *[ListConfigurationProfilesInput](#ListConfigurationProfilesInput), ...func(*[Options](#Options))) (*[ListConfigurationProfilesOutput](#ListConfigurationProfilesOutput), [error](/builtin#error))
}
```
ListConfigurationProfilesAPIClient is a client that implements the ListConfigurationProfiles operation.
####
type [ListConfigurationProfilesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListConfigurationProfiles.go#L35) [¶](#ListConfigurationProfilesInput)
```
type ListConfigurationProfilesInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
MaxResults *[int32](/builtin#int32)
// A token to start the list. Use this token to get the next set of results.
NextToken *[string](/builtin#string)
// A filter based on the type of configurations that the configuration profile
// contains. A configuration can be a feature flag or a freeform configuration.
Type *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListConfigurationProfilesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListConfigurationProfiles.go#L56) [¶](#ListConfigurationProfilesOutput)
```
type ListConfigurationProfilesOutput struct {
// The elements from this collection.
Items [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ConfigurationProfileSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ConfigurationProfileSummary)
// The token for the next set of items to return. Use this token to get the next
// set of results.
NextToken *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListConfigurationProfilesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListConfigurationProfiles.go#L167) [¶](#ListConfigurationProfilesPaginator)
added in v0.30.0
```
type ListConfigurationProfilesPaginator struct {
// contains filtered or unexported fields
}
```
ListConfigurationProfilesPaginator is a paginator for ListConfigurationProfiles
####
func [NewListConfigurationProfilesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListConfigurationProfiles.go#L177) [¶](#NewListConfigurationProfilesPaginator)
added in v0.30.0
```
func NewListConfigurationProfilesPaginator(client [ListConfigurationProfilesAPIClient](#ListConfigurationProfilesAPIClient), params *[ListConfigurationProfilesInput](#ListConfigurationProfilesInput), optFns ...func(*[ListConfigurationProfilesPaginatorOptions](#ListConfigurationProfilesPaginatorOptions))) *[ListConfigurationProfilesPaginator](#ListConfigurationProfilesPaginator)
```
NewListConfigurationProfilesPaginator returns a new ListConfigurationProfilesPaginator
####
func (*ListConfigurationProfilesPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListConfigurationProfiles.go#L201) [¶](#ListConfigurationProfilesPaginator.HasMorePages)
added in v0.30.0
```
func (p *[ListConfigurationProfilesPaginator](#ListConfigurationProfilesPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListConfigurationProfilesPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListConfigurationProfiles.go#L206) [¶](#ListConfigurationProfilesPaginator.NextPage)
added in v0.30.0
```
func (p *[ListConfigurationProfilesPaginator](#ListConfigurationProfilesPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListConfigurationProfilesOutput](#ListConfigurationProfilesOutput), [error](/builtin#error))
```
NextPage retrieves the next ListConfigurationProfiles page.
####
type [ListConfigurationProfilesPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListConfigurationProfiles.go#L156) [¶](#ListConfigurationProfilesPaginatorOptions)
added in v0.30.0
```
type ListConfigurationProfilesPaginatorOptions struct {
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListConfigurationProfilesPaginatorOptions is the paginator options for ListConfigurationProfiles
####
type [ListDeploymentStrategiesAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeploymentStrategies.go#L136) [¶](#ListDeploymentStrategiesAPIClient)
added in v0.30.0
```
type ListDeploymentStrategiesAPIClient interface {
ListDeploymentStrategies([context](/context).[Context](/context#Context), *[ListDeploymentStrategiesInput](#ListDeploymentStrategiesInput), ...func(*[Options](#Options))) (*[ListDeploymentStrategiesOutput](#ListDeploymentStrategiesOutput), [error](/builtin#error))
}
```
ListDeploymentStrategiesAPIClient is a client that implements the ListDeploymentStrategies operation.
####
type [ListDeploymentStrategiesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeploymentStrategies.go#L35) [¶](#ListDeploymentStrategiesInput)
```
type ListDeploymentStrategiesInput struct {
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
MaxResults *[int32](/builtin#int32)
// A token to start the list. Use this token to get the next set of results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListDeploymentStrategiesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeploymentStrategies.go#L47) [¶](#ListDeploymentStrategiesOutput)
```
type ListDeploymentStrategiesOutput struct {
// The elements from this collection.
Items [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DeploymentStrategy](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DeploymentStrategy)
// The token for the next set of items to return. Use this token to get the next
// set of results.
NextToken *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListDeploymentStrategiesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeploymentStrategies.go#L155) [¶](#ListDeploymentStrategiesPaginator)
added in v0.30.0
```
type ListDeploymentStrategiesPaginator struct {
// contains filtered or unexported fields
}
```
ListDeploymentStrategiesPaginator is a paginator for ListDeploymentStrategies
####
func [NewListDeploymentStrategiesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeploymentStrategies.go#L165) [¶](#NewListDeploymentStrategiesPaginator)
added in v0.30.0
```
func NewListDeploymentStrategiesPaginator(client [ListDeploymentStrategiesAPIClient](#ListDeploymentStrategiesAPIClient), params *[ListDeploymentStrategiesInput](#ListDeploymentStrategiesInput), optFns ...func(*[ListDeploymentStrategiesPaginatorOptions](#ListDeploymentStrategiesPaginatorOptions))) *[ListDeploymentStrategiesPaginator](#ListDeploymentStrategiesPaginator)
```
NewListDeploymentStrategiesPaginator returns a new ListDeploymentStrategiesPaginator
####
func (*ListDeploymentStrategiesPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeploymentStrategies.go#L189) [¶](#ListDeploymentStrategiesPaginator.HasMorePages)
added in v0.30.0
```
func (p *[ListDeploymentStrategiesPaginator](#ListDeploymentStrategiesPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListDeploymentStrategiesPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeploymentStrategies.go#L194) [¶](#ListDeploymentStrategiesPaginator.NextPage)
added in v0.30.0
```
func (p *[ListDeploymentStrategiesPaginator](#ListDeploymentStrategiesPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListDeploymentStrategiesOutput](#ListDeploymentStrategiesOutput), [error](/builtin#error))
```
NextPage retrieves the next ListDeploymentStrategies page.
####
type [ListDeploymentStrategiesPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeploymentStrategies.go#L144) [¶](#ListDeploymentStrategiesPaginatorOptions)
added in v0.30.0
```
type ListDeploymentStrategiesPaginatorOptions struct {
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListDeploymentStrategiesPaginatorOptions is the paginator options for ListDeploymentStrategies
####
type [ListDeploymentsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeployments.go#L153) [¶](#ListDeploymentsAPIClient)
added in v0.30.0
```
type ListDeploymentsAPIClient interface {
ListDeployments([context](/context).[Context](/context#Context), *[ListDeploymentsInput](#ListDeploymentsInput), ...func(*[Options](#Options))) (*[ListDeploymentsOutput](#ListDeploymentsOutput), [error](/builtin#error))
}
```
ListDeploymentsAPIClient is a client that implements the ListDeployments operation.
####
type [ListDeploymentsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeployments.go#L35) [¶](#ListDeploymentsInput)
```
type ListDeploymentsInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The environment ID.
//
// This member is required.
EnvironmentId *[string](/builtin#string)
// The maximum number of items that may be returned for this call. If there are
// items that have not yet been returned, the response will include a non-null
// NextToken that you can provide in a subsequent call to get the next set of
// results.
MaxResults *[int32](/builtin#int32)
// The token returned by a prior call to this operation indicating the next set of
// results to be returned. If not specified, the operation will return the first
// set of results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListDeploymentsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeployments.go#L61) [¶](#ListDeploymentsOutput)
```
type ListDeploymentsOutput struct {
// The elements from this collection.
Items [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DeploymentSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DeploymentSummary)
// The token for the next set of items to return. Use this token to get the next
// set of results.
NextToken *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListDeploymentsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeployments.go#L173) [¶](#ListDeploymentsPaginator)
added in v0.30.0
```
type ListDeploymentsPaginator struct {
// contains filtered or unexported fields
}
```
ListDeploymentsPaginator is a paginator for ListDeployments
####
func [NewListDeploymentsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeployments.go#L182) [¶](#NewListDeploymentsPaginator)
added in v0.30.0
```
func NewListDeploymentsPaginator(client [ListDeploymentsAPIClient](#ListDeploymentsAPIClient), params *[ListDeploymentsInput](#ListDeploymentsInput), optFns ...func(*[ListDeploymentsPaginatorOptions](#ListDeploymentsPaginatorOptions))) *[ListDeploymentsPaginator](#ListDeploymentsPaginator)
```
NewListDeploymentsPaginator returns a new ListDeploymentsPaginator
####
func (*ListDeploymentsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeployments.go#L206) [¶](#ListDeploymentsPaginator.HasMorePages)
added in v0.30.0
```
func (p *[ListDeploymentsPaginator](#ListDeploymentsPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListDeploymentsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeployments.go#L211) [¶](#ListDeploymentsPaginator.NextPage)
added in v0.30.0
```
func (p *[ListDeploymentsPaginator](#ListDeploymentsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListDeploymentsOutput](#ListDeploymentsOutput), [error](/builtin#error))
```
NextPage retrieves the next ListDeployments page.
####
type [ListDeploymentsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListDeployments.go#L160) [¶](#ListDeploymentsPaginatorOptions)
added in v0.30.0
```
type ListDeploymentsPaginatorOptions struct {
// The maximum number of items that may be returned for this call. If there are
// items that have not yet been returned, the response will include a non-null
// NextToken that you can provide in a subsequent call to get the next set of
// results.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListDeploymentsPaginatorOptions is the paginator options for ListDeployments
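The same loop over the per-environment deployment history; here the input additionally carries the required application and environment IDs (placeholders below), and the DeploymentNumber and State fields printed are assumed from types.DeploymentSummary.
```
package example
import (
	"context"
	"fmt"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)
// listDeploymentHistory pages through all deployments for one environment.
func listDeploymentHistory(ctx context.Context, client *appconfig.Client) error {
	p := appconfig.NewListDeploymentsPaginator(client, &appconfig.ListDeploymentsInput{
		ApplicationId: aws.String("abc1234"), // placeholder application ID
		EnvironmentId: aws.String("def5678"), // placeholder environment ID
	})
	for p.HasMorePages() {
		page, err := p.NextPage(ctx)
		if err != nil {
			return err
		}
		for _, d := range page.Items {
			fmt.Printf("deployment #%d state=%s\n", d.DeploymentNumber, d.State)
		}
	}
	return nil
}
```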
####
type [ListEnvironmentsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListEnvironments.go#L144) [¶](#ListEnvironmentsAPIClient)
added in v0.30.0
```
type ListEnvironmentsAPIClient interface {
ListEnvironments([context](/context).[Context](/context#Context), *[ListEnvironmentsInput](#ListEnvironmentsInput), ...func(*[Options](#Options))) (*[ListEnvironmentsOutput](#ListEnvironmentsOutput), [error](/builtin#error))
}
```
ListEnvironmentsAPIClient is a client that implements the ListEnvironments operation.
####
type [ListEnvironmentsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListEnvironments.go#L35) [¶](#ListEnvironmentsInput)
```
type ListEnvironmentsInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
MaxResults *[int32](/builtin#int32)
// A token to start the list. Use this token to get the next set of results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListEnvironmentsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListEnvironments.go#L52) [¶](#ListEnvironmentsOutput)
```
type ListEnvironmentsOutput struct {
// The elements from this collection.
Items [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Environment](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Environment)
// The token for the next set of items to return. Use this token to get the next
// set of results.
NextToken *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListEnvironmentsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListEnvironments.go#L162) [¶](#ListEnvironmentsPaginator)
added in v0.30.0
```
type ListEnvironmentsPaginator struct {
// contains filtered or unexported fields
}
```
ListEnvironmentsPaginator is a paginator for ListEnvironments
####
func [NewListEnvironmentsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListEnvironments.go#L171) [¶](#NewListEnvironmentsPaginator)
added in v0.30.0
```
func NewListEnvironmentsPaginator(client [ListEnvironmentsAPIClient](#ListEnvironmentsAPIClient), params *[ListEnvironmentsInput](#ListEnvironmentsInput), optFns ...func(*[ListEnvironmentsPaginatorOptions](#ListEnvironmentsPaginatorOptions))) *[ListEnvironmentsPaginator](#ListEnvironmentsPaginator)
```
NewListEnvironmentsPaginator returns a new ListEnvironmentsPaginator
####
func (*ListEnvironmentsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListEnvironments.go#L195) [¶](#ListEnvironmentsPaginator.HasMorePages)
added in v0.30.0
```
func (p *[ListEnvironmentsPaginator](#ListEnvironmentsPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListEnvironmentsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListEnvironments.go#L200) [¶](#ListEnvironmentsPaginator.NextPage)
added in v0.30.0
```
func (p *[ListEnvironmentsPaginator](#ListEnvironmentsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListEnvironmentsOutput](#ListEnvironmentsOutput), [error](/builtin#error))
```
NextPage retrieves the next ListEnvironments page.
####
type [ListEnvironmentsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListEnvironments.go#L151) [¶](#ListEnvironmentsPaginatorOptions)
added in v0.30.0
```
type ListEnvironmentsPaginatorOptions struct {
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListEnvironmentsPaginatorOptions is the paginator options for ListEnvironments
####
type [ListExtensionAssociationsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensionAssociations.go#L149) [¶](#ListExtensionAssociationsAPIClient)
added in v1.13.0
```
type ListExtensionAssociationsAPIClient interface {
ListExtensionAssociations([context](/context).[Context](/context#Context), *[ListExtensionAssociationsInput](#ListExtensionAssociationsInput), ...func(*[Options](#Options))) (*[ListExtensionAssociationsOutput](#ListExtensionAssociationsOutput), [error](/builtin#error))
}
```
ListExtensionAssociationsAPIClient is a client that implements the ListExtensionAssociations operation.
####
type [ListExtensionAssociationsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensionAssociations.go#L37) [¶](#ListExtensionAssociationsInput)
added in v1.13.0
```
type ListExtensionAssociationsInput struct {
// The name, the ID, or the Amazon Resource Name (ARN) of the extension.
ExtensionIdentifier *[string](/builtin#string)
// The version number for the extension defined in the association.
ExtensionVersionNumber *[int32](/builtin#int32)
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
MaxResults *[int32](/builtin#int32)
// A token to start the list. Use this token to get the next set of results or
// pass null to get the first set of results.
NextToken *[string](/builtin#string)
// The ARN of an application, configuration profile, or environment.
ResourceIdentifier *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListExtensionAssociationsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensionAssociations.go#L59) [¶](#ListExtensionAssociationsOutput)
added in v1.13.0
```
type ListExtensionAssociationsOutput struct {
// The list of extension associations. Each item represents an extension
// association to an application, environment, or configuration profile.
Items [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ExtensionAssociationSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ExtensionAssociationSummary)
// The token for the next set of items to return. Use this token to get the next
// set of results.
NextToken *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListExtensionAssociationsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensionAssociations.go#L168) [¶](#ListExtensionAssociationsPaginator)
added in v1.13.0
```
type ListExtensionAssociationsPaginator struct {
// contains filtered or unexported fields
}
```
ListExtensionAssociationsPaginator is a paginator for ListExtensionAssociations
####
func [NewListExtensionAssociationsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensionAssociations.go#L178) [¶](#NewListExtensionAssociationsPaginator)
added in v1.13.0
```
func NewListExtensionAssociationsPaginator(client [ListExtensionAssociationsAPIClient](#ListExtensionAssociationsAPIClient), params *[ListExtensionAssociationsInput](#ListExtensionAssociationsInput), optFns ...func(*[ListExtensionAssociationsPaginatorOptions](#ListExtensionAssociationsPaginatorOptions))) *[ListExtensionAssociationsPaginator](#ListExtensionAssociationsPaginator)
```
NewListExtensionAssociationsPaginator returns a new ListExtensionAssociationsPaginator
####
func (*ListExtensionAssociationsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensionAssociations.go#L202) [¶](#ListExtensionAssociationsPaginator.HasMorePages)
added in v1.13.0
```
func (p *[ListExtensionAssociationsPaginator](#ListExtensionAssociationsPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListExtensionAssociationsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensionAssociations.go#L207) [¶](#ListExtensionAssociationsPaginator.NextPage)
added in v1.13.0
```
func (p *[ListExtensionAssociationsPaginator](#ListExtensionAssociationsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListExtensionAssociationsOutput](#ListExtensionAssociationsOutput), [error](/builtin#error))
```
NextPage retrieves the next ListExtensionAssociations page.
####
type [ListExtensionAssociationsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensionAssociations.go#L157) [¶](#ListExtensionAssociationsPaginatorOptions)
added in v1.13.0
```
type ListExtensionAssociationsPaginatorOptions struct {
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListExtensionAssociationsPaginatorOptions is the paginator options for ListExtensionAssociations
####
type [ListExtensionsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensions.go#L143) [¶](#ListExtensionsAPIClient)
added in v1.13.0
```
type ListExtensionsAPIClient interface {
ListExtensions([context](/context).[Context](/context#Context), *[ListExtensionsInput](#ListExtensionsInput), ...func(*[Options](#Options))) (*[ListExtensionsOutput](#ListExtensionsOutput), [error](/builtin#error))
}
```
ListExtensionsAPIClient is a client that implements the ListExtensions operation.
####
type [ListExtensionsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensions.go#L38) [¶](#ListExtensionsInput)
added in v1.13.0
```
type ListExtensionsInput struct {
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
MaxResults *[int32](/builtin#int32)
// The extension name.
Name *[string](/builtin#string)
// A token to start the list. Use this token to get the next set of results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListExtensionsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensions.go#L53) [¶](#ListExtensionsOutput)
added in v1.13.0
```
type ListExtensionsOutput struct {
// The list of available extensions. The list includes Amazon Web Services
// authored and user-created extensions.
Items [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ExtensionSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ExtensionSummary)
// The token for the next set of items to return. Use this token to get the next
// set of results.
NextToken *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListExtensionsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensions.go#L161) [¶](#ListExtensionsPaginator)
added in v1.13.0
```
type ListExtensionsPaginator struct {
// contains filtered or unexported fields
}
```
ListExtensionsPaginator is a paginator for ListExtensions
####
func [NewListExtensionsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensions.go#L170) [¶](#NewListExtensionsPaginator)
added in v1.13.0
```
func NewListExtensionsPaginator(client [ListExtensionsAPIClient](#ListExtensionsAPIClient), params *[ListExtensionsInput](#ListExtensionsInput), optFns ...func(*[ListExtensionsPaginatorOptions](#ListExtensionsPaginatorOptions))) *[ListExtensionsPaginator](#ListExtensionsPaginator)
```
NewListExtensionsPaginator returns a new ListExtensionsPaginator
####
func (*ListExtensionsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensions.go#L194) [¶](#ListExtensionsPaginator.HasMorePages)
added in v1.13.0
```
func (p *[ListExtensionsPaginator](#ListExtensionsPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListExtensionsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensions.go#L199) [¶](#ListExtensionsPaginator.NextPage)
added in v1.13.0
```
func (p *[ListExtensionsPaginator](#ListExtensionsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListExtensionsOutput](#ListExtensionsOutput), [error](/builtin#error))
```
NextPage retrieves the next ListExtensions page.
####
type [ListExtensionsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListExtensions.go#L150) [¶](#ListExtensionsPaginatorOptions)
added in v1.13.0
```
type ListExtensionsPaginatorOptions struct {
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListExtensionsPaginatorOptions is the paginator options for ListExtensions
####
type [ListHostedConfigurationVersionsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListHostedConfigurationVersions.go#L156) [¶](#ListHostedConfigurationVersionsAPIClient)
added in v0.30.0
```
type ListHostedConfigurationVersionsAPIClient interface {
ListHostedConfigurationVersions([context](/context).[Context](/context#Context), *[ListHostedConfigurationVersionsInput](#ListHostedConfigurationVersionsInput), ...func(*[Options](#Options))) (*[ListHostedConfigurationVersionsOutput](#ListHostedConfigurationVersionsOutput), [error](/builtin#error))
}
```
ListHostedConfigurationVersionsAPIClient is a client that implements the ListHostedConfigurationVersions operation.
####
type [ListHostedConfigurationVersionsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListHostedConfigurationVersions.go#L36) [¶](#ListHostedConfigurationVersionsInput)
```
type ListHostedConfigurationVersionsInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The configuration profile ID.
//
// This member is required.
ConfigurationProfileId *[string](/builtin#string)
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
MaxResults *[int32](/builtin#int32)
// A token to start the list. Use this token to get the next set of results.
NextToken *[string](/builtin#string)
// An optional filter that can be used to specify the version label of an
// AppConfig hosted configuration version. This parameter supports filtering by
// prefix using a wildcard, for example "v2*". If you don't specify an asterisk at
// the end of the value, only an exact match is returned.
VersionLabel *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListHostedConfigurationVersionsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListHostedConfigurationVersions.go#L64) [¶](#ListHostedConfigurationVersionsOutput)
```
type ListHostedConfigurationVersionsOutput struct {
// The elements from this collection.
Items [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[HostedConfigurationVersionSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#HostedConfigurationVersionSummary)
// The token for the next set of items to return. Use this token to get the next
// set of results.
NextToken *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListHostedConfigurationVersionsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListHostedConfigurationVersions.go#L176) [¶](#ListHostedConfigurationVersionsPaginator)
added in v0.30.0
```
type ListHostedConfigurationVersionsPaginator struct {
// contains filtered or unexported fields
}
```
ListHostedConfigurationVersionsPaginator is a paginator for ListHostedConfigurationVersions
####
func [NewListHostedConfigurationVersionsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListHostedConfigurationVersions.go#L186) [¶](#NewListHostedConfigurationVersionsPaginator)
added in v0.30.0
```
func NewListHostedConfigurationVersionsPaginator(client [ListHostedConfigurationVersionsAPIClient](#ListHostedConfigurationVersionsAPIClient), params *[ListHostedConfigurationVersionsInput](#ListHostedConfigurationVersionsInput), optFns ...func(*[ListHostedConfigurationVersionsPaginatorOptions](#ListHostedConfigurationVersionsPaginatorOptions))) *[ListHostedConfigurationVersionsPaginator](#ListHostedConfigurationVersionsPaginator)
```
NewListHostedConfigurationVersionsPaginator returns a new ListHostedConfigurationVersionsPaginator
####
func (*ListHostedConfigurationVersionsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListHostedConfigurationVersions.go#L210) [¶](#ListHostedConfigurationVersionsPaginator.HasMorePages)
added in v0.30.0
```
func (p *[ListHostedConfigurationVersionsPaginator](#ListHostedConfigurationVersionsPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListHostedConfigurationVersionsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListHostedConfigurationVersions.go#L215) [¶](#ListHostedConfigurationVersionsPaginator.NextPage)
added in v0.30.0
```
func (p *[ListHostedConfigurationVersionsPaginator](#ListHostedConfigurationVersionsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListHostedConfigurationVersionsOutput](#ListHostedConfigurationVersionsOutput), [error](/builtin#error))
```
NextPage retrieves the next ListHostedConfigurationVersions page.
####
type [ListHostedConfigurationVersionsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListHostedConfigurationVersions.go#L164) [¶](#ListHostedConfigurationVersionsPaginatorOptions)
added in v0.30.0
```
type ListHostedConfigurationVersionsPaginatorOptions struct {
// The maximum number of items to return for this call. The call also returns a
// token that you can specify in a subsequent call to get the next set of results.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListHostedConfigurationVersionsPaginatorOptions is the paginator options for ListHostedConfigurationVersions
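A sketch that combines this paginator with the VersionLabel prefix filter described above. The "v2*" pattern and the IDs are placeholders, and the VersionNumber and VersionLabel fields printed are assumed from types.HostedConfigurationVersionSummary.
```
package example
import (
	"context"
	"fmt"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)
// listV2HostedVersions lists hosted configuration versions whose label starts with "v2".
func listV2HostedVersions(ctx context.Context, client *appconfig.Client) error {
	p := appconfig.NewListHostedConfigurationVersionsPaginator(client,
		&appconfig.ListHostedConfigurationVersionsInput{
			ApplicationId:          aws.String("abc1234"), // placeholder application ID
			ConfigurationProfileId: aws.String("def5678"), // placeholder profile ID
			VersionLabel:           aws.String("v2*"),     // trailing * makes this a prefix match
		})
	for p.HasMorePages() {
		page, err := p.NextPage(ctx)
		if err != nil {
			return err
		}
		for _, v := range page.Items {
			fmt.Printf("version %d label=%s\n", v.VersionNumber, aws.ToString(v.VersionLabel))
		}
	}
	return nil
}
```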
####
type [ListTagsForResourceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListTagsForResource.go#L34) [¶](#ListTagsForResourceInput)
```
type ListTagsForResourceInput struct {
// The resource ARN.
//
// This member is required.
ResourceArn *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListTagsForResourceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ListTagsForResource.go#L44) [¶](#ListTagsForResourceOutput)
```
type ListTagsForResourceOutput struct {
// Metadata to assign to AppConfig resources. Tags help organize and categorize
// your AppConfig resources. Each tag consists of a key and an optional value, both
// of which you define.
Tags map[[string](/builtin#string)][string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [Options](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_client.go#L60) [¶](#Options)
```
type Options struct {
// Set of options to modify how an operation is invoked. These apply to all
// operations invoked for this client. Use functional options on operation call to
// modify this list for per operation behavior.
APIOptions []func(*[middleware](/github.com/aws/smithy-go/middleware).[Stack](/github.com/aws/smithy-go/middleware#Stack)) [error](/builtin#error)
// The optional application specific identifier appended to the User-Agent header.
AppID [string](/builtin#string)
// This endpoint will be given as input to an EndpointResolverV2. It is used for
// providing a custom base endpoint that is subject to modifications by the
// processing EndpointResolverV2.
BaseEndpoint *[string](/builtin#string)
// Configures the events that will be sent to the configured logger.
ClientLogMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[ClientLogMode](/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode)
// The credentials object to use when signing requests.
Credentials [aws](/github.com/aws/aws-sdk-go-v2/aws).[CredentialsProvider](/github.com/aws/aws-sdk-go-v2/aws#CredentialsProvider)
// The configuration DefaultsMode that the SDK should use when constructing the
// clients initial default settings.
DefaultsMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[DefaultsMode](/github.com/aws/aws-sdk-go-v2/aws#DefaultsMode)
// The endpoint options to be used when attempting to resolve an endpoint.
EndpointOptions [EndpointResolverOptions](#EndpointResolverOptions)
// The service endpoint resolver.
//
// Deprecated: EndpointResolver and WithEndpointResolver. Providing a
// value for this field will likely prevent you from using any endpoint-related
// service features released after the introduction of EndpointResolverV2 and
// BaseEndpoint. To migrate an EndpointResolver implementation that uses a custom
// endpoint, set the client option BaseEndpoint instead.
EndpointResolver [EndpointResolver](#EndpointResolver)
// Resolves the endpoint used for a particular service. This should be used over
// the deprecated EndpointResolver
EndpointResolverV2 [EndpointResolverV2](#EndpointResolverV2)
// Signature Version 4 (SigV4) Signer
HTTPSignerV4 [HTTPSignerV4](#HTTPSignerV4)
// The logger writer interface to write logging messages to.
Logger [logging](/github.com/aws/smithy-go/logging).[Logger](/github.com/aws/smithy-go/logging#Logger)
// The region to send requests to. (Required)
Region [string](/builtin#string)
// RetryMaxAttempts specifies the maximum number of attempts an API client will
// make when calling an operation that fails with a retryable error. A value of 0
// is ignored, and will not be used to configure the API client's default retryer
// or modify per-operation retry max attempts. When creating a new API client,
// this member will only be used if the Retryer Options member is nil. This value
// will be ignored if Retryer is not nil. If specified in an operation call's
// functional options with a value that is different from the constructed client's
// Options, the Client's Retryer will be wrapped to use the operation's specific
// RetryMaxAttempts value.
RetryMaxAttempts [int](/builtin#int)
// RetryMode specifies the retry mode the API client will be created with, if the
// Retryer option is not also specified. When creating a new API client, this
// member will only be used if the Retryer Options member is nil. This value will
// be ignored if Retryer is not nil. Per-operation overrides are currently not
// supported, but may be in the future.
RetryMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[RetryMode](/github.com/aws/aws-sdk-go-v2/aws#RetryMode)
// Retryer guides how HTTP requests should be retried in case of recoverable
// failures. When nil the API client will use a default retryer. The kind of
// default retry created by the API client can be changed with the RetryMode
// option.
Retryer [aws](/github.com/aws/aws-sdk-go-v2/aws).[Retryer](/github.com/aws/aws-sdk-go-v2/aws#Retryer)
// The RuntimeEnvironment configuration, only populated if the DefaultsMode is set
// to DefaultsModeAuto and is initialized using config.LoadDefaultConfig . You
// should not populate this structure programmatically, or rely on the values here
// within your applications.
RuntimeEnvironment [aws](/github.com/aws/aws-sdk-go-v2/aws).[RuntimeEnvironment](/github.com/aws/aws-sdk-go-v2/aws#RuntimeEnvironment)
// The HTTP client to invoke API calls with. Defaults to client's default HTTP
// implementation if nil.
HTTPClient [HTTPClient](#HTTPClient)
// contains filtered or unexported fields
}
```
####
func (Options) [Copy](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_client.go#L182) [¶](#Options.Copy)
```
func (o [Options](#Options)) Copy() [Options](#Options)
```
Copy creates a clone where the APIOptions list is deep copied.
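As an illustrative sketch (not part of the generated reference), client-wide settings in Options can be set through functional options passed to NewFromConfig, and operation methods accept the same ...func(*Options) variadic for per-call overrides. The region and retry values below are placeholders.
```
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)

func main() {
	ctx := context.TODO()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Client-level overrides applied to Options at construction time.
	client := appconfig.NewFromConfig(cfg, func(o *appconfig.Options) {
		o.Region = "us-west-2" // placeholder region
		o.RetryMaxAttempts = 5 // allow more retries for this client
	})

	// Per-operation override: the same functional-option form is accepted by
	// operation methods, wrapping the client's Retryer for this call only.
	if _, err := client.ListApplications(ctx, &appconfig.ListApplicationsInput{},
		func(o *appconfig.Options) {
			o.RetryMaxAttempts = 1 // fail fast for this single call
		}); err != nil {
		log.Fatal(err)
	}
}
```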
####
type [ResolveEndpoint](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L67) [¶](#ResolveEndpoint)
```
type ResolveEndpoint struct {
Resolver [EndpointResolver](#EndpointResolver)
Options [EndpointResolverOptions](#EndpointResolverOptions)
}
```
####
func (*ResolveEndpoint) [HandleSerialize](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L76) [¶](#ResolveEndpoint.HandleSerialize)
```
func (m *[ResolveEndpoint](#ResolveEndpoint)) HandleSerialize(ctx [context](/context).[Context](/context#Context), in [middleware](/github.com/aws/smithy-go/middleware).[SerializeInput](/github.com/aws/smithy-go/middleware#SerializeInput), next [middleware](/github.com/aws/smithy-go/middleware).[SerializeHandler](/github.com/aws/smithy-go/middleware#SerializeHandler)) (
out [middleware](/github.com/aws/smithy-go/middleware).[SerializeOutput](/github.com/aws/smithy-go/middleware#SerializeOutput), metadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata), err [error](/builtin#error),
)
```
####
func (*ResolveEndpoint) [ID](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/endpoints.go#L72) [¶](#ResolveEndpoint.ID)
```
func (*[ResolveEndpoint](#ResolveEndpoint)) ID() [string](/builtin#string)
```
####
type [StartDeploymentInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_StartDeployment.go#L36) [¶](#StartDeploymentInput)
```
type StartDeploymentInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The configuration profile ID.
//
// This member is required.
ConfigurationProfileId *[string](/builtin#string)
// The configuration version to deploy. If deploying an AppConfig hosted
// configuration version, you can specify either the version number or version
// label. For all other configurations, you must specify the version number.
//
// This member is required.
ConfigurationVersion *[string](/builtin#string)
// The deployment strategy ID.
//
// This member is required.
DeploymentStrategyId *[string](/builtin#string)
// The environment ID.
//
// This member is required.
EnvironmentId *[string](/builtin#string)
// A description of the deployment.
Description *[string](/builtin#string)
// The KMS key identifier (key ID, key alias, or key ARN). AppConfig uses this ID
// to encrypt the configuration data using a customer managed key.
KmsKeyIdentifier *[string](/builtin#string)
// Metadata to assign to the deployment. Tags help organize and categorize your
// AppConfig resources. Each tag consists of a key and an optional value, both of
// which you define.
Tags map[[string](/builtin#string)][string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [StartDeploymentOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_StartDeployment.go#L80) [¶](#StartDeploymentOutput)
```
type StartDeploymentOutput struct {
// The ID of the application that was deployed.
ApplicationId *[string](/builtin#string)
// A list of extensions that were processed as part of the deployment; that is,
// the extensions that were previously associated with the configuration profile,
// environment, or application when StartDeployment was called.
AppliedExtensions [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[AppliedExtension](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#AppliedExtension)
// The time the deployment completed.
CompletedAt *[time](/time).[Time](/time#Time)
// Information about the source location of the configuration.
ConfigurationLocationUri *[string](/builtin#string)
// The name of the configuration.
ConfigurationName *[string](/builtin#string)
// The ID of the configuration profile that was deployed.
ConfigurationProfileId *[string](/builtin#string)
// The configuration version that was deployed.
ConfigurationVersion *[string](/builtin#string)
// Total amount of time the deployment lasted.
DeploymentDurationInMinutes [int32](/builtin#int32)
// The sequence number of the deployment.
DeploymentNumber [int32](/builtin#int32)
// The ID of the deployment strategy that was deployed.
DeploymentStrategyId *[string](/builtin#string)
// The description of the deployment.
Description *[string](/builtin#string)
// The ID of the environment that was deployed.
EnvironmentId *[string](/builtin#string)
// A list containing all events related to a deployment. The most recent events
// are displayed first.
EventLog [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DeploymentEvent](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DeploymentEvent)
// The amount of time that AppConfig monitored for alarms before considering the
// deployment to be complete and no longer eligible for automatic rollback.
FinalBakeTimeInMinutes [int32](/builtin#int32)
// The percentage of targets to receive a deployed configuration during each
// interval.
GrowthFactor [float32](/builtin#float32)
// The algorithm used to define how percentage grew over time.
GrowthType [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[GrowthType](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#GrowthType)
// The Amazon Resource Name of the Key Management Service key used to encrypt
// configuration data. You can encrypt secrets stored in Secrets Manager, Amazon
// Simple Storage Service (Amazon S3) objects encrypted with SSE-KMS, or secure
// string parameters stored in Amazon Web Services Systems Manager Parameter Store.
KmsKeyArn *[string](/builtin#string)
// The Key Management Service key identifier (key ID, key alias, or key ARN)
// provided when the resource was created or updated.
KmsKeyIdentifier *[string](/builtin#string)
// The percentage of targets for which the deployment is available.
PercentageComplete [float32](/builtin#float32)
// The time the deployment started.
StartedAt *[time](/time).[Time](/time#Time)
// The state of the deployment.
State [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DeploymentState](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DeploymentState)
// A user-defined label for an AppConfig hosted configuration version.
VersionLabel *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
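A minimal StartDeployment call might look like the sketch below (illustrative only; every ID is a hypothetical placeholder). The five required members correspond to the fields marked "This member is required" in StartDeploymentInput above, and the output exposes the deployment number and state.
```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)

func main() {
	ctx := context.TODO()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := appconfig.NewFromConfig(cfg)

	// All IDs are hypothetical placeholders.
	out, err := client.StartDeployment(ctx, &appconfig.StartDeploymentInput{
		ApplicationId:          aws.String("abc1234"),
		ConfigurationProfileId: aws.String("def5678"),
		ConfigurationVersion:   aws.String("1"),
		DeploymentStrategyId:   aws.String("ghi9012"),
		EnvironmentId:          aws.String("jkl3456"),
		Description:            aws.String("example deployment"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("deployment number:", out.DeploymentNumber, "state:", out.State)
}
```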
####
type [StopDeploymentInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_StopDeployment.go#L38) [¶](#StopDeploymentInput)
```
type StopDeploymentInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The sequence number of the deployment.
//
// This member is required.
DeploymentNumber *[int32](/builtin#int32)
// The environment ID.
//
// This member is required.
EnvironmentId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [StopDeploymentOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_StopDeployment.go#L58) [¶](#StopDeploymentOutput)
```
type StopDeploymentOutput struct {
// The ID of the application that was deployed.
ApplicationId *[string](/builtin#string)
// A list of extensions that were processed as part of the deployment; that is,
// the extensions that were previously associated with the configuration profile,
// environment, or application when StartDeployment was called.
AppliedExtensions [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[AppliedExtension](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#AppliedExtension)
// The time the deployment completed.
CompletedAt *[time](/time).[Time](/time#Time)
// Information about the source location of the configuration.
ConfigurationLocationUri *[string](/builtin#string)
// The name of the configuration.
ConfigurationName *[string](/builtin#string)
// The ID of the configuration profile that was deployed.
ConfigurationProfileId *[string](/builtin#string)
// The configuration version that was deployed.
ConfigurationVersion *[string](/builtin#string)
// Total amount of time the deployment lasted.
DeploymentDurationInMinutes [int32](/builtin#int32)
// The sequence number of the deployment.
DeploymentNumber [int32](/builtin#int32)
// The ID of the deployment strategy that was deployed.
DeploymentStrategyId *[string](/builtin#string)
// The description of the deployment.
Description *[string](/builtin#string)
// The ID of the environment that was deployed.
EnvironmentId *[string](/builtin#string)
// A list containing all events related to a deployment. The most recent events
// are displayed first.
EventLog [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DeploymentEvent](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DeploymentEvent)
// The amount of time that AppConfig monitored for alarms before considering the
// deployment to be complete and no longer eligible for automatic rollback.
FinalBakeTimeInMinutes [int32](/builtin#int32)
// The percentage of targets to receive a deployed configuration during each
// interval.
GrowthFactor [float32](/builtin#float32)
// The algorithm used to define how percentage grew over time.
GrowthType [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[GrowthType](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#GrowthType)
// The Amazon Resource Name of the Key Management Service key used to encrypt
// configuration data. You can encrypt secrets stored in Secrets Manager, Amazon
// Simple Storage Service (Amazon S3) objects encrypted with SSE-KMS, or secure
// string parameters stored in Amazon Web Services Systems Manager Parameter Store.
KmsKeyArn *[string](/builtin#string)
// The Key Management Service key identifier (key ID, key alias, or key ARN)
// provided when the resource was created or updated.
KmsKeyIdentifier *[string](/builtin#string)
// The percentage of targets for which the deployment is available.
PercentageComplete [float32](/builtin#float32)
// The time the deployment started.
StartedAt *[time](/time).[Time](/time#Time)
// The state of the deployment.
State [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DeploymentState](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DeploymentState)
// A user-defined label for an AppConfig hosted configuration version.
VersionLabel *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [TagResourceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_TagResource.go#L36) [¶](#TagResourceInput)
```
type TagResourceInput struct {
// The ARN of the resource to tag.
//
// This member is required.
ResourceArn *[string](/builtin#string)
// The key-value string map. The valid character set is [a-zA-Z+-=._:/]. The tag
// key can be up to 128 characters and must not start with aws: . The tag value can
// be up to 256 characters.
//
// This member is required.
Tags map[[string](/builtin#string)][string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [TagResourceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_TagResource.go#L53) [¶](#TagResourceOutput)
```
type TagResourceOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [UntagResourceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UntagResource.go#L34) [¶](#UntagResourceInput)
```
type UntagResourceInput struct {
// The ARN of the resource for which to remove tags.
//
// This member is required.
ResourceArn *[string](/builtin#string)
// The tag keys to delete.
//
// This member is required.
TagKeys [][string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [UntagResourceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UntagResource.go#L49) [¶](#UntagResourceOutput)
```
type UntagResourceOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [UpdateApplicationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateApplication.go#L34) [¶](#UpdateApplicationInput)
```
type UpdateApplicationInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// A description of the application.
Description *[string](/builtin#string)
// The name of the application.
Name *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [UpdateApplicationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateApplication.go#L50) [¶](#UpdateApplicationOutput)
```
type UpdateApplicationOutput struct {
// The description of the application.
Description *[string](/builtin#string)
// The application ID.
Id *[string](/builtin#string)
// The application name.
Name *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [UpdateConfigurationProfileInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateConfigurationProfile.go#L35) [¶](#UpdateConfigurationProfileInput)
```
type UpdateConfigurationProfileInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The ID of the configuration profile.
//
// This member is required.
ConfigurationProfileId *[string](/builtin#string)
// A description of the configuration profile.
Description *[string](/builtin#string)
// The identifier for a Key Management Service key to encrypt new configuration
// data versions in the AppConfig hosted configuration store. This attribute is
// only used for hosted configuration types. The identifier can be an KMS key ID,
// alias, or the Amazon Resource Name (ARN) of the key ID or alias. To encrypt data
// managed in other configuration stores, see the documentation for how to specify
// an KMS key for that particular service.
KmsKeyIdentifier *[string](/builtin#string)
// The name of the configuration profile.
Name *[string](/builtin#string)
// The ARN of an IAM role with permission to access the configuration at the
// specified LocationUri .
RetrievalRoleArn *[string](/builtin#string)
// A list of methods for validating the configuration.
Validators [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Validator](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Validator)
// contains filtered or unexported fields
}
```
####
type [UpdateConfigurationProfileOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateConfigurationProfile.go#L71) [¶](#UpdateConfigurationProfileOutput)
```
type UpdateConfigurationProfileOutput struct {
// The application ID.
ApplicationId *[string](/builtin#string)
// The configuration profile description.
Description *[string](/builtin#string)
// The configuration profile ID.
Id *[string](/builtin#string)
// The Amazon Resource Name of the Key Management Service key to encrypt new
// configuration data versions in the AppConfig hosted configuration store. This
// attribute is only used for hosted configuration types. To encrypt data managed
// in other configuration stores, see the documentation for how to specify an KMS
// key for that particular service.
KmsKeyArn *[string](/builtin#string)
// The Key Management Service key identifier (key ID, key alias, or key ARN)
// provided when the resource was created or updated.
KmsKeyIdentifier *[string](/builtin#string)
// The URI location of the configuration.
LocationUri *[string](/builtin#string)
// The name of the configuration profile.
Name *[string](/builtin#string)
// The ARN of an IAM role with permission to access the configuration at the
// specified LocationUri .
RetrievalRoleArn *[string](/builtin#string)
// The type of configurations contained in the profile. AppConfig supports feature
// flags and freeform configurations. We recommend you create feature flag
// configurations to enable or disable new features and freeform configurations to
// distribute configurations to an application. When calling this API, enter one of
// the following values for Type : AWS.AppConfig.FeatureFlags
// AWS.Freeform
Type *[string](/builtin#string)
// A list of methods for validating the configuration.
Validators [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Validator](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Validator)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [UpdateDeploymentStrategyInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateDeploymentStrategy.go#L35) [¶](#UpdateDeploymentStrategyInput)
```
type UpdateDeploymentStrategyInput struct {
// The deployment strategy ID.
//
// This member is required.
DeploymentStrategyId *[string](/builtin#string)
// Total amount of time for a deployment to last.
DeploymentDurationInMinutes *[int32](/builtin#int32)
// A description of the deployment strategy.
Description *[string](/builtin#string)
// The amount of time that AppConfig monitors for alarms before considering the
// deployment to be complete and no longer eligible for automatic rollback.
FinalBakeTimeInMinutes *[int32](/builtin#int32)
// The percentage of targets to receive a deployed configuration during each
// interval.
GrowthFactor *[float32](/builtin#float32)
// The algorithm used to define how percentage grows over time. AppConfig
// supports the following growth types:
//
// Linear: For this type, AppConfig processes the deployment by increments of the
// growth factor evenly distributed over the deployment time. For example, a
// linear deployment that uses a growth factor of 20 initially makes the
// configuration available to 20 percent of the targets. After 1/5th of the
// deployment time has passed, the system updates the percentage to 40 percent.
// This continues until 100% of the targets are set to receive the deployed
// configuration.
//
// Exponential: For this type, AppConfig processes the deployment exponentially
// using the following formula: G*(2^N). In this formula, G is the growth factor
// specified by the user and N is the number of steps until the configuration is
// deployed to all targets. For example, if you specify a growth factor of 2, then
// the system rolls out the configuration as follows: 2*(2^0), 2*(2^1), 2*(2^2).
// Expressed numerically, the deployment rolls out as 2% of the targets, 4% of the
// targets, 8% of the targets, and continues until the configuration has been
// deployed to all targets.
GrowthType [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[GrowthType](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#GrowthType)
// contains filtered or unexported fields
}
```
####
type [UpdateDeploymentStrategyOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateDeploymentStrategy.go#L78) [¶](#UpdateDeploymentStrategyOutput)
```
type UpdateDeploymentStrategyOutput struct {
// Total amount of time the deployment lasted.
DeploymentDurationInMinutes [int32](/builtin#int32)
// The description of the deployment strategy.
Description *[string](/builtin#string)
// The amount of time that AppConfig monitored for alarms before considering the
// deployment to be complete and no longer eligible for automatic rollback.
FinalBakeTimeInMinutes [int32](/builtin#int32)
// The percentage of targets that received a deployed configuration during each
// interval.
GrowthFactor [float32](/builtin#float32)
// The algorithm used to define how percentage grew over time.
GrowthType [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[GrowthType](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#GrowthType)
// The deployment strategy ID.
Id *[string](/builtin#string)
// The name of the deployment strategy.
Name *[string](/builtin#string)
// Save the deployment strategy to a Systems Manager (SSM) document.
ReplicateTo [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ReplicateTo](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ReplicateTo)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
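To make the GrowthType discussion in UpdateDeploymentStrategyInput concrete, the sketch below (illustrative only; the strategy ID is a hypothetical placeholder) switches a strategy to exponential growth with G = 2, which rolls out to 2%, 4%, 8%, ... of targets as described above. It assumes the GrowthTypeExponential constant from the service's types package.
```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
	"github.com/aws/aws-sdk-go-v2/service/appconfig/types"
)

func main() {
	ctx := context.TODO()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := appconfig.NewFromConfig(cfg)

	// Exponential rollout with growth factor G = 2 over a 60-minute window:
	// 2% -> 4% -> 8% -> ... of targets, per the formula G*(2^N).
	out, err := client.UpdateDeploymentStrategy(ctx, &appconfig.UpdateDeploymentStrategyInput{
		DeploymentStrategyId:        aws.String("1225qzk"), // hypothetical strategy ID
		DeploymentDurationInMinutes: aws.Int32(60),
		FinalBakeTimeInMinutes:      aws.Int32(10),
		GrowthFactor:                aws.Float32(2),
		GrowthType:                  types.GrowthTypeExponential,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("updated strategy:", aws.ToString(out.Id), "growth type:", out.GrowthType)
}
```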
####
type [UpdateEnvironmentInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateEnvironment.go#L35) [¶](#UpdateEnvironmentInput)
```
type UpdateEnvironmentInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The environment ID.
//
// This member is required.
EnvironmentId *[string](/builtin#string)
// A description of the environment.
Description *[string](/builtin#string)
// Amazon CloudWatch alarms to monitor during the deployment process.
Monitors [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Monitor](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Monitor)
// The name of the environment.
Name *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [UpdateEnvironmentOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateEnvironment.go#L59) [¶](#UpdateEnvironmentOutput)
```
type UpdateEnvironmentOutput struct {
// The application ID.
ApplicationId *[string](/builtin#string)
// The description of the environment.
Description *[string](/builtin#string)
// The environment ID.
Id *[string](/builtin#string)
// Amazon CloudWatch alarms monitored during the deployment.
Monitors [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Monitor](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Monitor)
// The name of the environment.
Name *[string](/builtin#string)
// The state of the environment. An environment can be in one of the following
// states: READY_FOR_DEPLOYMENT , DEPLOYING , ROLLING_BACK , or ROLLED_BACK
State [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[EnvironmentState](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#EnvironmentState)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [UpdateExtensionAssociationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateExtensionAssociation.go#L36) [¶](#UpdateExtensionAssociationInput)
added in v1.13.0
```
type UpdateExtensionAssociationInput struct {
// The system-generated ID for the association.
//
// This member is required.
ExtensionAssociationId *[string](/builtin#string)
// The parameter names and values defined in the extension.
Parameters map[[string](/builtin#string)][string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [UpdateExtensionAssociationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateExtensionAssociation.go#L49) [¶](#UpdateExtensionAssociationOutput)
added in v1.13.0
```
type UpdateExtensionAssociationOutput struct {
// The system-generated Amazon Resource Name (ARN) for the extension.
Arn *[string](/builtin#string)
// The ARN of the extension defined in the association.
ExtensionArn *[string](/builtin#string)
// The version number for the extension defined in the association.
ExtensionVersionNumber [int32](/builtin#int32)
// The system-generated ID for the association.
Id *[string](/builtin#string)
// The parameter names and values defined in the association.
Parameters map[[string](/builtin#string)][string](/builtin#string)
// The ARNs of applications, configuration profiles, or environments defined in
// the association.
ResourceArn *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [UpdateExtensionInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateExtension.go#L37) [¶](#UpdateExtensionInput)
added in v1.13.0
```
type UpdateExtensionInput struct {
// The name, the ID, or the Amazon Resource Name (ARN) of the extension.
//
// This member is required.
ExtensionIdentifier *[string](/builtin#string)
// The actions defined in the extension.
Actions map[[string](/builtin#string)][][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Action](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Action)
// Information about the extension.
Description *[string](/builtin#string)
// One or more parameters for the actions called by the extension.
Parameters map[[string](/builtin#string)][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Parameter](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Parameter)
// The extension version number.
VersionNumber *[int32](/builtin#int32)
// contains filtered or unexported fields
}
```
####
type [UpdateExtensionOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_UpdateExtension.go#L59) [¶](#UpdateExtensionOutput)
added in v1.13.0
```
type UpdateExtensionOutput struct {
// The actions defined in the extension.
Actions map[[string](/builtin#string)][][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Action](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Action)
// The system-generated Amazon Resource Name (ARN) for the extension.
Arn *[string](/builtin#string)
// Information about the extension.
Description *[string](/builtin#string)
// The system-generated ID of the extension.
Id *[string](/builtin#string)
// The extension name.
Name *[string](/builtin#string)
// The parameters accepted by the extension. You specify parameter values when you
// associate the extension to an AppConfig resource by using the
// CreateExtensionAssociation API action. For Lambda extension actions, these
// parameters are included in the Lambda request object.
Parameters map[[string](/builtin#string)][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Parameter](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Parameter)
// The extension version number.
VersionNumber [int32](/builtin#int32)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ValidateConfigurationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ValidateConfiguration.go#L34) [¶](#ValidateConfigurationInput)
```
type ValidateConfigurationInput struct {
// The application ID.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The configuration profile ID.
//
// This member is required.
ConfigurationProfileId *[string](/builtin#string)
// The version of the configuration to validate.
//
// This member is required.
ConfigurationVersion *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ValidateConfigurationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/appconfig/v1.21.2/service/appconfig/api_op_ValidateConfiguration.go#L54) [¶](#ValidateConfigurationOutput)
```
type ValidateConfigurationOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
Package ‘psycModel’
October 14, 2022
Type Package
Title Integrated Toolkit for Psychological Analysis and Modeling in R
Version 0.4.1
Description A beginner-friendly R package for modeling in psychology or
related fields. It allows fitting models, plotting, checking goodness
of fit, and model assumption violations all in one place. It also
produces beautiful and easy-to-read output.
License GPL (>= 3)
URL https://github.com/jasonmoy28/psycModel
Depends R (>= 3.2)
Imports dplyr, ggplot2, glue, insight, lavaan, lifecycle, lme4,
lmerTest, parameters, patchwork, performance, psych, rlang (>=
0.1.2), stringr, tibble, tidyr, utils
Suggests correlation, covr, cowplot, fansi, ggrepel, GPArotation,
gridExtra, interactions, knitr, nFactors, nlme, pagedown,
qqplotr, rmarkdown, roxygen2, sandwich, see, semPlot, spelling,
testthat (>= 3.0.0), tidyselect
VignetteBuilder knitr
Config/testthat/edition 3
Encoding UTF-8
LazyData true
RoxygenNote 7.2.0
Language en-US
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0001-8795-3311>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-10-03 23:30:02 UTC
R topics documented:
anova_plot
cfa_groupwise
cfa_summary
compare_fit
cor_test
cronbach_alpha
descriptive_table
efa_summary
get_interaction_term
get_predict_df
glm_model
html_to_pdf
interaction_plot
knit_to_Rmd
label_name
lme_model
lme_multilevel_model_summary
lm_model
lm_model_summary
measurement_invariance
mediation_summary
model_summary
polynomial_regression_plot
popular
reliability_summary
simple_slope
three_way_interaction_plot
two_way_interaction_plot
anova_plot ANOVA Plot
Description
[Experimental]
Plot a categorical variable with a bar plot. Continuous moderators are plotted at ± 1 SD from the mean.
Usage
anova_plot(model, predictor = NULL, graph_label_name = NULL)
Arguments
model fitted model (usually lm or aov object). Variables must be converted to correct
data type before fitting the model. Specifically, continuous variables must be
converted to type numeric and categorical variables to type factor.
predictor predictor variable. Must be specified for a non-interaction plot and must not be
specified for an interaction plot.
graph_label_name
vector or function. Vector should be passed in the form of c(response_var,
predict_var1, predict_var2, ...). Function should be passed as a switch
function that returns the label based on the name passed.
Value
a ggplot object
Examples
# Main effect plot with 1 categorical variable
fit_1 = lavaan::HolzingerSwineford1939 %>%
dplyr::mutate(school = as.factor(school)) %>%
lm(data = ., grade ~ school)
anova_plot(fit_1,predictor = school)
# Interaction effect plot with 2 categorical variables
fit_2 = lavaan::HolzingerSwineford1939 %>%
dplyr::mutate(dplyr::across(c(school,sex),as.factor)) %>%
lm(data = ., grade ~ school*sex)
anova_plot(fit_2)
# Interaction effect plot with 1 categorical variable and 1 continuous variable
fit_3 = lavaan::HolzingerSwineford1939 %>%
dplyr::mutate(school = as.factor(school)) %>%
dplyr::mutate(ageyr = as.numeric(ageyr)) %>%
lm(data = ., grade ~ ageyr*school)
anova_plot(fit_3)
cfa_groupwise Confirmatory Factor Analysis (groupwise)
Description
[Stable]
This function runs N CFA models, where N = length(group), and reports the fit measures of the
CFA in each group. It is intended to help you get a better understanding of which group has
abnormal fit indices.
Usage
cfa_groupwise(data, ..., group, model = NULL, ordered = FALSE)
Arguments
data data frame
... CFA items. Support dplyr::select() syntax.
group character. group variable. Support dplyr::select() syntax.
model explicit lavaan model. Must be specified with model = lavaan_model_syntax.
[Experimental]
ordered logical. Default is FALSE. If it is set to TRUE, lavaan will treat the items as ordinal
variables and use DWLS instead of ML.
Details
All arguments must be explicitly specified. If not, they will all be treated as CFA items.
Value
a data.frame with group-wise CFA result
Examples
# The example is used as the illustration of the function output only.
# It does not imply the data is appropriate for the analysis.
cfa_groupwise(
data = lavaan::HolzingerSwineford1939,
group = "school",
x1:x3,
x4:x6,
x7:x9
)
cfa_summary Confirmatory Factor Analysis
Description
[Stable]
The function fits a CFA model using lavaan::cfa(). Users can fit single- and multiple-factor
CFA models, and it also supports multilevel CFA (by specifying the group). Users can fit the
model by passing the items using dplyr::select() syntax or an explicit lavaan model for more
versatile usage. All arguments (except the CFA items) must be explicitly named (e.g., model =
your-model; see the examples for what happens when they are not).
Usage
cfa_summary(
data,
...,
model = NULL,
group = NULL,
ordered = FALSE,
digits = 3,
model_covariance = TRUE,
model_variance = TRUE,
plot = TRUE,
group_partial = NULL,
streamline = FALSE,
quite = FALSE,
return_result = FALSE
)
Arguments
data data frame
... CFA items. Multi-factor CFA items should be separated by commas (as different
arguments). See below for examples. Support dplyr::select() syntax.
model explicit lavaan model. Must be specified with model = lavaan_model_syntax.
[Experimental]
group optional character. Used for multi-level CFA. The nesting variable for a multilevel
dataset (e.g., Country). Support dplyr::select() syntax.
ordered Default is FALSE. If it is set to TRUE, lavaan will treat the items as ordinal variables
and use DWLS instead of ML.
digits number of digits to round to
model_covariance
print model covariance. Default is TRUE
model_variance print model variance. Default is TRUE
plot print a path diagram. Default is TRUE
group_partial Items for partial equivalence. The form should be c(’DV =~ item1’, ’DV =~
item2’).
streamline print streamlined output
quite suppress printing output
return_result If it is set to TRUE, it will return the lavaan model
Details
Just as researchers have argued that a p value of 0.05 is not a good cut-off, researchers have also
argued that fit indices (more importantly, their cut-off criteria) are not completely representative
of the goodness of fit. Nonetheless, you are required to report them if you are publishing an article
anyway. I will summarize the generally recommended cut-off criteria for a CFA model below.
Researchers consider models with CFI (Bentler, 1990) > 0.95 to be an excellent fit (Hu & Bentler,
1999) and > 0.9 to be an acceptable fit. A model is considered an excellent fit if CFI > 0.95 (Hu &
Bentler, 1999), RMSEA < 0.06 (Hu & Bentler, 1999), TLI > 0.95, and SRMR < 0.08. The model is
considered an acceptable fit if CFI > 0.9 and RMSEA < 0.08. I need some time to find all the
relevant references, but this should be the general consensus.
Value
a lavaan object if return_result is TRUE
References
<NAME>., & <NAME>. (1999). Cutoff criteria for fit indexes in covariance structure analysis:
Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55. https://doi.org/10.1080/1070551990954011
Examples
# REMEMBER, YOU MUST NAME ALL ARGUMENTS EXCEPT THE CFA ITEMS ARGUMENT
# Fitting a multilevel single factor CFA model
fit <- cfa_summary(
data = lavaan::HolzingerSwineford1939,
x1:x3,
x4:x6,
x7:x9,
group = "sex",
model_variance = FALSE, # do not print the model_variance
model_covariance = FALSE # do not print the model_covariance
)
# Fitting a CFA model by passing explicit lavaan model (equivalent to the above model)
# Note in the below function how I added `model = ` in front of the lavaan model.
# Similarly, the same rule applies to all arguments (e.g., `ordered = FALSE` instead of just `FALSE`)
fit <- cfa_summary(
model = "visual =~ x1 + x2 + x3",
data = lavaan::HolzingerSwineford1939,
quite = TRUE # silence all output
)
## Not run:
# This will fail because I did not add `model = ` in front of the lavaan model.
# Therefore, you must add the tag in front of all arguments.
# For example, `return_result = 'model'` instead of just `'model'`
cfa_summary("visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9 ",
data = lavaan::HolzingerSwineford1939
)
## End(Not run)
compare_fit Comparison of Model Fit
Description
[Stable]
Compare the fit indices of models (see below for model support)
Usage
compare_fit(
...,
digits = 3,
quite = FALSE,
streamline = FALSE,
return_result = FALSE
)
Arguments
... model. If it is a lavaan object, it will try to compute the measurement invari-
ance. Other model types will be passed to performance::compare_performance().
digits number of digits to round to
quite suppress printing output
streamline print streamlined output
return_result If it is set to TRUE, it will return the compare fit data frame.
Value
a dataframe with fit indices and change in fit indices
Examples
# lm models
fit1 <- lm_model(
data = popular,
response_variable = popular,
predictor_var = c(sex, extrav)
)
fit2 <- lm_model(
data = popular,
response_variable = popular,
predictor_var = c(sex, extrav),
two_way_interaction_factor = c(sex, extrav)
)
compare_fit(fit1, fit2)
# see ?measurement_invariance for measurement invariance example
cor_test Correlation table
Description
[Stable]
This function uses the correlation::correlation() to generate the correlation table.
Usage
cor_test(
data,
cols,
...,
digits = 3,
method = "pearson",
p_adjust = "holm",
streamline = FALSE,
quite = FALSE,
return_result = FALSE
)
Arguments
data data frame
cols correlation items. Support dplyr::select() syntax.
... additional arguments passed to correlation::correlation(). See ?correlation::correlation.
Note that the returned data.frame from correlation::correlation() must contain r
and p (e.g., passing bayesian = TRUE will not work)
digits number of digits to round to
method Default is "pearson". Options are "kendall", "spearman","biserial", "polychoric",
"tetrachoric", "biweight", "distance", "percentage", "blomqvist", "hoeffding",
"gamma", "gaussian","shepherd", or "auto". See ?correlation::correlation for
detail
p_adjust Default is "holm". Options are "hochberg", "hommel", "bonferroni", "BH",
"BY", "fdr", "somers" or "none". See ?stats::p.adjust for more detail
streamline print streamlined output.
quite suppress printing output
return_result If it is set to TRUE, it will return the data frame of the correlation table
Value
a data.frame of the correlation table
Examples
cor_test(iris, where(is.numeric))
cronbach_alpha Cronbach alpha
Description
[Stable]
Computing the Cronbach alphas for multiple factors.
Usage
cronbach_alpha(
...,
data,
var_name,
group = NULL,
quite = FALSE,
return_result = FALSE
)
Arguments
... Items. Group the items of each latent factor using c() when computing Cronbach alpha
for 2+ factors (see example below)
data data.frame. Must specify
var_name character or a vector of characters. The order of var_name must be the same as
the order of the ...
group optional character. Specify this argument to compute Cronbach alpha for each
group separately
quite suppress printing output
return_result If it is set to TRUE, it will return a dataframe object
Value
a data.frame object if return_result is TRUE
Examples
cronbach_alpha(
data = lavaan::HolzingerSwineford1939,
var_name = c('Visual','Textual','Speed'),
c(x1,x2,x3), # one way to pass the items of a factor is by wrapping it with c()
x4:x6, # another way to pass the items is use tidyselect syntax
x7:x9)
descriptive_table Descriptive Statistics Table
Description
[Stable]
This function generates a table of descriptive statistics (mainly using psych::describe()) and/or a
correlation table. Users can export this to a csv file (optionally, using the file_path argument), open
the csv file with MS Excel, and then copy and paste the table into an MS Word table.
Usage
descriptive_table(
data,
cols,
...,
digits = 3,
descriptive_indicator = c("mean", "sd", "cor"),
file_path = NULL,
streamline = FALSE,
quite = FALSE,
return_result = FALSE
)
Arguments
data data.frame
cols column(s) need to be included in the table. Support dplyr::select() syntax.
... additional arguments passed to cor_test. See ?cor_test.
digits number of digit for the descriptive table
descriptive_indicator
Default is mean, sd, cor. Options are missing (missing value count), non_missing
(non-missing value count), cor (correlation table), n, mean, sd, median, trimmed
(trimmed mean), mad (median absolute deviation from the median),
min, max, range, skew, kurtosis, se (standard error)
file_path file path for export. The function will implicitly pass this argument to the
write.csv(file = file_path)
streamline print streamlined output
quite suppress printing output
return_result If it is set to TRUE, it will return the data frame of the descriptive table
Value
a data.frame of the descriptive table
Examples
descriptive_table(iris, cols = where(is.numeric)) # all numeric columns
descriptive_table(iris,
cols = where(is.numeric),
# get missing count, non-missing count, and mean & sd & correlation table
descriptive_indicator = c("missing", "non_missing", "mean", "sd", "cor")
)
efa_summary Exploratory Factor Analysis
Description
[Stable]
The function is used to fit an exploratory factor analysis model. It will first find the optimal number
of factors using parameters::n_factors. Once the optimal number of factors is determined, the
function will fit the model using psych::fa(). Optionally, you can request a post-hoc CFA model
based on the EFA model, which gives you more fit indices (e.g., CFI, RMSEA, TLI).
Usage
efa_summary(
data,
cols,
rotation = "varimax",
optimal_factor_method = FALSE,
efa_plot = TRUE,
digits = 3,
n_factor = NULL,
post_hoc_cfa = FALSE,
quite = FALSE,
streamline = FALSE,
return_result = FALSE
)
Arguments
data data.frame
cols columns. Support dplyr::select() syntax.
rotation the rotation to use in estimation. Default is ’varimax’. Options are ’none’, ’vari-
max’, ’quartimax’, ’promax’, ’oblimin’, or ’simplimax’
optimal_factor_method
Show a summary of the number of factors by optimization method (e.g., BIC,
VSS complexity, Velicer’s MAP)
efa_plot show the explained variance by number of factors plot. Default is TRUE.
digits number of digits to round to
n_factor number of factors for EFA. It will bypass the initial optimization algorithm and
fit the EFA model using this specified number of factors
post_hoc_cfa a CFA model based on the extracted factor
quite suppress printing output
streamline print streamlined output
return_result If it is set to TRUE (default is FALSE), it will return a fa object from psych
Value
a fa object from psych
Examples
efa_summary(lavaan::HolzingerSwineford1939, starts_with("x"), post_hoc_cfa = TRUE)
get_interaction_term get interaction term
Description
get interaction term
Usage
get_interaction_term(model)
Arguments
model model
Value
a list with predict vars names
get_predict_df get factor df to combine with mean_df
Description
get factor df to combine with mean_df
Usage
get_predict_df(data)
Arguments
data data
Value
factor_df
glm_model Generalized Linear Regression
Description
[Experimental]
Fit a generalized linear regression using glm(). This function is still in an early development stage.
Usage
glm_model(
data,
response_variable,
predictor_variable,
two_way_interaction_factor = NULL,
three_way_interaction_factor = NULL,
family,
quite = FALSE
)
Arguments
data data.frame
response_variable
response variable. Support dplyr::select() syntax.
predictor_variable
predictor variable. Support dplyr::select() syntax.
two_way_interaction_factor
two-way interaction factors. You need to pass 2+ factors. Support dplyr::select()
syntax.
three_way_interaction_factor
three-way interaction factor. You need to pass exactly 3 factors. Specifying
three-way interaction factors automatically includes all two-way interactions,
so please do not specify the two_way_interaction_factor argument. Support
dplyr::select() syntax.
family a GLM family. It will be passed to the family argument in glm(). See ?glm for
possible options.
quite suppress printing output
Value
an object class of glm representing the linear regression fit
Examples
fit <- glm_model(
response_variable = incidence,
predictor_variable = period,
family = "poisson", # or you can enter as poisson(link = 'log'),
data = lme4::cbpp
)
html_to_pdf Convert HTML to PDF
Description
[Experimental]
This is a helper function for knitting Rmd. Due to a technological limitation, the output cannot be
knitted to PDF directly from Rmd (the problem is with the latex engine printing unicode characters).
To bypass this problem, you will first need to knit to an html file, then use this function to convert
it to a PDF file.
Usage
html_to_pdf(file_path = NULL, dir = NULL, scale = 1, render_exist = FALSE)
Arguments
file_path file path to the HTML file (can be relative if you are in an R project)
dir file path to the directory of all HTML files (can be relative if you are in an R
project)
scale the scale of the PDF
render_exist overwrite existing PDF. Default is FALSE
Value
no return value
Examples
## Not run:
html_to_pdf(file_path = "html_name.html")
# all HTML files in the my_html_folder will be converted
html_to_pdf(dir = "Users/Desktop/my_html_folder")
## End(Not run)
interaction_plot Interaction plot
Description
[Stable]
The function creates a two-way or three-way interaction plot. It creates a plot at ± 1 SD
from the mean of the independent variable. See below for supported models. I recommend using
it together with lm_model() and lme_model().
Usage
interaction_plot(
model,
data = NULL,
graph_label_name = NULL,
cateogrical_var = NULL,
y_lim = NULL,
plot_color = FALSE
)
Arguments
model object from lme, lme4, or lmerTest.
data data frame. If the function is unable to extract the data frame from the object, then
you may need to pass it directly
graph_label_name
vector of length 4 or a switch function (see ?two_way_interaction_plot exam-
ple). Vector should be passed in the form of c(response_var, predict_var1, pre-
dict_var2, predict_var3).
cateogrical_var
list. Specify the upper bound and lower bound directly instead of using ± 1 SD
from the mean. Passed in the form of list(var_name1 = c(upper_bound1,
lower_bound1),var_name2 = c(upper_bound2, lower_bound2))
y_lim the plot’s upper and lower limit for the y-axis. Length of 2. Example: c(lower_limit,
upper_limit)
plot_color default is FALSE. Set to TRUE if you want to plot in color
Value
a ggplot object
Examples
lm_fit_2 <- lm(Sepal.Length ~ Sepal.Width + Petal.Length +
Sepal.Width*Petal.Length, data = iris)
interaction_plot(lm_fit_2)
lm_fit_3 <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width +
Sepal.Width*Petal.Length:Petal.Width, data = iris)
interaction_plot(lm_fit_3)
knit_to_Rmd Knit Rmd Files Instruction
Description
This is a helper function that instructs users of the package on how to knit R Markdown (Rmd) files
Usage
knit_to_Rmd()
Value
no return value
label_name 17
Examples
knit_to_Rmd()
label_name get label name
Description
get label name
Usage
label_name(
graph_label_name,
response_var_name,
predict_var1_name,
predict_var2_name,
predict_var3_name
)
Arguments
graph_label_name
label name
response_var_name
outcome variable name
predict_var1_name
predictor 1 name
predict_var2_name
predictor 2 name
predict_var3_name
predictor 3 name
Value
vector of var name
lme_model Linear Mixed Effect Model
Description
[Stable]
Fit a linear mixed effect model (i.e., hierarchical linear model, multilevel linear model) using the
nlme::lme() or the lmerTest::lmer() function. A linear mixed effect model is used to explore the
effect of continuous / categorical variables in predicting a normally distributed continuous variable.
Usage
lme_model(
data,
model = NULL,
response_variable,
random_effect_factors = NULL,
non_random_effect_factors = NULL,
two_way_interaction_factor = NULL,
three_way_interaction_factor = NULL,
id,
estimation_method = "REML",
opt_control = "bobyqa",
na.action = stats::na.omit,
use_package = "lmerTest",
quite = FALSE
)
Arguments
data data.frame
model lme4 model syntax. Supports more complicated models. Note that model_summary
will only return fixed effect estimates.
response_variable
DV (i.e., outcome variable / response variable). Length of 1. Support dplyr::select()
syntax.
random_effect_factors
random effect factors (level-1 variables for HLM people). Factors that need both
a fixed effect and a random effect estimated (i.e., random slope / varying slope based
on the id). Support dplyr::select() syntax.
non_random_effect_factors
non-random effect factors (level-2 variables for HLM people). Factors that only need
a fixed effect estimated. Support dplyr::select() syntax.
two_way_interaction_factor
two-way interaction factors. You need to pass 2+ factors. Support dplyr::select()
syntax.
three_way_interaction_factor
three-way interaction factor. You need to pass exactly 3 factors. Specifying
three-way interaction factors automatically includes all two-way interactions,
so please do not specify the two_way_interaction_factor argument. Support
dplyr::select() syntax.
id the nesting variable (e.g. group, time). Length of 1. Support dplyr::select()
syntax.
estimation_method
character. ML or REML. Default is REML.
opt_control default is optim for lme and bobyqa for lmerTest
na.action default is stats::na.omit. Another common option is na.exclude
use_package Default is lmerTest. Only available for linear mixed effect models. Options are
nlme, lmerTest, or lme4 (lme4 returns similar results as lmerTest except for the
returned model)
quite suppress printing output
Details
Here is a little tip. If you are using generic selecting syntax (e.g., contains() or starts_with()), you
don’t need to remove the response variable and the id from the factors; they will be automatically
removed. For example, suppose you have x1:x9 as your factors and you want to regress x2:x8 on
x1. You would probably pass something like response_variable = x1, random_effect_factors =
c(contains(’x’), -x1) to the function. However, you don’t need to do that; you can just pass
random_effect_factors = c(contains(’x’)) to the function, since it will automatically remove the
response variable from the selection.
Value
an object representing the linear mixed-effects model fit (it may be an object from lme or lmer,
depending on the package you use)
Examples
# two-level model with level-1 and level-2 variable with random intercept and random slope
fit1 <- lme_model(
data = popular,
response_variable = popular,
random_effect_factors = c(extrav, sex),
non_random_effect_factors = texp,
id = class
)
# added two-way interaction factor
fit2 <- lme_model(
data = popular,
response_variable = popular,
random_effect_factors = c(extrav, sex),
non_random_effect_factors = texp,
two_way_interaction_factor = c(extrav, texp),
id = class
)
# pass an explicit lme model (I don't know why you would want to do that, but you can)
lme_fit <- lme_model(
model = "popular ~ extrav*texp + (1 + extrav | class)",
data = popular
)
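The sketch below is not part of the original examples; it illustrates the generic-selection tip from the
Details section, assuming the bundled popular data set. The selection also matches the response
(popular) and the id (class), but both are dropped from the selection automatically.
# hedged sketch: generic selection syntax; popular (response) and class (id)
# are removed from the selection automatically, so they need not be excluded by hand.
# pupil is also matched and simply enters as an ordinary fixed-effect predictor here.
fit3 <- lme_model(
data = popular,
response_variable = popular,
non_random_effect_factors = tidyselect::everything(),
id = class
)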
lme_multilevel_model_summary
Model Summary for Mixed Effect Model
Description
[Stable]
An integrated function for fitting a multilevel linear regression (also known as hierarchical linear
regression).
Usage
lme_multilevel_model_summary(
data,
model = NULL,
response_variable = NULL,
random_effect_factors = NULL,
non_random_effect_factors = NULL,
two_way_interaction_factor = NULL,
three_way_interaction_factor = NULL,
family = NULL,
cateogrical_var = NULL,
id = NULL,
graph_label_name = NULL,
estimation_method = "REML",
opt_control = "bobyqa",
na.action = stats::na.omit,
model_summary = TRUE,
interaction_plot = TRUE,
y_lim = NULL,
plot_color = FALSE,
digits = 3,
use_package = "lmerTest",
simple_slope = FALSE,
assumption_plot = FALSE,
quite = FALSE,
streamline = FALSE,
return_result = FALSE
)
Arguments
data data.frame
model lme4 model syntax. Supports more complicated model structures from lme4. It is
not well tested to ensure accuracy [Experimental]
response_variable
DV (i.e., outcome variable / response variable). Length of 1. Support dplyr::select()
syntax.
random_effect_factors
random effect factors (level-1 variables from an HLM perspective). Factors for
which both a fixed effect and a random effect (i.e., random slope / varying slope
based on the id) are estimated. Support dplyr::select() syntax.
non_random_effect_factors
non-random effect factors (level-2 variables from an HLM perspective). Factors
for which only a fixed effect is estimated. Support dplyr::select() syntax.
two_way_interaction_factor
two-way interaction factors. You need to pass 2+ factors. Support dplyr::select()
syntax.
three_way_interaction_factor
three-way interaction factors. You need to pass exactly 3 factors. Specifying
three-way interaction factors automatically includes all two-way interactions,
so please do not specify the two_way_interaction_factor argument. Support
dplyr::select() syntax.
family a GLM family. It will be passed to the family argument in glmer. See ?glmer for
possible options. [Experimental]
cateogrical_var
list. Specify the upper bound and lower bound directly instead of using ± 1 SD
from the mean. Passed in the form of list(var_name1 = c(upper_bound1,
lower_bound1),var_name2 = c(upper_bound2, lower_bound2))
id the nesting variable (e.g. group, time). Length of 1. Support dplyr::select()
syntax.
graph_label_name
optional vector or function. vector of length 2 for two-way interaction graph.
vector of length 3 for three-way interaction graph. Vector should be passed in
the form of c(response_var, predict_var1, predict_var2, ...). Function should be
passed as a switch function (see ?two_way_interaction_plot for an example)
estimation_method
character. "ML" or "REML". Default is "REML".
opt_control default is optim for lme and bobyqa for lmerTest.
na.action default is stats::na.omit. Another common option is na.exclude
model_summary print model summary. Required to be TRUE if you want assumption_plot.
interaction_plot
generate interaction plot. Default is TRUE
y_lim the plot’s upper and lower limit for the y-axis. Length of 2. Example: c(lower_limit,
upper_limit)
plot_color If it is set to TRUE (default is FALSE), the interaction plot will plot with color.
digits number of digits to round to
use_package Default is lmerTest. Only available for linear mixed effect models. Options are
nlme, lmerTest, or lme4 (lme4 returns similar results as lmerTest except for the
returned model object)
simple_slope Slope estimate at ± 1 SD and the mean of the moderator. Uses interactions::sim_slope()
in the background.
assumption_plot
Generate a panel of plots that checks major assumptions. It is usually recom-
mended to inspect model assumption violations visually. In the background, it
calls performance::check_model().
quite suppress printing output
streamline print streamlined output.
return_result If it is set to TRUE (default is FALSE), it will return the model, model_summary,
and plot (plot only if an interaction term is included)
Value
a list of all requested items in the order of model, model_summary, interaction_plot, simple_slope
Examples
fit <- lme_multilevel_model_summary(
data = popular,
response_variable = popular,
random_effect_factors = NULL, # you can add random effect predictors here
non_random_effect_factors = c(extrav,texp),
two_way_interaction_factor = NULL, # you can add two-way interaction plot here
graph_label_name = NULL, # you can also change the graph label name here
id = class,
simple_slope = FALSE, # you can also request simple slope estimate
assumption_plot = FALSE, # you can also request assumption plot
plot_color = FALSE, # you can also request the plot in color
streamline = FALSE # you can change this to get the least amount of info
)
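The following hedged sketch (not from the original documentation) shows the documented list form of
the cateogrical_var argument, which replaces the default +/- 1 SD moderator levels with user-chosen
bounds; the values 7 and 3 for extrav are arbitrary illustration values.
fit2 <- lme_multilevel_model_summary(
data = popular,
response_variable = popular,
non_random_effect_factors = c(extrav, texp),
two_way_interaction_factor = c(extrav, texp),
cateogrical_var = list(extrav = c(7, 3)), # c(upper_bound, lower_bound)
id = class
)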
lm_model Linear Regressions / ANOVA / ANCOVA
Description
[Stable]
Fit a linear regression using lm(). Linear regression is used to explore the effect of continuous
variables / categorical variables in predicting a normally distributed continuous variable.
Usage
lm_model(
data,
response_variable,
predictor_variable,
two_way_interaction_factor = NULL,
three_way_interaction_factor = NULL,
quite = FALSE
)
Arguments
data data.frame
response_variable
response variable. Support dplyr::select() syntax.
predictor_variable
predictor variable. Support dplyr::select() syntax. It will automatically re-
move the response variable from the predictor variables, so you can use contains()
or starts_with() safely.
two_way_interaction_factor
two-way interaction factors. You need to pass 2+ factors. Support dplyr::select()
syntax.
three_way_interaction_factor
three-way interaction factors. You need to pass exactly 3 factors. Specifying
three-way interaction factors automatically includes all two-way interactions,
so please do not specify the two_way_interaction_factor argument. Support
dplyr::select() syntax.
quite suppress printing output
Value
an object class of lm representing the linear regression fit
Examples
fit <- lm_model(
data = iris,
response_variable = "Sepal.Length",
predictor_variable = tidyselect::everything(),
two_way_interaction_factor = c(Sepal.Width, Species)
)
lm_model_summary Model Summary for Linear Regression
Description
[Stable]
An integrated function for fitting a linear regression model.
Usage
lm_model_summary(
data,
response_variable = NULL,
predictor_variable = NULL,
two_way_interaction_factor = NULL,
three_way_interaction_factor = NULL,
family = NULL,
cateogrical_var = NULL,
graph_label_name = NULL,
model_summary = TRUE,
interaction_plot = TRUE,
y_lim = NULL,
plot_color = FALSE,
digits = 3,
simple_slope = FALSE,
assumption_plot = FALSE,
quite = FALSE,
streamline = FALSE,
return_result = FALSE
)
Arguments
data data.frame
response_variable
DV (i.e., outcome variable / response variable). Length of 1. Support dplyr::select()
syntax.
predictor_variable
IV. Support dplyr::select() syntax.
two_way_interaction_factor
two-way interaction factors. You need to pass 2+ factors. Support dplyr::select()
syntax.
three_way_interaction_factor
three-way interaction factors. You need to pass exactly 3 factors. Specifying
three-way interaction factors automatically includes all two-way interactions,
so please do not specify the two_way_interaction_factor argument. Support
dplyr::select() syntax.
family a GLM family. It will be passed to the family argument in glm. See ?glm for
possible options. [Experimental]
cateogrical_var
list. Specify the upper bound and lower bound directly instead of using ± 1 SD
from the mean. Passed in the form of list(var_name1 = c(upper_bound1,
lower_bound1),var_name2 = c(upper_bound2, lower_bound2))
graph_label_name
optional vector or function. vector of length 2 for two-way interaction graph.
vector of length 3 for three-way interaction graph. Vector should be passed in
the form of c(response_var, predict_var1, predict_var2, ...). Function should be
passed as a switch function (see ?two_way_interaction_plot for an example)
model_summary print model summary. Required to be TRUE if you want assumption_plot.
interaction_plot
generate the interaction plot. Default is TRUE
y_lim the plot’s upper and lower limit for the y-axis. Length of 2. Example: c(lower_limit,
upper_limit)
plot_color If it is set to TRUE (default is FALSE), the interaction plot will plot with color.
digits number of digits to round to
simple_slope Slope estimate at +1/-1 SD and the mean of the moderator. Uses interactions::sim_slope()
in the background.
assumption_plot
Generate a panel of plots that checks major assumptions. It is usually recom-
mended to inspect model assumption violations visually. In the background, it
calls performance::check_model()
quite suppress printing output
streamline print streamlined output
return_result If it is set to TRUE (default is FALSE), it will return the model, model_summary,
and plot (if the interaction term is included)
Value
a list of all requested items in the order of model, model_summary, interaction_plot, simple_slope
Examples
fit <- lm_model_summary(
data = iris,
response_variable = "Sepal.Length",
predictor_variable = tidyselect::everything(),
two_way_interaction_factor = c(Sepal.Width, Species),
interaction_plot = FALSE, # you can also request the interaction plot
simple_slope = FALSE, # you can also request simple slope estimate
assumption_plot = FALSE, # you can also request assumption plot
streamline = FALSE #you can change this to get the least amount of info
)
measurement_invariance
Measurement Invariance
Description
[Stable]
Compute the measurement invariance model (i.e., measurement equivalence model) using multi-
group confirmatory factor analysis (MGCFA; Jöreskog, 1971). This function uses lavaan::cfa()
in the backend. Users can run the configural-metric or the configural-metric-scalar comparison
(see below for detailed instructions). All arguments (except the CFA items) must be explicitly named
(like model = your-model; see the examples for inappropriate behavior).
Usage
measurement_invariance(
data,
...,
model = NULL,
group,
ordered = FALSE,
group_partial = NULL,
invariance_level = "scalar",
digits = 3,
quite = FALSE,
streamline = FALSE,
return_result = FALSE
)
Arguments
data data.frame
... CFA items. Multi-factor CFA items should be separated by commas (as different
arguments). See below for examples. Support dplyr::select() syntax.
model explicit lavaan model. Must be specified with model = lavaan_model_syntax.
[Experimental]
group the nesting variable for a multilevel dataset (e.g., country). Support dplyr::select()
syntax.
ordered Default is FALSE. If it is set to TRUE, lavaan will treat it as an ordinal variable
and use DWLS instead of ML
group_partial items for partial equivalence. The form should be c(’DV =~ item1’, ’DV =~
item2’). See details for recommended practice.
invariance_level
"metric" or "scalar". Default is 'scalar'. Set as 'metric' for configural-metric
comparison, and set as 'scalar' for configural-metric-scalar comparison.
digits number of digits to round to
quite suppress printing output except the model summary.
streamline print streamlined output
return_result If it is set to TRUE, it will return a data frame of the fit measure summary
Details
Chen (2007) suggested that a change in CFI <= |-0.010| supplemented by a change in RMSEA <=
0.015 indicates non-invariance when sample sizes were equal across groups and larger than 300 in
each group (Chen, 2007). For unequal sample sizes with each group smaller than 300, Chen (2007)
suggested a change in CFI <= |-0.005| and a change in RMSEA <= 0.010. For SRMR, Chen (2007)
recommended a change in SRMR < 0.030 for metric invariance and a change in SRMR < 0.015
for scalar invariance. For large group sizes, Rutkowski & Svetina (2014) recommended a more
liberal cut-off for metric non-invariance for CFI (change in CFI <= |-0.020|) and RMSEA (RMSEA
<= 0.030). However, this more liberal cut-off DOES NOT apply to testing scalar non-invariance.
If measurement invariance is not achieved, some researchers suggest that partial invariance is ac-
ceptable (by releasing the constraints on some factors). For example, Steenkamp and Baumgartner
(1998) suggested that ideally more than half of the items on a factor should be invariant. However, it
is important to note that no empirical studies were cited to support the partial invariance guideline
(Putnick & Bornstein, 2016).
Value
a data.frame of the fit measure summary
References
<NAME>. (2007). Sensitivity of Goodness of Fit Indexes to Lack of Measurement Invariance.
Structural Equation Modeling: A Multidisciplinary Journal, 14(3), 464–504. https://doi.org/10.1080/10705510701301834
<NAME>. (1971). Simultaneous factor analysis in several populations. Psychometrika, 36(4),
409-426.
<NAME>., & <NAME>. (2016). Measurement Invariance Conventions and Reporting:
The State of the Art and Future Directions for Psychological Research. Developmental Review:
DR, 41, 71–90. https://doi.org/10.1016/j.dr.2016.06.004
<NAME>., & <NAME>. (2014). Assessing the Hypothesis of Measurement Invariance in the
Context of Large-Scale International Surveys. Educational and Psychological Measurement, 74(1),
31–57. https://doi.org/10.1177/0013164413498257
<NAME>., & <NAME>. (n.d.). Assessing Measurement Invariance in Cross-
National Consumer Research. JOURNAL OF CONSUMER RESEARCH, 13.
Examples
# REMEMBER, YOU MUST NAME ALL ARGUMENTS EXCEPT THE CFA ITEMS ARGUMENT
# Fitting a multiple-factor measurement invariance model by passing items.
measurement_invariance(
x1:x3,
x4:x6,
x7:x9,
data = lavaan::HolzingerSwineford1939,
group = "school",
invariance_level = "scalar" # you can change this to metric
)
# Fitting measurement invariance model by passing explicit lavaan model
# I am also going to only test for metric invariance instead of the default scalar invariance
measurement_invariance(
model = "visual =~ x1 + x2 + x3;
textual =~ x4 + x5 + x6;
speed =~ x7 + x8 + x9",
data = lavaan::HolzingerSwineford1939,
group = "school",
invariance_level = "metric"
)
## Not run:
# This will fail because I did not add `model = ` in front of the lavaan model.
# Therefore, you must add the tag in front of all arguments
# For example, `return_result = 'model'` instead of `model`
measurement_invariance(
"visual =~ x1 + x2 + x3;
textual =~ x4 + x5 + x6;
speed =~ x7 + x8 + x9",
data = lavaan::HolzingerSwineford1939
)
## End(Not run)
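Below is a hedged sketch of requesting partial invariance via group_partial, as discussed in the
Details section; freeing the loading of x2 on the visual factor is an arbitrary illustrative choice,
not a recommendation.
measurement_invariance(
model = "visual =~ x1 + x2 + x3;
textual =~ x4 + x5 + x6;
speed =~ x7 + x8 + x9",
data = lavaan::HolzingerSwineford1939,
group = "school",
group_partial = c("visual =~ x2"),
invariance_level = "scalar"
)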
mediation_summary Mediation Analysis
Description
[Experimental]
It currently only supports simple mediation analysis using the path analysis approach with the
lavaan package. I am trying to implement multilevel mediation in lavaan. In the future, I will
try to support moderated mediation (through lavaan or mediation) and mediation with latent
variables (through lavaan).
Usage
mediation_summary(
data,
response_variable,
mediator,
predictor_variable,
control_variable = NULL,
group = NULL,
standardize = TRUE,
digits = 3,
quite = FALSE,
streamline = FALSE,
return_result = FALSE
)
Arguments
data data.frame
response_variable
response variable. Support dplyr::select() syntax.
mediator mediator. Support dplyr::select() syntax.
predictor_variable
predictor variable. Support dplyr::select() syntax.
control_variable
control variables / covariate. Support dplyr::select() syntax.
group nesting variable for multilevel mediation. Not confident about the implementa-
tion method. [Experimental]
standardize standardized coefficients. Default is TRUE
digits number of digits to round to
quite suppress printing output
streamline print streamlined output
return_result If it is set to TRUE, it will return the lavaan object
Value
an object from lavaan
Examples
mediation_summary(
data = lmerTest::carrots,
response_variable = Preference,
mediator = Sweetness,
predictor_variable = Crisp
)
model_summary Model Summary for Regression Models
Description
[Stable]
The function will extract the relevant coefficients from the regression models (see below for sup-
ported model).
Usage
model_summary(
model,
digits = 3,
assumption_plot = FALSE,
quite = FALSE,
streamline = TRUE,
return_result = FALSE,
standardize = "basic"
)
Arguments
model a model object. The following models are tested for accuracy: lm, glm, lme,
lmer, glmer. Other model objects may work if they work with parameters::model_parameters()
digits number of digits to round to
assumption_plot
Generate a panel of plots that checks major assumptions. It is usually recom-
mended to inspect model assumption violations visually. In the background, it
calls performance::check_model().
quite suppress printing output
streamline print streamlined output. Only print model estimate and performance.
return_result If set to TRUE, it returns the model estimates data frame.
standardize The method used for standardizing the parameters. Can be NULL (no standard-
ization), "refit" (for re-fitting the model on standardized data), or one of "basic"
(the default), "posthoc", "smart", "pseudo". See 'Details' in parameters::standardize_parameters()
Value
a list of the model estimate data frame, the model performance data frame, and the assumption plot
(a ggplot object)
References
<NAME>., & <NAME>. (2013). A general and simple method for obtaining R2 from gener-
alized linear mixed-effects models. Methods in Ecology and Evolution, 4(2), 133–142. https://doi.org/10.1111/j.2041-
210x.2012.00261.x
Examples
# I am going to show the more generic usage of this function
# You can also use this package's built in function to fit the models
# I recommend using the integrated_multilevel_model_summary to get everything
# lmer example
lme_fit <- lme4::lmer("popular ~ texp + (1 | class)",
data = popular
)
model_summary(lme_fit)
# lm example
lm_fit <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width,
data = iris
)
model_summary(lm_fit, assumption_plot = TRUE)
polynomial_regression_plot
Polynomial Regression Plot
Description
[Experimental]
The function creates a simple regression plot (no interaction). It can be used to visualize polynomial
regression.
Usage
polynomial_regression_plot(
model,
model_data = NULL,
predictor,
graph_label_name = NULL,
x_lim = NULL,
y_lim = NULL,
plot_color = FALSE
)
Arguments
model object from lm
model_data optional dataframe (in case data cannot be retrieved from the model)
predictor predictor variable name (must be character)
graph_label_name
vector of length 3 or function. Vector should be passed in the form of c(response_var,
predict_var1, predict_var2). Function should be passed as a switch func-
tion that return the label based on the name passed (e.g., a switch function)
x_lim the plot’s upper and lower limit for the x-axis. Length of 2. Example: c(lower_limit,
upper_limit)
y_lim the plot’s upper and lower limit for the y-axis. Length of 2. Example: c(lower_limit,
upper_limit)
plot_color default is FALSE. Set to TRUE if you want to plot in color
Details
It appears that predict cannot handle categorical factors. All variables are converted to numeric
before plotting.
Value
an object of class ggplot
Examples
fit = lm(data = iris, Sepal.Length ~ poly(Petal.Length,2))
polynomial_regression_plot(model = fit,predictor = 'Petal.Length')
popular Popular dataset
Description
Classic data set from Chapter 2 of Joop Hox's Multilevel Analysis (2010). The popular dataset
includes students from different classes (i.e., class is the nesting variable). The outcome variable is
a self-rated popularity scale. Individual-level (i.e., level 1) predictors are sex and extraversion. The
class-level (i.e., level 2) predictor is teacher experience.
Usage
popular
Format
A data frame with 2000 rows and 6 variables:
pupil Subject ID
popular Self-rated popularity scale ranging from 1 to 10
class the class that students belong to (nesting variable)
extrav extraversion scale (individual-level)
sex gender of the student (individual-level)
texp teacher experience (class-level)
Source
http://joophox.net/mlbook2/DataExchange.zip
reliability_summary Reliability Analysis
Description
[Stable]
First, it will determine whether the data is uni-dimensional or multi-dimensional using parameters::n_factors().
If the data is uni-dimensional, it will print a summary consisting of alpha, G6, single-factor CFA,
and descriptive statistics results. If it is multi-dimensional, it will print a summary consisting of
alpha, G6, and omega results. You can bypass this by specifying the dimensionality argument.
Usage
reliability_summary(
data,
cols,
dimensionality = NULL,
digits = 3,
descriptive_table = TRUE,
quite = FALSE,
streamline = FALSE,
return_result = FALSE
)
Arguments
data data.frame
cols items for reliability analysis. Support dplyr::select() syntax.
dimensionality Specify the dimensionality. Either uni (uni-dimensionality) or multi (multi-
dimensionality). Default is NULL, which determines the dimensionality using EFA.
digits number of digits to round to
descriptive_table
Get descriptive statistics. Default is TRUE
quite suppress printing output
streamline print streamlined output
return_result If it is set to TRUE (default is FALSE), it will return psych::alpha for uni-
dimensional scale, and psych::omega for multidimensional scale.
Value
a psych::alpha object for unidimensional scale, and a psych::omega object for multidimensional
scale.
Examples
fit <- reliability_summary(data = lavaan::HolzingerSwineford1939, cols = x1:x3)
fit <- reliability_summary(data = lavaan::HolzingerSwineford1939, cols = x1:x9)
simple_slope Slope Estimate at Varying Level of Moderators
Description
[Stable]
The function uses interactions::sim_slopes() to calculate the slope estimate at varying levels
of the moderator (+/- 1 SD and the mean). Additionally, it will produce a Johnson-Neyman plot that
shows when the slope estimate is not significant
Usage
simple_slope(model, data = NULL)
Arguments
model model object from lm, lme, or lmer
data data.frame
Value
a list with the slope estimate data frame and a Johnson-Neyman plot.
Examples
fit <- lm_model(
data = iris,
response_variable = Sepal.Length,
predictor_variable = tidyselect::everything(),
three_way_interaction_factor = c(Sepal.Width, Petal.Width, Petal.Length)
)
simple_slope_fit <- simple_slope(
model = fit
)
three_way_interaction_plot
Three-way Interaction Plot
Description
[Deprecated]
The function creates a three-way interaction plot. It will create a plot with ± 1 SD from the mean
of the independent variables. See below for supported models. I recommend using it concurrently
with lm_model(), lme_model().
Usage
three_way_interaction_plot(
model,
data = NULL,
cateogrical_var = NULL,
graph_label_name = NULL,
y_lim = NULL,
plot_color = FALSE
)
Arguments
model object from lme, lme4, or lmerTest.
data data.frame. If the function is unable to extract data frame from the object, then
you may need to pass it directly
cateogrical_var
list. Specify the upper bound and lower bound directly instead of using ± 1 SD
from the mean. Passed in the form of list(var_name1 = c(upper_bound1,
lower_bound1),var_name2 = c(upper_bound2, lower_bound2))
graph_label_name
vector of length 4 or a switch function (see ?two_way_interaction_plot exam-
ple). Vector should be passed in the form of c(response_var, predict_var1, pre-
dict_var2, predict_var3).
y_lim the plot’s upper and lower limit for the y-axis. Length of 2. Example: c(lower_limit,
upper_limit)
plot_color default is FALSE. Set to TRUE if you want to plot in color
Details
It appears that `predict` cannot handle categorical factors. All variables are converted to numeric
before plotting.
Value
a ggplot object
Examples
lm_fit <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width +
Sepal.Width:Petal.Length:Petal.Width, data = iris)
three_way_interaction_plot(lm_fit, data = iris)
two_way_interaction_plot
Two-way Interaction Plot
Description
[Deprecated]
The function creates a two-way interaction plot. It will create a plot with ± 1 SD from the mean
of the independent variable. See supported models below. I recommend using it concurrently with
lm_model or lme_model.
Usage
two_way_interaction_plot(
model,
data = NULL,
graph_label_name = NULL,
cateogrical_var = NULL,
y_lim = NULL,
plot_color = FALSE
)
Arguments
model object from lm, nlme, lme4, or lmerTest
data data.frame. If the function is unable to extract data frame from the object, then
you may need to pass it directly
graph_label_name
vector of length 3 or function. Vector should be passed in the form of c(response_var,
predict_var1, predict_var2). Function should be passed as a switch func-
tion that return the label based on the name passed (e.g., a switch function)
cateogrical_var
list. Specify the upper bound and lower bound directly instead of using ± 1 SD
from the mean. Passed in the form of list(var_name1 = c(upper_bound1,
lower_bound1),var_name2 = c(upper_bound2, lower_bound2))
y_lim the plot’s upper and lower limit for the y-axis. Length of 2. Example: c(lower_limit,
upper_limit)
plot_color default is FALSE. Set to TRUE if you want to plot in color
Details
It appears that `predict` cannot handle categorical factors. All variables are converted to numeric
before plotting.
Value
an object of class ggplot
Examples
lm_fit <- lm(Sepal.Length ~ Sepal.Width * Petal.Width,
data = iris
)
two_way_interaction_plot(lm_fit, data = iris)
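The graph_label_name argument above may also be given as a switch-style labelling function. The
sketch below is one hedged way such a function might look; the exact calling convention expected by
the package is an assumption here (a function that maps one variable name to a nicer label and
otherwise returns the name unchanged).
label_fun <- function(var_name) {
switch(var_name,
"Sepal.Length" = "Sepal length (cm)",
"Sepal.Width" = "Sepal width (cm)",
"Petal.Width" = "Petal width (cm)",
var_name # fall back to the raw variable name
)
}
lm_fit <- lm(Sepal.Length ~ Sepal.Width * Petal.Width, data = iris)
two_way_interaction_plot(lm_fit, data = iris, graph_label_name = label_fun)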
CPBayes | cran | R | Package ‘CPBayes’
October 12, 2022
Title Bayesian Meta Analysis for Studying Cross-Phenotype Genetic
Associations
Version 1.1.0
Date 2020-12-01
Author <NAME> <<EMAIL>> [aut, cre],
<NAME> <<EMAIL>> [aut],
<NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Description A Bayesian meta-analysis method for studying cross-phenotype
genetic associations. It uses summary-level data across multiple phenotypes to
simultaneously measure the evidence of aggregate-level pleiotropic association
and estimate an optimal subset of traits associated with the risk locus. CPBayes
is based on a spike and slab prior. The methodology is available from: <NAME>, <NAME>-
dar, <NAME>, <NAME> (2018) <doi:10.1371/journal.pgen.1007139>.
Depends R (>= 3.2.0)
License GPL-3
LazyData TRUE
URL https://github.com/ArunabhaCodes/CPBayes
BugReports https://github.com/ArunabhaCodes/CPBayes/issues
RoxygenNote 7.1.1
Suggests testthat, knitr, rmarkdown
VignetteBuilder knitr
Imports MASS, stats, forestplot, grDevices, purrr, mvtnorm
NeedsCompilation no
Repository CRAN
Date/Publication 2020-12-02 07:40:23 UTC
R topics documented:
analytic_locFDR_BF_co... 2
analytic_locFDR_BF_unco... 3
CPBaye... 5
cpbayes_co... 6
cpbayes_unco... 8
estimate_corl... 11
ExampleDataCo... 13
ExampleDataUnco... 14
forest_cpbaye... 14
post_summarie... 16
SampleOverlapMatri... 18
analytic_locFDR_BF_cor
Analytic calculation of the local FDR & Bayes factor for correlated
summary statistics.
Description
Run the analytic_locFDR_BF_cor function to analytically compute the local FDR & Bayes factor
(BF) that quantifies the evidence of aggregate-level pleiotropic association for correlated summary
statistics. Here a fixed value of the slab variance is considered, instead of a range of values as in cpbayes_cor.
Usage
analytic_locFDR_BF_cor(BetaHat, SE, Corln, SpikeVar = 1e-04, SlabVar = 0.8)
Arguments
BetaHat A numeric vector of length K where K is the number of phenotypes. It contains
the beta-hat values across studies/traits. No default.
SE A numeric vector with the same dimension as BetaHat providing the standard
errors corresponding to BetaHat. Every element of SE must be positive. No
default.
Corln A numeric square matrix of order K by K providing the correlation matrix of
BetaHat. The number of rows & columns of Corln must be the same as the
length of BetaHat. No default is specified. See estimate_corln.
SpikeVar Variance of spike (normal distribution with small variance) representing the null
effect distribution. Default is 10^(-4).
SlabVar Variance of slab normal distribution representing the non-null effect distribution.
Default is 0.8.
Value
The output produced by the function is a list which consists of the local FDR and log10(Bayes
factor).
locFDR It provides the analytically computed local false discovery rate (posterior prob-
ability of null association) under CPBayes model (a Bayesian analog of the p-
value) which is a measure of the evidence of the aggregate-level pleiotropic
association. Bayes factor is adjusted for prior odds, but locFDR is solely a func-
tion of the posterior odds.
log10_BF It provides the analytically computed log10(Bayes factor) produced by CPBayes
that measures the evidence of the overall pleiotropic association.
References
<NAME>, <NAME>, <NAME>, <NAME> (2018) An efficient Bayesian meta analysis ap-
proach for studying cross-phenotype genetic associations. PLoS Genet 14(2): e1007139.
See Also
cpbayes_cor, estimate_corln, analytic_locFDR_BF_uncor, cpbayes_uncor, post_summaries,
forest_cpbayes
Examples
data(ExampleDataCor)
BetaHat <- ExampleDataCor$BetaHat
BetaHat
SE <- ExampleDataCor$SE
SE
cor <- ExampleDataCor$cor
cor
result <- analytic_locFDR_BF_cor(BetaHat, SE, cor)
str(result)
analytic_locFDR_BF_uncor
Analytic calculation of the local FDR & Bayes factor for uncorrelated
summary statistics.
Description
Run the analytic_locFDR_BF_uncor function to analytically compute the local FDR & Bayes
factor (BF) that quantifies the evidence of aggregate-level pleiotropic association for uncorrelated
summary statistics. Here a fixed value of the slab variance is considered, instead of a range of values as in
cpbayes_uncor.
Usage
analytic_locFDR_BF_uncor(BetaHat, SE, SpikeVar = 1e-04, SlabVar = 0.8)
Arguments
BetaHat A numeric vector of length K where K is the number of phenotypes. It contains
the beta-hat values across studies/traits. No default.
SE A numeric vector with the same dimension as BetaHat providing the standard
errors corresponding to BetaHat. Every element of SE must be positive. No
default.
SpikeVar Variance of spike (normal distribution with small variance) representing the null
effect distribution. Default is 10^(-4).
SlabVar Variance of slab normal distribution representing the non-null effect distribution.
Default is 0.8.
Value
The output produced by the function is a list which consists of the local FDR and log10(Bayes
factor).
locFDR It provides the analytically computed local false discovery rate (posterior prob-
ability of null association) under CPBayes model (a Bayesian analog of the p-
value) which is a measure of the evidence of the aggregate-level pleiotropic
association. Bayes factor is adjusted for prior odds, but locFDR is solely a func-
tion of the posterior odds.
log10_BF It provides the analytically computed log10(Bayes factor) produced by CPBayes
that measures the evidence of the overall pleiotropic association.
References
<NAME>, <NAME>, <NAME>, <NAME> (2018) An efficient Bayesian meta analysis ap-
proach for studying cross-phenotype genetic associations. PLoS Genet 14(2): e1007139.
See Also
cpbayes_uncor, analytic_locFDR_BF_cor, cpbayes_cor, estimate_corln, post_summaries,
forest_cpbayes
Examples
data(ExampleDataUncor)
BetaHat <- ExampleDataUncor$BetaHat
BetaHat
SE <- ExampleDataUncor$SE
SE
result <- analytic_locFDR_BF_uncor(BetaHat, SE)
str(result)
CPBayes CPBayes: An R-package implementing a Bayesian meta analysis
method for studying cross-phenotype genetic associations.
Description
Simultaneous analysis of genetic associations with multiple phenotypes may reveal shared genetic
susceptibility across traits (pleiotropy). CPBayes is a Bayesian meta analysis method for studying
cross-phenotype genetic associations. It uses summary-level data across multiple phenotypes to
simultaneously measure the evidence of aggregate-level pleiotropic association and estimate an
optimal subset of traits associated with the risk locus. CPBayes is based on a spike and slab prior.
Details
The package consists of following functions: analytic_locFDR_BF_uncor, cpbayes_uncor; analytic_locFDR_BF_cor,
cpbayes_cor; post_summaries, forest_cpbayes, estimate_corln.
Functions
analytic_locFDR_BF_uncor It analytically computes the local FDR (locFDR) and Bayes factor
(BF) quantifying the evidence of aggregate-level pleiotropic association for uncorrelated sum-
mary statistics.
cpbayes_uncor It implements CPBayes (based on MCMC) for uncorrelated summary statistics
to figure out the optimal subset of non-null traits underlying a pleiotropic signal and other
insights. The summary statistics across traits/studies are uncorrelated when the studies have
no overlapping/genetically related subjects.
analytic_locFDR_BF_cor It analytically computes the local FDR (locFDR) and Bayes factor
(BF) for correlated summary statistics.
cpbayes_cor It implements CPBayes (based on MCMC) for correlated summary statistics to fig-
ure out the optimal subset of non-null traits underlying a pleiotropic signal and other insights.
The summary statistics across traits/studies are correlated when the studies have overlap-
ping/genetically related subjects or the phenotypes were measured in a cohort study.
post_summaries It summarizes the MCMC data produced by cpbayes_uncor or cpbayes_cor.
It computes additional summaries to provide a better insight into a pleiotropic signal. It works
in the same way for both cpbayes_uncor and cpbayes_cor.
forest_cpbayes It creates a forest plot presenting the pleiotropy result obtained by cpbayes_uncor
or cpbayes_cor. It works in the same way for both cpbayes_uncor and cpbayes_cor.
estimate_corln It computes an approximate correlation matrix of the beta-hat vector for multiple
overlapping case-control studies using the sample-overlap count matrices.
References
<NAME>, <NAME>, <NAME>, <NAME> (2018) An efficient Bayesian meta analysis ap-
proach for studying cross-phenotype genetic associations. PLoS Genet 14(2): e1007139.
cpbayes_cor Run correlated version of CPBayes.
Description
Run correlated version of CPBayes when the main genetic effect (beta/log(odds ratio)) estimates
across studies/traits are correlated.
Usage
cpbayes_cor(
BetaHat,
SE,
Corln,
Phenotypes,
Variant,
UpdateSlabVar = TRUE,
MinSlabVar = 0.6,
MaxSlabVar = 1,
MCMCiter = 7500,
Burnin = 500
)
Arguments
BetaHat A numeric vector of length K where K is the number of phenotypes. It contains
the beta-hat values across studies/traits. No default is specified.
SE A numeric vector with the same dimension as BetaHat providing the standard
errors corresponding to BetaHat. Every element of SE must be positive. No
default is specified.
Corln A numeric square matrix of order K by K providing the correlation matrix of
BetaHat. The number of rows & columns of Corln must be the same as the
length of BetaHat. No default is specified. See estimate_corln.
Phenotypes A character vector of the same length as BetaHat providing the name of the
phenotypes. Default is specified as trait1, trait2, . . . , traitK. Note that BetaHat,
SE, Corln, and Phenotypes must be in the same order.
Variant A character vector of length one providing the name of the genetic variant. De-
fault is ‘Variant’.
UpdateSlabVar A logical vector of length one. If TRUE, the variance of the slab distribution
that represents the prior distribution of non-null effects is updated at each MCMC
iteration in a range (MinSlabVar – MaxSlabVar) (see next). If FALSE, it is fixed
at (MinSlabVar + MaxSlabVar)/2. Default is TRUE.
MinSlabVar A numeric value greater than 0.01 providing the minimum value of the variance
of the slab distribution. Default is 0.6.
MaxSlabVar A numeric value smaller than 10.0 providing the maximum value of the variance
of the slab distribution. Default is 1.0. **Note that a smaller value of the slab
variance will increase the sensitivity of CPBayes while selecting the optimal
subset of associated traits, but at the expense of lower specificity. Hence the slab
variance parameter in CPBayes is inversely related to the level of false discovery
rate (FDR) in a frequentist FDR controlling procedure. For a specific dataset, a
user can experiment with different choices of these three arguments: UpdateSlabVar,
MinSlabVar, and MaxSlabVar.
MCMCiter A positive integer greater than or equal to 2200 providing the total number of
iterations in the MCMC. Default is 7500.
Burnin A positive integer greater than or equal to 200 providing the burn-in period in
the MCMC. Default is 500. Note that the MCMC sample size (MCMCiter -
Burnin) must be at least 2000, which is 7000 by default.
Value
The output produced by cpbayes_cor is a list which consists of various components.
variantName It is the name of the genetic variant provided by the user. If not specified by the
user, default name is ‘Variant’.
log10_BF It provides the log10(Bayes factor) produced by CPBayes that measures the
evidence of the overall pleiotropic association.
locFDR It provides the local false discovery rate (posterior probability of null associ-
ation) produced by CPBayes which is a measure of the evidence of aggregate-
level pleiotropic association. Bayes factor is adjusted for prior odds, but locFDR
is solely a function of the posterior odds. locFDR can sometimes be small in-
dicating an association, but log10_BF may not indicate an association. Hence,
always check both log10_BF and locFDR.
subset It provides the optimal subset of associated/non-null traits selected by CPBayes.
It is NULL if no phenotype is selected.
important_traits
It provides the traits which yield a trait-specific posterior probability of associ-
ation (PPAj) > 20%. Even if a phenotype is not selected in the optimal subset
of non-null traits, it can produce a non-negligible value of PPAj. Note that, ‘im-
portant_traits’ is expected to include the traits already contained in ‘subset’. It
provides both the name of the important traits and their corresponding value of
PPAj. Always check ’important_traits’ even if ’subset’ contains a single trait. It
helps to better explain an observed pleiotropic signal.
auxi_data It contains supplementary data including the MCMC data which is used later by
post_summaries and forest_cpbayes:
1. traitNames: Name of all the phenotypes.
2. K: Total number of phenotypes.
3. mcmc.samplesize: MCMC sample size.
4. PPAj: Trait-specific posterior probability of association for all the traits.
5. Z.data: MCMC data on the latent association status of all the traits (Z).
6. sim.beta: MCMC data on the unknown true genetic effect (beta) on each
trait.
7. betahat: The beta-hat vector provided by the user which will be used by
forest_cpbayes.
8. se: The standard error vector provided by the user which will be used by
forest_cpbayes.
uncor_use ’Yes’ or ’No’. Whether the combined strategy of CPBayes (implemented for
correlated summary statistics) used the uncorrelated version or not.
runtime It provides the runtime (in seconds) taken by cpbayes_cor. It will help the user
to plan the whole analysis.
References
<NAME>, <NAME>, <NAME>, <NAME> (2018) An efficient Bayesian meta analysis ap-
proach for studying cross-phenotype genetic associations. PLoS Genet 14(2): e1007139.
See Also
analytic_locFDR_BF_cor, estimate_corln, post_summaries, forest_cpbayes, analytic_locFDR_BF_uncor,
cpbayes_uncor
Examples
data(ExampleDataCor)
BetaHat <- ExampleDataCor$BetaHat
BetaHat
SE <- ExampleDataCor$SE
SE
cor <- ExampleDataCor$cor
cor
traitNames <- paste("Disease", 1:10, sep = "")
SNP1 <- "rs1234"
result <- cpbayes_cor(BetaHat, SE, cor, Phenotypes = traitNames, Variant = SNP1)
str(result)
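A further hedged sketch (not part of the original examples) combining estimate_corln with
cpbayes_cor: the correlation matrix comes from the bundled sample-overlap counts for five traits,
while the five effect estimates and standard errors are simulated placeholders rather than real
summary statistics.
data(SampleOverlapMatrix)
corln <- estimate_corln(SampleOverlapMatrix$n11, SampleOverlapMatrix$n00,
SampleOverlapMatrix$n10)
set.seed(7)
BetaHat5 <- rnorm(5, mean = 0, sd = 0.1) # hypothetical beta-hat values for 5 traits
SE5 <- rep(0.05, 5) # hypothetical standard errors
result5 <- cpbayes_cor(BetaHat5, SE5, corln)
str(result5)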
cpbayes_uncor Run uncorrelated version of CPBayes.
Description
Run uncorrelated version of CPBayes when the main genetic effect (beta/log(odds ratio)) estimates
across studies/traits are uncorrelated.
Usage
cpbayes_uncor(
BetaHat,
SE,
Phenotypes,
Variant,
UpdateSlabVar = TRUE,
MinSlabVar = 0.6,
MaxSlabVar = 1,
MCMCiter = 7500,
Burnin = 500
)
Arguments
BetaHat A numeric vector of length K where K is the number of phenotypes. It contains
the beta-hat values across studies/traits. No default is specified.
SE A numeric vector with the same dimension as BetaHat providing the standard
errors corresponding to BetaHat. Every element of SE must be positive. No
default is specified.
Phenotypes A character vector of the same length as BetaHat providing the name of the
phenotypes. Default is specified as trait1, trait2, . . . , traitK. Note that BetaHat,
SE, and Phenotypes must be in the same order.
Variant A character vector of length one specifying the name of the genetic variant.
Default is ‘Variant’.
UpdateSlabVar A logical vector of length one. If TRUE, the variance of the slab distribution
that represents the prior distribution of non-null effects is updated at each MCMC
iteration in a range (MinSlabVar – MaxSlabVar) (see next). If FALSE, it is fixed
at (MinSlabVar + MaxSlabVar)/2. Default is TRUE.
MinSlabVar A numeric value greater than 0.01 providing the minimum value of the variance
of the slab distribution. Default is 0.6.
MaxSlabVar A numeric value smaller than 10.0 providing the maximum value of the variance
of the slab distribution. Default is 1.0. **Note that a smaller value of the slab
variance will increase the sensitivity of CPBayes while selecting the optimal
subset of associated traits, but at the expense of lower specificity. Hence the slab
variance parameter in CPBayes is inversely related to the level of false discovery
rate (FDR) in a frequentist FDR controlling procedure. For a specific dataset, a
user can experiment with different choices of these three arguments: UpdateSlabVar,
MinSlabVar, and MaxSlabVar.
MCMCiter A positive integer greater than or equal to 2200 providing the total number of
iterations in the MCMC. Default is 7500.
Burnin A positive integer greater than or equal to 200 providing the burn-in period in
the MCMC. Default is 500. Note that the MCMC sample size (MCMCiter -
Burnin) must be at least 2000, which is 7000 by default.
Value
The output produced by the function is a list which consists of various components.
variantName It is the name of the genetic variant provided by the user. If not specified by the
user, default name is ‘Variant’.
log10_BF It provides the log10(Bayes factor) produced by CPBayes that measures the
evidence of the overall pleiotropic association.
locFDR It provides the local false discovery rate (posterior probability of null associa-
tion) produced by CPBayes which is a measure of the evidence of the aggregate-
level pleiotropic association. Bayes factor is adjusted for prior odds, but locFDR
is solely a function of the posterior odds. locFDR can sometimes be small in-
dicating an association, but log10_BF may not indicate an association. Hence,
always check both log10_BF and locFDR.
subset It provides the optimal subset of associated/non-null traits selected by CPBayes.
It is NULL if no phenotype is selected.
important_traits
It provides the traits which yield a trait-specific posterior probability of associa-
tion (PPAj) > 20%. Even if a phenotype is not selected in the optimal subset of
non-null traits, it can produce a non-negligible value of trait-specific posterior
probability of association (PPAj). Note that, ‘important_traits’ is expected to
include the traits already contained in ‘subset’. It provides both the name of the
important traits and their corresponding values of PPAj. Always check ’impor-
tant_traits’ even if ’subset’ contains a single trait. It helps to better explain an
observed pleiotropic signal.
auxi_data It contains supplementary data including the MCMC data which is used later by
post_summaries and forest_cpbayes:
1. traitNames: Name of all the phenotypes.
2. K: Total number of phenotypes.
3. mcmc.samplesize: MCMC sample size.
4. PPAj: Trait-specific posterior probability of association for all the traits.
5. Z.data: MCMC data on the latent association status of all the traits (Z).
6. sim.beta: MCMC data on the unknown true genetic effect (beta) on all the
traits.
7. betahat: The beta-hat vector provided by the user which will be used by
forest_cpbayes.
8. se: The standard error vector provided by the user which will be used by
forest_cpbayes.
runtime It provides the runtime (in seconds) taken by cpbayes_uncor. It will help the
user to plan the whole analysis.
References
<NAME>, <NAME>, <NAME>, <NAME> (2018) An efficient Bayesian meta analysis ap-
proach for studying cross-phenotype genetic associations. PLoS Genet 14(2): e1007139.
See Also
analytic_locFDR_BF_uncor, post_summaries, forest_cpbayes, analytic_locFDR_BF_cor, cpbayes_cor,
estimate_corln
Examples
data(ExampleDataUncor)
BetaHat <- ExampleDataUncor$BetaHat
BetaHat
SE <- ExampleDataUncor$SE
SE
traitNames <- paste("Disease", 1:10, sep = "")
SNP1 <- "rs1234"
result <- cpbayes_uncor(BetaHat, SE, Phenotypes = traitNames, Variant = SNP1)
str(result)
estimate_corln Estimate correlation structure of beta-hat vector for multiple overlap-
ping case-control studies using sample-overlap matrices.
Description
It computes an approximate correlation matrix of the estimated beta (log odds ratio) vector for mul-
tiple overlapping case-control studies using the sample-overlap matrices which describe the number
of cases or controls shared between studies/traits, and the number of subjects who are case for one
study/trait but control for another study/trait. For a cohort study, the phenotypic correlation ma-
trix should be a reasonable substitute of this correlation matrix. These approximations are accurate
when none of the diseases/traits is associated with the environmental covariates and genetic variant.
Usage
estimate_corln(n11, n00, n10)
Arguments
n11 An integer square matrix (number of rows must be the same as the number of
studies/traits) providing the number of cases shared between all possible pairs of
studies/traits. So (k,l)-th element of n11 is the number of subjects who are case
for both k-th and l-th study/trait. Note that the diagonal elements of n11 are the
number of cases in the studies/traits. If no case is shared between studies/traits,
the off-diagonal elements of n11 will be zero. No default is specified.
n00 An integer square matrix (number of rows must be the same as the number
of studies/traits) providing the number of controls shared between all possible
pairs of studies/traits. So (k,l)-th element of n00 is the number of subjects who are
control for both k-th and l-th study/trait. Note that the diagonal elements of n00
are the number of controls in the studies/traits. If no control is shared between
studies/traits, the off-diagonal elements will be zero. No default is specified.
n10 An integer square matrix (number of rows must be the same as the number of
studies/traits) providing the number of subjects who are case for one study/trait
and control for another study/trait. Clearly, the diagonal elements will be zero.
An off diagonal element, e.g., (k,l)-th element of n10 is the number of subjects
who are case for k-th study/trait and control for l-th study/trait. If there is no
such overlap, all the elements of n10 will be zero. No default is specified.
Details
***Important note on the estimation of correlation structure of correlated beta-hat vector:*** In
general, environmental covariates are expected to be present in a study and associated with the
phenotypes of interest. Also, a small proportion of genome-wide genetic variants are expected to
be associated. Hence the above approximation of the correlation matrix may not be accurate. So in
general, we recommend an alternative strategy to estimate the correlation matrix using the genome-
wide summary statistics data across traits as follows. First, extract all the SNPs for each of which the
trait-specific univariate association p-value across all the traits are > 0.1. The trait-specific univariate
association p-values are obtained using the beta-hat and standard error for each trait. Each of the
SNPs selected in this way is either weakly or not associated with any of the phenotypes (null SNP).
Next, select a set of independent null SNPs from the initial set of null SNPs by using a threshold
of r^2 < 0.01 (r: the correlation between the genotypes at a pair of SNPs). In the absence of in-
sample linkage disequilibrium (LD) information, one can use the reference panel LD information
for this screening. Finally, compute the correlation matrix of the effect estimates (beta-hat vector)
as the sample correlation matrix of the beta-hat vector across all the selected independent null SNPs.
This strategy is more general and applicable to a cohort study or multiple overlapping studies for
binary or quantitative traits with arbitrary distributions. It is also useful when the beta-hat vector for
multiple non-overlapping studies become correlated due to genetically related individuals across
studies. Misspecification of the correlation structure can affect the results produced by CPBayes to
some extent. Hence, if genome-wide summary statistics data across traits are available, we highly
recommend using this alternative strategy to estimate the correlation matrix of the beta-hat vector.
See our paper for more details (a minimal sketch of this strategy follows the Examples below).
Value
This function returns an approximate correlation matrix of the beta-hat vector for multiple overlap-
ping case-control studies. See the example below.
References
<NAME>, <NAME>, <NAME>, <NAME> (2018) An efficient Bayesian meta analysis ap-
proach for studying cross-phenotype genetic associations. PLoS Genet 14(2): e1007139.
See Also
cpbayes_cor
Examples
data(SampleOverlapMatrix)
n11 <- SampleOverlapMatrix$n11
n11
n00 <- SampleOverlapMatrix$n00
n00
n10 <- SampleOverlapMatrix$n10
n10
cor <- estimate_corln(n11, n00, n10)
cor
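The code below is a hedged sketch of the alternative null-SNP strategy described in Details; beta_mat
and se_mat stand in for genome-wide summary statistics (one row per approximately independent SNP
after r^2 < 0.01 pruning, one column per trait) and are simulated here purely for illustration.
set.seed(1)
K <- 10 # number of traits
M <- 5000 # number of approximately independent SNPs
se_mat <- matrix(0.05, nrow = M, ncol = K) # hypothetical standard errors
beta_mat <- matrix(rnorm(M * K, sd = 0.05), nrow = M) # hypothetical effect estimates
# univariate association p-values per SNP and trait
pval_mat <- 2 * pnorm(abs(beta_mat / se_mat), lower.tail = FALSE)
# keep "null" SNPs: p-value > 0.1 for every trait
null_snps <- apply(pval_mat, 1, function(p) all(p > 0.1))
# sample correlation of the beta-hat vector across the selected null SNPs
corln_alt <- cor(beta_mat[null_snps, ])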
ExampleDataCor An example data for correlated summary statistics.
Description
ExampleDataCor is a list consisting of three components: BetaHat, SE, cor. ExampleDataCor$BetaHat
is a numeric vector that contains the main genetic effect (beta/log(odds ratio)) estimates for a SNP
across 10 overlapping case-control studies for 10 different diseases. Each of the 10 studies has a
distinct set of 7000 cases and a common set of 10000 controls shared across all the studies. In each
case-control study, we fit a logistic regression of the case-control status on the genotype coded as
the minor allele count for all the individuals in the sample. One can also include various covariates,
such as, age, gender, principal components (PCs) of ancestries in the logistic regression. From each
logistic regression for a disease, we obtain the estimate of the main genetic association parameter
(beta/log(odds ratio)) along with the corresponding standard error. Since the studies have overlap-
ping subjects, the beta-hat across traits are correlated. ExampleDataCor$SE contains the standard
error vector corresponding to the correlated beta-hat vector. ExampleDataCor$cor is a numeric
square matrix providing the correlation matrix of the correlated beta-hat vector.
Usage
data(ExampleDataCor)
Format
A list consisting of two numeric vectors (each of length 10) and a numeric square matrix of dimen-
sion 10 by 10:
BetaHat beta hat vector of length 10.
SE standard error vector corresponding to the beta-hat vector.
cor correlation matrix of the beta-hat vector.
Examples
data(ExampleDataCor)
BetaHat <- ExampleDataCor$BetaHat
BetaHat
SE <- ExampleDataCor$SE
SE
cor <- ExampleDataCor$cor
cor
cpbayes_cor(BetaHat, SE, cor)
ExampleDataUncor An example data for uncorrelated summary statistics.
Description
ExampleDataUncor is a list which has two components: BetaHat, SE. The numeric vector Exam-
pleDataUncor$BetaHat contains the main genetic effect (beta/log(odds ratio)) estimates for a single
nucleotide polymorphism (SNP) obtained from 10 separate case-control studies for 10 different dis-
eases. In each case-control study comprising a distinct set of 7000 cases and 10000 controls, we
fit a logistic regression of the case-control status on the genotype coded as the minor allele count
for all the individuals in the sample. One can also include various covariates, such as, age, gender,
principal components (PCs) of ancestries in the logistic regression. From each logistic regression
for a disease, we obtain the estimate of the main genetic association parameter (beta/log(odds ratio))
along with the corresponding standard error. Since the studies do not have any overlapping subject,
the beta-hat across the traits are uncorrelated. ExampleDataUncor$SE is the second numeric vector
that contains the standard errors corresponding to the uncorrelated beta-hat vector.
Usage
data(ExampleDataUncor)
Format
A list of two numeric vectors each of length 10 (for 10 studies):
BetaHat beta hat vector of length 10.
SE standard error vector corresponding to beta-hat vector.
Examples
data(ExampleDataUncor)
BetaHat <- ExampleDataUncor$BetaHat
BetaHat
SE <- ExampleDataUncor$SE
SE
cpbayes_uncor(BetaHat, SE)
forest_cpbayes Forest plot presenting pleiotropy result obtained by CPBayes.
Description
Run the forest_cpbayes function to create a forest plot that presents the pleiotropy result obtained
by cpbayes_uncor or cpbayes_cor.
Usage
forest_cpbayes(mcmc_output, level = 0.05, PPAj_cutoff = 0.01)
Arguments
mcmc_output A list returned by either cpbayes_uncor or cpbayes_cor. This list contains all
the primary results and MCMC data produced by cpbayes_uncor or cpbayes_cor.
No default is specified. See the example below.
level A numeric value. (1-level)% confidence interval of the unknown true genetic
effect (beta/log(odds ratio)) on each trait is plotted in the forest plot. Default
choice is 0.05.
PPAj_cutoff A numeric value. It’s a user-specified threshold of PPAj (trait-specific posterior
probability of association). Only those traits having PPAj values above this cut-
off are included in the forest plot. So, the choice of this variable as ’0.0’ includes
all traits in the forest plot. Default is 0.01.
Value
The output produced by this function is a diagram file in .pdf format. The details of the diagram are
as follows:
file_name The pdf file is named after the genetic variant. So, if the argument ‘Variant’
in cpbayes_uncor or cpbayes_cor is specified as ’rs1234’, the figure file is
named as rs1234.pdf.
Column1 First column in the figure specifies the name of the phenotypes.
Column2 Second column provides the trait-specific univariate association p-value for a
trait.
Column3 Third column provides the trait-specific posterior probability of association (PPAj)
produced by CPBayes.
Column4 Fourth column states whether a phenotype was selected in the optimal subset
of associated/non-null traits detected by CPBayes. If a phenotype was not se-
lected, selected and positively associated, or selected and negatively associated, its
association status is stated as null, positive, or negative, respectively.
Column5 In the right section of the figure, the primary estimate and confidence interval of
the beta/log odds ratio parameter for each trait are plotted.
References
<NAME>, <NAME>, <NAME>, <NAME> (2018) An efficient Bayesian meta analysis ap-
proach for studying cross-phenotype genetic associations. PLoS Genet 14(2): e1007139.
See Also
cpbayes_uncor, cpbayes_cor
Examples
data(ExampleDataUncor)
BetaHat <- ExampleDataUncor$BetaHat
SE <- ExampleDataUncor$SE
traitNames <- paste("Disease", 1:10, sep = "")
SNP1 <- "rs1234"
result <- cpbayes_uncor(BetaHat, SE, Phenotypes = traitNames, Variant = SNP1)
## Not run: forest_cpbayes(result, level = 0.05)
post_summaries Post summary of the MCMC data generated by the uncorrelated or
correlated version of CPBayes.
Description
Run the post_summaries function to summarize the MCMC data produced by cpbayes_uncor or
cpbayes_cor and obtain meaningful insights into an observed pleiotropic signal.
Usage
post_summaries(mcmc_output, level = 0.05)
Arguments
mcmc_output A list returned by either cpbayes_uncor or cpbayes_cor. This list contains the
primary results and MCMC data produced by cpbayes_uncor or cpbayes_cor.
No default is specified. See the example below.
level A numeric value. (1-level)% credible interval (Bayesian analog of the confi-
dence interval) of the unknown true genetic effect (beta/odds ratio) on each trait
is computed. Default choice is 0.05.
Value
The output produced by this function is a list that consists of various components.
variantName It is the name of the genetic variant provided by the user. If not specified by the
user, default name is ‘Variant’.
log10_BF It provides the log10(Bayes factor) produced by CPBayes that measures the
evidence of the overall pleiotropic association.
locFDR It provides the local false discovery rate (posterior probability of null asso-
ciation) produced by CPBayes (a Bayesian analog of the p-value) which is
a measure of the evidence of aggregate-level pleiotropic association. Bayes
factor is adjusted for prior odds, but locFDR is solely a function of posterior
odds. locFDR can sometimes be significantly small indicating an association,
but log10_BF may not. Hence, always check both log10_BF and locFDR.
subset A data frame providing the optimal subset of associated/non-null traits along
with their trait-specific posterior probability of association (PPAj) and direction
of associations. It is NULL if no phenotype is selected by CPBayes.
important_traits
It provides the traits which yield a trait-specific posterior probability of associa-
tion (PPAj) > 20%. Even if a phenotype is not selected in the optimal subset of
non-null traits, it can produce a non-negligible value of trait-specific posterior
probability of association. We note that ‘important_traits’ is expected to include
the traits already contained in ‘subset’. It provides the name of the important
traits and their trait-specific posterior probability of association (PPAj) and the
direction of associations. Always check ’important_traits’ even if ’subset’ con-
tains a single trait. It helps to better explain an observed pleiotropic signal.
traitNames It returns the name of all the phenotypes specified by the user. Default is trait1,
trait2, ... , traitK.
PPAj Data frame providing the trait-specific posterior probability of association for
all the phenotypes.
poste_summary_beta
Data frame providing the posterior summary of the unknown true genetic effect
(beta) on each trait. It gives posterior mean, median, standard error, credible
interval (lower and upper limits) of the true beta corresponding to each trait.
poste_summary_OR
Data frame providing the posterior summary of the unknown true genetic effect
(odds ratio) on each trait. It gives posterior mean, median, standard error, credi-
ble interval (lower and upper limits) of the true odds ratio corresponding to each
trait.
References
<NAME>, <NAME>, <NAME>, <NAME> (2018) An efficient Bayesian meta analysis ap-
proach for studying cross-phenotype genetic associations. PLoS Genet 14(2): e1007139.
See Also
cpbayes_uncor, cpbayes_cor
Examples
data(ExampleDataUncor)
BetaHat <- ExampleDataUncor$BetaHat
BetaHat
SE <- ExampleDataUncor$SE
SE
traitNames <- paste("Disease", 1:10, sep = "")
SNP1 <- "rs1234"
result <- cpbayes_uncor(BetaHat, SE, Phenotypes = traitNames, Variant = SNP1)
PleioSumm <- post_summaries(result, level = 0.05)
str(PleioSumm)
SampleOverlapMatrix An example data of sample-overlap matrices.
Description
An example data of sample-overlap matrices for five different diseases in the Kaiser GERA cohort (a
real data). SampleOverlapMatrix is a list that contains an example of the sample overlap matrices
for five different diseases in the Kaiser GERA cohort. SampleOverlapMatrix$n11 provides the
number of cases shared between all possible pairs of diseases. SampleOverlapMatrix$n00 provides
the number of controls shared between all possible pairs of diseases. SampleOverlapMatrix$n10
provides the number of subjects who are case for one disease and control for another disease.
Usage
data(SampleOverlapMatrix)
Format
A list consisting of three integer square matrices (each of dimension 5 by 5):
n11 number of cases shared between all possible pairs of diseases.
n00 number of controls shared between all possible pairs of diseases.
n10 number of subjects who are case for one disease and control for another disease.
Examples
data(SampleOverlapMatrix)
n11 <- SampleOverlapMatrix$n11
n11
n00 <- SampleOverlapMatrix$n00
n00
n10 <- SampleOverlapMatrix$n10
n10
estimate_corln(n11,n00,n10)
Designing Event-Driven Systems
Concepts and Patterns for Streaming Services with Apache Kafka
<NAME>
Foreword by <NAME>
Beijing · Boston · Farnham · Sebastopol · Tokyo
Designing Event-Driven Systems
by <NAME>
Copyright © 2018 O'Reilly Media. All rights reserved.
Printed in the United States of America.
Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.
O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com/safari). For more information, contact our corporate/institutional sales department: 800-998-9938 or <EMAIL>.
Editor: <NAME> Production Editor: <NAME> Copyeditor: <NAME> Proofreader: <NAME> Interior Designer: <NAME> Cover Designer: <NAME> Illustrator: <NAME>
First Edition: April 2018
Revision History for the First Edition: 2018-03-28: First Release
The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Designing Event-Driven Systems, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.
While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
This work is part of a collaboration between O'Reilly and Confluent. See our statement of editorial independence.
978-1-492-03822-1
Table of Contents
Foreword
Preface
Part I. Setting the Stage
1. Introduction
2. The Origins of Streaming
3. Is Kafka What You Think It Is?
Kafka Is Like REST but Asynchronous?
Kafka Is Like a Service Bus?
Kafka Is Like a Database?
What Is Kafka Really? A Streaming Platform
4. Beyond Messaging: An Overview of the Kafka Broker
The Log: An Efficient Structure for Retaining and Distributing Messages
Linear Scalability
Segregating Load in Multiservice Ecosystems
Maintaining Strong Ordering Guarantees
Ensuring Messages Are Durable
Load-Balance Services and Make Them Highly Available
Compacted Topics
Long-Term Data Storage
Security
Summary
Part II. Designing Event-Driven Systems
5. Events: A Basis for Collaboration
Commands, Events, and Queries
Coupling and Message Brokers
Using Events for Notification
Using Events to Provide State Transfer
Which Approach to Use
The Event Collaboration Pattern
Relationship with Stream Processing
Mixing Request- and Event-Driven Protocols
Summary
6. Processing Events with Stateful Functions
Making Services Stateful
Summary
7. Event Sourcing, CQRS, and Other Stateful Patterns
Event Sourcing, Command Sourcing, and CQRS in a Nutshell
Version Control for Your Data
Making Events the Source of Truth
Command Query Responsibility Segregation
Materialized Views
Polyglot Views
Whole Fact or Delta?
Implementing Event Sourcing and CQRS with Kafka
Summary
Part III. Rethinking Architecture at Company Scales
8. Sharing Data and Services Across an Organization
Encapsulation Isn't Always Your Friend
The Data Dichotomy
What Happens to Systems as They Evolve?
Make Data on the Outside a First-Class Citizen
Don't Be Afraid to Evolve
Summary
9. Event Streams as a Shared Source of Truth
A Database Inside Out
Summary
10. Lean Data
If Messaging Remembers, Databases Don't Have To
Take Only the Data You Need, Nothing More
Rebuilding Event-Sourced Views
Automation and Schema Migration
Summary
Part IV. Consistency, Concurrency, and Evolution
11. Consistency and Concurrency in Event-Driven Systems
Eventual Consistency
The Single Writer Principle
Atomicity with Transactions
Identity and Concurrency Control
Limitations
Summary
12. Transactions, but Not as We Know Them
The Duplicates Problem
Using the Transactions API to Remove Duplicates
Exactly Once Is Both Idempotence and Atomic Commit
How Kafka's Transactions Work Under the Covers
Store State and Send Events Atomically
Do We Need Transactions? Can We Do All This with Idempotence?
What Can't Transactions Do?
Making Use of Transactions in Your Services
Summary
13. Evolving Schemas and Data over Time
Using Schemas to Manage the Evolution of Data in Time
Handling Schema Change and Breaking Backward Compatibility
Collaborating over Schema Change
Handling Unreadable Messages
Deleting Data
Segregating Public and Private Topics
Summary
Part V. Implementing Streaming Services with Kafka
14. Kafka Streams and KSQL
A Simple Email Service Built with Kafka Streams and KSQL
Windows, Joins, Tables, and State Stores
Summary
15. Building Streaming Services
An Order Validation Ecosystem
Join-Filter-Process
Event-Sourced Views in Kafka Streams
Collapsing CQRS with a Blocking Read
Scaling Concurrent Operations in Streaming Systems
Rekey to Join
Repartitioning and Staged Execution
Waiting for N Events
Reflecting on the Design
A More Holistic Streaming Ecosystem
Summary
Foreword
For as long as we've been talking about services, we've been talking about data. In fact, before we even had the word "microservices" in our lexicon, back when it was just good old-fashioned service-oriented architecture, we were talking about data: how to access it, where it lives, who owns it. Data is all-important, vital for the continued success of our business, but it has also been seen as a massive constraint in how we design and evolve our systems.
My own journey into microservices began with work I was doing to help organizations ship software more quickly. This meant a lot of time was spent on things like cycle time analysis, build pipeline design, test automation, and infrastructure automation. The advent of the cloud was a huge boon to the work we were doing, as the improved automation made us even more productive. But I kept hitting some fundamental issues. All too often, the software wasn't designed in a way that made it easy to ship. And data was at the heart of the problem.
Back then, the most common pattern I saw for service-based systems was sharing a database among multiple services. The rationale was simple: the data I need is already in this other database, and accessing a database is easy, so I'll just reach in and grab what I need. This may allow for fast development of a new service, but over time it becomes a major constraint.
As I expanded upon in my book, Building Microservices, a shared database creates a huge coupling point in your architecture. It becomes difficult to understand what changes can be made to a schema shared by multiple services. <NAME> showed us back in 1971 that the secret to creating software whose parts could be changed independently was to hide information between modules. But at a swoop, exposing a schema to multiple services prohibits our ability to independently evolve our codebases.
1 <NAME>, On the Criteria to Be Used in Decomposing Systems into Modules (Pittsburgh, PA: Carnegie Mellon University, 1971).
As the needs and expectations of software changed, IT organizations changed with them. The shift from siloed IT toward business- or product-aligned teams helped improve the customer focus of those teams. This shift often happened in concert with the move to improve the autonomy of those teams, allowing them to develop new ideas, implement them, and then ship them, all while reducing the need for coordination with other parts of the organization. But highly coupled architectures require heavy coordination between systems and the teams that maintain them; they are the enemy of any organization that wants to optimize autonomy.
Amazon spotted this many years ago. It wanted to improve team autonomy to allow the company to evolve and ship software more quickly. To this end, Amazon created small, independent teams who would own the whole lifecycle of delivery. <NAME>, after leaving Amazon for Google, attempted to capture what it was that made those teams work so well in his infamous (in some circles) Platform Rant. In it, he outlined the mandate from Amazon CEO <NAME> regarding how teams should work together and how they should design systems.
These points in particular resonate for me:
1) All teams will henceforth expose their data and functionality through service interfaces.
2) Teams must communicate with each other through these interfaces.
3) There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another teams datastore, no shared-memory model, no backdoors whatsoever. The only communication allowed is via service interface calls over the network.
In my own way, I came to the realization that how we store and share data is key to ensuring we develop loosely coupled architectures. Well-defined interfaces are key, as is hiding information. If we need to store data in a database, that database should be part of a service, and not accessed directly by other services. A well-defined interface should guide when and how that data is accessed and manipulated.
Much of my time over the past several years has been taken up with pushing this idea. But while people increasingly get it, challenges remain. The reality is that services do need to work together and do sometimes need to share data. How do you do that effectively? How do you ensure that this is done in a way that is sympathetic to your application's latency and load conditions? What happens when one service needs a lot of information from another?
Enter streams of events, specifically the kinds of streams that technology like Kafka makes possible. We're already using message brokers to exchange events, but Kafka's ability to make that event stream persistent allows us to consider a new way of storing and exchanging data without losing out on our ability to create loosely coupled autonomous architectures. In this book, Ben talks about the idea of turning the database inside out, a concept that I suspect will get as many skeptical responses as I did back when I was suggesting moving away from giant shared databases. But after the last couple of years I've spent exploring these ideas with Ben, I can't help thinking that he and the other people working on these concepts and technology (and there is certainly lots of prior art here)
really are on to something.
I'm hopeful that the ideas outlined in this book are another step forward in how we think about sharing and exchanging data, helping us change how we build microservice architecture. The ideas may well seem odd at first, but stick with them. Ben is about to take you on a very interesting journey.
<NAME>
Preface
In 2006 I was working at ThoughtWorks, in the UK. There was a certain energy to the office at that time, with lots of interesting things going on. The Agile movement was in full bloom, BDD (behavior-driven development) was flourishing, people were experimenting with Event Sourcing, and SOA (service-oriented architecture) was being adapted to smaller projects to deal with some of the issues we'd seen in larger implementations.
One project I worked on was led by <NAME>, an energetic and cheerful fellow who managed to transfer his jovial bluster into pretty much everything we did.
The project was a relatively standard, medium-sized enterprise application. It had a web portal where customers could request a variety of conveyancing services. The system would then run various synchronous and asynchronous processes to put the myriad of services they requested into action.
There were a number of interesting elements to that particular project, but the one that really stuck with me was the way the services communicated. It was the first system I'd worked on that was built solely from a collaboration of events.
Having worked with a few different service-based systems before, all built with RPCs (remote procedure calls) or request-response messaging, I thought this one felt very different. There was something inherently spritely about the way you could plug new services right into the event stream, and something deeply satisfying about tailing the log of events and watching the narrative of the system whizz past.
A few years later, I was working at a large financial institution that wanted to build a data service at the heart of the company, somewhere applications could find the important datasets that made the bank work: trades, valuations, reference data, and the like. I find this sort of problem quite compelling: it was technically challenging and, although a number of banks and other large companies had taken this kind of approach before, it felt like the technology had moved on to a point where we could build something really interesting and transformative.
Yet getting the technology right was only the start of the problem. The system had to interface with every major department, and that meant a lot of stakeholders with a lot of requirements, a lot of different release schedules, and a lot of expectations around uptime. I remember discussing the practicalities of the project as we talked our design through in a two-week stakeholder kick-off meeting. It seemed a pretty tall order, not just technically but organizationally, yet it also seemed plausible.
So we pulled together a team, with a bunch of people from ThoughtWorks and Google and a few other places, and the resulting system had some pretty interesting properties. The datastore held queryable data in memory, spread over 35 machines per datacenter, so it could handle being hit from a compute grid.
Writes went directly through the query layer into a messaging system, which formed (somewhat unusually for the time) the system of record. Both the query layer and the messaging layer were designed to be sharded so they could scale linearly. So every insert or update was also a published event, and there was no side-stepping it either; it was baked into the heart of the architecture.
The interesting thing about making messaging the system of record is you find yourself repurposing the data stream to do a whole variety of useful things:
recording it on a filesystem for recovery, pushing it to another datacenter, hydrating a set of databases for reporting and analytics, and, of course, broadcasting it to anyone with the API who wants to listen.
But the real importance of using messaging as a system of record evaded me somewhat at the time. I remember speaking about the project at QCon, and there were more questions about the lone "messaging as a system of record" slide, which I'd largely glossed over, than there were about the fancy distributed join layer that the talk had focused on. So it slowly became apparent that, for all its features (the data-driven precaching that made joins fast, the SQL-over-Document interface, the immutable data model, and late-bound schema), what most customers needed was really subtly different, and somewhat simpler. While they would start off making use of the data service directly, as time passed, some requirement would often lead them to take a copy, store it independently, and do their own thing. But despite this, they still found the central dataset useful and would often take a subset, then later come back for more. So, on reflection, it seemed that a messaging system optimized to hold datasets would be more appropriate than a database optimized to publish them. A little while later Confluent formed, and Kafka seemed a perfect solution for this type of problem.
The interesting thing about these two experiences (the conveyancing application and the bank-wide data service) is that they are more closely related than they may initially appear. The conveyancing application had been wonderfully collaborative, yet pluggable. At the bank, a much larger set of applications and services integrated through events, but also leveraged a historic reference they could go back to and query. So the contexts were quite different (the first was a single application, the second a company) but much of the elegance of both systems came from their use of events.
Streaming systems today are in many ways quite different from both of these examples, but the underlying patterns haven't really changed all that much. Nevertheless, the devil is in the details, and over the last few years we've seen clients take a variety of approaches to solving both of these kinds of problems, along with many others. Problems that both distributed logs and stream processing tools are well suited to, and I've tried to extract the key elements of these approaches in this short book.
How to Read This Book
The book is arranged into five sections. Part I sets the scene, with chapters that introduce Kafka and stream processing and should provide even seasoned practitioners with a useful overview of the base concepts. In Part II you'll find out how to build event-driven systems, how such systems relate to stateful stream processing, and how to apply patterns like Event Collaboration, Event Sourcing, and CQRS. Part III is more conceptual, building on the ideas from Part II, but applying them at the level of whole organizations. Here we question many of the common approaches used today, and dig into patterns like event streams as a source of truth. Part IV and Part V are more practical. Part V starts to dip into a little code, and there is an associated GitHub project to help you get started if you want to build fine-grained services with Kafka Streams.
The introduction given in Chapter 1 provides a high-level overview of the main concepts covered in this book, so it is a good place to start.
Acknowledgments
Many people contributed to this book, both directly and indirectly, but a special thanks to <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and of course my ever-patient wife, Emily.
PART I
Setting the Stage
The truth is the log.
Pat Helland, Immutability Changes Everything, 2015
CHAPTER 1
Introduction
While the main focus of this book is the building of event-driven systems of different sizes, there is a deeper focus on software that spans many teams. This is the realm of service-oriented architectures: an idea that arose around the start of the century, where a company reconfigures itself around shared services that do commonly useful things.
This idea became quite popular. Amazon famously banned all intersystem communications by anything that wasn't a service interface. Later, upstart Netflix went all in on microservices, and many other web-based startups followed suit. Enterprise companies did similar things, but often using messaging systems, which have a subtly different dynamic. Much was learned during this time, and there was significant progress made, but it wasn't straightforward.
One lesson learned, which was pretty ubiquitous at the time, was that service-based approaches significantly increased the probability of you getting paged at 3 a.m., when one or more services go down. In hindsight, this shouldn't have been surprising. If you take a set of largely independent applications and turn them into a web of highly connected ones, it doesn't take too much effort to imagine that one important but flaky service can have far-reaching implications, and in the worst case bring the whole system to a halt. As <NAME> put it in his famous Amazon/Google post, "Organizing into services taught teams not to trust each other in most of the same ways they're not supposed to trust external developers."
What did work well for Amazon, though, was the element of organizational change that came from being wholeheartedly service based. Service teams think of their software as being a cog in a far larger machine. As <NAME> put it, "Be of the web, not behind the web." This was a huge shift from the way people built applications previously, where intersystem communication was something teams reluctantly bolted on as an afterthought. But the services model made interaction a first-class entity. Suddenly your users weren't just customers or businesspeople; they were other applications, and they really cared that your service was reliable. So applications became platforms, and building platforms is hard.
LinkedIn felt this pain as it evolved away from its original, monolithic Java application into 800 to 1,100 services. Complex dependencies led to instability, versioning issues caused painful lockstep releases, and early on, it wasn't clear that the new architecture was actually an improvement.
One difference in the way LinkedIn evolved its approach was its use of a messaging system built in-house: Kafka. Kafka added an asynchronous publish-subscribe model to the architecture that enabled trillions of messages a day to be transported around the organization. This was important for a company in hypergrowth, as it allowed new applications to be plugged in without disturbing the fragile web of synchronous interactions that drove the frontend.
But this idea of rearchitecting a system around events isn't new; event-driven architectures have been around for decades, and technologies like enterprise messaging are big business, particularly with (unsurprisingly) enterprise companies. Most enterprises have been around for a long time, and their systems have grown organically, over many iterations or through acquisition. Messaging systems naturally fit these complex and disconnected worlds for the same reasons observed at LinkedIn: events decouple, and this means different parts of the company can operate independently of one another. It also means it's easier to plug new systems into the real-time stream of events.
A good example is the regulation that hit the finance industry in January 2018,
which states that trading activity has to be reported to a regulator within one minute of it happening. A minute may seem like a long time in computing terms,
but it takes only one batch-driven system, on the critical path in one business silo, for that to be unattainable. So the banks that had gone to the effort of instal ling real-time trade eventing, and plumbed it across all their product-aligned silos, made short work of these regulations. For the majority that hadnt it was a significant effort, typically resulting in half-hearted, hacky solutions.
So enterprise companies start out complex and disconnected: many separate,
asynchronous islands, often with users of their own, operating independently of one another for the most part. Internet companies are different, starting life as simple, front-facing web applications where users click buttons and expect things to happen. Most start as monoliths and stay that way for some time (arguably for longer than they should). But as internet companies grow and their business gets more complex, they see a similar shift to asynchronicity. New teams and departments are introduced and they need to operate independently, freed from the synchronous bonds that tie the frontend. So ubiquitous desires for online utilities, like making a payment or updating a shopping basket, are slowly replaced by a growing need for datasets that can be used, and evolved, without any specific application lock-in.
But messaging is no panacea. Enterprise service buses (ESBs), for example, have vocal detractors and traditional messaging systems have a number of issues of their own. They are often used to move data around an organization, but the absence of any notion of history limits their value. So, even though recent events typically have more value than old ones, business operations still need historical data, whether it's users wanting to query their account history, some service needing a list of customers, or analytics that need to be run for a management report.
On the other hand, data services with HTTP-fronted interfaces make lookups simple. Anyone can reach in and run a query. But they dont make it so easy to move data around. To extract a dataset you end up running a query, then period ically polling the service for changes. This is a bit of a hack, and typically the operators in charge of the service youre polling wont thank you for it.
But replayable logs, like Kafka, can play the role of an event store: a middle ground between a messaging system and a database. (If you don't know Kafka, don't worry; we dive into it in Chapter 4.) Replayable logs decouple services from one another, much like a messaging system does, but they also provide a central point of storage that is fault-tolerant and scalable: a shared source of truth that any application can fall back to.
A shared source of truth turns out to be a surprisingly useful thing. Microservices, for example, don't share their databases with one another (referred to as the Integration Database antipattern). There is a good reason for this: databases have very rich APIs that are wonderfully useful on their own, but when widely shared they make it hard to work out if and how one application is going to affect others, be it data couplings, contention, or load. But the business facts that services do choose to share are the most important facts of all. They are the truth that the rest of the business is built on. <NAME> called out this distinction back in 2006, denoting it "data on the outside."
But a replayable log provides a far more suitable place to hold this kind of data because (somewhat counterintuitively) you can't query it! It is purely about storing data and pushing it to somewhere new. This idea of pure data movement is important, because data on the outside (the data services share) is the most tightly coupled of all, and the more services an ecosystem has, the more tightly coupled this data gets. The solution is to move data somewhere that is more loosely coupled, so that means moving it into your application where you can manipulate it to your heart's content. So data movement gives applications a level of operability and control that is unachievable with a direct, runtime dependency. This idea of retaining control turns out to be important; it's the same reason the shared database pattern doesn't work out well in practice.
So, this replayable log-based approach has two primary benefits. First, it makes it easy to react to events that are happening now, with a toolset specifically designed for manipulating them. Second, it provides a central repository that can push whole datasets to wherever they may be needed. This is pretty useful if you run a global business with datacenters spread around the world, need to bootstrap or prototype a new project quickly, do some ad hoc data exploration, or build a complex service ecosystem that can evolve freely and independently.
So there are some clear advantages to the event-driven approach (and there are of course advantages for the REST/RPC models too). But this is, in fact, only half the story. Streaming isnt simply an alternative to RPCs that happens to work better for highly connected use cases; its a far more fundamental change in mindset that involves rethinking your business as an evolving stream of data, and your services as functions that transform these streams of data into something new.
This can feel unnatural. Many of us have been brought up with programming styles where we ask questions or issue commands and wait for answers. This is how procedural or object-oriented programs work, but the biggest culprit is probably the database. For nearly half a century databases have played a central role in system design, shaping, more than any other tool, the way we write (and think about) programs. This has been, in some ways, unfortunate.
As we move from chapter to chapter, this book builds up a subtly different approach to dealing with data, one where the database is taken apart, unbundled,
deconstructed, and turned inside out. These concepts may sound strange or even novel, but they are, like many things in software, evolutions of older ideas that have arisen somewhat independently in various technology subcultures. For some time now, mainstream programmers have used event-driven architectures,
Event Sourcing, and CQRS (Command Query Responsibility Segregation) as a means to break away from the pains of scaling database-centric systems. The big data space encountered similar issues as multiterabyte-sized datasets highlighted the inherent impracticalities of batch-driven data management, which in turn led to a pivot toward streaming. The functional world has sat aside, somewhat know ingly, periodically tugging at the imperative views of the masses.
But these disparate progressions (turning the database inside out, destructuring, CQRS, unbundling) all have one thing in common. They are all simple metaphors for the need to separate the conflation of concepts embedded into every database we use, to decouple them so that we can manage them separately and hence efficiently.
There are a number of reasons for wanting to do this, but maybe the most important of all is that it lets us build larger and more functionally diverse systems. So while a database-centric approach works wonderfully for individual applications, we don't live in a world of individual applications. We live in a world of interconnected systems: individual components that, while all valuable in themselves, are really part of a much larger puzzle. We need a mechanism for sharing data that complements this complex, interconnected world. Events lead us to this. They constantly push data into our applications. These applications react, blending streams together, building views, changing state, and moving themselves forward. In the streaming model there is no shared database. The database is the event stream, and the application simply molds it into something new.
In fairness, streaming systems still have database-like attributes such as tables
(for lookups) and transactions (for atomicity), but the approach has a radically different feel, more akin to functional or dataflow languages (and there is much cross-pollination between the streaming and functional programming communities).
So when it comes to data, we should be unequivocal about the shared facts of our system. They are the very essence of our business, after all. Facts may be evolved over time, applied in different ways, or even recast to different contexts, but they should always tie back to a single thread of irrevocable truth, one from which all others are derived: a central nervous system that underlies and drives every modern digital business.
This book looks quite specifically at the application of Apache Kafka to this problem. In Part I we introduce streaming and take a look at how Kafka works.
Part II focuses on the patterns and techniques needed to build event-driven pro grams: Event Sourcing, Event Collaboration, CQRS, and more. Part III takes these ideas a step further, applying them in the context of multiteam systems,
including microservices and SOA, with a focus on event streams as a source of truth and the aforementioned idea that both systems and companies can be reimagined as a database turned inside out. In the final part, we take a slightly more practical focus, building a small streaming system using Kafka Streams
(and KSQL).
CHAPTER 2
The Origins of Streaming
This book is about building business systems with stream processing tools, so it is useful to have an appreciation for where stream processing came from. The maturation of this toolset, in the world of real-time analytics, has heavily influenced the way we build event-driven systems today.
Figure 2-1 shows a stream processing system used to ingest data from several hundred thousand mobile devices. Each device sends small JSON messages to denote applications on each mobile phone that are being opened, being closed, or crashing. This can be used to look for instability, that is, where the ratio of crashes to usage is comparatively high.
Figure 2-1. A typical streaming application that ingests data from mobile devices into Kafka, processes it in a streaming layer, and then pushes the result to a serving layer where it can be queried
The mobile devices land their data into Kafka, which buffers it until it can be extracted by the various applications that need to put it to further use. For this type of workload the cluster would be relatively large; as a ballpark figure Kafka ingests data at network speed, but the overhead of replication typically divides that by three (so a three-node 10 GbE cluster will ingest around 1 GB/s in practice).
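A rough back-of-the-envelope check of that divide-by-three figure, assuming a replication factor of 3 and that each broker's 10 GbE NIC is the limiting resource (this derivation is mine, not the book's):
\[
\frac{3 \text{ brokers} \times 1.25\ \text{GB/s per NIC}}{3 \text{ inbound copies per message (replication factor)}} \approx 1.25\ \text{GB/s}
\]
Protocol overhead and consumer fetch traffic then bring the practical ingest rate down to roughly the 1 GB/s quoted above.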
To the right of Kafka in Figure 2-1 sits the stream processing layer. This is a clustered application, where queries are either defined up front via the Java DSL or sent dynamically via KSQL, Kafka's SQL-like stream processing language. Unlike in a traditional database, these queries compute continuously, so every time an input arrives in the stream processing layer, the query is recomputed, and a result is emitted if the value of the query has changed.
Once a new message has passed through all streaming computations, the result lands in a serving layer from which it can be queried. Cassandra is shown in Figure 2-1, but pushing to HDFS (Hadoop Distributed File System), pushing to another datastore, or querying directly from Kafka Streams using its interactive queries feature are all common approaches as well.
To understand streaming better, it helps to look at a typical query. Figure 2-2 shows one that computes the total number of app crashes per day. Every time a new message comes in, signifying that an application crashed, the count of total crashes for that application will be incremented. Note that this computation requires state: the count for the day so far (i.e., within the window duration) must be stored so that, should the stream processor crash/restart, the count will continue where it was before. Kafka Streams and KSQL manage this state internally, and that state is backed up to Kafka via a changelog topic. This is discussed in more detail in "Windows, Joins, Tables, and State Stores" on page 135 in Chapter 14.
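To make the mechanics concrete, here is a minimal Kafka Streams sketch of such a stateful, windowed count. The book's own example is expressed in KSQL; the topic name, event encoding, and application ID below are assumptions made purely for illustration.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.TimeWindows;
import java.time.Duration;
import java.util.Properties;
public class CrashCounter {
public static void main(String[] args) {
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "app-crash-counter");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
StreamsBuilder builder = new StreamsBuilder();
// Assumed input: one record per app event, keyed by application ID,
// with a simple string value such as "OPEN", "CLOSE", or "CRASH".
KStream<String, String> events = builder.stream("mobile-app-events");
events.filter((appId, event) -> "CRASH".equals(event)) // keep only crash events
.groupByKey() // group per application
.windowedBy(TimeWindows.of(Duration.ofDays(1))) // one count per app per day
.count(); // stateful count; the store is backed up to Kafka via a changelog topic
new KafkaStreams(builder.build(), props).start();
}
}
In KSQL the same aggregation would be written as a windowed COUNT(*) query; either way, because the running totals live in a changelog-backed state store, the count picks up where it left off after a crash or restart.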
Figure 2-2. A simple KSQL query that evaluates crashes per day
Multiple queries of this type can be chained together in a pipeline. In Figure 2-3, we break the preceding problem into three steps chained over two stages. Queries (a) and (b) continuously compute apps opened per day and apps crashed per day, respectively. The two resulting output streams are combined together in the final stage (c), which computes application stability by calculating the ratio between crashes and usage and comparing it to a fixed bound.
Figure 2-3. Two initial stream processing queries are pushed into a third to create a pipeline
There are a few other things to note about this streaming approach:
The streaming layer is fault-tolerant
It runs as a cluster on all available nodes. If one node exits, another will pick up where it left off. Likewise, you can scale out the cluster by adding new processing nodes. Work, and any required state, will automatically be rerouted to make use of these new resources.
Each stream processor node can hold state of its own
This is required for buffering as well as holding whole tables, for example, to do enrichments (streams and tables are discussed in more detail in "Windows, Joins, Tables, and State Stores" on page 135 in Chapter 14). This idea of local storage is important, as it lets the stream processor perform fast, message-at-a-time queries without crossing the network, a necessary feature for the high-velocity workloads seen in internet-scale use cases. But this ability to internalize state in local stores turns out to be useful for a number of business-related use cases too, as we discuss later in this book.
Each stream processor can write and store local state
Making message-at-a-time network calls isn't a particularly good idea when you're handling a high-throughput event stream. For this reason stream processors write data locally (so writes and reads are fast) and back those writes up to Kafka. So, for example, the aforementioned count requires a running total to be tracked so that, should a crash and restart occur, the computation resumes from its previous position and the count remains accurate. This ability to store data locally is very similar conceptually to the way you might interact with a database in a traditional application. But unlike in a traditional two-tier application, where interacting with the database means making a network call, in stream processing all the state is local (you might think of it as a kind of cache), so it is fast to access, with no network calls needed. Because it is also flushed back to Kafka, it inherits Kafka's durability guarantees. We discuss this in more detail in "Scaling Concurrent Operations in Streaming Systems" on page 142 in Chapter 15.
CHAPTER 3
Is Kafka What You Think It Is?
There is an old parable about an elephant and a group of blind men. None of the men had come across an elephant before. One blind man approaches the leg and declares, "It's like a tree." Another man approaches the tail and declares, "It's like a rope." A third approaches the trunk and declares, "It's like a snake." So each blind man senses the elephant from his particular point of view, and comes to a subtly different conclusion as to what an elephant is. Of course the elephant is like all these things, but it is really just an elephant!
Likewise, when people learn about Kafka they often see it from a certain viewpoint. These perspectives are usually accurate, but highlight only some subsection of the whole platform. In this chapter we look at some common points of view.
Kafka Is Like REST but Asynchronous?
Kafka provides an asynchronous protocol for connecting programs together, but it is undoubtedly a bit different from, say, TCP (transmission control protocol),
HTTP, or an RPC protocol. The difference is the presence of a broker. A broker is a separate piece of infrastructure that broadcasts messages to any programs that are interested in them, as well as storing them for as long as is needed. So it's perfect for streaming or fire-and-forget messaging.
Other use cases sit further from its home ground. A good example is request-response. Say you have a service for querying customer information. So you call a getCustomer() method, passing a CustomerId, and get a document describing a customer in the reply. You can build this type of request-response interaction with Kafka using two topics: one that transports the request and one that transports the response. People build systems like this, but in such cases the broker doesn't contribute all that much. There is no requirement for broadcast. There is also no requirement for storage. So this leaves the question: would you be better off using a stateless protocol like HTTP?
So Kafka is a mechanism for programs to exchange information, but its home ground is event-based communication, where events are business facts that have value to more than one service and are worth keeping around.
Kafka Is Like a Service Bus?
If we consider Kafka as a messaging system, with its Connect interface, which pulls data from and pushes data to a wide range of interfaces and datastores, and streaming APIs that can manipulate data in flight, it does look a little like an ESB (enterprise service bus). The difference is that ESBs focus on the integration of legacy and off-the-shelf systems, using an ephemeral and comparably low-throughput messaging layer, which encourages request-response protocols (see the previous section).
Kafka, however, is a streaming platform, and as such puts emphasis on high-throughput events and stream processing. A Kafka cluster is a distributed system at heart, providing high availability, storage, and linear scale-out. This is quite different from traditional messaging systems, which are limited to a single machine, or if they do scale outward, those scalability properties do not stretch from end to end. Tools like Kafka Streams and KSQL allow you to write simple programs that manipulate events as they move and evolve. These make the processing capabilities of a database available in the application layer, via an API,
and outside the confines of the shared broker. This is quite important.
ESBs are criticized in some circles. This criticism arises from the way the technology has been built up over the last 15 years, particularly where ESBs are controlled by central teams that dictate schemas, message flows, validation, and even transformation. In practice centralized approaches like this can constrain an organization, making it hard for individual applications and services to evolve at their own pace.
ThoughtWorks called this out recently, encouraging users to steer clear of recreating the issues seen in ESBs with Kafka. At the same time, the company encouraged users to investigate event streaming as a source of truth, which we discuss in Chapter 9. Both of these represent sensible advice.
So Kafka may look a little like an ESB, but as we'll see throughout this book, it is very different. It provides a far higher level of throughput, availability, and storage, and there are hundreds of companies routing their core facts through a single Kafka cluster. Beyond that, streaming encourages services to retain control, particularly of their data, rather than providing orchestration from a single, central team or platform. So while having one single Kafka cluster at the center of an organization is quite common, the pattern works because it is simple: nothing more than data transfer and storage, provided at scale and high availability. This is emphasized by the core mantra of event-driven services: Centralize an immutable stream of facts. Decentralize the freedom to act, adapt, and change.
Kafka Is Like a Database?
Some people like to compare Kafka to a database. It certainly comes with similar features. It provides storage; production topics with hundreds of terabytes are not uncommon. It has a SQL interface that lets users define queries and execute them over the data held in the log. These can be piped into views that users can query directly. It also supports transactions. These are all things that sound quite databasey in nature!
So many of the elements of a traditional database are there, but if anything, Kafka is a database inside out (see "A Database Inside Out" on page 87 in Chapter 9), a tool for storing data, processing it in real time, and creating views. And while you are perfectly entitled to put a dataset in Kafka, run a KSQL query over it, and get an answer (much like you might in a traditional database), KSQL and Kafka Streams are optimized for continual computation rather than batch processing.
So while the analogy is not wholly inaccurate, it is a little off the mark. Kafka is designed to move data, operating on that data as it does so. Its about real-time processing first, long-term storage second.
What Is Kafka Really? A Streaming Platform
As Figure 3-1 illustrates, Kafka is a streaming platform. At its core sits a cluster of Kafka brokers (discussed in detail in Chapter 4). You can interact with the cluster through a wide range of client APIs in Go, Scala, Python, REST, and more.
There are two APIs for stream processing: Kafka Streams and KSQL (which we discuss in Chapter 14). These are database engines for data in flight, allowing users to filter streams, join them together, aggregate, store state, and run arbitrary functions over the evolving dataflow. These APIs can be stateful, which means they can hold data tables much like a regular database (see "Making Services Stateful" on page 47 in Chapter 6).
The third API is Connect. This has a whole ecosystem of connectors that interface with different types of database or other endpoints, both to pull data from and push data to Kafka. Finally there is a suite of utilities, such as Replicator and Mirror Maker, which tie disparate clusters together, and the Schema Registry, which validates and manages schemas applied to messages passed through Kafka, and a number of other tools in the Confluent platform.
A streaming platform brings these tools together with the purpose of turning data at rest into data that flows through an organization. The analogy of a central nervous system is often used. The broker's ability to scale, store data, and run without interruption makes it a unique tool for connecting many disparate applications and services across a department or organization. The Connect interface makes it easy to evolve away from legacy systems, by unlocking hidden datasets and turning them into event streams. Stream processing lets applications and services embed logic directly over these resulting streams of events.
Figure 3-1. The core components of a streaming platform
CHAPTER 4
Beyond Messaging: An Overview of the Kafka Broker
A Kafka cluster is essentially a collection of files, filled with messages, spanning many different machines. Most of Kafka's code involves tying these various individual logs together, routing messages from producers to consumers reliably, replicating for fault tolerance, and handling failure gracefully. So it is a messaging system, at least of sorts, but it's quite different from the message brokers that preceded it. Like any technology, it comes with both pros and cons, and these shape the design of the systems we write. This chapter examines the Kafka broker (i.e., the server component) from the context of building business systems. We'll explore a little about how it works, as well as dipping into the less conventional use cases it supports like data storage, dynamic failover, and bandwidth protection.
Originally built to distribute the datasets created by large social networks, Kafka was predominantly shaped by a need to operate at scale, in the face of failure.
Accordingly, its architecture inherits more from storage systems like HDFS,
HBase, or Cassandra than it does from traditional messaging systems that implement JMS (Java Message Service) or AMQP (Advanced Message Queuing Protocol).
Like many good outcomes in computer science, this scalability comes largely from simplicity. The underlying abstraction is a partitioned log, essentially a set of append-only files spread over a number of machines, which encourages sequential access patterns that naturally flow with the grain of the underlying hardware.
A Kafka cluster is a distributed system, spreading data over many machines both for fault tolerance and for linear scale-out. The system is designed to handle a range of use cases, from high-throughput streaming, where only the latest messages matter, to mission-critical use cases where messages and their relative ordering must be preserved with the same guarantees as you'd expect from a DBMS (database management system) or storage system. The price paid for this scalability is a slightly simpler contract that lacks some of the obligations of JMS or AMQP, such as message selectors.
But this change of tack turns out to be quite important. Kafka's throughput properties make moving data from process to process faster and more practical than with previous technologies. Its ability to store datasets removes the queue-depth problems that plagued traditional messaging systems. Finally, its rich APIs, particularly Kafka Streams and KSQL, provide a unique mechanism for embedding data processing directly inside client programs. These attributes have led to its use as a message and storage backbone for service estates in a wide variety of companies that need all of these capabilities.
The Log: An Efficient Structure for Retaining and Distributing Messages
At the heart of the Kafka messaging system sits a partitioned, replayable log. The log-structured approach is itself a simple idea: a collection of messages, appended sequentially to a file. When a service wants to read messages from Kafka, it seeks to the position of the last message it read, then scans sequentially, reading messages in order while periodically recording its new position in the log (see Figure 4-1).
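In client terms, that seek-scan-record-position loop is what Kafka's consumer API does on your behalf. A minimal sketch follows; the topic name, group ID, and serializers are assumptions for illustration rather than details from the book.
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
public class LogReader {
public static void main(String[] args) {
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "order-reader"); // the reader's position is tracked per group
props.put("enable.auto.commit", "false"); // we record our position explicitly
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
consumer.subscribe(Collections.singletonList("orders"));
while (true) {
// Fetch the next batch of messages after our last recorded position.
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
for (ConsumerRecord<String, String> record : records) {
System.out.printf("partition=%d offset=%d value=%s%n",
record.partition(), record.offset(), record.value());
}
consumer.commitSync(); // periodically record the new position in the log
}
}
}
}
The committed offset is simply the recorded position described above; on restart the consumer resumes scanning from it.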
Figure 4-1. A log is an append-only journal
Taking a log-structured approach has an interesting side effect. Both reads and writes are sequential operations. This makes them sympathetic to the underlying media, leveraging prefetch, the various layers of caching, and naturally batching operations together. This in turn makes them efficient. In fact, when you read messages from Kafka, the server doesn't even import them into the JVM (Java virtual machine). Data is copied directly from the disk buffer to the network buffer (zero copy), an opportunity afforded by the simplicity of both the contract and the underlying data structure.
So batched, sequential operations help with overall performance. They also make the system well suited to storing messages longer term. Most traditional message brokers are built with index structures, hash tables or B-trees, used to manage acknowledgments, filter message headers, and remove messages when they have been read. But the downside is that these indexes must be maintained, and this comes at a cost. They must be kept in memory to get good performance, limiting retention significantly. But the log is O(1) when either reading or writing messages to a partition, so whether the data is on disk or cached in memory matters far less.
There are a few implications to this log-structured approach. If a service has some form of outage and doesn't read messages for a long time, the backlog won't cause the infrastructure to slow significantly (a common problem with traditional brokers, which have a tendency to slow down as they get full). Being log-structured also makes Kafka well suited to performing the role of an event store, for those who like to apply Event Sourcing within their services. This subject is discussed in depth in Chapter 7.
Partitions and Partitioning
Partitions are a fundamental concept for most distributed data systems. A partition is just a bucket that data is put into, much like buckets used to group data in a hash table. In Kafka's terminology each log is a replica of a partition held on a different machine. (So one partition might be replicated three times for high availability. Each replica is a separate log with the same data inside it.) What data goes into each partition is determined by a partitioner, coded into the Kafka producer. The partitioner will either spread data across the available partitions in a round-robin fashion or, if a key is provided with the message, use a hash of the key to determine the partition number. This latter point ensures that messages with the same key are always sent to the same partition and hence are strongly ordered.
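A minimal producer sketch of that behavior follows (the topic name, key, and payloads are illustrative assumptions): the default partitioner hashes the supplied key, so every record with the same key lands in the same partition, while records sent without a key are spread across partitions.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;
public class CustomerUpdatePublisher {
public static void main(String[] args) {
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
// Keyed send: the partition is chosen by hashing the key ("customer-42"),
// so all updates for this customer are appended to the same partition, in order.
producer.send(new ProducerRecord<>("customer-updates", "customer-42",
"{\"email\":\"new@example.com\"}"));
// Unkeyed send: with no key, the partitioner spreads records across partitions.
producer.send(new ProducerRecord<>("customer-updates", "{\"event\":\"heartbeat\"}"));
}
}
}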
Linear Scalability
As we've discussed, logs provide a hardware-sympathetic data structure for messaging workloads, but Kafka is really many logs, spanning many different machines. The system ties these together, routing messages reliably, replicating for fault tolerance, and handling failure gracefully.
While running on a single machine is possible, production clusters typically start at three machines with larger clusters in the hundreds. When you read and write to a topic, you'll typically be reading and writing to all of them, partitioning your data over all the machines you have at your disposal. Scaling is thus a pretty simple affair: add new machines and rebalance. Consumption can also be performed in parallel, with messages in a topic being spread over several consumers in a consumer group (see Figure 4-2).
Figure 4-2. Producers spread messages over many partitions, on many machines,
where each partition is a little queue; load-balanced consumers (denoted a con sumer group) share the partitions between them; rate limits are applied to produc ers, consumers, and groups The main advantage of this, from an architectural perspective, is that it takes the issue of scalability off the table. With Kafka, hitting a scalability wall is virtually impossible in the context of business systems. This can be quite empowering,
especially when ecosystems grow, allowing implementers to pick patterns that are a little more footloose with bandwidth and data movement.
Scalability opens other opportunities too. Single clusters can grow to company scales, without the risk of workloads overpowering the infrastructure. For example, New Relic relies on a single cluster of around 100 nodes, spanning three datacenters, and processing 30 GB/s. In other, less data-intensive domains, 5- to 10-node clusters commonly support whole-company workloads. But it should be noted that not all companies take the "one big cluster" route. Netflix, for example, advises using several smaller clusters to reduce the operational overheads of running very large installations, but their largest installation is still around the 200-node mark.
To manage shared clusters, it's useful to carve bandwidth up, using the bandwidth segregation features that ship with Kafka. We'll discuss these next.
Segregating Load in Multiservice Ecosystems
Service architectures are by definition multitenant. A single cluster will be used by many different services. In fact, it's not uncommon for all services in a company to share a single production cluster. But doing so opens up the potential for inadvertent denial-of-service attacks, causing service degradation or instability.
To help with this, Kafka includes a throughput control feature, called quotas, that allows a defined amount of bandwidth to be allocated to specific services, ensuring that they operate within strictly enforced service-level agreements, or SLAs (see Figure 4-2). Greedy services are aggressively throttled, so a single cluster can be shared by any number of services without the fear of unexpected network contention. This feature can be applied to either individual service instances or load-balanced groups.
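To give a flavor of how this plugs into a client (a rough sketch; the service name and limits are illustrative, and the quota itself is applied by an operator on the broker side): a service identifies itself with a client.id, and bandwidth limits such as producer_byte_rate and consumer_byte_rate are then attached to that identity through Kafka's quota configuration.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class QuotaAwareProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        // The client.id is the identity that broker-side quotas
        // (e.g., producer_byte_rate, consumer_byte_rate) are attached to,
        // so every instance of this service shares the same bandwidth budget.
        props.put("client.id", "fulfilment-service");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send as normal; the broker throttles this client if it exceeds its quota.
        }
    }
}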
Maintaining Strong Ordering Guarantees
While it often isn't the case for analytics use cases, most business systems need strong ordering guarantees. Say a customer makes several updates to their customer information. The order in which these updates are processed is going to matter, or else the latest change might be overwritten with one of the older, out-of-date values.
There are a couple of things that need to be considered to ensure strong ordering guarantees. The first is that messages that require relative ordering need to be sent to the same partition. (Kafka provides ordering guarantees only within a partition.) This is managed for you: you supply the same key for all messages that require a relative order. So a stream of customer information updates would use the CustomerId as their partitioning key. All messages for the same customer would then be routed to the same partition, and hence be strongly ordered (see Figure 4-3).
Figure 4-3. Ordering in Kafka is specified by producers using an ordering key
Sometimes key-based ordering isn't enough, and global ordering is required. This often comes up when you're migrating from legacy messaging systems where global ordering was an assumption of the original system's design. To maintain global ordering, use a single-partition topic. Throughput will be limited to that of a single machine, but this is typically sufficient for use cases of this type.
The second thing to be aware of is retries. In almost all cases we want to enable retries in the producer so that if there is some network glitch, long-running garbage collection, failure, or the like, any messages that aren't successfully sent to the cluster will be retried. The subtlety is that messages are sent in batches, so we should be careful to send these batches one at a time, per destination machine, so there is no potential for a reordering of events when failures occur and batches are retried. This is simply something we configure.
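In the producer this comes down to a couple of configuration properties; a minimal sketch (values are illustrative):

import java.util.Properties;

public class OrderedProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        // Retry failed sends rather than dropping them.
        props.put("retries", Integer.MAX_VALUE);
        // Allow only one in-flight batch per connection, so a retried batch
        // cannot overtake a later one and reorder events.
        // (Newer clients can achieve the same effect with enable.idempotence=true.)
        props.put("max.in.flight.requests.per.connection", 1);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}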
Ensuring Messages Are Durable
Kafka provides durability through replication. This means messages are written to a configurable number of machines so that if one or more of those machines fail, the messages will not be lost. If you configure a replication factor of three, two machines can be lost without losing data.
To make best use of replication, for sensitive datasets like those seen in service-based applications, configure three replicas for each partition and configure the producer to wait for replication to complete before proceeding. Finally, as discussed earlier, configure retries in the producer.
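Putting those pieces together looks roughly like this sketch (topic name, partition count, and broker address are illustrative): a topic replicated three ways, plus a producer that waits for the replicas to acknowledge each write.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class DurableTopicSetup {
    public static void main(String[] args) throws Exception {
        Properties adminProps = new Properties();
        adminProps.put("bootstrap.servers", "broker1:9092");
        try (AdminClient admin = AdminClient.create(adminProps)) {
            // 12 partitions, each replicated to 3 brokers.
            admin.createTopics(Collections.singletonList(
                    new NewTopic("orders", 12, (short) 3))).all().get();
        }

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "broker1:9092");
        // Wait for the write to be replicated before treating it as successful.
        producerProps.put("acks", "all");
        producerProps.put("retries", Integer.MAX_VALUE);
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // ... create a KafkaProducer with producerProps and send as usual.
    }
}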
Highly sensitive use cases may require that data be flushed to disk synchronously, but this approach should be used sparingly. It will have a significant
impact on throughput, particularly in highly concurrent environments. If you do take this approach, increase the producer batch size to increase the effectiveness of each disk flush on the machine (batches of messages are flushed together).
This approach is useful for single-machine deployments, too, where a single ZooKeeper node is run on the same machine and messages are flushed to disk synchronously for resilience.
Load-Balance Services and Make Them Highly Available
Event-driven services should always be run in a highly available (HA) configuration, unless there is genuinely no requirement for HA. The main reason for this is it's essentially a no-op. If we have one instance of a service, then start a second, load will naturally balance across the two. The same process provides high availability should one node crash (see Figure 4-4).
Say we have two instances of the orders service, reading messages from the Orders topic. Kafka would assign half of the partitions to each instance, so the load is spread over the two.
Figure 4-4. If an instance of a service dies, data is redirected and ordering guarantees are maintained
Should one of the services fail, Kafka will detect this failure and reroute messages from the failed service to the one that remains. If the failed service comes back online, load flips back again.
This process actually works by assigning whole partitions to different consumers.
A strength of this approach is that a single partition can only ever be assigned to
a single service instance (consumer). This is an invariant, implying that ordering is guaranteed, even as services fail and restart.
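The only thing the two orders-service instances need in order to share partitions like this is a common group.id; a minimal sketch (topic, group, and broker names are illustrative):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OrdersServiceInstance {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        // Every instance started with this group.id shares the partitions of the
        // orders topic; if one instance dies, its partitions are reassigned to
        // the survivors, preserving per-partition ordering.
        props.put("group.id", "orders-service");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Process the order.
                    System.out.println(record.key() + " -> " + record.value());
                }
            }
        }
    }
}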
So services inherit both high availability and load balancing, meaning they can scale out, handle unplanned outages, or perform rolling restarts without service downtime. In fact, Kafka releases are always backward-compatible with the previous version, so you are guaranteed to be able to release a new version without taking your system offline.
Compacted Topics
By default, topics in Kafka are retention-based: messages are retained for some configurable amount of time. Kafka also ships with a special type of topic that manages keyed datasets, that is, data that has a primary key (identifier) as you might have in a database table. These compacted topics retain only the most recent events, with any old events, for a certain key, being removed. They also support deletes (see "Deleting Data" on page 127 in Chapter 13).
Compacted topics work a bit like simple log-structured merge trees (LSM trees). The topic is scanned periodically, and old messages are removed if they have been superseded (based on their key); see Figure 4-5. It's worth noting that this is an asynchronous process, so a compacted topic may contain some superseded messages, which are waiting to be compacted away.
Figure 4-5. In a compacted topic, superseded messages that share the same key are removed. So, in this example, for key K2, messages V2 and V1 would eventually be compacted as they are superseded by V3.
Compacted topics let us make a couple of optimizations. First, they help us slow down a dataset's growth (by removing superseded events), but we do so in a data-specific way rather than, say, simply removing messages older than two weeks. Second, having smaller datasets makes it easier for us to move them from machine to machine.
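Compaction is just a per-topic setting; a sketch of creating a compacted product-catalogue topic (names and sizes are illustrative):

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CompactedTopicSetup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic productCatalogue = new NewTopic("product-catalogue", 6, (short) 3)
                    // Keep only the latest event per key, rather than applying
                    // time-based retention.
                    .configs(Map.of("cleanup.policy", "compact"));
            admin.createTopics(Collections.singletonList(productCatalogue)).all().get();
        }
    }
}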
This is important for stateful stream processing. Say a service uses the Kafka Streams API to load the latest version of the product catalogue into a table (as discussed in "Windows, Joins, Tables, and State Stores" on page 135 in Chapter 14,
a table is a disk-resident hash table held inside the API). If the product catalogue is stored in a compacted topic in Kafka, the load can be performed quicker and more efficiently if it doesn't have to load the whole versioned history as well (as would be the case with a regular topic).
Long-Term Data Storage
One of the bigger differences between Kafka and other messaging systems is that it can be used as a storage layer. In fact, it's not uncommon to see retention-based or compacted topics holding more than 100 TB of data. But Kafka isn't a database; it's a commit log offering no broad query functionality (and there are no plans for this to change). But its simple contract turns out to be quite useful for storing shared datasets in large systems or company architectures, for example, the use of events as a shared source of truth, as we discuss in Chapter 9.
Data can be stored in regular topics, which are great for audit or Event Sourcing, or compacted topics, which reduce the overall footprint. You can combine the two, getting the best of both worlds at the price of additional storage, by holding both and linking them together with a Kafka Streams job. This pattern is called the latest-versioned pattern.
Security
Kafka provides a number of enterprise-grade security features for both authentication and authorization. Client authentication is provided through either Kerberos or Transport Layer Security (TLS) client certificates, ensuring that the Kafka cluster knows who is making each request. There is also a Unix-like permissions system, which can be used to control which users can access which data. Network communication can be encrypted, allowing messages to be securely sent across untrusted networks. Finally, administrators can require authentication for communication between Kafka and ZooKeeper.
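From the client's side, most of this reduces to a handful of connection properties. The sketch below uses SASL/PLAIN over TLS for brevity; Kerberos (SASL/GSSAPI) or TLS client certificates follow the same pattern with different properties, and all paths and credentials here are illustrative.

import java.util.Properties;

public class SecureClientConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093");
        // Encrypt traffic and authenticate the client over SASL.
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"orders-service\" password=\"changeme\";");
        // Trust store used to verify the brokers' TLS certificates.
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeme");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}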
The quotas mechanism, discussed in the section "Segregating Load in Multiservice Ecosystems" on page 21, can be linked to this notion of identity, and Kafka's security features are extended across the different components of the Confluent platform (the Rest Proxy, Confluent Schema Registry, Replicator, etc.).
Summary
Kafka is a little different from your average messaging technology. Being designed as a distributed, scalable infrastructure component makes it an ideal backbone through which services can exchange and buffer events. There are obviously a number of elements unique to the technology itself, but the ones that
stand out are its abilities to scale, to run always on, and to retain datasets long-term.
We can use the patterns and features discussed in this chapter to build a wide variety of architectures, from fine-grained service-based systems right up to hulking corporate conglomerates. This is an approach that is safe, pragmatic, and tried and tested.
PART II
Designing Event-Driven Systems
Life is a series of natural and spontaneous changes. Don't resist them; that only creates sorrow. Let reality be reality. Let things flow naturally forward.
Lao-Tzu, 6th-5th century BCE
CHAPTER 5
Events: A Basis for Collaboration
Service-based architectures, like microservices or SOA, are commonly built with synchronous request-response protocols. This approach is very natural. It is, after all, the way we write programs: we make calls to other code modules, await a response, and continue. It also fits closely with a lot of use cases we see each day: front-facing websites where users hit buttons and expect things to happen, then return.
But when we step into a world of many independent services, things start to change. As the number of services grows gradually, the web of synchronous interactions grows with them. Previously benign availability issues start to trigger far more widespread outages. Our ops engineers often end up as reluctant detectives, playing out distributed murder mysteries as they frantically run from service to service, piecing together snippets of secondhand information. (Who said what, to whom, and when?)
This is a well-known problem, and there are a number of solutions. One is to ensure each individual service has a significantly higher SLA than your system as a whole. Google provides a protocol for doing this. An alternative is to simply break down the synchronous ties that bind services together using (a) asynchronicity and (b) a message broker as an intermediary.
Say you are working in online retail. You would probably find that synchronous interfaces like getImage() or processOrder(), calls that expect an immediate response, feel natural and familiar. But when a user clicks Buy, they actually trigger a large, complex, and asynchronous process into action. This process takes a purchase and physically ships it to the user's door, way beyond the context of the original button click. So splitting software into asynchronous flows allows us to compartmentalize the different problems we need to solve and embrace a world that is itself inherently asynchronous.
In practice we tend to embrace this automatically. We've all found ourselves polling database tables for changes, or implementing some kind of scheduled cron job to churn through updates. These are simple ways to break the ties of synchronicity, but they always feel like a bit of a hack. There is a good reason for this: they probably are.
So we can condense all these issues into a single observation. The imperative programming model, where we command services to do our bidding, isn't a great fit for estates where services are operated independently.
In this chapter we're going to focus on the other side of the architecture coin: composing services not through chains of commands and queries, but rather through streams of events. This is an implementation pattern in its own right, and has been used in industry for many years, but it also forms a baseline for the more advanced patterns we'll be discussing in Part III and Part V, where we blend the ideas of event-driven processing with those seen in streaming platforms.
Commands, Events, and Queries
Before we go any further, consider that there are three distinct ways that programs can interact over a network: commands, events, and queries. If you've not considered the distinction between these three before, it's well worth doing so, as it provides an important reference for interprocess communication.
The three mechanisms through which services interact can be described as follows (see Table 5-1 and Figure 5-1):
Commands
Commands are actions, requests for some operation to be performed by another service, something that will change the state of the system. Commands execute synchronously and typically indicate completion, although they may also include a result.1
Example: processPayment(), returning whether the payment succeeded.
1 The term command originally came from Bertrand Meyer's CQS (Command Query Separation) principle. A slightly different definition from Bertrand's is used here, leaving it optional as to whether a command should return a result or not. There is a reason for this: a command is a request for something specific to happen in the future. Sometimes it is desirable to have no return value; other times, a return value is important. Bertrand uses the example of popping a stack, while here we use the example of processing a payment, which simply returns whether the command succeeded. By leaving the command with an optional return type, the implementer can decide if it should return a result or not, and if not CQS/CQRS may be used. This saves the need for having another name for a command that does return a result. Finally, a command is never an event. A command has an explicit expectation that something (a state change or side effect) will happen in the future. Events come with no such future expectation. They are simply a statement that something happened.
When to use: On operations that must complete synchronously, or when using orchestration or a process manager. Consider restricting the use of commands to inside a bounded context.
Events
Events are both a fact and a notification. They represent something that happened in the real world but include no expectation of any future action. They travel in only one direction and expect no response (sometimes called "fire and forget"), but one may be synthesized from a subsequent event.
Example: OrderCreated{Widget}, CustomerDetailsUpdated{Customer}
When to use: When loose coupling is important (e.g., in multiteam systems), where the event stream is useful to more than one service, or where data must be replicated from one application to another. Events also lend themselves to concurrent execution.
Queries
Queries are a request to look something up. Unlike events or commands, queries are free of side effects; they leave the state of the system unchanged.
Example: getOrder(ID=42) returns Order(42,…).
When to use: For lightweight data retrieval across service boundaries, or heavyweight data retrieval within service boundaries.
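To make the distinction a little more tangible, here is a purely illustrative sketch of how each style might surface in Java code; all of the type and method names below are invented for the example.

import java.util.Optional;

public interface PaymentGateway {
    // Command: asks another service to change state; may return a result.
    PaymentResult processPayment(String orderId, long amountCents);

    // Query: a side-effect-free lookup; always returns a response.
    Optional<Order> getOrder(String orderId);
}

// Event: a statement of fact, published to a topic with no expectation of a response.
record OrderCreated(String orderId, String customerId, long amountCents) {}

record PaymentResult(boolean succeeded) {}

record Order(String orderId, String customerId) {}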
Table 5-1. Differences between commands, events, and queries

          Behavior/state change   Includes a response
Command   Requested to happen     Maybe
Event     Just happened           Never
Query     None                    Always

The beauty of events is they wear two hats: a notification hat that triggers services into action, but also a replication hat that copies data from one service to another. But from a service's perspective, events lead to less coupling than commands and queries. Loose coupling is a desirable property where interactions cross deployment boundaries, as services with fewer dependencies are easier to change.
Figure 5-1. A visual summary of commands, events, and queries
Coupling and Message Brokers
The term loose coupling is used widely. It was originally a design heuristic for structuring programs, but when applied to network-attached services, particularly those run by different teams, it must be interpreted slightly differently. Here is a relatively recent definition from <NAME>:
Loose coupling reduces the number of assumptions two parties make about one another when they exchange information.
These assumptions broadly relate to a combination of data, function, and operability. As it turns out, however, this isn't what most people mean when they use the term loose coupling. When people refer to a loosely coupled application, they usually mean something closer to connascence, defined as follows:2
A measure of the impact a change to one component will have on others.
This captures the intuitive notion of coupling: that if two entities are coupled, then an action applied to one will result in some action applied to the other. But an important part of this definition of connascence is the word change, which implies a temporal element. Coupling isn't a static thing; it matters only in the very instant that we try to change our software. In fact, if we left our software alone, and never changed it, coupling wouldn't matter at all.
Is Loose Coupling Always Good?
There is a widely held sentiment in industry that tight coupling is bad and loose coupling is good. This is not wholly accurate. Both tight and loose coupling are
2 See https://en.wikipedia.org/wiki/Connascence and http://wiki.cfcl.com/pub/Projects/Connascence/Resources/p147-page-jones.pdf.
actually pretty useful in different situations. We might summarize the relationship as:
Loose coupling lets components change independently of one another. Tight coupling lets components extract more value from one another.
The path to loose coupling is not to share. If you don't share anything, then other applications can't couple to you. Microservices, for example, are sometimes referred to as "shared nothing,"3 encouraging different teams not to share data and not to share functionality (across service boundaries), as it impedes their ability to operate independently.4
Of course, the problem with not sharing is it's not very collaborative; you inevitably end up reinventing the wheel or forcing others to. So while it may be convenient for you, it's probably not so good for the department or company you work in. Somewhat unsurprisingly, sensible approaches strike a balance. Most business applications have to share data with one another, so there is always some level of coupling. Shared functionality, be it services like DNS or payment processing, can be valuable, as can shared code libraries. So tighter coupling can of course be a good thing, but we have to be aware that it is a tradeoff. Sharing always increases the coupling on whatever we decide to share.
Sharing always increases the coupling on whatever we decide to share.
As an example, in most traditional applications, you couple tightly to your database and your application will extract as much value as possible from the database's ability to perform data-intensive operations. There is little downside, as the application and database will change together, and you don't typically let other systems use your database. A different example is DNS, used widely across an organization. In this case its wide usage makes it deeply valuable, but also tightly coupled. But as it changes infrequently and has a thin interface, there is little practical downside.
So we can observe that the coupling of a single component is really a function of three factors, with an addendum:
3 "Shared nothing" is also used in the database world but to mean a slightly different thing.
4 As an anecdote, I once worked with a team that would encrypt sections of the information they published, not so it was secure, but so they could control who could couple to it (by explicitly giving the other party the encryption key). I wouldn't recommend this practice, but it makes the point that people really care about this problem.
Interface surface area (functionality offered, breadth and quantity of data exposed)
Number of users
Operational stability and performance
The addendum: frequency of change; that is, if a component doesn't change (be it data, function, or operation), then coupling (i.e., connascence) doesn't matter.
Messaging helps us build loosely coupled services because it moves pure data from a highly coupled place (the source) and puts it into a loosely coupled place (the subscriber). So any operations that need to be performed on that data are not done at source, but rather in each subscriber, and messaging technologies like Kafka take most of the operational stability/performance issues off the table.
On the other hand, request-driven approaches are more tightly coupled as functionality, data, and operational factors are concentrated in a single place. Later in this chapter we discuss the idea of a bounded context, which is a way of balancing these two: request-driven protocols used inside the bounded context, and messaging between them. We also discuss the wider consequences of coupling in some detail in Chapter 8.
Essential Data Coupling Is Unavoidable
All significantly sized businesses have core datasets that many programs need. If you are sending a user an email, you need their address; if you're building a sales report, you need sales figures; and so on. Data is something applications cannot do without, and there is no way to code around not having the data (while you might, for example, code around not having access to some piece of functionality). So pretty much all business systems, in larger organizations, need a base level of essential data coupling.
Functional couplings are optional. Core data couplings are essential.
Using Events for Notification
Most message brokers provide a publish-subscribe facility where the logic for how messages are routed is defined by the receivers rather than the senders; this process is known as receiver-driven routing. So the receiver retains control of their presence in the interaction, which makes the system pluggable (see Figure 5-2).
Figure 5-2. Comparison between the request-response and event-driven approaches, demonstrating how event-driven approaches provide less coupling
Let's look at a simple example based on a customer ordering an iPad. The user clicks Buy, and an order is sent to the orders service. Three things then happen:
1. The shipping service is notified.
2. It looks up the address to send the iPad to.
3. It starts the shipping process.
In a REST- or RPC-based approach this might look like Figure 5-3.
Figure 5-3. A request-driven order management system
The same flow can be built with an event-driven approach (Figure 5-4), where the orders service simply journals the event, Order Created, which the shipping service then reacts to.
Figure 5-4. An event-driven version of the system described in Figure 5-3; in this configuration the events are used only as a means of notification: the orders service notifies the shipping service via Kafka
If we look closely at Figure 5-4, the interaction between the orders service and the shipping service hasn't changed all that much, other than that they communicate via events rather than calling each other directly. But there is an important change: the orders service has no knowledge that the shipping service exists. It just raises an event denoting that it did its job and an order was created. The shipping service now has control over whether it partakes in the interaction. This is an example of receiver-driven routing: logic for routing is located at the receiver of the events, rather than at the sender. The burden of responsibility is flipped! This reduces coupling and adds a useful level of pluggability to the system.
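A minimal sketch of the producing side of Figure 5-4 (topic name and payload are illustrative): the orders service just records the fact; it neither knows nor cares whether the shipping service, or anyone else, is listening.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrdersService {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Journal the fact that an order was created; routing logic lives
            // entirely with whichever receivers choose to subscribe.
            producer.send(new ProducerRecord<>("order-created", "order-1001",
                    "{\"orderId\":\"order-1001\",\"customerId\":\"customer-42\"}"));
        }
    }
}

The shipping service is then nothing more than a consumer subscribed to that topic, exactly like the consumer-group sketch shown in Chapter 4.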
Pluggability becomes increasingly important as systems get more complex. Say we decide to extend our system by adding a repricing service, which updates the price of goods in real time, tweaking a product's price based on supply and demand (Figure 5-5). In a REST- or RPC-based approach we would need to introduce a maybeUpdatePrice() method, which is called by both the orders service and the payment service. But in the event-driven model, repricing is just a service that plugs into the event streams for orders and payments, sending out price updates when relevant criteria are met.
Figure 5-5. Extending the system described in Figure 5-4 by adding a repricing service to demonstrate the pluggability of the architecture
Using Events to Provide State Transfer
In Figure 5-5, we used events as a means of notification, but left the query for the customer's address as a REST/RPC call.
We can also use events as a type of state transfer so that, rather than sending the query to the customer service, we would use the event stream to replicate customer data from the customer service to the shipping service, where it can be queried locally (see Figure 5-6).
Figure 5-6. Extending the system described in Figure 5-4 to be fully event-driven; here events are used for notification (the orders service notifies the shipping service) as well as for data replication (data is replicated from the customer service to the shipping service, where it can be queried locally).
This makes use of the other property events have: their replication hat. (Formally this is termed event-carried state transfer, which is essentially a form of
data integration.) So the notification hat makes the architecture more pluggable, and the replication hat moves data from one service to another so queries can be executed locally. Replicating a dataset locally is advantageous in much the same way that caching is often advantageous, as it makes data access patterns faster.
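A sketch of the replication hat in its simplest form, before any streaming API is involved (topic, group, and field names are illustrative): the shipping service tails the customer stream and maintains its own local lookup table, so address queries never leave the service.

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ShippingServiceCustomerView {
    // Local, queryable copy of customer data, keyed by customer id.
    private static final Map<String, String> customersById = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("group.id", "shipping-service-customer-view");
        // Replay the topic from the beginning so the local view is complete.
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("customers"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // The latest event per customer wins; the topic would typically be compacted.
                    customersById.put(record.key(), record.value());
                }
            }
        }
    }
}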
Which Approach to Use
We can summarize the advantages of the pure "query by event-carried state transfer" approach as follows:
Better isolation and autonomy
Isolation is required for autonomy. Keeping the data needed to drive queries isolated and local means it stays under the service's control.
Faster data access
Local data is typically faster to access. This is particularly true when data from different services needs to be combined, or where the query spans geographies.
Where the data needs to be available offline
In the case of a mobile device, ship, plane, train, or the like, replicating the dataset provides a mechanism for moving and resynchronizing when connected.
On the other hand, there are advantages to the REST/RPC approach:
Simplicity
It's simpler to implement, as there are fewer moving parts and no state to manage.
Singleton
State lives in only one place (inevitable caching aside!), meaning a value can be changed there and all users see it immediately. This is important for use cases that require synchronicity, for example, reading a previously updated account balance (we look at how to synthesize this property with events in "Collapsing CQRS with a Blocking Read" on page 142 in Chapter 15).
Centralized control
Command-and-control workflows can be used to centralize business processes in a single controlling service. This makes it easier to reason about.
Of course, as we saw earlier, we can blend the two approaches together and, depending on which hat we emphasize, we get a solution that suits a differently sized architecture. If we're designing for a small, lightweight use case, like building an online application, we would put weight on the notification hat, as the weight of data replication might be considered an unnecessary burden. But in a larger and more complex architecture, we might place more emphasis on the
replication hat so that each service has greater autonomy over the data it queries. (This is discussed in more detail in Chapter 8.) Microservice applications tend to be larger and leverage both hats. <NAME> puts this quite firmly:
Communication between microservices needs to be based on asynchronous message passing (while logic inside each microservice is performed in a synchronous fashion).
Implementers should be careful to note that he directs this at a strict definition of microservices, one where services are independently deployable. Slacker interpretations, which are seen broadly in industry, may not qualify so strong an assertion.
The Event Collaboration Pattern
To build fine-grained services using events, a pattern called Event Collaboration is often used. This allows a set of services to collaborate around a single business workflow, with each service doing its bit by listening to events, then creating new ones. So, for example, we might start by creating an order, and then different services would evolve the workflow until the purchased item makes it to the user's door.
This might not sound too different from any other workflow, but what is special about Event Collaboration is that no single service owns the whole process; instead, each service owns a small part, some subset of state transitions, and these plug together through a chain of events. So each service does its work, then raises an event denoting what it did. If it processed a payment, it would raise a Payment Processed event. If it validated an order, it would raise Order Validated, and so on. These events trigger the next step in the chain (which could trigger that service again, or alternatively trigger another service).
In Figure 5-7 each circle represents an event. The color of the circle designates the topic it is in. A workflow evolves from Order Requested through to Order Completed. The three services (order, payment, shipping) handle the state transitions that pertain to their section of the workflow. Importantly, no service knows of the existence of any other service, and no service owns the entire workflow. For example, the payment service knows only that it must react to validated orders and create Payment Processed events, with the latter taking the workflow one step forward. So the currency of event collaboration is, unsurprisingly, events!
Figure 5-7. An example workflow implemented with Event Collaboration
The lack of any one point of central control means systems like these are often termed choreographies: each service handles some subset of state transitions, which, when put together, describe the whole business process. This can be contrasted with orchestration, where a single process commands and controls the whole workflow from one place, for example, via a process manager.5 A process manager is implemented with request-response.
Choreographed systems have the advantage that they are pluggable. If the payment service decides to create three new event types for the payment part of the workflow, so long as the Payment Processed event remains, it can do so without affecting any other service. This is useful because it means if you're implementing a service, you can change the way you work and no other services need to know or care about it. By contrast, in an orchestrated system, where a single service dictates the workflow, all changes need to be made in the controller. Which of these approaches is best for you is quite dependent on use case, but the advantage of orchestration is that the whole workflow is written down, in code, in one place. That makes it easy to reason about the system. The downside is that the model is tightly coupled to the controller, so broadly speaking choreographed approaches better suit larger implementations (particularly those that span teams and hence change independently of one another).
5 See http://www.enterpriseintegrationpatterns.com/patterns/messaging/ProcessManager.html and https://www.thoughtworks.com/insights/blog/scaling-microservices-event-stream.
The events services share form a journal, or shared narrative, describing exactly how your business evolved over time.
Relationship with Stream Processing
The notification and replication duality that events demonstrate maps cleanly to the concepts of stateless and stateful stream processing, respectively. The best way to understand this is to consider the shipping service example we discussed earlier in the chapter. If we changed the shipping service to use the Kafka Streams API, we could approach the problem in two ways (Figure 5-8):
Stateful approach
Replicate the Customers table into the Kafka Streams API (denoted KTable in Figure 5-8). This makes use of the event-carried state transfer approach.
Stateless approach
We process events and look up the appropriate customer with every order that is processed.
Figure 5-8. Stateful stream processing is similar to using events for both notification and state transfer (left), while stateless stream processing is similar to using events for notification (right)
So the use of event-carried state transfer, in stateful stream processing, differs in two important ways, when compared to the example we used earlier in this chapter:
The dataset needs to be held, in its entirety, in Kafka. So if we are joining to a table of customers, all customer records must be stored in Kafka as events.
The stream processor includes in-process, disk-resident storage to hold the table. There is no external database, and this makes the service stateful.
Kafka Streams then applies a number of techniques to make managing this statefulness practical.
This topic is discussed in detail in Chapter 6.
Mixing Request- and Event-Driven Protocols
A common approach, particularly seen in smaller web-based systems, is to mix protocols, as shown in Figure 5-9. Online services interact directly with a user, say with REST, but also journal state changes to Kafka (see "Event Sourcing, Command Sourcing, and CQRS in a Nutshell" on page 55 in Chapter 7). Offline services (for billing, fulfillment, etc.) are built purely with events.
Figure 5-9. A very simple event-driven services example: data is imported from a legacy application via the Connect API; user-facing services provide REST APIs to the UI; state changes are journaled to Kafka as events; at the bottom, business processing is performed via Event Collaboration
In larger implementations, services tend to cluster together, for example within a department or team. They mix protocols inside one cluster, but rely on events to communicate between clusters (see Figure 5-10).
Figure 5-10. Clusters of services form bounded contexts within which functionality is shared. Contexts interact with one another only through events, spanning departments, geographies, or clouds
In Figure 5-10 three departments communicate with one another only through events. Inside each department (the three larger circles), service interfaces are shared more freely and there are finer-grained event-driven flows that drive collaboration. Each department contains a number of internal bounded contexts: small groups of services that share a domain model, are usually deployed together, and collaborate closely. In practice, there is often a hierarchy of sharing. At the top of this hierarchy, departments are loosely coupled: the only thing they share is events. Inside a department, there will be many applications and those applications will interact with one another with both request-response and event-based mechanisms, as in Figure 5-9. Each application may itself be composed from several services, but these will typically be more tightly coupled to one another, sharing a domain model and having synchronized release schedules.
This approach, which confines reuse within a bounded context, is an idea that comes from domain-driven design, or DDD. One of the big ideas in DDD was that broad reuse could be counterproductive, and that a better approach was to create boundaries around areas of a business domain and model them separately. So within a bounded context the domain model is shared, and everything is available to everything else, but different bounded contexts don't share the same model, and typically interact through more restricted interfaces.
This idea was extended by microservice implementers, so a bounded context describes a set of closely related components or services that share code and are deployed together. Across bounded contexts there is less sharing (be it code, functionality, or data). In fact, as we noted earlier in this chapter, microservices are often termed "shared nothing" for this reason.6
Summary
Businesses are a collection of people, teams, and departments performing a wide range of functions, backed by technology. Teams need to work asynchronously with respect to one another to be efficient, and many business processes are inherently asynchronous, for example, shipping a parcel from a warehouse to a user's door. So we might start a project as a website, where the frontend makes synchronous calls to backend services, but as it grows the web of synchronous calls tightly couples services together at runtime. Event-based methods reverse this, decoupling systems in time and allowing them to evolve independently of one another.
In this chapter we noticed that events, in fact, have two separate roles: one for notification (a call for action), and the other a mechanism for state transfer (pushing data wherever it is needed). Events make the system pluggable, and for reasonably sized architectures it is sensible to blend request- and event-based protocols, but you must take care when using these two sides of the event duality: they lead to very different types of architecture. Finally, we looked at how to scale the two approaches by separating out different bounded contexts that collaborate only through events.
But with all this talk of events, we've talked little of replayable logs or stream processing. When we apply these patterns with Kafka, the toolset itself creates new opportunities. Retention in the broker becomes a tool we can design for, allowing us to embrace data on the outside with a central store of events that services can refer back to. So the ops engineers, whom we discussed in the opening section of this chapter, will still be playing detective, but hopefully not quite as often, and at least now the story comes with a script!
6 <NAME>, <NAME>, and <NAME>, Building Evolutionary Architectures (Sebastopol, CA: O'Reilly, 2017).
CHAPTER 6
Processing Events with Stateful Functions
Imperative styles of programming are some of the oldest of all, and their popularity persists for good reason. Procedures execute sequentially, spelling out a story on the page and altering the program's state as they do so.
As mainstream applications became distributed in the 1980s and 1990s, the same mindset was applied to this distributed domain. Approaches like Corba and EJB (Enterprise JavaBeans) raised the level of abstraction, making distributed programming more accessible. History has not always judged these so well. EJB, while touted as a panacea of its time, fell quickly by the wayside as systems creaked with the pains of tight coupling and the misguided notion that the network was something that should be abstracted away from the programmer.
In fairness, things have improved since then, with popular technologies like gRPC and Finagle adding elements of asynchronicity to the request-driven style.
But the application of this mindset to the design of distributed systems isn't necessarily the most productive or resilient route to take. Two styles of programming that better suit distributed design, particularly in a services context, are the dataflow and functional styles.
You will have come across dataflow programming if you've used utilities like Sed or languages like Awk. These are used primarily for text processing; for example, a stream of lines might be pushed through a regex, one line at a time, with the output piped to the next command, chaining through stdin and stdout. This style of program is more like an assembly line, with each worker doing a specific task, as the products make their way along a conveyor belt. Since each worker is concerned only with the availability of data inputs, there is no hidden state to track. This is very similar to the way streaming systems work. Events accumulate in a stream processor waiting for a condition to be met, say, a join operation between two different streams. When the correct events are present, the join operation completes and the pipeline continues to the next command. So Kafka provides the equivalent of a pipe in Unix shell, and stream processors provide the chained functions.
There is a similarly useful analogy with functional programming. As with the dataflow style, state is not mutated in place, but rather evolves from function to function, and this matches closely with the way stream processors operate. So most of the benefits of both functional and dataflow languages also apply to streaming systems. These can be broadly summarized as:
Streaming has an inherent ability for parallelization.
Streaming naturally lends itself to creating cached datasets and keeping them up to date. This makes it well suited to systems where data and code are separated by the network, notably data processing and GUIs.
Streaming systems are more resilient than traditional approaches, as high availability is built into the runtime and programs execute in a lossless manner (see the discussion of Event Sourcing in Chapter 7).
Streaming functions are typically easier to reason about than regular programs. Pure functions are free from side effects. Stateful functions are not, but do avoid shared mutable state.
Streaming systems embrace a polyglot culture, be it via different programming languages or different datastores.
Programs are written at a higher level of abstraction, making them more comprehensible.
But streaming approaches also inherit some of the downsides. Purely functional languages must negotiate an impedance mismatch when interacting with more procedural or stateful elements like filesystems or the network. In a similar vein, streaming systems must often translate to the request-response style of REST or RPCs and back again. This has led some implementers to build systems around a functional core, which processes events asynchronously, wrapped in an imperative shell, used to marshal to and from outward-facing request-response interfaces. The "functional core, imperative shell" pattern keeps the key elements of the system both flexible and scalable, encouraging services to avoid side effects and express their business logic as simple functions chained together through the log.
In the next section we'll look more closely at why statefulness, in the context of stream processing, matters.
Making Services Stateful
There is a well-held mantra that statelessness is good, and for good reason. Stateless services start instantly (no data load required) and can be scaled out linearly, cookie-cutter-style.
Web servers are a good example: to increase their capacity for generating dynamic content, we can scale a web tier horizontally, simply by adding new servers. So why would we want anything else? The rub is that most applications aren't really stateless. A web server needs to know what pages to render, what sessions are active, and more. It solves these problems by keeping the state in a database. So the database is stateful and the web server is stateless. The state problem has just been pushed down a layer. But as traffic to the website increases, it usually leads programmers to cache state locally, and local caching leads to cache invalidation strategies, and a spiral of coherence issues typically ensues.
Streaming platforms approach this problem of where state should live in a slightly different way. First, recall that events are also facts, converging toward the stream processor like conveyor belts on an assembly line. So, for many use cases, the events that trigger a process into action contain all the data the program needs, much like the dataflow programs just discussed. If you're validating the contents of an order, all you need is its event stream.
Sometimes this style of stateless processing happens naturally; other times implementers deliberately enrich events in advance, to ensure they have all the data they need for the job at hand. But enrichments inevitably mean looking things up, usually in a database.
Stateful stream processing engines, like Kafka's Streams API, go a step further: they ensure all the data a computation needs is loaded into the API ahead of time, be it events or any tables needed to do lookups or enrichments. In many cases this makes the API, and hence the application, stateful, and if it were restarted for some reason it would need to reacquire that state before it could proceed.
This should seem a bit counterintuitive. Why would you want to make a service stateful? Another way to look at this is as an advanced form of caching that better suits data-intensive workloads. To make this clearer, let's look at three examples: one that uses database lookups, one that is event-driven but stateless, and one that is event-driven but stateful.
The Event-Driven Approach
Say we have an email service that listens to an event stream of orders and then sends confirmation emails to users once they complete a purchase. This requires information about both the order as well as the associated payment. Such an email service might be created in a number of different ways. Let's start by
assuming it's a simple event-driven service (i.e., no use of a streaming API, as in Figure 6-1). It might react to order events, then look up the corresponding payment. Or it might do the reverse: reacting to payments, then looking up the corresponding order. Let's assume the former.
Figure 6-1. A simple event-driven service that looks up the data it needs as it processes messages
So a single event stream is processed, and lookups that pull in any required data are performed inline. The solution suffers from two problems:
The constant need to look things up, one message at a time.
The payment and order are created at about the same time, so one might arrive before the other. This means that if the order arrives in the email service before the payment is available in the database, then we'd have to either block and poll until it becomes available or, worse, skip the email processing completely.
The Pure (Stateless) Streaming Approach
A streaming system comes at this problem from a slightly different angle. The streams are buffered until both events arrive, and can be joined together (Figure 6-2).
Figure 6-2. A stateless streaming service that joins two streams at runtime
This solves the two aforementioned issues with the event-driven approach. There are no remote lookups, addressing the first point. It also no longer matters what order events arrive in, addressing the second point.
The second point turns out to be particularly important. When you're working with asynchronous channels there is no easy way to ensure relative ordering across several of them. So even if we know that the order is always created before the payment, it may well be delayed, arriving the other way around.
Finally, note that this approach isn't, strictly speaking, stateless. The buffer actually makes the email service stateful, albeit just a little. When Kafka Streams restarts, before it does any processing, it will reload the contents of each buffer. This is important for achieving deterministic results. For example, the output of a join operation is dependent on the contents of the opposing buffer when a message arrives.
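Using the Kafka Streams DSL, the buffered join sketched in Figure 6-2 looks roughly like the following (topic names, the string payloads, and the one-hour join window are illustrative; a real service would use proper serdes and typed events).

import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

public class EmailServiceJoin {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");
        KStream<String, String> payments = builder.stream("payments");

        // Buffer both streams (keyed by orderId) and emit a result whenever an
        // order and its payment both arrive within the window, regardless of order.
        orders.join(payments,
                        (order, payment) -> order + "|" + payment,
                        JoinWindows.of(Duration.ofHours(1)))
              .foreach((orderId, orderWithPayment) ->
                      System.out.println("send confirmation email for " + orderId));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "email-service");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        new KafkaStreams(builder.build(), props).start();
    }
}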
The Stateful Streaming Approach
Alas, the data flowing through the various event streams isn't always enough; sometimes you need lookups or enrichments. For example, the email service would need access to the customer's email address. There will be no recent event for this (unless you happened to be very lucky and the customer just updated their details). So you'd have to look up the email address in the customer service via, say, a REST call (Figure 6-3).
Figure 6-3. A stateless streaming service that looks up reference data in another service at runtime
This is of course a perfectly valid approach (in fact, many production systems do exactly this), but a stateful stream processing system can make a further optimization. It uses the same process of local buffering used to handle potential delays in the orders and payments topics, but instead of buffering for just a few minutes, it preloads the whole customer event stream from Kafka into the email service, where it can be used to look up historic values (Figure 6-4).
Figure 6-4. A stateful streaming service that replicates the Customers topic into a local table, held inside the Kafka Streams API
So now the email service is both buffering recent events, as well as creating a local lookup table. (The mechanics of this are discussed in more detail in Chapter 14.)
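Extending the earlier join sketch with the replication hat, the customer topic is loaded into a local table and joined in, so the email-address lookup never leaves the process. This variant uses a GlobalKTable for brevity (a partitioned KTable join is the other option), and all names and the pipe-delimited payloads are illustrative.

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

public class EmailServiceWithCustomers {
    public static void buildTopology(StreamsBuilder builder) {
        // The joined order-and-payment stream from the previous sketch,
        // still keyed by orderId, with the customerId embedded in the value.
        KStream<String, String> orderWithPayment = builder.stream("orders-with-payments");

        // The whole Customers topic is replicated into a local, disk-resident table.
        GlobalKTable<String, String> customers = builder.globalTable("customers");

        orderWithPayment
                .join(customers,
                        // Choose the lookup key: the customerId carried in the order.
                        (orderId, order) -> extractCustomerId(order),
                        // Enrich the order with the locally held customer record.
                        (order, customer) -> order + "|" + customer)
                .foreach((orderId, enriched) ->
                        System.out.println("send email using " + enriched));
    }

    // Toy field extraction for the illustration only; a real service would use
    // typed events rather than delimited strings.
    private static String extractCustomerId(String order) {
        return order.split("\\|")[1];
    }
}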
This final, fully stateful approach comes with disadvantages:
The service is now stateful, meaning for an instance of the email service to operate it needs the relevant customer data to be present. This means, in the worst case, loading the full dataset on startup.
as well as advantages:
The service is no longer dependent on the worst-case performance or liveness of the customer service.
The service can process events faster, as each operation is executed without making a network call.
The service is free to perform more data-centric operations on the data it holds.
This final point is particularly important for the increasingly data-centric systems we build today. As an example, imagine we have a GUI that allows users to browse order, payment, and customer information in a scrollable grid. The grid lets the user scroll up and down through the items it displays.
In a traditional, stateless model, each row on the screen would require a call to all three services. This would be sluggish in practice, so caching would likely be added, along with some hand-crafted polling mechanism to keep the cache up to date.
But in the streaming approach, data is constantly pushed into the UI (Figure 6-5). So you might define a query for the data displayed in the grid, something like select * from orders, payments, customers where…. The API executes it over the incoming event streams, stores the result locally, and keeps it up to date. So streaming behaves a bit like a declaratively defined cache.
Figure 6-5. Stateful stream processing is used to materialize data inside a web server so that the UI can access it locally for better performance, in this case via a scrollable grid
The Practicalities of Being Stateful
Being stateful comes with some challenges: when a new node starts, it must load all stateful components (i.e., state stores) before it can start processing messages, and in the worst case this reload can take some time. To help with this problem, Kafka Streams provides three mechanisms for making being stateful a bit more practical:
It uses a technique called standby replicas, which ensure that for every table or state store on one node, there is a replica kept up to date on another. So, if any node fails, it will immediately fail over to its backup node without interrupting processing unduly.
Disk checkpoints are created periodically so that, should a node fail and restart, it can load its previous checkpoint, then top up the few messages it missed when it was offline from the log.
Finally, compacted topics are used to keep the dataset as small as possible. This acts to reduce the load time for a complete rebuild should one be necessary.
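The first two mechanisms surface as ordinary configuration on the Streams application itself; a sketch (values are illustrative):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StatefulStreamsConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "email-service");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        // Keep one hot replica of every state store on another node,
        // so failover does not wait for a full rebuild.
        props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
        // Directory used for the local, disk-resident stores and their checkpoints.
        props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}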
Kafka Streams uses intermediary topics, which can be reset and rederived using the Streams Reset tool.1
An event-driven application uses a single input stream to drive its work. A streaming application blends one or more input streams into one or more output streams. A stateful streaming application also recasts streams to tables (used to do enrichments) and stores intermediary state in the log, so it internalizes all the data it needs.
Summary
This chapter covers three different ways of doing event-based processing: the simple event-driven approach, where you process a single event stream one message at a time; the streaming approach, which joins different event streams together; and finally, the stateful streaming approach, which turns streams into tables and stores data in the log.
So instead of pushing the state problem down a layer into a database, stateful stream processors, like Kafka's Streams API, are proudly stateful. They make data available wherever it is required. This increases performance and autonomy. No remote calls needed!
1 See http://bit.ly/2GaCRZO and http://bit.ly/2IUPHJa.
Of course, being stateful comes with its downsides, but it is optional, and real-world streaming systems blend together all three approaches. We go into the detail of how these streaming operations work in Chapter 14.
CHAPTER 7
Event Sourcing, CQRS, and Other Stateful Patterns
In Chapter 5 we introduced the Event Collaboration pattern, where events describe an evolving business process, like processing an online purchase or booking and settling a trade, and several services collaborate to push that workflow forward.
This leads to a log of every state change the system makes, held immutably, which in turn leads to two related patterns, Command Query Responsibility Segregation (CQRS) and Event Sourcing,1 designed to help systems scale and be less prone to corruption. This chapter explores what these concepts mean, how they can be implemented, and when they should be applied.
Event Sourcing, Command Sourcing, and CQRS in a Nutshell
At a high level, Event Sourcing is just the observation that events (i.e., state changes) are a core element of any system. So, if they are stored, immutably, in the order they were created in, the resulting event log provides a comprehensive audit of exactly what the system did. Whats more, we can always rederive the current state of the system by rewinding the log and replaying the events in order.
CQRS is a natural progression from this. As a simple example, you might write events to Kafka (write model), read them back, and then push them into a database (read model). In this case Kafka maps the read model onto the write model asynchronously, decoupling the two in time so the two parts can be optimized independently.

1 See https://martinfowler.com/eaaDev/EventSourcing.html and http://bit.ly/2pLKRFF.
Command Sourcing is essentially a variant of Event Sourcing but applied to events that come into a service, rather than via the events it creates.
That's all a bit abstract, so let's walk through the example in Figure 7-1. We'll use one similar to the one used in the previous chapter, where a user makes an online purchase and the resulting order is validated and returned.
Figure 7-1. A simple order validation workflow

When a purchase is made (1), Command Sourcing dictates that the order request be immediately stored as an event in the log, before anything happens (2). That way, should anything go wrong, the service can be rewound and replayed, for example, to recover from a corruption.

Next, the order is validated, and another event is stored in the log to reflect the resulting change in state (3). In contrast to an update-in-place persistence model like CRUD (create, read, update, delete), the validated order is represented as an entirely new event, being appended to the log rather than overwriting the existing order. This is the essence of Event Sourcing.

Finally, to query orders, a database is attached to the resulting event stream, deriving an event-sourced view that represents the current state of orders in the system (4). So (1) and (4) provide the Command and Query sides of CQRS.
These patterns have a number of benefits, which we will examine in detail in the subsequent sections.
Version Control for Your Data

When you store events in a log, it behaves a bit like a version control system for your data. Consider the situation illustrated in Figure 7-2. If a programmatic bug is introduced (let's say a timestamp field was interpreted with the wrong time zone), you would get a data corruption. The corruption would make its way into the database. It would also make it into interactions the service makes with other services, making the corruption more widespread and harder to fix.

Figure 7-2. A programmatic bug can lead to data corruption, both in the service's own database as well as in data it exposes to other services

Recovering from this situation is tricky for a couple of reasons. First, the original inputs to the system haven't been recorded exactly, so we only have the corrupted version of the order. We will have to uncorrupt it manually. Second, unlike a version control system, which can travel back in time, a database is mutated in place, meaning the previous state of the system is lost forever. So there is no easy way for this service to undo the damage the corruption did.

To fix this, the programmer would need to go through a series of steps: applying a fix to the software, running a database script to fix the corrupted timestamps in the database, and finally, working out some way of resending any corrupted data previously sent to other services. At best this will involve some custom code that pulls data out of the database, fixes it up, and makes new service calls to redistribute the corrected data. But because the database is lossy (as values are overwritten) this may not be enough. (If rather than the release being fixed, it was rolled back to a previous version after some time running as the new version, the data migration process would likely be even more complex.)
A replayable log turns ephemeral messaging into messaging that remembers.
Switching to an Event/Command Sourcing approach, where both inputs and state changes are recorded, might look something like Figure 7-3.
Figure 7-3. Adding Kafka and an Event Sourcing approach to the system described in Figure 7-2 ensures that the original events are preserved before the code, and bug, execute

As Kafka can store events for as long as we need (as discussed in "Long-Term Data Storage" on page 25 in Chapter 4), correcting the timestamp corruption is now a relatively simple affair. First the bug is fixed, then the log is rewound to before the bug was introduced, and the system is replayed from the stream of order requests. The database is automatically overwritten with corrected timestamps, and new events are published downstream, correcting the previous corrupted ones. This ability to store inputs, rewind, and replay makes the system far better at recovering from corruptions and bugs.
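As a rough sketch of what rewinding the log can look like for a plain consumer (a Kafka Streams application would more usually use the reset tool mentioned in Chapter 6), the snippet below seeks each partition of an assumed order-request topic back to a chosen timestamp; the topic name and timestamp are assumptions, and imports (TopicPartition, PartitionInfo, and the java.util collections) are elided.

// `consumer` is assumed to be a preconfigured KafkaConsumer<String, String>.
long justBeforeBug = java.time.Instant.parse("2018-06-01T00:00:00Z").toEpochMilli();

List<TopicPartition> partitions = new ArrayList<>();
for (PartitionInfo p : consumer.partitionsFor("order-requests")) {
    partitions.add(new TopicPartition(p.topic(), p.partition()));
}
consumer.assign(partitions);

Map<TopicPartition, Long> query = new HashMap<>();
for (TopicPartition tp : partitions) {
    query.put(tp, justBeforeBug);
}
// Find the first offset at or after the timestamp on each partition, then seek to it
consumer.offsetsForTimes(query).forEach((tp, offsetAndTimestamp) -> {
    if (offsetAndTimestamp != null) {
        consumer.seek(tp, offsetAndTimestamp.offset());
    }
});
// The normal poll loop now reprocesses every order request from that point forward.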
So Command Sourcing lets us record our inputs, which means the system can always be rewound and replayed. Event Sourcing records our state changes, which ensures we know exactly what happened during our system's execution, and we can always regenerate our current state (in this case the contents of the database) from this log of state changes (Figure 7-4).
Figure 7-4. Command Sourcing provides straight-through reprocessing from the original commands, while Event Sourcing rederives the service's current state from just the post-processed events in the log

Being able to store an ordered journal of state changes is useful for debugging and traceability purposes, too, answering retrospective questions like "Why did this order mysteriously get rejected?" or "Why is this balance suddenly in the red?", questions that are harder to answer with mutable data storage.

Event Sourcing ensures every state change in a system is recorded, much like a version control system. As the saying goes, "Accountants don't use erasers."

It is also worth mentioning that there are other well-established database patterns that provide some of these properties. Staging tables can be used to hold unvalidated inputs, triggers can be applied in many relational databases to create audit tables, and bitemporal databases also provide an auditable data structure.

These are all useful techniques, but none of them lends itself to rewind and replay functionality without a significant amount of effort on the programmer's part. By contrast, with the Event Sourcing approach, there is little or no additional code to write or test. The primary execution path is used both for runtime execution as well as for recovery.
Making Events the Source of Truth

One side effect of the previous example is that the application of Event Sourcing means the event, not the database record, is the source of truth. Making events first-class entities in this way leads to some interesting implications.

If we consider the order request example again, there are two versions of the order: one in the database and one in the notification. This leads to a couple of different issues. The first is that both the notification and database update must be made atomically. Imagine if a failure happens midway between the database being updated and the notification being sent (Figure 7-5). The best-case scenario is the creation of duplicate notifications. The worst-case scenario is that the notification might not be made at all. This issue can worsen in complex systems, as we discuss at length in Chapter 12, where we look at Kafka's transactions feature.

Figure 7-5. The order request is validated and a notification is sent to other services, but the service fails before the data is persisted to the database

The second problem is that, in practice, it's quite easy for the data in the database and the data in the notification to diverge as code churns and features are implemented. The implication here is that, while the database may well be correct, if the notifications don't quite match, then the data quality of the system as a whole suffers. (See "The Data Divergence Problem" on page 95 in Chapter 10.)
Event Sourcing addresses both of these problems by making the event stream the primary source of truth (Figure 7-6). Where data needs to be queried, a read model or event-sourced view is derived directly from the stream.
Event Sourcing ensures that the state a service communicates and the state a service saves internally are the same.
This actually makes a lot of sense. In a traditional system the database is the source of truth. This is sensible from an internal perspective. But if you consider it from the point of view of other services, they don't care what is stored internally; it's the data everyone else sees that is important. So the event being the source of truth makes a lot of sense from their perspective. This leads us to CQRS.
Command Query Responsibility Segregation

As discussed earlier, CQRS (Command Query Responsibility Segregation) separates the write path from the read path and links them with an asynchronous channel (Figure 7-6). This idea isn't limited to application design; it comes up in a number of other fields too. Databases, for example, implement the idea of a write-ahead log. Inserts and updates are immediately journaled sequentially to disk, as soon as they arrive. This makes them durable, so the database can reply back to the user in the knowledge that the data is safe, but without having to wait for the slow process of updating the various concurrent data structures like tables, indexes, and so on.2 The point is that (a) should something go wrong, the internal state of the database can be recovered from the log, and (b) writes and reads can be optimized independently.

Figure 7-6. When we make the event stream the source of truth, the notification and the database update come from the same event, which is stored immutably and durably; when we split the read and write model, the system is an implementation of the CQRS design pattern

When we apply CQRS in our applications, we do it for very similar reasons. Inputs are journaled to Kafka, which is much faster than writing to a database. Segregating the read model and updating it asynchronously means that the expensive maintenance of update-in-place data structures, like indexes, can be batched together so they are more efficient. This means CQRS will display better overall read and write performance when compared to an equivalent, more traditional, CRUD-based system.

2 Some databases, for example, Druid, make this separation quite concrete. Other databases block until indexes have been updated.
Of course there is no free lunch. The catch is that, because the read model is updated asynchronously, it will run slightly behind the write model in time. So if a user performs a write, then immediately performs a read, it is possible that the entry they wrote originally has not had time to propagate, so they can't read their own writes. As we will see in the example in "Collapsing CQRS with a Blocking Read" on page 142 in Chapter 15, there are strategies for addressing this problem.
Materialized Views

There is a close link between the query side of CQRS and a materialized view in a relational database. A materialized view is a table that contains the results of some predefined query, with the view being updated every time any of the underlying tables change.
Materialized views are used as a performance optimization so, instead of a query being computed when a user needs data, the query is precomputed and stored.
For example, if we wanted to display how many active users there are on each page of a website, this might involve us scanning a database table of user visits,
which would be relatively expensive to compute. But if we were to precompute the query, the summary of active users that results will be comparatively small and hence fast to retrieve. Thus, it is a good candidate to be precomputed.
We can create exactly the same construct with CQRS using Kafka. Writes go into Kafka on the command side (rather than updating a database table directly). We can transform the event stream in a way that suits our use case, typically using Kafka Streams or KSQL, then materialize it as a precomputed query or materialized view. As Kafka is publish-subscribe, we can have many such views, precomputed to match the various use cases we have (Figure 7-7). But unlike with materialized views in a relational database, the underlying events are decoupled from the view. This means (a) they can be scaled independently, and (b) the writing process (so whatever process records user visits) doesn't have to wait for the view to be computed before it returns.
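As a hedged sketch of the active-users example with Kafka Streams (the topic name and the choice of keying page views by page ID are assumptions, and counting visits per page stands in for the fuller "active users" query):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

// Precompute the summary as events arrive, rather than scanning a table on read.
// "page-views" is assumed to be keyed by page ID with the user ID as the value.
StreamsBuilder builder = new StreamsBuilder();
KTable<String, Long> viewsPerPage = builder
    .stream("page-views", Consumed.with(Serdes.String(), Serdes.String()))
    .groupByKey()
    .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("views-per-page"));
// The "views-per-page" store is the materialized view: small, read-optimized,
// and rebuildable from the log at any time.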
Figure 7-7. CQRS allows multiple read models, which may be optimized for a specific use case, much like a materialized view, or may use a different storage technology

This idea of storing data in a log and creating many derived views is taken further when we discuss "Event Streams as a Shared Source of Truth" in Chapter 9.
If an event stream is the source of truth, you can have as many different views in as many different shapes, sizes, or technologies as you may need. Each is focused on the use case at hand.
Polyglot Views

Whatever sized data problem you have, be it free-text search, analytic aggregation, fast key/value lookups, or a host of others, there is a database available today that is just right for your use case. But this also means there is no one-size-fits-all approach to databases, at least not anymore. A supplementary benefit of using CQRS is that a single write model can push data into many read models or materialized views. So your read model can be in any database, or even a range of different databases.
A replayable log makes it easy to bootstrap such polyglot views from the same data, each tuned to different use cases (Figure 7-7). A common example of this is to use a fast key/value store to service queries from a website, but then use a search engine like Elasticsearch or Solr to support the free-text-search use case.
Whole Fact or Delta?
One question that arises in event-driven (particularly event-sourced) programs is whether the events should be modeled as whole facts (a whole order, in its entirety) or as deltas that must be recombined (first a whole order message, followed by messages denoting only what changed: "amount updated to $5," "order cancelled," etc.).

As an analogy, imagine you are building a version control system like SVN or Git. When a user commits a file for the first time, the system saves the whole file to disk. Subsequent commits, reflecting changes to that file, might save only the delta, that is, just the lines that were added, changed, or removed. Then, when the user checks out a certain version, the system opens the version-0 file and applies all subsequent deltas, in order, to derive the version the user asked for.

The alternate approach is to simply store the whole file, exactly as it was at the time it was changed, for every single commit. This will obviously take more storage, but it means that checking out a specific version from the history is a quick and easy file retrieval. However, if the user wanted to compare different versions, the system would have to use a diff function.

These two approaches apply equally to data we keep in the log. So to take a more business-oriented example, an order is typically a set of line items (i.e., you often order several different items in a single purchase). When implementing a system that processes purchases, you might wonder: should the order be modeled as a single order event with all the line items inside it, or should each line item be a separate event with the order being recomposed by scanning the various independent line items? In domain-driven design, an order of this latter type is termed an aggregate (as it is an aggregate of line items) with the wrapping entity, that is, the order, being termed an aggregate root.
As with many things in software design, there are a host of different opinions on which approach is best for a certain use case. There are a few rules of thumb that can help, though. The most important one is journal the whole fact as it arrived. So when a user creates an order, if that order turns up with all line items inside it, we'd typically record it as a single entity.

But what happens when a user cancels a single line item? The simple solution is to just journal the whole thing again, as another aggregate but cancelled. But what if for some reason the order is not available, and all we get is a single canceled line item? Then there would be the temptation to look up the original order internally (say from a database), and combine it with the cancellation to create a new Cancelled Order with all its line items embedded inside it. This typically isn't a good idea, because (a) we're not recording exactly what we received, and (b) having to look up the order in the database erodes the performance benefits of CQRS. The rule of thumb is record what you receive, so if only one line item arrives, record that. The process of combining can be done on read.

Conversely, breaking events up into subevents as they arrive often isn't good practice, either, for similar reasons. So, in summary, the rule of thumb is record exactly what you receive, immutably.
Implementing Event Sourcing and CQRS with Kafka

Kafka ships with two different APIs that make it easier to build and manage CQRS-styled views derived from events stored in Kafka. The Kafka Connect API and associated Connector ecosystem provides an out-of-the-box mechanism to push data into, or pull data from, a variety of databases, data sources, and data sinks. In addition, the Kafka Streams API ships with a simple embedded database, called a state store, built into the API (see "Windows, Joins, Tables, and State Stores" on page 135 in Chapter 14).
In the rest of this section we cover some useful patterns for implementing Event Sourcing and CQRS with these tools.
Build In-Process Views with Tables and State Stores in Kafka Streams
Kafka's Streams API provides one of the simplest mechanisms for implementing Event Sourcing and CQRS because it lets you implement a view natively, right inside the Kafka Streams API: no external database needed!
At its simplest this involves turning a stream of events in Kafka into a table that can be queried locally. For example, turning a stream of Customer events into a table of Customers that can be queried by CustomerId takes only a single line of code:
KTable<CustomerId, Customer> customerTable = builder.table("customer-topic");
This single line of code does several things:
It subscribes to events in the customer topic.
It resets to the earliest offset and loads all Customer events into the Kafka Streams API. That means it loads the data from Kafka into your service.
(Typically a compacted topic is used to reduce the initial/worst-case load time.)
It pushes those events into a state store (Figure 7-8), a local, disk-resident hash table, located inside the Kafka Streams API. This results in a local, disk-resident table of Customers that can be queried by key or by range scan, as sketched below.
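Here is a slightly expanded sketch of that same table, naming the store so the service can query it in process. The store name and the props object are assumptions, and the exact lookup call varies a little between Kafka Streams versions (older releases use streams.store(name, type) directly).

// Naming the store makes it queryable from inside the service.
KTable<CustomerId, Customer> customerTable =
    builder.table("customer-topic",
        Materialized.<CustomerId, Customer, KeyValueStore<Bytes, byte[]>>as("customers-store"));

KafkaStreams streams = new KafkaStreams(builder.build(), props);  // props assumed
streams.start();

// Local key lookup against the disk-resident view; no remote call involved.
ReadOnlyKeyValueStore<CustomerId, Customer> view = streams.store(
    StoreQueryParameters.fromNameAndType("customers-store",
        QueryableStoreTypes.<CustomerId, Customer>keyValueStore()));
Customer customer = view.get(customerId);   // customerId assumed to be in scope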
Figure 7-8. State stores in Kafka Streams can be used to create use-case-specific views right inside the service

In Chapter 15 we walk through a set of richer code examples that create different types of views using tables and state stores, along with discussing how this approach can be scaled.
Writing Through a Database into a Kafka Topic with Kafka Connect
One way to get events into Kafka is to write through a database table into a Kafka topic. Strictly speaking, this isn't an Event Sourcing or CQRS-based pattern, but it's useful nonetheless.

In Figure 7-9, the orders service writes orders to a database. The writes are converted into an event stream by Kafka's Connect API. This triggers downstream processing, which validates the order. When the Order Validated event returns to the orders service, the database is updated with the final state of the order, before the call returns to the user.
Figure 7-9. An example of writing through a database to an event stream

The most reliable and efficient way to achieve this is using a technique called change data capture (CDC). Most databases write every modification operation to a write-ahead log, so, should the database encounter an error, it can recover its state from there. Many also provide some mechanism for capturing modification operations that were committed. Connectors that implement CDC repurpose these, translating database operations into events that are exposed in a messaging system like Kafka. Because CDC makes use of a native eventing interface it is (a) very efficient, as the connector is monitoring a file or being triggered directly when changes occur, rather than issuing queries through the database's main API, and (b) very accurate, as issuing queries through the database's main API will often create an opportunity for operations to be missed if several arrive, for the same row, within a polling period.

In the Kafka ecosystem CDC isn't available for every database, but the ecosystem is growing. Some popular databases with CDC support in Kafka Connect are MySQL, Postgres, MongoDB, and Cassandra. There are also proprietary CDC connectors for Oracle, IBM, SQL Server, and more. The full list of connectors is available on the Connect home page.
The advantage of this database-fronted approach is that it provides a consistency point: you write through it into Kafka, meaning you can always read your own writes.
Writing Through a State Store to a Kafka Topic in Kafka Streams

The same pattern of writing through a database into a Kafka topic can be achieved inside Kafka Streams, where the database is replaced with a Kafka Streams state store (Figure 7-10). This comes with all the benefits of writing through a database with CDC, but has a few additional advantages:

The database is local, making it faster to access.

Because the state store is wrapped by Kafka Streams, it can partake in transactions, so events published by the service and writes to the state store are atomic.

There is less configuration, as it's a single API (no external database, and no CDC connector to configure).

We discuss this use of state stores for holding application-level state in the section "Windows, Joins, Tables, and State Stores" on page 135 in Chapter 14.
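Below is a rough sketch of the pattern using the transformValues API: the service writes to its local store and emits the resulting event within the same topology, so with exactly-once processing enabled the two changes are committed together. Topic names, the Order type, its serde, validate(), and the reliance on default serdes for the input stream are all assumptions.

// Register a state store, then write through it while forwarding the event on.
StreamsBuilder builder = new StreamsBuilder();
builder.addStateStore(Stores.keyValueStoreBuilder(
    Stores.persistentKeyValueStore("orders-store"),
    Serdes.String(), orderSerde));                       // orderSerde assumed

builder.<String, Order>stream("order-requests")          // default serdes assumed
    .transformValues(() -> new ValueTransformerWithKey<String, Order, Order>() {
        private KeyValueStore<String, Order> store;

        @Override
        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            store = (KeyValueStore<String, Order>) context.getStateStore("orders-store");
        }

        @Override
        public Order transform(String key, Order order) {
            Order validated = validate(order);           // validate() assumed
            store.put(key, validated);                   // write to the local store...
            return validated;                            // ...and forward the same event
        }

        @Override
        public void close() { }
    }, "orders-store")
    .to("orders");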
Figure 7-10. Applying the write-through pattern with Kafka Streams and a state store
Unlocking Legacy Systems with CDC

In reality, most projects have some element of legacy and renewal, and while there is a place for big-bang redesigns, incremental change is typically an easier pill to swallow.
The problem with legacy is that there is usually a good reason for moving away from it: the most common being that it is hard to change. But most business operations in legacy applications will converge on their database. This means that, no matter how creepy the legacy code is, the database provides a coherent seam to latch into the existing business workflow, from where we can extract events via CDC. Once we have the event stream, we can plug in new event-driven services that allow us to evolve away from the past, incrementally (Figure 7-11).
Figure 7-11. Unlocking legacy data using Kafka's Connect API

So part of our legacy system might allow admins to manage and update the product catalog. We might retain this functionality by importing the dataset into Kafka from the legacy system's database. Then that product catalog can be reused in the validation service, or any other.

An issue with attaching to legacy, or any externally sourced dataset, is that the data is not always well formed. If this is a problem, consider adding a post-processing stage. Kafka Connect's single message transforms are useful for this type of operation (for example, adding simple adjustments or enrichments), while Kafka's Streams API is ideal for simple to very complex manipulations and for precomputing views that other services need.
Query a Read-Optimized View Created in a Database

Another common pattern is to use the Connect API to create a read-optimized, event-sourced view, built inside a database. Such views can be created quickly and easily in any number of different databases using the sink connectors available for Kafka Connect. As we discussed in the previous section, these are often termed polyglot views, and they open up the architecture to a wide range of data storage technologies.

In the example in Figure 7-12, we use Elasticsearch for its rich indexing and query capabilities, but whatever shape of problem you have, these days there is a database that fits. Another common pattern is to precompute the contents as a materialized view using Kafka Streams, KSQL, or Kafka Connect's single message transforms feature (see "Materialized Views" on page 62 earlier in this chapter).
Figure 7-12. Full-text search is added via an Elasticsearch database connected through Kafka's Connect API

Memory Images/Prepopulated Caches

Finally, we should mention a pattern called MemoryImage (Figure 7-13). This is just a fancy term, coined by <NAME>, for caching a whole dataset into memory (where it can be queried) rather than making use of an external database.

Figure 7-13. A MemoryImage is a simple cache of an entire topic loaded into a service so it can be referenced locally

MemoryImages provide a simple and efficient model for datasets that (a) fit in memory and (b) can be loaded in a reasonable amount of time. To reduce the load time issue, it's common to keep a snapshot of the event log using a compacted topic (which represents the latest set of events, without any of the version history). The MemoryImage pattern can be hand-crafted, or it can be implemented with Kafka Streams using in-memory state stores. The pattern suits high-performance use cases that don't need to overflow to disk.
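A hedged sketch of the Kafka Streams variant follows, holding an assumed product topic entirely in memory; the topic and store names and the Product serde are assumptions.

// Load a compacted topic into an in-memory store so lookups never leave the process.
StreamsBuilder builder = new StreamsBuilder();
GlobalKTable<String, Product> products = builder.globalTable(
    "product-catalogue",                                     // compacted topic assumed
    Materialized.<String, Product>as(Stores.inMemoryKeyValueStore("product-cache"))
        .withKeySerde(Serdes.String())
        .withValueSerde(productSerde));                      // productSerde assumed
// The cache is rebuilt from the topic on startup, so losing it is not a problem.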
The Event-Sourced View

Throughout the rest of this book we will use the term event-sourced view (Figure 7-14) to refer to a query resource (database, memory image, etc.) created in one service from data authored by (and hence owned by) another.

Figure 7-14. An event-sourced view implemented with KSQL and a database; if built with Kafka Streams, the query and view both exist in the same layer

What differentiates an event-sourced view from a typical database, cache, and the like is that, while it can represent data in any form the user requires, its data is sourced directly from the log and can be regenerated at any time.

For example, we might create a view of orders, payments, and customer information, filtering anything that doesn't ship within the US. This would be an event-sourced view if, when we change the view definition (say to include orders that ship to Canada) we can automatically recreate the view in its entirety from the log.
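A hedged sketch of that view with Kafka Streams (the topic name, the Order type and serde, and the country field are assumptions; toTable requires a reasonably recent Kafka Streams release): change the filter, reset the application, and the view rebuilds itself from the log.

// Derive the view directly from the stream; it can be thrown away and rebuilt
// whenever its definition changes.
StreamsBuilder builder = new StreamsBuilder();
KTable<String, Order> usOrders = builder
    .stream("orders", Consumed.with(Serdes.String(), orderSerde))     // orderSerde assumed
    .filter((orderId, order) -> "US".equals(order.getShippingCountry()))
    .toTable(Materialized.<String, Order, KeyValueStore<Bytes, byte[]>>as("us-orders-view"));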
An event-sourced view is equivalent to a projection in Event Sourcing parlance.
Summary

In this chapter we looked at how an event can be more than just a mechanism for notification, or state transfer. Event Sourcing is about saving state using the exact same medium we use to communicate it, in a way that ensures that every change is recorded immutably. As we noted in the section "Version Control for Your Data" on page 57, this makes recovery from failure or corruption simpler and more efficient when compared to traditional methods of application design.

CQRS goes a step further by turning these raw events into an event-sourced view, a queryable endpoint that derives itself (and can be rederived) directly from the log. The importance of CQRS is that it scales, by optimizing read and write models independently of one another.
We then looked at various patterns for getting events into the log, as well as building views using Kafka Streams and Kafka's Connect interface and our database of choice.

Ultimately, from the perspective of an architect or programmer, switching to this event-sourced approach will have a significant effect on an application's design. Event Sourcing and CQRS make events first-class citizens. This allows systems to relate the data that lives inside a service directly to the data it shares with others. Later we'll see how we can tie these together with Kafka's Transactions API. We will also extend the ideas introduced in this chapter by applying them to interteam contexts, with the "Event Streaming as a Shared Source of Truth" approach discussed in Chapter 9.
PART III
Rethinking Architecture at Company Scales

If you squint a bit, you can see the whole of your organization's systems and data flows as a single distributed database.
<NAME>, 2013
CHAPTER 8
Sharing Data and Services Across an Organization
When we build software, our main focus is, quite rightly, aimed at solving some real-world problem. It might be a new web page, a report of sales features, an analytics program searching for fraudulent behavior, or an almost infinite set of options that provide clear and physical benefits to our users. These are all very tangible goals, goals that serve our business today.

But when we build software we also consider the future, not by staring into a crystal ball in some vain attempt to predict what our company will need next year, but rather by facing up to the fact that whatever does happen, our software will need to change. We do this without really thinking about it. We carefully modularize our code so it is comprehensible and reusable. We write tests, run continuous integration, and maybe even do continuous deployment. These things take effort, yet they bear little resemblance to anything a user might ask for directly. We do these things because they make our code last, and that doesn't mean sitting on some filesystem the far side of git push. It means providing for a codebase that is changed, evolved, refactored, and repurposed. Aging in software isn't a function of time; it is a function of how we choose to change it.
But when we design systems, we are less likely to think about how they will age.
We are far more likely to ask questions like: Will the system scale as our user base increases? Will response times be fast enough to keep users happy? Will it promote reuse? In fact, you might even wonder what a system designed to last a long time looks like.

If we look to history to answer this question, it would point us to mainframe applications for payroll, big-name client programs like Excel or Safari, or even operating systems like Windows or Linux. But these are all complex, individual programs that have been hugely valuable to society. They have also all been difficult to evolve, particularly with regard to organizing a large engineering effort around a single codebase. So if it's hard to build large but individual software programs, how do we build the software that runs a company? This is the question we address in this particular section: how do we design systems that age well at company scales and keep our businesses nimble?
As it happens, many companies sensibly start their lives with a single system, which becomes monolithic as it slowly turns into the proverbial big ball of mud. The most common response to this today is to break the monolith into a range of different applications and services. In Chapter 1 we talked about companies like Amazon, LinkedIn, and Netflix, which take a service-based approach to this. This is no panacea; in fact, many implementations of the microservices pattern suffer from the misconceived notion that modularizing software over the network will somehow improve its sustainability. This of course isn't what microservices are really about. But regardless of your interpretation, breaking a monolith, alone, will do little to improve sustainability. There is a very good reason for this too. When we design systems at company scales, those systems become far more about people than they are about software.
As a company grows it forms into teams, and those teams have different responsibilities and need to be able to make progress without extensive interaction with one another. The larger the company, the more of this autonomy they need. This is the basis of management theories like Slack.1

In stark contrast to this, total independence won't work either. Different teams or departments need some level of interaction, or at least a shared sense of purpose. In fact, dividing sociological groups is a tactic deployed in both politics and war as a mechanism for reducing the capabilities of an opponent. The point here is that a balance must be struck, organizationally, in terms of the way people, responsibility, and communication structures are arranged in a company, and this applies as acutely to software as it does to people, because beyond the confines of a single application, people factors invariably dominate.

Some companies tackle this at an organizational level using approaches like the Inverse Conway Maneuver, which applies the idea that, if the shape of software and the shape of organizations are intrinsically linked (as Conway argued), then it's often easier to change the organization and let the software follow suit than it is to do the reverse. But regardless of the approach taken, when we design software systems where components are operated and evolved independently, the problem we face has three distinct parts (organization, software, and data) which are all intrinsically linked. To complicate matters further, what really differentiates the good systems from the bad is their ability to manage these three factors as they evolve, independently, over time.

1 Tom DeMarco, Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency (New York: Broadway Books, 2001).
While this may seem a little abstract, you have no doubt felt the interplay between these forces before. Say you're working on a project, but to finish it you need three other teams to complete work on their side first; you know intuitively that it's going to take longer to build, schedule, and release. If someone from another team asks if their application can pull some data out of your database, you know that's probably going to lead to pain in the long run, as you're left wondering if your latest release will break the dependency they have on you. Finally, while it might seem trivial to call a couple of REST services to populate your user interface, you know that an outage on their side is going to mean you get called at 3 a.m., and the more complex the dependencies get, the harder it's going to be to figure out why the system doesn't work. These are all examples of problems we face when an organization, its software, and its data evolve slowly.

The Microservices pattern is unusually opinionated in this regard. It comes down hard on independence in organization, software, and data. Microservices are run by different teams, have different deployment cycles, don't share code, and don't share databases.2 The problem is that replacing this with a web of RPC/REST calls isn't generally a preferable solution. This leads to an important tension: we want to promote reuse to develop software quickly, but at the same time the more dependencies we have, the harder it is to change.
Reuse can be a bad thing. Reuse lets us develop software quickly and succinctly, but the more we reuse a component, the more dependencies that component has, and the harder it is to change.
To better understand this tension, we need to question some core principles of software design, principles that work wonderfully when we're building a single application, but fare less well when we build software that spans many teams.

Encapsulation Isn't Always Your Friend

As software engineers, we're taught to encapsulate. If you're building a library for other people to use, you'll carefully pick a contract that suits the functionality you want to expose. If you're building a service-based system, you might be inclined to follow a similar process. This works well if we can cleanly separate responsibilities between the different services. A single sign-on (SSO) service, for example, has a well-defined role, which is cleanly separated from the roles other services play (Figure 8-1). This clean separation means that, even in the face of rapid requirement churn, it's unlikely the SSO service will need to change. It exists in a tightly bounded context.

2 Sam Newman, Building Microservices (Sebastopol, CA: O'Reilly, 2014).

Figure 8-1. An SSO service provides a good example of encapsulation and reuse

The problem is that, in the real world, business services can't typically retain the same clean separation of concerns, meaning new requirements will inevitably crosscut service boundaries and several services will need to change at once. This can be measured.3 So if one team needs to implement a feature, and that requires another team to make a code change, we end up having to make changes to both services at around the same time. In a monolithic system this is pretty straightforward (you make the change and then do a release) but it's considerably more painful where independent services must synchronize. The coordination between teams and release cycles erodes agility.
This problem isn't actually restricted to services. Shared libraries suffer from the same problem. If you work in retail, it might seem sensible to create a library that models how customers, orders, payments, and the like all relate to one another. You could include common logic for standard operations like returns and refunds. Lots of people did this in the early days of object orientation, but it turned out to be quite painful because suddenly the most sensitive part of your system was coupled to many different programs, making it really fiddly to change and release. This is why microservices typically don't share a single domain model. But some library reuse is of course OK. Take a logging library, for example; much like the earlier SSO example, you're unlikely to have a business requirement that needs the logging library to change.
3 See https://www.infoq.com/news/2017/04/tornhill-prioritise-tech-debt and http://bit.ly/2pKa2rR.
But in reality, of course, library reuse comes with a get-out clause: the code can be implemented anywhere. Say you did use the aforementioned shared retail domain model. If it becomes too painful to use, you could always just write the code yourself! (Whether that is actually a good idea is a different discussion.) But when we consider different applications or services that share data with one another, there is no such solution: if you don't have the data, there is literally nothing you can do.
This leads to two fundamental differences between services and shared libraries:
A service is run and operated by someone else.
A service typically has data of its own, whereas a library (or database)
requires you to input any data it needs.
Data sits at the very heart of this problem: most business services inevitably rely heavily on one another's data. If you're an online retailer, the stream of orders, the product catalog, or the customer information will find its way into the requirements of many of your services. Each of these services needs broad access to these datasets to do its work, and there is no temporary workaround for not having the data you need. So you need access to shared datasets, but you also want to stay loosely coupled. This turns out to be a pretty hard bargain to strike.

The Data Dichotomy

Encapsulation encourages us to hide data, but data systems have little to do with encapsulation. In fact, quite the opposite: databases do everything they can to expose the data they hold (Figure 8-2). They come with wonderfully powerful, declarative interfaces that can contort the data they hold into pretty much any shape you might desire. That's exactly what a data scientist needs for an exploratory investigation, but it's not so great for managing the spiral of interservice dependencies in a burgeoning service estate.
Figure 8-2. Services encapsulate the data they hold to reduce coupling and aid reuse; databases amplify the data they hold to provide greater utility to their user

So we find ourselves faced with a conundrum, a dichotomy: databases are about exposing data and making it useful. Services are about hiding it so they can stay decoupled. These two forces are fundamental. They underlie much of what we do, subtly jostling for supremacy in the systems we build.
What Happens to Systems as They Evolve?
As systems evolve and grow we see the effects of this data dichotomy play out in a couple of different ways.
The God Service Problem

As data services grow they inevitably expose an increasing set of functions, to the point where they start to look like some form of kooky, homegrown database (Figure 8-3).
Figure 8-3. Service interfaces inevitably grow over time

Now creating something that looks like a kooky, shared database can lead to a set of issues of its own. The more functionality, data, and users data services have, the more tightly coupled they become and the harder (and more expensive) they are to operate and evolve.
The REST-to-ETL Problem

A second, often more common, issue when you're faced with a data service is that it actually becomes preferable to suck the data out so it can be held and manipulated locally (Figure 8-4). There are lots of reasons for this to happen in practice, but some of the main ones are:

The data needs to be combined with some other dataset.

The data needs to be closer, either due to geography or to be used offline (e.g., on a mobile).

The data service displays operational issues, which cause outages downstream.

The data service doesn't provide the functionality the client needs and/or can't change quickly enough.
But to extract data from some service, then keep that data up to date, you need some kind of polling mechanism. While this is not altogether terrible, it isn't ideal either.
Figure 8-4. Data is moved from service to service en masse

What's more, as this happens again and again in larger architectures, with data being extracted and moved from service to service, little errors or idiosyncrasies often creep in. Over time these typically worsen and the data quality of the whole ecosystem starts to suffer. The more mutable copies, the more data will diverge over time.

Making matters worse, divergent datasets are very hard to fix in retrospect. (Techniques like master data management are in many ways a Band-aid over this.) In fact, some of the most intractable technology problems that businesses encounter arise from divergent datasets proliferating from application to application. This issue is discussed in more detail in Chapter 10.

So a cyclical pattern of behavior emerges between (a) the drive to centralize datasets to keep them accurate and (b) the temptation (or need) to extract datasets and go it alone, an endless cycle of data inadequacy (Figure 8-5).
Figure 8-5. The cycle of data inadequacy

Make Data on the Outside a First-Class Citizen

To address these various issues, we need to think about shared data in a slightly different way. We need to consider it a first-class citizen of the architectures we build. <NAME> makes this distinction in his paper "Data on the Inside and Data on the Outside."
One of the key insights he makes is that the data services share needs to be treated differently from the data they hold internally. Data on the outside is hard to change, because many programs depend upon it. But, for this very reason,
data on the outside is the most important data of all.
A second important insight is that service teams need to adopt an openly outward-facing role: one designed to serve, and be an integral part of, the wider ecosystem. This is very different from the way traditional applications are built:
written to operate in isolation, with methods for exposing their data bolted on later as an afterthought.
With these points in mind it becomes clear that data on the outside (the data services share) needs to be carefully curated and nurtured, but to keep our freedom to iterate we need to turn it into data on the inside so that we can make it our own.
The problem is that none of the approaches available today (service interfaces, messaging, or a shared database) provide a good solution for dealing with this transition (Figure 8-6), for the following reasons:
Service interfaces form tight point-to-point couplings, make it hard to share data at any level of scale, and leave the unanswered question: how do you join the many islands of state back together?
Shared databases concentrate use cases into a single place, and this stifles progress.
Messaging moves data from a tightly coupled place (the originating service) to a loosely coupled place (the service that is using the data). This means datasets can be brought together, enriched, and manipulated as required. Moving data locally typically improves performance, as well as decoupling sender and receiver. Unfortunately, messaging systems provide no historical reference, which means it's harder to bootstrap new applications, and this can lead to data quality issues over time (discussed in "The Data Divergence Problem" on page 95 in Chapter 10).
Figure 8-6. Tradeoff between service interfaces, messaging, and a shared database

A better solution is to use a replayable log like Kafka. This works like a kind of event store: part messaging system, part database.
Messaging turns highly coupled, shared datasets (data on the outside)
into data a service can own and control (data on the inside). Replayable logs go a step further by adding a central reference.
Don't Be Afraid to Evolve

When you start a new project, form a new department, or launch a new company, you don't need to get everything right from the start. Most projects evolve. They start life as monoliths, and later they add distributed components, evolve into microservices, and add event streaming. The important point is when the approach becomes constraining, you change it. But experienced architects know where the tipping point for this lies. Leave it too late, and change can become too costly to schedule. This is closely linked with the concept of fitness functions in evolutionary architectures.4

Summary
Patterns like microservices are opinionated when it comes to services being independent: services are run by different teams, have different deployment cycles, don't share code, and don't share databases. The problem is that replacing this with a web of RPC calls fails to address the question: how do services get access to these islands of data for anything beyond trivial lookups?

The data dichotomy highlights this question, underlining the tension between the need for services to stay decoupled and their need to control, enrich, and combine data in their own time.

This leads to three core conclusions: (1) as architectures grow and systems become more data-centric, moving datasets from service to service becomes an inevitable part of how systems evolve; (2) data on the outside (the data services share) becomes an important entity in its own right; (3) sharing a database is not a sensible solution to data on the outside, but sharing a replayable log better balances these concerns, as it can hold datasets long-term, and it facilitates event-driven programming, reacting to the now.

This approach can keep data across many services in sync, through a loosely coupled interface, giving them the freedom to slice, dice, enrich, and evolve data locally.
4 <NAME>, <NAME>, and <NAME>, Building Evolutionary Architectures (Sebastopol, CA: O'Reilly, 2017).
CHAPTER 9
Event Streams as a Shared Source of Truth

As we saw in Part II of this book, events are a useful tool for system design, providing notification, state transfer, and decoupling. For a couple of decades now, messaging systems have leveraged these properties, moving events from system to system, but only in the last few years have messaging systems started to be used as a storage layer, retaining the datasets that flow through them. This creates an interesting architectural pattern. A company's core datasets are stored as centralized event streams, with all the decoupling effects of a message broker built in. But unlike traditional brokers, which delete messages once they have been read, historic data is stored and made available to any team that needs it.

This links closely with the ideas developed in Event Sourcing (see Chapter 7) and Pat Helland's concept of data on the outside. ThoughtWorks calls this pattern event streaming as the source of truth.
A Database Inside Out

"Turning the database inside out" was a phrase coined by <NAME>. Essentially it is the idea that a database has a number of core components (a commit log, a query engine, indexes, and caching) and rather than conflating these concerns inside a single black-box technology like a database does, we can split them into separate parts using stream processing tools, and these parts can exist in different places, joined together by the log. So Kafka plays the role of the commit log, a stream processor like Kafka Streams is used to create indexes or views, and these views behave like a form of continuously updated cache, living inside or close to your application (Figure 9-1).
Figure 9-1. A streaming engine and a replayable log have the core components of a database
As an example we might consider this pattern in the context of a simple GUI application that lets users browse order, payment, and customer information in a scrollable grid. Because the user can scroll the grid quickly up and down, the data would likely need to be cached locally. But in a streaming model, rather than periodically polling the database and then caching the result, we would define a view that represents the exact dataset needed in the scrollable grid, and the stream processor would take care of materializing it for us. So rather than querying data in a database, then layering caching over the top, we explicitly push data to where it is needed and process it there (i.e., it's inside the GUI, right next to our code).
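A rough sketch of such a view with Kafka Streams follows. Topic names and the Order, Customer, and OrderRow types are assumptions, default serdes are assumed for brevity, imports are elided, and foreign-key table joins require a reasonably recent Kafka Streams release.

// Materialize exactly the dataset the scrollable grid needs, next to the UI code.
KTable<String, Order> orders = builder.table("orders");          // keyed by orderId
KTable<String, Customer> customers = builder.table("customers"); // keyed by customerId

KTable<String, OrderRow> gridView = orders.join(
    customers,
    order -> order.getCustomerId(),                  // foreign key into the customers table
    (order, customer) -> new OrderRow(order, customer),
    Materialized.<String, OrderRow, KeyValueStore<Bytes, byte[]>>as("grid-view"));
// The "grid-view" store is kept up to date by the stream processor as new events arrive.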
But while we call turning the database inside out a pattern, it would probably be more accurate to call it an analogy: a different way of explaining what stream processing is. It is a powerful one, though. One reason that it seems to resonate with people is we have a deep-seated notion that pushing business logic into a database is a bad idea. But the reverse (pushing data into your code) opens up a wealth of opportunities for blending our data and our code together. So stream processing flips the traditional approach to data and code on its head, encouraging us to bring data into the application layer, to create tables, views, and indexes exactly where we need them.
The database inside out is an analogy for stream processing where the same components we find in a database (a commit log, views, indexes, caches) are not confined to a single place, but instead can be made available wherever they are needed.
This idea actually comes up in a number of other areas too. The Clojure community talks about "deconstructing the database." There are overlaps with Event Sourcing and polyglot persistence as we discussed in Chapter 7. But the idea was originally proposed by <NAME> back in 2013, where he calls it "unbundling" but frames it in a slightly different context, and this turns out to be quite important:

There is an analogy here between the role a log serves for data flow inside a distributed database and the role it serves for data integration in a larger organization: if you squint a bit, you can see the whole of your organization's systems and data flows as a single distributed database. You can view all the individual query-oriented systems (Redis, SOLR, Hive tables, and so on) as just particular indexes on your data. You can view the stream processing systems like Storm or Samza as just a very well-developed trigger and view materialization mechanism. Classical database people, I have noticed, like this view very much because it finally explains to them what on earth people are doing with all these different data systems: they are just different index types!
What is interesting about Jay's description is he casts the analogy in the context of a whole company. The key insight is essentially the same: stream processing segregates responsibility for data storage away from the mechanism used to query it. So there might be one shared event stream, say for payments, but many specialized views, in different parts of the company (Figure 9-2). But this provides an important alternative to traditional mechanisms for data integration.
More specifically:
The log makes data available centrally as a shared source of truth but with the simplest possible contract. This keeps applications loosely coupled.
Query functionality is not shared; it is private to each service, allowing teams to move quickly by retaining control of the datasets they use.
Figure 9-2. A number of different applications and services, each with its own views, derived from the company's core datasets held in Kafka; the views can be optimized for each use case
The database inside out idea is important because a replayable log, combined with a set of views, is a far more malleable entity than a single shared database. The different views can be tuned to different people's needs, but are all derived from the same source of truth: the log.

As we'll see in Chapter 10, this leads to some further optimizations where each view is optimized to target a specific use case, in much the same way that materialized views are used in relational databases to create read-optimized, use-case-focused datasets. Of course, unlike in a relational database, the view is decoupled from the underlying data and can be regenerated from the log should it need to be changed. (See "Materialized Views" on page 62 in Chapter 7.)
Summary

This chapter introduced the analogy that stream processing can be viewed as a database turned inside out, or unbundled. In this analogy, responsibility for data storage (the log) is segregated from the mechanism used to query it (the Stream Processing API). This makes it possible to create views and embed them exactly where they are needed: in another application, in another geography, or on another platform. There are two main drivers for pushing data to code in this way:

As a performance optimization, by making data local

To decouple the data in an organization, but keep it close to a single shared source of truth

So at an organizational level, the pattern forms a kind of "database of databases" where a single repository of event data is used to feed many views, and each view can flex with the needs of that particular team.
CHAPTER 10
Lean Data

Lean data is a simple idea: rather than collecting and curating large datasets, applications carefully select small, lean ones (just the data they need at a point in time) which are pushed from a central event store into caches, or stores they control. The resulting lightweight views are propped up by operational processes that make rederiving those views practical.

If Messaging Remembers, Databases Don't Have To

One interesting consequence of using event streams as a source of truth (see Chapter 9) is that any data you extract from the log doesn't need to be stored reliably. If it is lost you can always go back for more. So if the messaging layer remembers, downstream databases don't have to (Figure 10-1). This means you can, if you choose, regenerate a database or event-sourced view completely from the log. This might seem a bit strange. Why might you want to do that?
In the context of traditional messaging, ETL (extract, transform, load) pipelines, and the like, the messaging layer is ephemeral, and users write all messages to a database once the messages have been read. After all, they may never see them again. There are a few problems that result from this. Architectures become comparably heavyweight, with a large number of applications and services retaining copies of a large proportion of the company's data. At a macro level, as time passes, all these copies tend to diverge from one another and data quality issues start to creep in.

Data quality issues are common and often worse than people suspect. They arise for a great many reasons. One of these is linked to our use of databases as long-lived resources that are tweaked and tuned over time, leading to inadvertently introduced errors. A database is essentially a file, and that file will be as old as the system it lives in. It will have been copied from environment to environment, and the data in the database will have been subject to many operational fixes over its lifetime. So it is unsurprising that errors and inaccuracies creep in.
In stream processing, files aren't copied around in this way. If a stream processor creates a view, then does a release that changes the shape of that view, it typically throws the original view away, resets to offset 0, and derives a new one from the log.
Looking to other areas of our industry (DevOps and friends) we see similar patterns. There was a time when system administrators would individually tweak, tune, and mutate the computers they managed. Those computers would end up being subtly different from one another, and when things went wrong it was often hard to work out why.
Today, issues like these have been largely solved within as-a-service cultures that favor immutability through infrastructure as code. This approach comes with some clear benefits: deployments become deterministic, builds are identical, and rebuilds are easy. Suddenly ops engineers transform into happy people empowered by the predictability of the infrastructure they wield, and comforted by the certainty that their software will do exactly what it did in test.
Streaming encourages a similar approach, but for data. Event-sourced views are kept lean and can be rederived from the log in a deterministic way. The view could be a cache, a Kafka Streams state store, or a full-blown database. But for this to work, we need to deal with a problem. Loading data can be quite slow. In the next section we look at ways to keep this manageable.
Figure 10-1. If the messaging system can store data, then the views or databases it feeds don't have to

Take Only the Data You Need, Nothing More

If datasets are stored in Kafka, when you pull data into your service you can pick out just the pieces you need. This minimizes the size of the resulting views and allows them to be reshaped so that they are read-optimized. This is analogous to the way materialized views are used in relational databases to optimize for reads, except that, unlike in a relational database, writes and reads are decoupled. (See "Materialized Views" on page 62 in Chapter 7.)
The inventory service, discussed in Chapter 15, makes a good example. It reads inventory messages that include lots of information about the various products stored in the warehouse. When the service reads each message it throws away the vast majority of the document, stripping it back to just two fields: the product ID and the number of items in stock.
Reducing the breadth of the view keeps the dataset small and loosely coupled. By keeping the dataset small you can store more rows in the available memory or disk, and performance will typically improve as a result. Coupling is reduced too, since should the schema change, it is less likely that the service will store affected fields.
If messaging remembers, derived views can be refined to contain only the data that is absolutely necessary. You can always go back for more.
The approach is simple to implement in either the Kafka Streams DSL or KSQL, or by using the Connect API's single message transforms feature.
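For illustration, a minimal Kafka Streams sketch of the inventory example might look like the following. The topic names (inventory-events, inventory-lean), the InventoryEvent stand-in class, and the serde wiring are assumptions made for this example, not anything defined by the accompanying code.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Produced;

public class LeanInventoryView {

    // Stand-in for the full inventory document; only one field matters to this
    // service (the product ID is already the message key).
    public static class InventoryEvent {
        public String productId;
        public long quantityInStock;
        // ...many other fields we deliberately drop
    }

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // The value serde for InventoryEvent is assumed to be configured as the default serde.
        builder.<String, InventoryEvent>stream("inventory-events")
               .mapValues(event -> event.quantityInStock)   // keep only the stock count
               .to("inventory-lean", Produced.with(Serdes.String(), Serdes.Long()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "lean-inventory-view");   // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // illustrative
        new KafkaStreams(builder.build(), props).start();
    }
}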
Rebuilding Event-Sourced Views

The obvious drawback of lean data is that, should you need more data, you need to go back to the log. The cleanest way to do this is to drop the view and rebuild it from scratch. If you're using an in-memory data structure, this will happen by default, but if you're using Kafka Streams or a database, there are other factors to consider.
Kafka Streams

If you create event-sourced views with Kafka Streams, view regeneration is par for the course. Views are either tables, which are a direct materialization of a Kafka topic, or state stores, which are populated with the result of some declarative transformation, defined in JVM code or KSQL. Both of these are automatically rebuilt if the disk within the service is lost (or removed) or if the Streams Reset tool is invoked.1 We discussed how Kafka Streams manages statefulness in more detail in the section "The Practicalities of Being Stateful" on page 52 in Chapter 6.

1 See http://bit.ly/2GaCRZO and http://bit.ly/2IUPHJa.
Databases and Caches

In today's world there are many different types of databases with a wide range of performance tradeoffs. While regenerating a 50 TB Oracle database from scratch would likely be impractical, regenerating the event-sourced views used in business services is often quite workable with careful technology choice.
Because worst-case regeneration time is the limiting factor, it helps to pick a write-optimized database or cache. There are a great many options, but sensible choices include:
An in-memory database/cache like Redis, MemSQL, or Hazelcast
A memory-optimized database like Couchbase or one that lets you disable journaling like MongoDB
A write/disk optimized, log-structured database like Cassandra or RocksDB

Handling the Impracticalities of Data Movement

Rebuilding an event-sourced view may still be a relatively long-winded process (minutes or even hours!2). Because of this lead time, when releasing new software, developers typically regenerate views in advance (or in parallel), with the switch from old view to new view happening when the view is fully caught up. This is essentially the same approach taken by stateful stream processing applications that use Kafka Streams.
The pattern works well for simple services that need small- to medium-sized datasets, say, to drive rules engines. Working with larger datasets means slower load times. If you need to rebuild terabyte-sized datasets in a single instance of a highly indexed, disk-resident database, this load time could well be prohibitively long. But in many common cases, memory-based solutions that have fast write times, or horizontal scaling, will keep ingestion fast.
Automation and Schema Migration

Operators typically lose trust in tools that are not regularly used. This is even more true when those scripts operate on data (database rollback scripts are notorious). So when you move from environment to environment, as part of your development workflow, it's often best to recreate views directly from the log rather than copying database files from environment to environment as you might in a traditional database workflow (Figure 10-2).

2 As a yardstick, RocksDB (which Kafka Streams uses) will bulk-load ~10M 500 KB objects per minute (roughly GbE speed). Badger will do ~4M 1K objects a minute per SSD. Postgres will bulk-load ~1M rows per minute.
Figure 10-2. Data is replicated to a UAT environment where views are regenerated from source

A good example of this is when schemas change. If you have used traditional messaging approaches before to integrate data from one system into another, you may have encountered a time when the message schema changed in a non-backward-compatible way. For example, if you were importing customer information from a messaging system into a database when the customer message schema undergoes a breaking change, you would typically craft a database script to migrate the data forward, then subscribe to the new topic of messages.
If using Kafka to hold datasets in full,3 instead of performing a schema migration, you can simply drop and regenerate the view.
The Data Divergence Problem

In practice, all large companies start to have problems with data quality as they grow. This is actually a surprisingly deep and complex subject, with the issues that result being painstaking, laborious, and expensive to fix.
The roots of these issues come from a variety of places. Consider a typical business application that loads data from several other systems and stores that data in a database. There is a data model used on the wire (e.g., JSON), an internal domain model (e.g., an object model), a data model in the database (e.g., DDL), and finally various schemas for any outbound communication. Code needs to be written for each of these translations, and this code needs to be evolved as the various schemas change. These layers, and the understanding required for each one, introduce opportunities for misinterpretation.

3 We assume the datasets have been migrated forward using a technique like dual-schema upgrade window, discussed in "Handling Schema Change and Breaking Backward Compatibility" on page 124 in Chapter 13.
Semantic issues are even trickier to address, often arising where teams, departments, or companies meet. Consider two companies going through a merger.
There will be a host of equivalent datasets that were modeled differently by each side. You might think of this as a simple transformation problem, but typically there are far deeper semantic conflicts: Is a supplier a customer? Is a contractor an employee? This opens up more opportunity for misinterpretation.
So as data is moved from application to application and from service to service, these interactions behave a bit like a game of telephone (a.k.a. Chinese whispers): everything starts well, but as time passes the original message gets misinterpreted and transforms into something quite different.
Some of the resulting issues can be serious: a bank whose Risk and Finance departments disagree on the bank's position, or a retailer, with a particularly protracted workflow, taking a week to answer customer questions. All large companies have stories like these of one form or other. So this isn't a problem you typically face building a small web application, but it's a problem faced by many larger, more mature architectures.
There are tried-and-tested methods for addressing these concerns. Some companies create reconciliation systems that can turn into small cottage industries of their own. In fact, there is a whole industry dedicated to combating this problem, master data management, along with a whole suite of tools for data wrangling such issues toward a shared common ground.
Streaming platforms help address these problems in a slightly different way. First, they form a kind of central nervous system that connects all applications to a single shared source of truth, reducing the telephone/Chinese whispers effect. Second, because data is retained immutably in the log, it's easier to track down when an error was introduced, and it's easier to apply fixes with the original data on hand. So while we will always make mistakes and misinterpretations, techniques like event streams as a source of truth, Command Sourcing, and lean data allow individual teams to develop the operational maturity needed to avoid mistakes in the first place, or repair the effects of them once they happen.
Summary

So when it comes to data, we should be unequivocal about the shared facts of our system. They are the very essence of our business, after all. Lean practices encourage us to stay close to these shared facts, a process where we manufacture views that are specifically optimized to the problem space at hand. This keeps them lightweight and easier to regenerate, leveraging a similar mindset to that developed in Command Sourcing and Event Sourcing, which we discussed in Chapter 7. So while these facts may be evolved over time, applied in different ways, or recast to different contexts, they will always tie back to a single source of truth.
PART IV
Consistency, Concurrency, and Evolution
Trust is built with consistency.
<NAME>
CHAPTER 11
Consistency and Concurrency in Event-Driven Systems

The term consistency is quite overused in our industry, with several different meanings applied in a range of contexts. Consistency in CAP theorem differs from consistency in ACID transactions, and there is a whole spectrum of subtly different guarantees, including strong consistency and eventual consistency, among others. The lack of consistent terminology around this word may seem a little ironic, but it is really a reflection of the complexity of a subject that goes way beyond the scope of this book.1

But despite these many subtleties, most people have an intuitive notion of what consistency is, one often formed from writing single-threaded programs2 or making use of a database. This typically equates to some general notions about the transactional guarantees a database provides. When you write a record, it stays written. When you read a record, you read the most recently written value. If you perform multiple operations in a transaction, they all become visible at once, and you don't need to be concerned with what other users may be doing at the same time. We might call this idea intuitive consistency (which is closest in technical terms to the famous ACID properties).

1 For a full treatment, see Martin Kleppmann's encyclopedic Designing Data-Intensive Applications (Sebastopol, CA: O'Reilly, 2017).

2 The various consistency models really reflect optimizations on the concept of in-order execution against a single copy of data. These optimizations are necessary in practice, and most users would prefer to trade a slightly weaker guarantee for the better performance (or availability) characteristics that typically come with them. So implementers come up with different ways to slacken the simple in-order execution guarantee. These various optimizations lead to different consistency models, and because there are many dimensions to optimize, particularly in distributed systems, there are many resulting models. When we discuss "Scaling Concurrent Operations in Streaming Systems" on page 142 in Chapter 15, we'll see how streaming systems achieve strong guarantees by partitioning relevant data into different stream threads, then wrapping those operations in a transaction, which ensures that, for that operation, we have in-order execution on a single copy of data (i.e., a strong consistency model).
A common approach to building business systems is to take this intuitive notion and apply it directly. If you build traditional three-tier applications (i.e., client, server, and database), this is often what you would do. The database manages concurrent changes, isolated from other users, and everyone observes exactly the same view at any one point in time. But groups of services generally don't have such strong guarantees. A set of microservices might call one another synchronously, but as data moves from service to service it will often become visible to users at different times, unless all services coordinate around a single database and force the use of a single global consistency model.
But in a world where applications are distributed (across geographies, devices, etc.), it isn't always desirable to have a single, global consistency model. If you create data on a mobile device, it can only be consistent with data on a backend server if the two are connected. When disconnected, they will be, by definition, inconsistent (at least in that moment) and will synchronize at some later point, eventually becoming consistent. But designing systems that handle periods of inconsistency is important. For a mobile device, being able to function offline is a desirable feature, as is resynchronizing with the backend server when it reconnects, converging to consistency as it does so. But the usefulness of this mode of operation depends on the specific work that needs to be done. A mobile shopping application might let you select your weekly groceries while you're offline, but it can't work out whether those items will be available, or let you physically buy anything until you come back online again. So these are use cases where global strong consistency is undesirable.
Business systems often don't need to work offline in this way, but there are still benefits to avoiding global strong consistency and distributed transactions: they are difficult and expensive to scale, don't work well across geographies, and are often relatively slow. In fact, experience with distributed transactions that span different systems, using techniques like XA, led the majority of implementers to design around the need for such expensive coordination points.
But on the other hand, business systems typically want strong consistency to reduce the potential for errors, which is why there are vocal proponents who consider stronger safety properties valuable. There is also an argument for wanting a bit of both worlds. This middle ground is where event-driven systems sit, often with some form of eventual consistency.
Eventual Consistency

The previous section refers to an intuitive notion of consistency: the idea that business operations execute sequentially on a single copy of data. It's quite easy to build a system that has this property. Services call one another through RPCs, just like methods in a single-threaded program: a set of sequential operations. Data is passed by reference, using an ID. Each service looks up the data it needs in the database. When it needs to change it, it changes it in the database. Such a system might look something like Figure 11-1.
Figure 11-1. A set of services that notify one another and share data via a database

This approach provides a very intuitive model as everything progresses sequentially, but as services scale out and more services are added, it can get hard to scale and operate systems that follow this approach.
Event-driven systems aren't typically built in this way. Instead, they leverage asynchronous broadcast, deliberately removing the need for global state and avoiding synchronous execution. (We went through the issues with global shared state in "What Happens to Systems as They Evolve?" on page 80 in Chapter 8.)
Such systems are often referred to as being eventually consistent.
There are two consequences of eventual consistency in this context:
Timeliness
If two services process the same event stream, they will process them at different rates, so one might lag behind the other. If a business operation consults both services for any reason, this could lead to an inconsistency.
Collisions
If different services make changes to the same entity in the same event stream, if that data is later coalesced, say in a database, some changes might be lost.
Let's dig into these with an example that continues our theme of online retail systems. In Figure 11-2 an order is accepted in the orders service (1). This is picked up by the validation service, where it is validated (2). Sales tax is added (3). An email is sent (4). The updated order goes back to the orders service (5), where it can be queried via the orders view (this is an implementation of CQRS, as we discussed in Chapter 7). After being sent a confirmation email (6), the user can click through to the order (7).
Figure 11-2. An event-driven system connected via a log

Timeliness
Consider the email service (4) and orders view (5). Both subscribe to the same event stream (Validated Orders) and process them concurrently. Executing concurrently means one will lag slightly behind the other. Of course, if we stopped writes to the system, then both the orders view and the email service would eventually converge on the same state, but in normal operation they will be at slightly different positions in the event stream. So they lack timeliness with respect to one another. This could cause an issue for a user, as there is an indirect connection between the email service and the orders service. If the user clicks the link in the confirmation email, but the view in the orders service is lagging, the link would either fail or return an incorrect state (7).
So a lack of timeliness (i.e., lag) can cause problems if services are linked in some way, but in larger ecosystems it is beneficial for the services to be decoupled, as it allows them to do their work concurrently and in isolation, and the issues of timeliness can usually be managed (this relates closely to the discussion around CQRS in Chapter 7).
But what if this behavior is unacceptable, as this email example demonstrates?
Well, we can always add serial execution back in. The call to the orders service might block until the view is updated (this is the approach taken in the worked example in Chapter 15). Alternatively, we might have the orders service raise a View Updated event, used to trigger the email service after the view has been updated. Both of these synthesize serial execution where it is necessary.
Collisions and Merging

Collisions occur if two services update the same entity at the same time. If we design the system to execute serially, this won't happen, but if we allow concurrent execution it can.
Consider the validation service and tax service in Figure 11-2. To make them run serially, we might force the tax service to execute first (by programming the service to react to Order Requested events), then force the validation service to execute next (by programming the service to react to events that have had sales tax added). This linearizes execution for each order and means that the final event will have all the information in (i.e., it is both validated and has sales tax added).
Of course, making events run serially in this way increases the end-to-end latency.
Alternatively, we can let the validation service and the tax service execute concurrently, but we'd end up with two events with important information in each: one validated order and one order with sales tax added. This means that, to get the correct order, with both validation and sales tax applied, we would have to merge these two messages. (So, in this case, the merge would have to happen in both the email service and in the orders view.)
In some situations this ability to make changes to the same entity in different processes at the same time, and merge them later, can be extremely powerful (e.g., an online whiteboarding tool). But in others it can be error-prone. Typically, when building business systems, particularly ones involving money and the like, we tend to err on the side of caution. There is a formal technique for merging data in this way that has guaranteed integrity; it is called a conflict-free replicated data type, or CRDT. CRDTs essentially restrict what operations you can perform to ensure that, when data is changed and later merged, you don't lose information. The downside is that the dialect is relatively limited.
A good compromise for large business systems is to keep the lack of timeliness (which allows us to have lots of replicas of the same state, available read-only) but remove the opportunity for collisions altogether (by disallowing concurrent mutations). We do this by allocating a single writer to each type of data (topic) or alternatively to each state transition. We'll talk about this next.
The Single Writer Principle

A useful way to generify these ideas is to isolate consistency concerns into owning services using the single writer principle. <NAME> used this term in response to the wide-scale use of locks in concurrent environments, and the subsequent efficiencies that we can often gain by consolidating writes to a single thread. The core idea closely relates to the Actor model,3 several ideas in database research,4 and anecdotally to system design. From a services perspective, it also marries with the idea that services should have a single responsibility.

At its heart it's a simple concept: responsibility for propagating events of a specific type is assigned to a single service, a single writer. So the inventory service owns how the stock inventory progresses over time, the orders service owns the progression of orders, and so on.
Conflating writes into a single service makes it easier to manage consistency efficiently. But this principle has worth that goes beyond correctness or concurrency properties. For example:
It allows versioning (e.g., applying a version number) and consistency checks (e.g., checking a version number; see "Identity and Concurrency Control" on page 108) to be applied in a single place.
It isolates the logic for evolving each business entity, in time, to a single service, making it easier to reason about and to change (for example, rolling out a schema change, as discussed in "Handling Schema Change and Breaking Backward Compatibility" on page 124 in Chapter 13).
It dedicates ownership of a dataset to a single team, allowing that team to specialize. One antipattern observed in industry, when Enterprise Messaging or an Enterprise Service Bus was applied, was that centralized schemas and business logic could become a barrier to progress.5 The single writer principle encourages service teams with clearly defined ownership of shared datasets, putting focus on data on the outside as well as allocating clear responsibility for it. This becomes important in domains that have complex business rules associated with different types of data. So, for example, in finance, where products require rich domain knowledge to model and evolve, isolating responsibility for data evolution to a single service and team is often considered an advantage.
When the single writer principle is applied in conjunction with Event Collaboration (discussed in Chapter 5), each writer evolves part of a single business workflow through a set of successive events. So in Figure 11-3, which shows a larger online retail workflow, several services collaborate around the order process as an order moves from inception, through payment processing and shipping, to completion. There are separate topics for order, payment, and shipment. The order, payment, and shipping services take control of all state changes made in their respective topics.

3 https://dspace.mit.edu/handle/1721.1/6952 and https://en.wikipedia.org/wiki/Actor_model.

4 See http://bit.ly/deds-end-of-an-era and http://bit.ly/deds-volt.

5 See https://thght.works/2IUS1zS and https://thght.works/2GdFYMq.
Figure 11-3. Here each circle represents an event; the color of the circle designates the topic it is in; a workflow evolves from Order Requested through to Order Completed; on the way, four services perform different state transitions in topics they are single writer to; the overall workflow spans them all

So, instead of sharing a global consistency model (e.g., via a database), we use the single writer principle to create local points of consistency that are connected via the event stream. There are a couple of variants on this pattern, which we will discuss in the next two sections.
As we'll see in "Scaling Concurrent Operations in Streaming Systems" on page 142 in Chapter 15, single writers can be scaled out linearly through partitioning, if we use Kafka's Streams API.
Command Topic

A common variant on this pattern uses two topics per entity, often named Command and Entity. This is logically identical to the base pattern, but the Command topic can be written to by any process and is used only for the initiating event. The Entity topic can be written to only by the owning service: the single writer. Splitting these two allows administrators to enforce the single writer principle strictly by configuring topic permissions. So, for example, we might break order events into two topics, shown in Table 11-1.
Table 11-1. A Command Topic is used to separate the initial command from subsequent events

Topic       | OrderCommandTopic | OrdersTopic
Event types | OrderRequest(ed)  | OrderValidated, OrderCompleted
Writer      | Any service       | Orders service
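As an illustration of enforcing the single writer through topic permissions, the sketch below uses Kafka's AdminClient to allow writes on OrdersTopic only for a single principal. The principal name and broker address are invented for the example, and it assumes the cluster runs an authorizer that denies anything not explicitly allowed.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class SingleWriterAcl {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // illustrative
        try (AdminClient admin = AdminClient.create(props)) {
            // Only the orders service principal may write to OrdersTopic,
            // making it the single writer for that entity topic.
            AclBinding ordersWriter = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "OrdersTopic", PatternType.LITERAL),
                new AccessControlEntry("User:orders-service", "*",
                    AclOperation.WRITE, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singletonList(ordersWriter)).all().get();
        }
    }
}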
Single Writer Per Transition

A less stringent variant of the single writer principle involves services owning individual transitions rather than all transitions in a topic (see Table 11-2). So, for example, the payment service might not use a Payment topic at all. It might simply add extra payment information to the existing order message (so there would be a Payment section of the Order schema). The payment service then owns just that one transition and the orders service owns the others.
Table 11-2. The order service and payment service both write to the orders topic, but each service is responsible for a different state transition

Service         | Topic       | Writable transition
Orders service  | OrdersTopic | OrderRequested->OrderValidated, PaymentReceived->OrderConfirmed
Payment service | OrdersTopic | OrderValidated->PaymentReceived

Atomicity with Transactions

Kafka provides a transactions feature with two guarantees:
Messages sent to different topics, within a transaction, will either all be written or none at all.
Messages sent to a single topic, in a transaction, will never be subject to duplicates, even on failure.
But transactions provide a very interesting and powerful feature to Kafka Streams: they join writes to state stores and writes to output topics together, atomically. Kafka's transactions are covered in full in Chapter 12.
Identity and Concurrency Control

The notion of identity is hugely important in business systems, yet it is often overlooked. For example, to detect duplicates, messages need to be uniquely identified, as we discuss in Chapter 12. Identity is also important for handling the potential for updates to be made at the same time, by implementing optimistic concurrency control.
The basic premise of identity is that it should correlate with the real world: an order has an OrderId, a payment has a PaymentId, and so on. If that entity is logically mutable (for example, an order that has several states, Created, Validated, etc., or a customer whose email address might be updated), then it should have a version identifier also:
"Customer"{
"CustomerId": "1234"
"Source": "ConfluentWebPortal"
"Version": "1"
...
}
The version identifier can then be used to handle concurrency. As an example (see Figure 11-4), say a user named Bob opens his customer details in one browser window (reading version 1), then opens the same page in a second browser window (also version 1). He then changes his address and submits in the second window, so the server increments the version to 2. If Bob goes back to the first window and changes his phone number, the update should be rejected due to a version comparison check on the server.
Figure 11-4. An example of optimistic concurrency control. The write fails at T2 because the data in the browser is now stale and the server performs a version number comparison before permitting the write.
The optimistic concurrency control technique can be implemented in synchronous or asynchronous systems equivalently.
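The check itself is tiny. The sketch below is a plain, in-memory illustration of the version comparison, not code from any accompanying example; in a real service the same comparison would happen against whatever store holds the entity.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CustomerStore {

    public static class Customer {
        public final String customerId;
        public final String address;
        public final long version;
        public Customer(String customerId, String address, long version) {
            this.customerId = customerId;
            this.address = address;
            this.version = version;
        }
    }

    private final Map<String, Customer> store = new ConcurrentHashMap<>();

    // Returns true if the write was applied, false if the caller's copy was stale
    // and must be re-read before retrying.
    public synchronized boolean updateAddress(String customerId, String newAddress, long expectedVersion) {
        Customer current = store.get(customerId);
        if (current != null && current.version != expectedVersion) {
            return false;   // someone else updated the record first
        }
        store.put(customerId, new Customer(customerId, newAddress, expectedVersion + 1));
        return true;
    }
}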
Limitations

The single writer principle and other related patterns discussed in this chapter are exactly that: patterns, which are useful in common cases, but don't provide general solutions. There will always be exceptions, particularly in environments where the implementation is constrained, for example, by legacy systems.
Summary

In this chapter we looked at why global consistency can be problematic and why eventual consistency can be useful. We adapted eventual consistency with the single writer principle, keeping its lack of timeliness but avoiding collisions.
Finally, we looked at implementing identity and concurrency control in event-driven systems.
CHAPTER 12
Transactions, but Not as We Know Them

Kafka ships with built-in transactions, in much the same way that most relational databases do. The implementation is quite different, as we will see, but the goal is similar: to ensure that our programs create predictable and repeatable results, even when things fail.
Transactions do three important things in a services context:
They remove duplicates, which cause many streaming operations to get incorrect results (even something as simple as a count).
They allow groups of messages to be sent, atomically, to different topics: for example, Order Confirmed and Decrease Stock Level, which would leave the system in an inconsistent state if only one of the two succeeded.
Because Kafka Streams uses state stores, and state stores are backed by a Kafka topic, when we save data to the state store, then send a message to another service, we can wrap the whole thing in a transaction. This property turns out to be particularly useful.
In this chapter we delve into transactions, looking at the problems they solve, how we should make use of them, and how they actually work under the covers.
The Duplicates Problem

Any service-based architecture is itself a distributed system, a field renowned for being difficult, particularly when things go wrong. Thought experiments like the Two Generals Problem and proofs like FLP highlight these inherent difficulties. But in practice the problem seems less complex. If you make a call to a service and it's not running for whatever reason, you retry, and eventually the call will complete.
One issue with this is that retries can result in duplicate processing, and this can cause very real problems. Taking a payment twice from someone's account will lead to an incorrect balance (Figure 12-1). Adding duplicate tweets to a user's feed will lead to a poor user experience. The list goes on.
Figure 12-1. The UI makes a call to the payment service, which calls an external payment provider; the payment service fails before returning to the UI; as the UI did not get a response, it eventually times out and retries the call; the user's account could be debited twice

In reality we handle these duplicate issues automatically in the majority of systems we build, as many systems simply push data to a database, which will automatically deduplicate based on the primary key. Such processes are naturally idempotent. So if a customer updates their address and we are saving that data in a database, we don't care if we create duplicates, as the worst-case scenario is that the database table that holds customer addresses gets updated twice, which is no big deal. This applies to the payment example also, so long as each one has a unique ID. As long as deduplication happens at the end of each use case, then, it doesn't matter how many duplicate calls are made in between. This is an old idea, dating back to the early days of TCP (Transmission Control Protocol). It's called the end-to-end principle.
The rub is this: for this natural deduplication to work, every network call needs to:
Have an appropriate key that defines its identity.
Be deduplicated in a database that holds an extensive history of these keys.
Or, duplicates have to be constantly considered in the business logic we write, which increases the cognitive overhead of this task.
Event-driven systems attempt to move away from this database-centric style of processing, instead executing business logic, communicating the results of that processing, and moving on.
The result of this is that most event-driven systems end up deduplicating on every message received, before it is processed, and every message sent out has a carefully chosen ID so it can be deduplicated downstream. This is at best a bit of a hassle. At worst it's a breeding ground for errors.

But if you think about it, this is no more an application layer concern than ordering of messages, arranging redelivery, or any of the other benefits that come with TCP. We choose TCP over UDP (User Datagram Protocol) because we want to program at a higher level of abstraction, where delivery, ordering, and so on are handled for us. So we're left wondering why these issues of duplication have leaked up into the application layer. Isn't this something our infrastructure should solve for us?
Transactions in Kafka allow the creation of long chains of services, where the processing of each step in the chain is wrapped in exactly-once guarantees. This reduces duplicates, which means services are easier to program and, as we'll see later in this chapter, transactions let us tie streams and state together when we implement storage either through Kafka Streams state stores or using the Event Sourcing design pattern. All this happens automatically if you are using the Kafka Streams API.
The bad news is that this isn't some magic fairy dust that sprinkles exactly-onceness over your entire system. Your system will involve many different parts, some based on Kafka, some based on other technologies, the latter of which won't be covered by the guarantee.

But it does sprinkle exactly-onceness over the Kafka bits, the interactions between your services (Figure 12-2). This frees services from the need to deduplicate data coming in and pick appropriate keys for data going out. So we can happily chain services together, inside an event-driven workflow, without these additional concerns. This turns out to be quite empowering.
Figure 12-2. Kafka's transactions provide guarantees for communication performed through Kafka, but not beyond it

Using the Transactions API to Remove Duplicates

As a simple example, imagine we have an account validation service. It takes deposits in, validates them, and then sends a new message back to Kafka marking the deposit as validated.
Kafka records the progress that each consumer makes by storing an offset in a special topic, called consumer_offsets. So to validate each deposit exactly once, we need to perform the final two actions, (a) send the Deposit Validated message back to Kafka, and (b) commit the appropriate offset to the consumer_offsets topic, as a single atomic unit (Figure 12-3). The code for this would look something like the following:
// Read and validate deposits
validatedDeposits = validate(consumer.poll(0))

// Send validated deposits & commit offsets atomically
producer.beginTransaction()
producer.send(validatedDeposits)
producer.sendOffsetsToTransaction(offsets(consumer))
producer.commitTransaction()
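Filled out with the concrete producer and consumer calls, the same loop might look roughly like the sketch below. The topic name and the validate() step are placeholders, and it assumes the producer was created with a transactional.id and the consumer with read_committed isolation and auto-commit disabled.

import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

public class DepositValidator {

    static void validateDeposits(KafkaConsumer<String, String> consumer,
                                 KafkaProducer<String, String> producer,
                                 String groupId) {
        producer.initTransactions();   // once per producer, before the first transaction

        ConsumerRecords<String, String> deposits = consumer.poll(Duration.ofMillis(100));
        producer.beginTransaction();
        try {
            Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
            for (ConsumerRecord<String, String> deposit : deposits) {
                // validate() stands in for the service's business logic
                producer.send(new ProducerRecord<>("validated-deposits",
                        deposit.key(), validate(deposit.value())));
                offsets.put(new TopicPartition(deposit.topic(), deposit.partition()),
                        new OffsetAndMetadata(deposit.offset() + 1));
            }
            // The outputs and the consumer's offsets commit (or abort) as one unit.
            producer.sendOffsetsToTransaction(offsets, groupId);
            producer.commitTransaction();
        } catch (RuntimeException e) {
            producer.abortTransaction();   // for fatal errors (e.g., fencing) close the producer instead
            throw e;
        }
    }

    static String validate(String deposit) { return deposit; }   // placeholder
}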
Figure 12-3. A single message operation is in fact two operations: a send and an acknowledge, which must be performed atomically to avoid duplication

If you are using the Kafka Streams API, no extra code is required. You simply enable the feature.
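For reference, enabling it is a one-line configuration change, sketched below with illustrative application and broker settings (newer client versions name the guarantee exactly_once_v2, but the idea is the same).

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceConfig {
    static Properties streamsProperties() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "deposit-validator");   // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // illustrative
        // This single setting turns on transactional, exactly-once processing
        // for the whole topology; no other code changes are required.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        return props;
    }
}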
Exactly Once Is Both Idempotence and Atomic Commit

As Kafka is a broker, there are actually two opportunities for duplication. Sending a message to Kafka might fail before an acknowledgment is sent back to the client, with a subsequent retry potentially resulting in a duplicate message. On the other side, the process reading from Kafka might fail before offsets are committed, meaning that the same message might be read a second time when the process restarts (Figure 12-4).
Figure 12-4. Message brokers provide two opportunities for failure: one when sending to the broker, and one when reading from it
So idempotence is required in the broker to ensure duplicates cannot be created in the log. Idempotence, in this context, is just deduplication. Each producer is given an identifier, and each message is given a sequence number. The combination of the two uniquely defines each batch of messages sent. The broker uses this unique sequence number to work out if a message is already in the log and discards it if it is. This is a significantly more efficient approach than storing every key you've ever seen in a database.
On the read side, we might simply deduplicate (e.g., in a database). But Kafka's transactions actually provide a broader guarantee, more akin to transactions in a database, tying all messages sent together in a single atomic commit. So idempotence is built into the broker, and then an atomic commit is layered on top.
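In configuration terms the two halves look something like this; the transactional ID and broker address are illustrative only.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducerFactory {
    static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // illustrative
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Broker-side deduplication of retried sends (the idempotence half).
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        // A stable transactional ID layers atomic commit on top of idempotence.
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "deposit-validator-1");   // illustrative
        return new KafkaProducer<>(props);
    }
}

Consumers that should see only committed data set isolation.level to read_committed on their side.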
How Kafka's Transactions Work Under the Covers

Looking at the code example in the previous section, you might notice that Kafka's transactions implementation looks a lot like transactions in a database. You start a transaction, write messages to Kafka, then commit or abort. But the whole model is actually pretty different, because of course it's designed for streaming.
One key difference is the use of marker messages that make their way through the various streams. Marker messages are an idea first introduced by <NAME> Lamport almost 30 years ago in a method called the Snapshot Marker Model. Kafka's transactions are an adaptation of this idea, albeit with a subtly different goal.
While this approach to transactional messaging is complex to implement, conceptually it's quite easy to understand (Figure 12-5). Take our previous example, where two messages were written to two different topics atomically. One message goes to the Deposits topic, the other to the consumer_offsets topic.
Begin markers are sent down both.1 We then send our messages. Finally, when we're done, we flush each topic with a Commit (or Abort) marker, which concludes the transaction.
Now the aim of a transaction is to ensure only committed data is seen by downstream programs. To make this work, when a consumer sees a Begin marker it starts buffering internally. Messages are held up until the Commit marker arrives. Then, and only then, are the messages presented to the consuming program. This buffering ensures that consumers only ever read committed data.
1 In practice a clever optimization is used to move buffering from the consumer to the broker, reducing memory pressure. Begin markers are also optimized out.
Figure 12-5. Conceptual model of transactions in Kafka

To ensure each transaction is atomic, sending the Commit markers involves the use of a transaction coordinator. There will be many of these spread throughout the cluster, so there is no single point of failure, but each transaction uses just one.
The transaction coordinator is the ultimate arbiter that marks a transaction committed atomically, and maintains a transaction log to back this up (this step implements two-phase commit).
For those that worry about performance, there is of course an overhead that comes with this feature, and if you were required to commit after every message, the performance degradation would be noticeable. But in practice there is no need for that, as the overhead is dispersed among whole batches of messages, allowing us to balance transactional overhead with worst-case latency. For example, batches that commit every 100 ms, with a 1 KB message size, have a 3% overhead when compared to in-order, at-least-once delivery. You can test this out yourself with the performance test scripts that ship with Kafka.
In reality, there are many subtle details to this implementation, particularly around recovering from failure, fencing zombie processes, and correctly allocating IDs, but what we have covered here is enough to provide a high-level understanding of how this feature works. For a comprehensive explanation of how transactions work, see the post "Transactions in Apache Kafka" by <NAME> and <NAME>.
Store State and Send Events Atomically

As we saw in Chapter 7, Kafka can be used to store data in the log, with the most common means being a state store (a disk-resident hash table, held inside the API, and backed by a Kafka topic) in Kafka Streams. As a state store gets its durability from a Kafka topic, we can use transactions to tie writes to the state store and writes to other output topics together. This turns out to be an extremely powerful pattern because it mimics the tying of messaging and databases together atomically, something that traditionally required painfully slow protocols like XA.
The database used by Kafka Streams is a state store. Because state stores are backed by Kafka topics, transactions let us tie messages we send and state we save in state stores together, atomically.
Imagine we extend the previous example so our validation service keeps track of the balance as money is deposited. So if the balance is currently $50, and we deposit $5 more, then the balance should go to $55. We record that $5 was deposited, but we also store this current balance, $55, by writing it to a state store
(or directly to a compacted topic). See Figure 12-6.
Figure 12-6. Three messages are sent atomically: a deposit, a balance update, and the acknowledgment

If transactions are enabled in Kafka Streams, all these operations will be wrapped in a transaction automatically, ensuring the balance will always be atomically in sync with deposits. You can achieve the same process with the producer and consumer by wrapping the calls manually in your code, and the current account balance can be reread on startup.
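A rough sketch of what this might look like in the Streams DSL is shown below; the topic names are invented for the example, and deposit amounts are modeled as plain longs to keep the serdes simple. Run with processing.guarantee set to exactly-once, the state-store update and the output send commit in the same transaction.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;

public class BalanceTopology {

    // Deposits are keyed by account ID with amounts as plain longs (e.g., cents).
    static StreamsBuilder build() {
        StreamsBuilder builder = new StreamsBuilder();

        KTable<String, Long> balances = builder
            .stream("validated-deposits", Consumed.with(Serdes.String(), Serdes.Long()))
            .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
            // The running balance lives in a state store backed by a changelog topic;
            // under exactly-once, updates to it and the output below commit together.
            .aggregate(() -> 0L,
                       (account, deposit, balance) -> balance + deposit,
                       Materialized.with(Serdes.String(), Serdes.Long()));

        balances.toStream().to("account-balances", Produced.with(Serdes.String(), Serdes.Long()));
        return builder;
    }
}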
What's powerful about this example is that it blends concepts of both messaging and state management. We listen to events, act, and create new events, but we also manage state, the current balance, in Kafka, all wrapped in the same transaction.
Do We Need Transactions? Can We Do All This with Idempotence?
People have been building both event- and request-driven systems for decades, simply by making their processes idempotent with identifiers and databases. But implementing idempotence comes with some challenges. While defining the ID of an order is relatively obvious, not all streams of events have such a clear concept of identity. If we had a stream of events representing the average account balance per region per hour, we could come up with a suitable key, but you can imagine it would be a lot more brittle and error-prone.
Also, transactions encapsulate the concept of deduplication entirely inside your service. You don't muddy the waters seen by other services downstream with any duplicates you might create. This makes the contract of each service clean and encapsulated. Idempotence, on the other hand, relies on every service that sits downstream correctly implementing deduplication, which clearly makes their contract more complex and error-prone.
What Can't Transactions Do?
There are a few limitations or potential misunderstandings of transactions that are worth noting. First, they work only in situations where both the input and the output go through Kafka. If you are calling an external service (e.g., via HTTP), updating a database, writing to stdout, or anything other than writing to and from the Kafka broker, transactional guarantees won't apply and calls can be duplicated. So, much like using a transactional database, transactions work only when you are using Kafka.
Also akin to accessing a database, transactions commit when messages are sent, so once they are committed there is no way to roll them back, even if a subsequent transaction downstream fails. So if the UI sends a transactional message to the orders service and the orders service fails while sending messages of its own, any messages the orders service sent would be rolled back, but there is no way to roll back the transaction in the UI. If you need multiservice transactions, consider implementing sagas.
Transactions commit atomically in the broker (just like a transaction would commit in a database), but there are no guarantees regarding when an arbitrary consumer will read those messages. This may seem obvious, but it is sometimes a point of confusion. Say we send a message to the Orders topic and a message to the Payments topic inside a transaction; there is no way to know when a consumer will read one or the other, or that they might read them together. But again note that this is identical to the contract offered by a transactional database.
Finally, in the examples here we use the producer and consumer APIs to demonstrate how transactions work. But the Kafka Streams API actually requires no extra coding whatsoever. All you do is set a configuration and exactly-once processing is enabled automatically.
But while there is full support for individual producers and consumers, transactions are not currently supported for consumer groups (although this will change). If you have this requirement, use the Kafka Streams API, where consumer groups are supported in full.
Making Use of Transactions in Your Services

In Chapter 5 we described a design pattern known as Event Collaboration. In this pattern messages move from service to service, creating a workflow. It's initiated with an Order Requested event and it ends with Order Complete. In between, several different services get involved, moving the workflow forward.
Transactions are important in complex workflows like this because the end-to-end principle is hard to apply. Without them, deduplication would need to happen in every service. Moreover, building a reliable streaming application without transactions turns out to be pretty tough. There are a couple of reasons for this: (a) Streams applications make use of many intermediary topics, and deduplicating them after each step is a burden (and would be near impossible in KSQL); (b) the DSL provides a range of one-to-many operations (e.g., flatMap()), which are hard to manage idempotently without the transactions API. Kafka's transactions feature resolves these issues, along with atomically tying stream processing with the storing of intermediary state in state stores.
Summary

Transactions affect the way we build services in a number of specific ways:
They take idempotence right off the table for services interconnected with Kafka. So when we build services that follow the pattern read, process, (save), send, we don't need to worry about deduplicating inputs or constructing keys for outputs.
We no longer need to worry about ensuring there are appropriate unique keys on the messages we send. This typically applies less to topics containing business events, which often have good keys already. But it's useful when we're managing derivative/intermediary data: for example, when we're remapping events, creating aggregate events, or using the Streams API.
Where Kafka is used for persistence, we can wrap both messages we send to other services and state we need internally in a single transaction that will commit or fail. This makes it easier to build simple stateful apps and services.
So, to put it simply, when you are building event-based systems, Kafka's transactions free you from the worries of failure and retries in a distributed world, worries that really should be a concern of the infrastructure, not of your code. This raises the level of abstraction, making it easier to get accurate, repeatable results from large estates of fine-grained services.
Having said all that, we should also be careful. Transactions remove just one of the issues that come with distributed systems, but there are many more. Coarse-grained services still have their place. But in a world where we want to be fast and nimble, streaming platforms raise the bar, allowing us to build finer-grained services that behave as predictably in complex chains as they would standing alone.
CHAPTER 13
Evolving Schemas and Data over Time

Schemas are the APIs used by event-driven services, so a publisher and subscriber need to agree on exactly how a message is formatted. This creates a logical coupling between sender and receiver based on the schema they both share. In the same way that request-driven services make use of service discovery technology to discover APIs, event-driven technologies need some mechanism to discover what topics are available, and what data (i.e., schema) they provide.
There are a fair few options available for schema management: Protobuf and JSON Schema are both popular, but most projects in the Kafka space use Avro.
For central schema management and verification, Confluent has an open source Schema Registry that provides a central repository for Avro schemas.
Using Schemas to Manage the Evolution of Data in Time
Schemas provide a contract that defines what a message should look like. This is pretty intuitive. Importantly, though, most schema technologies provide a mechanism for validating whether a message in a new schema is backward-compatible with previous versions (or vice versa). This property is essential. (Don't use Java serialization or any non-evolvable format across services that change independently.)
Say you added a return code field to the schema for an order; this would be a backward-compatible change. Programs running with the old schema would still be able to read messages, but they wouldn't see the return code field (termed forward compatibility). Programs with the new schema would be able to read the whole message, with the return code field included (termed backward compatibility).
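As a small, self-contained illustration (using Apache Avro's Java library rather than the Schema Registry itself), the check below confirms that adding returnCode with a default keeps a v2 reader compatible with data written in v1. The schema strings are invented for the example.

import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;

public class OrderSchemaEvolution {

    static final Schema V1 = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
        + "{\"name\":\"orderId\",\"type\":\"string\"}]}");

    // Adding a field with a default is the canonical backward-compatible change:
    // readers using the new schema can still decode messages written with the old one.
    static final Schema V2 = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
        + "{\"name\":\"orderId\",\"type\":\"string\"},"
        + "{\"name\":\"returnCode\",\"type\":\"int\",\"default\":0}]}");

    public static void main(String[] args) {
        SchemaCompatibility.SchemaPairCompatibility result =
            SchemaCompatibility.checkReaderWriterCompatibility(V2, V1);   // new reader, old writer
        System.out.println(result.getType());   // expected: COMPATIBLE
    }
}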
Unfortunately, you can't move or remove fields from a schema in a compatible way, although it's typically possible to synthesize a move with a clone. The data will be duplicated in two places until such time as a breaking change can be released.

This ability to evolve a schema with additive changes that don't break old programs is how most shared messaging models are managed over time.
The Confluent Schema Registry can be used to police this approach. The Schema Registry provides a mapping between topics in Kafka and the schema they use (Figure 13-1). It also enforces compatibility rules before messages are added. So the Schema Registry will check every message sent to Kafka for Avro compatibility, ensuring that incompatible messages will fail on publication.
Figure 13-1. Calling out to the Schema Registry to validate schema compatibility when reading and writing orders in the orders service

Handling Schema Change and Breaking Backward Compatibility
The pain of long schema migrations is one of the telltale criticisms of the relational era. But the reality is that evolving schemas are a fundamental attribute of the way data ages. The main difference between then and now is that late-bound/schema-on-read approaches allow many incompatible data schemas to exist in the same table, topic, or the like at the same time. This pushes the problem of translating the format, from old to new, into the application layer, hence the name schema on read.
Schema on read turns out to be useful in a couple of ways. In many cases recent data is more valuable than older data, so programs can move forward without migrating older data they don't really care about. This is a useful, pragmatic solution used broadly in practice, particularly with messaging. But schema on read can also be simple to implement if the parsing code for the previous schemas already exists in the codebase (which is often the case in practice).
However, whichever approach you take, unless you do a single holistic big-bang release, you will end up handling the schema-evolution problem, be it by physically migrating datasets forward or by having different application-layer routines. Kafka is no different.
As we discussed in the previous section, most of the time backward compatibility between schemas can be maintained through additive changes (i.e., new fields, but not moves or deletes). But periodically schemas will need upgrading in a non-backward-compatible way. The most common approach for this is Dual Schema Upgrade Window, where we create two topics, orders-v1 and orders-v2, for messages with the old and new schemas, respectively. Assuming orders are mastered by the orders service, this gives you a few options:
The orders service can dual-publish in both schemas at the same time, to two topics, using Kafka's transactions API to make the publication atomic. (This approach doesn't solve back-population so isn't appropriate for topics used for long-term storage.)
The orders service can be repointed to write to orders-v2. A Kafka Streams job is added to down-convert from the orders-v2 topic to the orders-v1 for backward compatibility. (This also doesnt solve back-population.) See Figure 13-2.
The orders service continues to write to orders-v1. A Kafka Streams job is added that up-converts from orders-v1 topic to orders-v2 topic until all cli ents have upgraded, at which point the orders service is repointed to ordersv2. (This approach handles back-population.)
The orders service can migrate its dataset internally, in its own database,
then republish the whole view into the log in the orders-v2 topic. It then continues to write to both orders-v1 and orders-v2 using the appropriate formats. (This approach handles back-population.)
All four approaches achieve the same goal: to give services a window in which they can upgrade. The last two options make it easier to port historic messages from the v1 to the v2 topics, as the Kafka Streams job will do this automatically if it is started from offset 0. This makes it better suited to long-retention topics such as those used in Event Sourcing use cases.
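As a rough illustration of the conversion jobs mentioned above, an up-converter can be a few lines of Kafka Streams. The sketch below is not the book's implementation: the topic names, the use of Avro GenericRecord, and the upConvert mapping are placeholders for whatever your own schemas require, and default serdes are assumed to be configured.
import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
public class OrdersUpConverter {
public static void main(String[] args) {
StreamsBuilder builder = new StreamsBuilder();
// Read the old-format topic, convert each value, and write the new-format topic.
builder.<String, GenericRecord>stream("orders-v1")
.mapValues(OrdersUpConverter::upConvert)
.to("orders-v2");
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-up-converter");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
new KafkaStreams(builder.build(), props).start();
}
// Placeholder: map a v1 record onto the v2 schema however your domain requires.
private static GenericRecord upConvert(GenericRecord v1) {
return v1;
}
}
Started from offset 0, the same shape of job also back-populates orders-v2 with the full history of orders-v1.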
Figure 13-2. Dual Schema Upgrade Window: the same data coexists in two topics, with different schemas, so there is a window during which services can upgrade
Services continue in this dual-topic mode until fully migrated to the v2 topic, at which point the v1 topic can be archived or deleted as appropriate.
As an aside, we discussed the single writer principle in Chapter 11. One of the reasons for applying this approach is that it makes schema upgrades simpler. If we had three different services writing orders, it would be much harder to schedule a non-backward-compatible upgrade without a conjoined release.
Collaborating over Schema Change
In the previous section we discussed how to roll out a non-backward-compatible schema change. However, before such a process ensues, or even before we make a minor change to a schema, there is usually some form of team-to-team collaboration that takes place to work out whether the change is appropriate. This can take many forms. Sending an email to the affected teams, telling them what the new schema is and when it's going live, is pretty common, as is having a central team that manages the process. Neither of these approaches works particularly well in practice, though. The email method lacks structure and accountability. The central team approach stifles progress, because you have to wait for the central team to make the change and then arrange some form of sign-off.
The best approach I've seen for this is to use GitHub. This works well because (a) schemas are code and should be version-controlled for all the same reasons code is, and (b) GitHub lets implementers propose a change and raise a pull request (PR), which they can code against while they build and test their system. Other
interested parties can review, comment, and approve. Once consensus is reached,
the PR can be merged and the new schema can be rolled out. It is this process for reliably reaching (and auditing) consensus on a change, without impeding the progress of the implementer unduly, that makes this approach the most useful option.
Handling Unreadable Messages
Schemas aren't always enough to ensure downstream applications will work. There is nothing to prevent a semantic error (for example, an unexpected character, invalid country code, negative quantity, or even invalid bytes, say due to corruption) from causing a process to stall. Such errors will typically hold up processing until the issue is fixed, which can be unacceptable in some environments.
Traditional messaging systems often include a related concept called a dead letter queue,1 which is used to hold messages that can't be sent, for example, because they cannot be routed to a destination queue. The concept doesn't apply in the same way to Kafka, but it is possible for consumers to not be able to read a message, either for semantic reasons or due to the message payload being invalid (e.g., the CRC check failing, on read, as the message has become corrupted).
Some implementers choose to create a type of dead letter queue of their own in a separate topic. If a consumer cannot read a message for whatever reason, it is placed on this error queue so processing can continue. Later the error queue can be reprocessed.
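A minimal sketch of that pattern follows: the consumer hands each record to its business logic and forwards anything it cannot process to an error topic so the main partition never stalls. The topic names and the process() stub are assumptions for illustration, not part of the original example; the consumer and producer are presumed to be configured elsewhere with String serdes.
import java.time.Duration;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
public class DeadLetterExample {
static void pollLoop(KafkaConsumer<String, String> consumer,
KafkaProducer<String, String> errorProducer) {
consumer.subscribe(List.of("orders"));
while (true) {
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
for (ConsumerRecord<String, String> record : records) {
try {
process(record.value()); // business logic / validation
} catch (Exception e) {
// Park the unreadable message on an error topic so processing continues;
// the error topic can be reprocessed later.
errorProducer.send(new ProducerRecord<>("orders-errors", record.key(), record.value()));
}
}
}
}
static void process(String order) { /* ... */ }
}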
Deleting Data
When you keep datasets in the log for longer periods of time, or even indefinitely, there are times you need to delete messages, correct errors or corrupted data, or redact sensitive sections. A good example of this is recent regulations like the General Data Protection Regulation (GDPR), which, among other things, gives users the right to be forgotten.
The simplest way to remove messages from Kafka is to simply let them expire. By default, Kafka will keep data for two weeks, and you can tune this to an arbitrarily large (or small) period of time. There is also an Admin API that lets you delete messages explicitly if they are older than some specified time or offset. When using Kafka for Event Sourcing or as a source of truth, you typically don't need delete. Instead, removal of a record is performed with a null value (or delete marker as appropriate). This ensures the fully versioned history is held intact, and most Connect sinks are built with delete markers in mind.
1 See https://www.rabbitmq.com/dlx.html and https://ibm.co/2ui3rKO.
But for regulatory requirements like GDPR, adding a delete marker isn't enough, as all data needs to be physically removed from the system. There are a variety of approaches to this problem. Some people favor a security-based approach such as crypto shredding, but for most people, compacted topics are the tool of choice, as they allow messages to be explicitly deleted or replaced via their key.
But data isn't removed from compacted topics in the same way as in a relational database. Instead, Kafka uses a mechanism closer to those used by Cassandra and HBase, where records are marked for removal and then later deleted when the compaction process runs. Deleting a message from a compacted topic is as simple as writing a new message to the topic with the key you want to delete and a null value. When compaction runs, the message will be deleted forever.
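In code, that tombstone is just a record with a null value. The snippet below is a minimal sketch against an assumed compacted orders topic keyed by OrderId; the broker address and key are illustrative.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
public class OrderDelete {
public static void main(String[] args) {
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
// A null value is a tombstone: once compaction runs, earlier records
// with this key are removed from the topic forever.
producer.send(new ProducerRecord<>("orders", "order-123", null));
}
}
}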
If the key of the topic is something other than the CustomerId, then you need some process to map the two. For example, if you have a topic of Orders, then you need a mapping of customer to OrderId held somewhere. Then, to forget a customer, simply look up their orders and either explicitly delete them from Kafka, or alternatively redact any customer information they contain. You might roll this into a process of your own, or you might do it using Kafka Streams if you are so inclined.
There is a less common case, which is worth mentioning, where the key (which Kafka uses for ordering) is completely different from the key you want to be able to delete by. Let's say that you need to key your orders by ProductId. This choice of key won't let you delete orders for individual customers, so the simple method just described wouldn't work. You can still achieve this by using a key that is a composite of the two: make the key [ProductId][CustomerId], then use a custom partitioner in the producer that extracts the ProductId and partitions only on that value. Then you can delete messages using the mechanism discussed earlier, using the [ProductId][CustomerId] pair as the key.
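A rough sketch of such a partitioner is below. It assumes the composite key is a string of the form "ProductId|CustomerId"; the separator and key format are illustrative choices, not from the original text.
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;
public class ProductIdPartitioner implements Partitioner {
@Override
public int partition(String topic, Object key, byte[] keyBytes,
Object value, byte[] valueBytes, Cluster cluster) {
// Key format assumed to be "ProductId|CustomerId"; partition on ProductId only,
// so ordering per product is preserved while deletes can still target a customer.
String productId = ((String) key).split("\\|")[0];
int numPartitions = cluster.partitionCountForTopic(topic);
return Utils.toPositive(Utils.murmur2(productId.getBytes())) % numPartitions;
}
@Override
public void configure(Map<String, ?> configs) {}
@Override
public void close() {}
}
The class is then registered on the producer via the partitioner.class configuration property.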
Triggering Downstream Deletes
Quite often you'll be in a pipeline where Kafka is moving data from one database to another using Kafka connectors. In this case, you need to delete the record in the originating database and have that propagate through Kafka to any Connect sinks you have downstream. If you're using CDC this will just work: the delete will be picked up by the source connector, propagated through Kafka, and deleted in the sinks. If you're not using a CDC-enabled connector, you'll need some custom mechanism for managing deletes.
Segregating Public and Private Topics
When using Kafka for Event Sourcing or stream processing, in the same cluster through which different services communicate, we typically want to segregate private, internal topics from shared, business topics.
Some teams prefer to do this by convention, but you can apply a stricter segregation using the authorization interface. Essentially you assign read/write permissions, for your internal topics, only to the services that own them. This can be implemented through simple runtime validation, or alternatively fully secured via TLS or SASL.
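For the stricter route, ACLs can be created programmatically with Kafka's AdminClient. The sketch below assumes a hypothetical orders-internal topic, a User:orders-service principal, and a cluster with an authorizer enabled; it is one possible shape, not the book's implementation.
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;
public class InternalTopicAcls {
public static void main(String[] args) throws Exception {
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
try (AdminClient admin = AdminClient.create(props)) {
ResourcePattern internal =
new ResourcePattern(ResourceType.TOPIC, "orders-internal", PatternType.LITERAL);
// Only the owning service's principal may read or write its internal topic.
AclBinding read = new AclBinding(internal,
new AccessControlEntry("User:orders-service", "*", AclOperation.READ, AclPermissionType.ALLOW));
AclBinding write = new AclBinding(internal,
new AccessControlEntry("User:orders-service", "*", AclOperation.WRITE, AclPermissionType.ALLOW));
admin.createAcls(List.of(read, write)).all().get();
}
}
}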
Summary
In this chapter we looked at a collection of somewhat disparate issues that affect event-driven systems. We considered the problem of schema change: something that is inevitable in the real world. Often this can be managed simply by evolving the schema with a format like Avro or Protobuf that supports backward compatibility. At other times evolution will not be possible and the system will have to undergo a non-backward-compatible change. The dual-schema upgrade window is one way to handle this.
Then we briefly looked at handling unreadable messages as well as how data can be deleted. For many users deleting data won't be an issue (it will simply age out of the log), but for those that keep data for longer periods this typically becomes important.
PART V
Implementing Streaming Services with Kafka
A reactive system does not compute or perform a function, it maintains a certain ongoing relationship with its environment.
<NAME> and <NAME>, On the Development of Reactive Systems, 1985
CHAPTER 14
Kafka Streams and KSQL
When it comes to building event-driven services, the Kafka Streams API provides the most complete toolset for handling a distributed, asynchronous world.
Kafka Streams is designed to perform streaming computations. We discussed a simple example of such a use case, where we processed app open/close events emitted from mobile phones in Chapter 2. We also touched on its stateful elements in Chapter 6. This led us to three types of services we can build: event-driven, streaming, and stateful streaming.
In this chapter we look more closely at this unique tool for stateful stream processing, along with its powerful declarative interface: KSQL.
A Simple Email Service Built with Kafka Streams and KSQL
Kafka Streams is the core API for stream processing on the JVM (Java, Scala, Clojure, etc.). It is based on a DSL (domain-specific language) that provides a declaratively styled interface where streams can be joined, filtered, grouped, or aggregated via the DSL itself. It also provides functionally styled mechanisms (map, flatMap, transform, peek, etc.) for adding bespoke processing of messages one at a time. Importantly, you can blend these two approaches together in the services you build, with the declarative interface providing a high-level abstraction for SQL-like operations and the more functional methods adding the freedom to branch out into any arbitrary code you may wish to write.
But what if you're not running on the JVM? In this case you'd use KSQL. KSQL provides a simple, interactive SQL-like wrapper for the Kafka Streams API. It can be run standalone, for example, via the Sidecar pattern, and called remotely. As KSQL utilizes the Kafka Streams API under the hood, we can use it to do the same kind of declarative slicing and dicing. We can also apply custom processing
either by implementing a user-defined function (UDF) directly or, more commonly, by pushing the output to a Kafka topic and using a native Kafka client, in whatever language our service is built in, to process the manipulated streams one message at a time. Whichever approach we take, these tools let us model business operations in an asynchronous, nonblocking, and coordination-free manner.
Let's consider something more concrete. Imagine we have a service that sends emails to platinum-level clients (Figure 14-1). We can break this problem into two parts. First, we prepare by joining a stream of orders to a table of customers and filtering for the platinum clients. Second, we need code to construct and send the email itself. We would do the former in the DSL and the latter with a per-message function:
//Join customers and orders
orders.join(customers, Tuple::new)
//Consider confirmed orders for platinum customers
.filter((k, tuple) -> tuple.customer.level().equals(PLATINUM)
&& tuple.order.state().equals(CONFIRMED))
//Send email for each customer/order pair
.peek((k, tuple) -> emailer.sendMail(tuple));
The code for this is available on GitHub.
Figure 14-1. An example email service that joins orders and customers, then sends an email
We can perform the same operation using KSQL (Figure 14-2). The pattern is the same; the event stream is dissected with a declarative statement, then processed one record at a time:
//Create a stream of confirmed orders for platinum customers
ksql> CREATE STREAM platinum_emails AS
SELECT * FROM orders
WHERE client_level == 'PLATINUM' AND state == 'CONFIRMED';
Then we implement the emailer as a simple consumer using Kafka's Node.js API (though a wide number of languages are supported) with KSQL running as a sidecar.
Figure 14-2. Executing a streaming operation as a sidecar, with the resulting stream being processed by a Node.js client
Windows, Joins, Tables, and State Stores
Chapter 6 introduced the notion of holding whole tables inside the Kafka Streams API, making services stateful. Here we look a little more closely at how both streams and tables are implemented, along with some of the other core features.
Let's revisit the email service example once again, where an email is sent to confirm payment of a new order, as pictured in Figure 14-3. We apply a stream-stream join, which waits for corresponding Order and Payment events to both be present in the email service before triggering the email code. The join behaves much like a logical AND.
Figure 14-3. A stream-stream join between orders and payments
Incoming event streams are buffered for a defined period of time (denoted retention). But to avoid doing all of this buffering in memory, state stores (disk-backed hash tables) overflow the buffered streams to disk. Thus, regardless of which event turns up later, the corresponding event can be quickly retrieved from the buffer so the join operation can complete.
Kafka Streams also manages whole tables. Tables are a local manifestation of a complete topic (usually compacted) held in a state store by key. (You might also think of them as a stream with infinite retention.) In a services context, such tables are often used for enrichments. So to look up the customer's email, we might use a table loaded from the Customers topic in Kafka.
The nice thing about using a table is that it behaves a lot like tables in a database. So when we join a stream of orders to a table of customers, there is no need to worry about retention periods, windows, or other such complexities. Figure 14-4 shows a three-way join between orders, payments, and customers, where customers are represented as a table.
Figure 14-4. A three-way join between two streams and a table
There are actually two types of table in Kafka Streams: KTables and Global KTables. With just one instance of a service running, these behave equivalently. However, if we scaled our service out, so it had four instances running in parallel, we'd see slightly different behaviors. This is because Global KTables are broadcast: each service instance gets a complete copy of the entire table. Regular KTables are partitioned: the dataset is spread over all service instances.
Whether a table is broadcast or partitioned affects the way it can perform joins. With a Global KTable, because the whole table exists on every node, we can join to any attribute we wish, much like a foreign key join in a database. This is not true in a KTable. Because it is partitioned, it can be joined only by its primary key, just like you have to use the primary key when you join two streams. So to join a KTable or stream by an attribute that is not its primary key, we must perform a repartition. This is discussed in "Rekey to Join" on page 145 in Chapter 15.
So, in short, Global KTables work well as lookup tables or star joins but take up more space on disk because they are broadcast. KTables let you scale your services out when the dataset is larger, but may require that data be rekeyed.1
The final use of the state store is to save information, just like we might write data to a regular database (Figure 14-5). Anything we save can be read back again later, say after a restart. So we might expose an Admin interface to our email service that provides statistics on emails that have been sent. We could store these stats in a state store and they'll be saved locally as well as being backed up to Kafka, using what's called a changelog topic, inheriting all of Kafka's durability guarantees.
Figure 14-5. Using a state store to keep application-specific state within the Kafka Streams API as well as backed up in Kafka
1 The difference between these two is actually slightly subtler.
Summary
This chapter provided a brief introduction to streams, tables, and state stores: three of the most important elements of a streaming application. Streams are infinite and we process them a record at a time. Tables represent a whole dataset, materialized locally, which we can join to much like a database table. State stores behave like dedicated databases which we can read and write to directly with any information we might wish to store. These features are of course just the tip of the iceberg, and both Kafka Streams and KSQL provide a far broader set of features, some of which we explore in Chapter 15, but they all build on these base concepts.
CHAPTER 15
Building Streaming Services
An Order Validation Ecosystem
Having developed a basic understanding of Kafka Streams, now let's look at the techniques needed to build a small streaming services application. We will base this chapter around a simple order processing workflow that validates and processes orders in response to HTTP requests, mapping the synchronous world of a standard REST interface to the asynchronous world of events, and back again.
Download the code for this example from GitHub.
Starting from the lefthand side of Figure 15-1, the REST interface provides methods to POST and GET orders. Posting an order creates an Order Created event in Kafka. Three validation engines (Fraud, Inventory, Order Details) subscribe to these events and execute in parallel, emitting a PASS or FAIL based on whether each validation succeeds. The result of these validations is pushed through a separate topic, Order Validations, so that we retain the single writer relationship between the orders service and the Orders topic.1 The results of the various validation checks are aggregated back in the orders service, which then moves the order to a Validated or Failed state, based on the combined result.
Validated orders accumulate in the Orders view, where they can be queried historically. This is an implementation of the CQRS design pattern (see "Command Query Responsibility Segregation" on page 61 in Chapter 7). The email service sends confirmation emails.
1 In this case we choose to use a separate topic, Order Validations, but we might also choose to update the Orders topic directly using the single-writer-per-transition approach discussed in Chapter 11.
Figure 15-1. An order processing system implemented as streaming services
The inventory service both validates orders and reserves inventory for the purchase (an interesting problem, as it involves tying reads and writes together atomically). We look at this in detail later in this chapter.
Join-Filter-Process
Most streaming systems implement the same broad pattern where a set of streams is prepared, and then work is performed one event at a time. This involves three steps:
1. Join. The DSL is used to join a set of streams and tables emitted by other services.
2. Filter. Anything that isn't required is filtered. Aggregations are often used here too.
3. Process. The join result is passed to a function where business logic executes. The output of this business logic is pushed into another stream.
This pattern is seen in most services but is probably best demonstrated by the email service, which joins orders, payments, and customers, forwarding the result to a function that sends an email. The pattern can be implemented in either Kafka Streams or KSQL equivalently.
Event-Sourced Views in Kafka Streams
To allow users to perform an HTTP GET, and potentially retrieve historical orders, the orders service creates a queryable event-sourced view. (See "The Event-Sourced View" on page 71 in Chapter 7.) This works by pushing orders into a set of state stores partitioned over the three instances of the Orders view, allowing load and storage to be spread between them.
Figure 15-2. Close-up of the Orders Service, from Figure 15-1, demonstrating the materialized view it creates which can be accessed via an HTTP GET; the view represents the Query-side of the CQRS pattern and is spread over all three instances of the Orders Service
Because data is partitioned it can be scaled out horizontally (Kafka Streams supports dynamic load rebalancing), but it also means GET requests must be routed to the right node: the one that has the partition for the key being requested. This is handled automatically via the interactive queries functionality in Kafka Streams.2
There are actually two parts to this. The first is the query, which defines what data goes into the view. In this case we are grouping orders by their key (so new orders overwrite old orders), with the result written to a state store where it can be queried. We might implement this with the Kafka Streams DSL like so:
builder.stream(ORDERS.name(), serializer)
.groupByKey(groupSerializer)
.reduce((agg, newVal) -> newVal, getStateStore())
2 It is also common practice to implement such event-sourced views via Kafka Connect and your database of choice, as we discussed in "Query a Read-Optimized View Created in a Database" on page 69 in Chapter 7. Use this method when you need a richer query model or greater storage capacity.
The second part is to expose the state store(s) over an HTTP endpoint, which is simple enough, but when running with multiple instances requests must be routed to the correct partition and instance for a certain key. Kafka Streams includes a metadata service that does this for you.
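A rough sketch of how that routing can be done is shown below. It uses the interactive-queries metadata lookup as it existed in older Kafka Streams releases (metadataForKey and the QueryableStoreTypes-based store accessor; newer versions expose equivalent but renamed methods), and the store name, Order type, and the isThisHost/forwardRequest helpers are placeholders, not part of the book's code.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.apache.kafka.streams.state.StreamsMetadata;
// Route a GET for an order to whichever instance holds that key's partition.
Order lookupOrder(KafkaStreams streams, String orderId) {
StreamsMetadata meta =
streams.metadataForKey("orders-store", orderId, Serdes.String().serializer());
if (isThisHost(meta.host(), meta.port())) {
ReadOnlyKeyValueStore<String, Order> store =
streams.store("orders-store", QueryableStoreTypes.keyValueStore());
return store.get(orderId);
}
// Otherwise proxy the HTTP GET to meta.host():meta.port() (not shown).
return forwardRequest(meta.host(), meta.port(), orderId);
}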
Collapsing CQRS with a Blocking Read
The orders service implements a blocking HTTP GET so that clients can read their own writes. This technique is used to collapse the asynchronous nature of the CQRS pattern. So, for example, if a client wants to perform a write operation, immediately followed by a read, the event might not have propagated to the view, meaning they would either get an error or an incorrect value.
One solution is to block the GET operation until the event arrives (or a configured timeout passes), collapsing the asynchronicity of the CQRS pattern so that it appears synchronous to the client. This technique is essentially long polling. The orders service, in the example code, implements this technique using nonblocking IO.
Scaling Concurrent Operations in Streaming Systems
The inventory service is interesting because it needs to implement several specialist techniques to ensure it works accurately and consistently. The service performs a simple operation: when a user purchases an iPad, it makes sure there are enough iPads available for the order to be fulfilled, then physically reserves a number of them so no other process can take them (Figure 15-3). This is a little trickier than it may seem initially, as the operation involves managing atomic state across topics. Specifically:
1. Validate whether there are enough iPads in stock (inventory in warehouse minus items reserved).
2. Update the table of reserved items to reserve the iPad so no one else can take it.
3. Send out a message that validates the order.
Figure 15-3. The inventory service validates orders by ensuring there is enough inventory in stock, then reserving items using a state store, which is backed by Kafka; all operations are wrapped in a transaction
This will work reliably only if we:
- Enable Kafka's transactions feature.
- Ensure that data is partitioned by ProductId before this operation is performed.
The first point should be pretty obvious: if we fail and we're not wrapped in a transaction, we have no idea what state the system will be in. But the second point should be a little less clear, because for it to make sense we need to think about this particular operation being scaled out linearly over several different threads or machines.
Stateful stream processing systems like Kafka Streams have a novel and high-performance mechanism for managing stateful problems like these concurrently. We have a single critical section:
1. Read the number of unreserved iPads currently in stock.
2. Reserve the iPads requested on the order.
Let's first consider how a traditional (i.e., not stateful) streaming system might work (Figure 15-4). If we scale the operation to run over two parallel processes, we would run the critical section inside a transaction in a (shared) database. So both instances would bottleneck on the same database instance.
Figure 15-4. Two instances of a service manage concurrent operations via a shared database
Stateful stream processing systems like Kafka Streams avoid remote transactions or cross-process coordination. They do this by partitioning the problem over a set of threads or processes using a chosen business key. (Partitions and partitioning were discussed in Chapter 4.) This provides the key (no pun intended) to scaling these systems horizontally.
Partitioning in Kafka Streams works by rerouting messages so that all the state required for one particular computation is sent to a single thread, where the computation can be performed.3 The approach is inherently parallel, which is how streaming systems achieve such high message-at-a-time processing rates (for example, in the use case discussed in Chapter 2). But the approach works only if there is a logical key that cleanly segregates all operations: both state that they need, and state they operate on.
So splitting (i.e., partitioning) the problem by ProductId ensures that all operations for one ProductId will be sequentially executed on the same thread. That means all iPads will be processed on one thread, all iWatches will be processed on one (potentially different) thread, and the two will require no coordination between each other to perform the critical section (Figure 15-5). The resulting operation is atomic (thanks to Kafka's transactions), can be scaled out horizontally, and requires no expensive cross-network coordination. (This is similar to the Map phase in MapReduce systems.)
3 As an aside, one of the nice things about this feature is that it is managed by Kafka, not Kafka Streams. Kafka's Consumer Group Protocol lets any group of consumers control how partitions are distributed across the group.
Figure 15-5. Services using the Kafka Streams API partition both event streams and stored state across the various services, which means all data required to run the critical section exists locally and is accessed by a single thread
The inventory service must rearrange orders so they are processed by ProductId. This is done with an operation called a rekey, which pushes orders into a new intermediary topic in Kafka, this time keyed by ProductId, and then back out to the inventory service. The code is very simple:
orders.selectKey((id, order) -> order.getProduct()) //rekey by ProductId
Part 2 of the critical section is a state mutation: inventory must be reserved. The inventory service does this with a Kafka Streams state store (a local, disk-resident hash table, backed by a Kafka topic). So each thread executing will have a state store for reserved stock for some subset of the products. You can program with these state stores much like you would program with a hash map or key/value store, but with the benefit that all the data is persisted to Kafka and restored if the process restarts. A state store can be created in a single line of code:
KeyValueStore<Product, Long> store = context.getStateStore(RESERVED);
Then we make use of it, much like a regular hash table:
//Get the current reserved stock for this product
Long reserved = store.get(order.getProduct());
//Add the quantity for this order and submit it back
store.put(order.getProduct(), reserved + order.getQuantity());
Writing to the store also partakes in Kafka's transactions, discussed in Chapter 12.
Rekey to Join
We can apply exactly the same technique used in the previous section, for partitioning writes, to partitioning reads (e.g., to do a join). Say we want to join a stream of orders (keyed by OrderId) to a table of warehouse inventory (keyed by ProductId), as we do in Figure 15-3. The join will have to use the ProductId.
This is what would be termed a foreign key relationship in relational parlance, mapping from WarehouseInventory.ProductId (its primary key) onto Order.ProductId (which isn't its primary key). To do this, we need to shuffle orders across the different nodes so that the orders end up being processed in the same thread that has the corresponding warehouse inventory assigned.
As mentioned earlier, this data redistribution step is called a rekey, and data arranged in this way is termed co-partitioned. Once rekeyed, the join condition can be performed without any additional network access required. For example, in Figure 15-6, inventory with productId=5 is collocated with orders for productId=5.
Figure 15-6. To perform a join between orders and warehouse inventory by ProductId, orders are repartitioned by ProductId, ensuring that for each product all corresponding orders will be on the same instance
Repartitioning and Staged Execution
Real-world systems are often more complex. One minute we're performing a join, the next we're aggregating by customer or materializing data in a view, with each operation requiring a different data distribution profile. Different operations like these chain together in a pipeline. The inventory service provides a good example of this. It uses a rekey operation to distribute data by ProductId. Once complete, it has to be rekeyed back to OrderId so it can be added to the Orders view (Figure 15-7). (The Orders view is destructive, that is, old versions of an order will be replaced by newer ones, so it's important that the stream be keyed by OrderId so that no data is lost.)
Figure 15-7. Two stages, which require joins based on different keys, are chained together via a rekey operation that changes the key from ProductId to OrderId
There are limitations to this approach, though. The keys used to partition the event streams must be invariant if ordering is to be guaranteed. So in this particular case it means the keys, ProductId and OrderId, on each order must remain fixed across all messages that relate to that order. Typically, this is a fairly easy thing to manage at a domain level (for example, by enforcing that, should a user want to change the product they are purchasing, a brand new order must be created).
Waiting for N Events
Another relatively common use case in business systems is to wait for N events to occur. This is trivial if each event is located in a different topic (it's simply a three-way join), but if events arrive on a single topic, it requires a little more thought.
The orders service, in the example discussed earlier in this chapter (Figure 15-1), waits for validation results from each of the three validation services, all sent via the same topic. Validation succeeds holistically only if all three return a PASS. Assuming you are counting messages with a certain key, the solution takes the form:
1. Group by the key.
2. Count occurrences of each key (using an aggregator executed with a window).
3. Filter the output for the required count.
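In the Streams DSL that shape is only a few lines. The sketch below assumes the validation results share the OrderId as their key and carry "PASS"/"FAIL" string values; the topic names and the five-minute window are illustrative choices, not from the original text.
import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.TimeWindows;
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> validations = builder.stream("order-validations"); // keyed by OrderId
validations
.filter((orderId, result) -> "PASS".equals(result))
.groupByKey()
// Count PASS results per order within an (illustrative) five-minute window.
.windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
.count()
// Release the order only once all three validations have passed.
.filter((windowedOrderId, count) -> count == 3)
.toStream()
.map((windowedOrderId, count) -> KeyValue.pair(windowedOrderId.key(), count))
.to("orders-fully-validated");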
Reflecting on the Design
Any distributed system comes with a baseline cost. This should go without saying. The solution described here provides good scalability and resiliency properties, but will always be more complex to implement and run than a simple, single-process application designed to perform the same logic. You should always carefully weigh the tradeoff between better nonfunctional properties and simplicity when designing a system. Having said that, a real system will inevitably be more complex, with more moving parts, so the pluggability and extensibility of this style of system can provide a worthy return against the initial upfront cost.
A More Holistic Streaming Ecosystem
In this final section we take a brief look at a larger ecosystem (Figure 15-8) that pulls together some of the main elements discussed in this book thus far, outlining how each service contributes, and the implementation patterns each might use:
Figure 15-8. A more holistic streaming ecosystem
Basket writer/view
These represent an implementation of CQRS, as discussed in "Command Query Responsibility Segregation" on page 61 in Chapter 7. The Basket writer proxies HTTP requests, forwarding them to the Basket topic in Kafka when a user adds a new item. The Confluent REST proxy (which ships with the Confluent distribution of Kafka) is used for this. The Basket view is an event-sourced view, implemented in Kafka Streams, with the contents of its
state stores exposed over a REST interface in a manner similar to the orders service in the example discussed earlier in this chapter (Kafka Connect and a database could be substituted also). The view represents a join between User and Basket topics, but much of the information is thrown away, retaining only the bare minimum: userId -> List[product]. This minimizes the view's footprint.
The Catalogue Filter view
This is another event-sourced view but requires richer support for pagination, so the implementation uses Kafka Connect and Cassandra.
Catalogue search
A third event-sourced view; this one uses Solr for its full-text search capabilities.
Orders service
Orders are validated and saved to Kafka. This could be implemented either as a single service or a small ecosystem like the one detailed earlier in this chapter.
Catalog service
A legacy codebase that manages changes made to the product catalog, initiated from an internal UI. This has comparatively fewer users, and an existing codebase. Events are picked up from the legacy Postgres database using a CDC connector to push them into Kafka. The single-message transforms feature reformats the messages before they are made public. Images are saved to a distributed filesystem for access by the web tier.
Shipping service
A streaming service leveraging the Kafka Streams API. This service reacts to orders as they are created, updating the Shipping topic as notifications are received from the delivery company.
Inventory service
Another streaming service leveraging the Kafka Streams API. This service updates inventory levels as products enter and leave the warehouse.
Archive
All events are archived to HDFS, including two fixed, T-1 and T-10, point-in-time snapshots for recovery purposes. This uses Kafka Connect and its HDFS connector.
Streams management
A set of stream processors manages creating latest/versioned topics where relevant (see the Latest-Versioned pattern in "Long-Term Data Storage" on page 25 in Chapter 3). This layer also manages the swing topics used when non-backward-compatible schema changes need to be rolled out. (See "Handling Schema Change and Breaking Backward Compatibility" on page 124 in Chapter 13.)
Schema Registry
The Confluent Schema Registry provides runtime validation of schemas and their compatibility.
Summary
When we build services using a streaming platform, some will be stateless: simple functions that take an input, perform a business operation, and produce an output. Some will be stateful, but read only, as in event-sourced views. Others will need to both read and write state, either entirely inside the Kafka ecosystem (and hence wrapped in Kafka's transactional guarantees), or by calling out to other services or databases. One of the most attractive properties of a stateful stream processing API is that all of these options are available, allowing us to trade the operational ease of stateless approaches for the data processing capabilities of stateful ones.
But there are of course drawbacks to this approach. While standby replicas, checkpoints, and compacted topics all mitigate the risks of pushing data to code, there is always a worst-case scenario where service-resident datasets must be rebuilt, and this should be considered as part of any system design.
There is also a mindset shift that comes with the streaming model, one that is inherently more asynchronous and adopts a more functional and data-centric style, when compared to the more procedural nature of traditional service interfaces. But this is, in the opinion of this author, an investment worth making.
In this chapter we looked at a very simple system that processes orders. We did this with a set of small streaming microservices that implement the Event Collaboration pattern we discussed in Chapter 5. Finally, we looked at how we can create a larger architecture using the broader range of patterns discussed in this book.
About the Author
<NAME> is a technologist working in the Office of the CTO at Confluent, Inc. (the company behind Apache Kafka), where he has worked on a wide range of projects, from implementing the latest version of Kafka's replication protocol through to developing strategies for streaming applications. Before Confluent, Ben led the design and build of a company-wide data platform for a large financial institution, as well as working on a number of early service-oriented systems, both in finance and at ThoughtWorks.
Ben is a regular conference speaker, blogger, and keen observer of the data-technology space. He believes that we are entering an interesting and formative period where data engineering, software engineering, and the lifecycle of organisations become ever more closely intertwined.
ipp_encoder | rust | Rust | Crate ipp_encoder
===
ipp_encoder
---
IPP encoder & decoder. This crate includes two primary modules:
* `spec`: RFC specification type mapping
* `encoder`: core implementation for encoding & decoding IPP operation
### Examples
See ipp/server for full IPP server example
```
use std::collections::HashMap;
use ipp_encoder::encoder::{IppVersion, Operation};
let request: Vec<u8> = Vec::new();
// ... get raw bytes from ipp server
// request = ...
let (_, request) = Operation::from(&request, 0);
println!("Request: {}", request.to_json()); // operation can be serialized
// from spec same byte can be operation_id (request) or status_code (response)
println!"OperationID: {}", request.operation_id().unwrap() as i32);
for (_, attribute_group) in request.attribute_groups {
for (_, attribute) in attribute_group.attributes {
// do something
}
}
// request.data contain trailing bytes (for example: postscript file)
// later ...
let mut response = Operation {
version: IppVersion { major: 1, minor: 1 },
request_id: request.request_id,
operation_id_or_status_code: IppStatusCode::SuccessfulOk as u16,
attribute_groups: HashMap::new(),
data: Vec::new(),
};
println!("Response: {}", response.to_json()) // operation can be deserialized
// response.to_ipp() for sending back response with IPP server
```
Modules
---
encoder: ipp_encoder::encoder
spec: ipp_encoder::spec
Module ipp_encoder::spec
===
ipp_encoder::spec
---
The `spec` module provides mapping to RFC-8010 and RFC-8011 for keywords, enums, values, and types.
Modules
---
attribute
operation
tag
value
Module ipp_encoder::encoder
===
ipp_encoder::encoder
---
The `encoder` module provides the `IppEncode` trait and implements encoder / decoder for IPP operations
Structs
---
Attribute: Wrapper for IPP attribute
AttributeGroup: An “attribute-group” field contains zero or more “attribute” fields.
IppVersion: 2 bytes of IPP version (ref: rfc8010)
Operation: Operation request or response
TextWithLang: Wrapper for ‘textWithoutLanguage’ attribute value type
Enums
---
AttributeName: Generalized attribute name from different groups (operation, printer, job, job-template)
AttributeValue: Generalized attribute value of different types
Traits
---
IppEncode: Skeleton for implementing encoder/decoder logic
@storybook/source-loader | npm | JavaScript | [Source Loader](#source-loader)
===
Storybook `source-loader` is a webpack loader that annotates Storybook story files with their source code. It powers the [storysource](https://github.com/storybookjs/storybook/tree/next/code/addons/storysource) and [docs](https://github.com/storybookjs/storybook/tree/next/code/addons/docs) addons.
* [Options](#options)
+ [parser](#parser)
+ [prettierConfig](#prettierconfig)
+ [uglyCommentsRegex](#uglycommentsregex)
+ [injectDecorator](#injectdecorator)
[Options](#options)
---
The loader can be customized with the following options:
### [parser](#parser)
The parser that will be parsing your code to AST (based on [prettier](https://github.com/prettier/prettier/tree/master/src/language-js))
Allowed values:
* `javascript` - default
* `typescript`
* `flow`
Be sure to update the regex test for the webpack rule if utilizing TypeScript files.
Usage:
```
module.exports = function ({ config }) {
config.module.rules.push({
test: /\.stories\.tsx?$/,
use: [
{
loader: require.resolve('@storybook/source-loader'),
options: { parser: 'typescript' },
},
],
enforce: 'pre',
});
return config;
};
```
### [prettierConfig](#prettierconfig)
The prettier configuration that will be used to format the story source in the addon panel.
Defaults:
```
{
printWidth: 100,
tabWidth: 2,
bracketSpacing: true,
trailingComma: 'es5',
singleQuote: true,
}
```
Usage:
```
module.exports = function ({ config }) {
config.module.rules.push({
test: /\.stories\.jsx?$/,
use: [
{
loader: require.resolve('@storybook/source-loader'),
options: {
prettierConfig: {
printWidth: 100,
singleQuote: false,
},
},
},
],
enforce: 'pre',
});
return config;
};
```
### [uglyCommentsRegex](#uglycommentsregex)
The array of regex that is used to remove "ugly" comments.
Defaults:
```
[/^eslint-.*/, /^global.*/];
```
Usage:
```
module.exports = function ({ config }) {
config.module.rules.push({
test: /\.stories\.jsx?$/,
use: [
{
loader: require.resolve('@storybook/source-loader'),
options: {
uglyCommentsRegex: [/^eslint-.*/, /^global.*/],
},
},
],
enforce: 'pre',
});
return config;
};
```
### [injectDecorator](#injectdecorator)
Tells storysource whether to inject the decorator. If false, you need to add the decorator yourself.
Defaults: true
Usage:
```
module.exports = function ({ config }) {
config.module.rules.push({
test: /\.stories\.jsx?$/,
use: [
{
loader: require.resolve('@storybook/source-loader'),
options: { injectDecorator: false },
},
],
enforce: 'pre',
});
return config;
};
```
MLFS | cran | R | Package ‘MLFS’
October 12, 2022
Type Package
Title Machine Learning Forest Simulator
Version 0.4.2
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Climate-sensitive forest simulator based on the principles of
machine learning. It simulates all key processes in the forest: radial growth,
height growth, mortality, crown recession, regeneration and harvesting. The
method for predicting tree heights was described by Skudnik and Jevšenak
(2022) <doi:10.1016/j.foreco.2022.120017>, while the method for predicting
basal area increments (BAI) was described by Jevšenak and Skudnik (2021)
<doi:10.1016/j.foreco.2020.118601>.
License GPL-3
Imports brnn(>= 0.6), ranger(>= 0.13.1), reshape2 (>= 1.4.4), pscl (>=
1.5.5), naivebayes(>= 0.9.7), magrittr(>= 1.5), dplyr(>=
0.7.0),tidyr (>= 1.1.3), tidyselect(>= 1.0.0)
Encoding UTF-8
LazyData true
Depends R(>= 3.4)
NeedsCompilation no
Repository CRAN
RoxygenNote 7.1.2
Date/Publication 2022-04-20 08:22:37 UTC
R topics documented:
add_stand_variable... 2
BAI_predictio... 3
calculate_BA... 5
crownHeight_predictio... 5
data_BA... 7
data_climat... 8
data_final_cut_weight... 9
data_ingrowt... 9
data_mortalit... 10
data_NF... 11
data_sit... 12
data_tariff... 12
data_thinning_weight... 13
data_tree_height... 13
data_v... 14
data_v... 15
data_v... 16
data_v... 17
data_v... 18
data_v... 18
df_volume_parameter... 20
form_factor... 21
height_predictio... 21
ingrowth_parameter_lis... 23
ingrowth_tabl... 23
max_size_dat... 24
measurement_threshold... 24
MLF... 25
predict_ingrowt... 31
predict_mortalit... 33
simulate_harvestin... 35
volume_form_factor... 37
volume_function... 38
volume_tariff... 39
add_stand_variables add_stand_variables
Description
This function adds two variables to existing data frame of individual tree measurements: 1) stand
basal area and 2) the number of trees per hectare
Usage
add_stand_variables(df)
Arguments
df a data frame with individual tree measurements that include basal area and the
upscale factors. All trees should also be described with plotID and year variables
Value
a data frame with added stand variables: total stand basal area and the number of trees per hectare
Examples
data(data_v1)
data_v1 <- add_stand_variables(df = data_v1)
BAI_prediction BAI_prediction
Description
The Basal Area Increment BAI sub model that is run within the MLFS
Usage
BAI_prediction(
df_fit,
df_predict,
species_n_threshold = 100,
site_vars,
include_climate,
eval_model_BAI = TRUE,
rf_mtry = NULL,
k = 10,
blocked_cv = TRUE,
measurement_thresholds = NULL,
area_correction = NULL
)
Arguments
df_fit a data frame with Basal Area Increments (BAI) and all independent variables as
specified with the formula
df_predict data frame which will be used for BAI predictions
species_n_threshold
a positive integer defining the minimum number of observations required to treat
a species as an independent group
site_vars a character vector of variable names which are used as site descriptors
include_climate
logical, should climate variables be included as predictors
eval_model_BAI logical, should the the BAI model be evaluated and returned as the output
rf_mtry a number of variables randomly sampled as candidates at each split of a ran-
dom forest model for predicting basal area increments (BAI). If NULL, default
settings are applied.
k the number of folds to be used in the k fold cross-validation
blocked_cv logical, should the blocked cross-validation be used in the evaluation phase?
measurement_thresholds
data frame with two variables: 1) DBH_threshold and 2) weight. This informa-
tion is used to assign the correct weights in BAI and increment sub-model; and
to upscale plot-level data to hectares.
area_correction
an optional data frame with three variables: 1) plotID and 2) DBH_threshold and
3) the correction factor to be multiplied by weight for this particular category
Value
a list with four elements:
1. $predicted_BAI - a data frame with calculated basal area increments (BAI)
2. $eval_BAI - a data frame with predicted and observed basal area increments (BAI), or a char-
acter string indicating that BAI model was not evaluated
3. $rf_model_species - the output model for BAI (species level)
4. $rf_model_speciesGroups - the output model for BAI (species group level)
Examples
library(MLFS)
data(data_BAI)
data(data_v6)
data(measurement_thresholds)
# add BA to measurement thresholds
measurement_thresholds$BA_threshold <- ((measurement_thresholds$DBH_threshold/2)^2 * pi)/10000
BAI_outputs <- BAI_prediction(df_fit = data_BAI, df_predict = data_v6,
site_vars = c("slope", "elevation", "northness", "siteIndex"),
rf_mtry = 3, species_n_threshold = 100,
include_climate = TRUE, eval_model_BAI = FALSE,
k = 10, blocked_cv = TRUE,
measurement_thresholds = measurement_thresholds)
# get the ranger objects
BAI_outputs_model_species <- BAI_outputs$rf_model_species
BAI_outputs_model_groups <- BAI_outputs$rf_model_speciesGroups
calculate_BAL calculate_BAL
Description
This function calculates the competition index BAL (Basal Area in Large trees) and adds it to the
table of individual tree measurements that include basal area and the upscale factors. All trees
should also be described with plotID and year variables
Usage
calculate_BAL(df)
Arguments
df a data frame with individual tree measurements that include basal area and the
upscale factors. All trees should also be described with plotID and year variables
Value
a data frame with calculated basal area in large trees (BAL)
Examples
data(data_v1)
data_v1 <- calculate_BAL(df = data_v1)
crownHeight_prediction crownHeight_prediction
Description
Model for predicting crown height
Usage
crownHeight_prediction(
df_fit,
df_predict,
site_vars = site_vars,
species_n_threshold = 100,
k = 10,
eval_model_crownHeight = TRUE,
crownHeight_model = "lm",
BRNN_neurons = 3,
blocked_cv = TRUE
)
Arguments
df_fit data frame with tree heights and basal areas for individual trees
df_predict data frame which will be used for predictions
site_vars optional, character vector with names of site variables
species_n_threshold
a positive integer defining the minimum number of observations required to treat
a species as an independent group
k the number of folds to be used in the k fold cross-validation
eval_model_crownHeight
logical, should the crown height model be evaluated and returned as the output
crownHeight_model
character string defining the model to be used for crown heights. Available are
ANN with Bayesian regularization (brnn) or linear regression (lm)
BRNN_neurons positive integer defining the number of neurons to be used in the brnn method.
blocked_cv logical, should the blocked cross-validation be used in the evaluation phase?
Value
a list with four elements:
1. $predicted_crownHeight - a data frame with imputed crown heights
2. $eval_crownHeight - a data frame with predicted and observed crown heights, or a character
string indicating that crown height model was not evaluated
3. $model_species - the output model for crown heights (species level)
4. $model_speciesGroups - the output model for crown heights (species group level)
Examples
library(MLFS)
data(data_tree_heights)
data(data_v3)
# A) Example with linear model
Crown_h_predictions <- crownHeight_prediction(df_fit = data_tree_heights,
df_predict = data_v3,
crownHeight_model = "lm",
site_vars = c(),
species_n_threshold = 100,
k = 10, blocked_cv = TRUE,
eval_model_crownHeight = TRUE)
predicted_df <- Crown_h_predictions$predicted_crownHeight # df with imputed heights
evaluation_df <- Crown_h_predictions$eval_crownHeight # df with evaluation results
# B) Example with non-linear BRNN model
Crown_h_predictions <- crownHeight_prediction(df_fit = data_tree_heights,
df_predict = data_v3,
crownHeight_model = "brnn",
BRNN_neurons = 3,
site_vars = c(),
species_n_threshold = 100,
k = 10, blocked_cv = TRUE,
eval_model_crownHeight = TRUE)
data_BAI An example of joined national forest inventory data, site descriptors,
and climate data that is used as a fitting data frame for BAI sub model
Description
This is simulated data that reassemble the national forest inventory data. We use it to show how to
run examples for BAI sub model. To make examples running more quickly, we keep only one tree
species: PINI.
Usage
data_BAI
Format
A data frame with 135 rows and 25 variables:
plotID a unique identifier for plot
treeID a unique identifier for tree
year year in which plot was visited
speciesGroup identifier for species group
code status of a tree: 0 (normal), 1(harvested), 2(dead), 3 (ingrowth)
species species name
height tree height in meters
crownHeight crown height in meters
protected logical, 1 if protected, otherwise 0
slope slope on a plot
elevation plot elevation
northness plot northness, 1 is north, 0 is south
siteIndex a proxy for site index, higher value represents more productive sites
BA basal area of individual trees in m2
weight upscale weight to calculate hectare values
stand_BA Total stand basal area
stand_n The number of trees in a stand
BAL Basal Area in Large trees
p_BA basal area of individual trees in m2 from previous simulation step
p_height tree height in meters from previous simulation step
p_crownHeight crown height in meters from previous simulation step
p_weight upscale weight to calculate hectare values from previous simulation step
BAI basal area increment
p_sum monthly precipitation sum
t_avg monthly mean temperature
data_climate An example of climate data
Description
This is simulated monthly climate data, and consists of precipitation sum and mean temperature
Usage
data_climate
Format
A data frame with 16695 rows and 5 variables:
plotID a unique identifier for plot
year year
month month
t_avg monthly mean temperature
p_sum monthly precipitation sum
data_final_cut_weights
An example of data_final_cut_weights
Description
Each species should have one weight that is multiplied with the probability of being harvested when
final_cut is applied
Usage
data_final_cut_weights
Format
A data frame with 36 rows and 6 variables:
species species name as used in data_NFI
step_1 final cut weight applied in step 1
step_2 final cut weight applied in step 2
step_3 final cut weight applied in step 3
step_4 final cut weight applied in step 4
step_5 final cut weight applied in step 5 and all subsequent steps
data_ingrowth An example of data_ingrowth suitable for the MLFS
Description
An example of plot-level data with plotID, stand variables and site descriptors, and the two target
variables describing the number of ingrowth trees for inner (ingrowth_3) and outer (ingrowth_15)
circles
Usage
data_ingrowth
Format
A data frame with 365 rows and 11 variables:
plotID a unique identifier for plot
year year in which plot was visited
stand_BA Total stand basal area
stand_n The number of trees in a stand
BAL Basal Area in Large trees
slope slope on a plot
elevation plot elevation
siteIndex a proxy for site index, higher value represents more productive sites
northness plot northness, 1 is north, 0 is south
ingrowth_3 the number of new trees in inner circle
ingrowth_15 the number of new trees in outer circle
data_mortality An example of joined national forest inventory data, site descriptors,
and climate data that is used as a fitting data frame for mortality sub
model
Description
This is simulated data that reassemble the national forest inventory data. We use it to show how to
run examples for mortality sub model
Usage
data_mortality
Format
A data frame with 6394 rows and 25 variables:
plotID a unique identifier for plot
treeID a unique identifier for tree
year year in which plot was visited
speciesGroup identifier for species group
code status of a tree: 0 (normal), 1(harvested), 2(dead), 3 (ingrowth)
species species name
height tree height in meters
crownHeight crown height in meters
protected logical, 1 if protected, otherwise 0
slope slope on a plot
elevation plot elevation
northness plot northness, 1 is north, 0 is south
siteIndex a proxy for site index, higher value represents more productive sites
BA basal area of individual trees in m2
weight upscale weight to calculate hectare values
stand_BA Total stand basal area
stand_n The number of trees in a stand
BAL Basal Area in Large trees
p_BA basal area of individual trees in m2 from previous simulation step
p_height tree height in meters from previous simulation step
p_crownHeight crown height in meters from previous simulation step
p_weight upscale weight to calculate hectare values from previous simulation step
BAI basal area increment
p_sum monthly precipitation sum
t_avg monthly mean temperature
data_NFI An example of national forest inventory data
Description
This is simulated data that resembles the national forest inventory
Usage
data_NFI
Format
A data frame with 11984 rows and 10 variables:
plotID a unique identifier for plot
treeID a unique identifier for tree
year year in which plot was visited
speciesGroup identifier for species group
code status of a tree: 0 (normal), 1(harvested), 2(dead), 3 (ingrowth)
DBH diameter at breast height in cm
species species name
height tree height in meters
crownHeight crown height in meters
protected logical, 1 if protected, otherwise 0
data_site An example of site descriptors
Description
This is simulated data describing site descriptors
Usage
data_site
Format
A data frame with 371 rows and 5 variables:
plotID a unique identifier for plot
slope slope on a plot
elevation plot elevation
northness plot northness, 1 is north, 0 is south
siteIndex a proxy for site index, higher value represents more productive sites
data_tariffs An example of table with one-parametric volume functions (adapted
uniform French tariffs)
Description
The adapted uniform French tariffs are typically used in Slovenia to determine tree volume based
on tree DBH
Usage
data_tariffs
Format
A data frame with 1196 rows and 4 variables:
tarifa_class tariff class for a particular species on this plot
plotID plot identifier
species species name as used in data_NFI
v45 volume of tree with DBH 45 cm
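The adapted uniform French tariffs are commonly written in the closed form
V = v45 * (DBH - 5) * (DBH - 10) / 1400, so that a tree with a DBH of 45 cm has volume v45 by
construction. The sketch below only illustrates this conventional form; whether volume_tariffs()
uses exactly this expression internally is an assumption, and tariff_volume() is a hypothetical
helper, not part of MLFS.
# illustrative only; tariff_volume() is not an MLFS function
tariff_volume <- function(DBH, v45) {
  v45 * (DBH - 5) * (DBH - 10) / 1400
}
tariff_volume(DBH = 45, v45 = 1.8) # returns 1.8 by construction
tariff_volume(DBH = 30, v45 = 1.8) # a smaller tree gives a smaller volume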
data_thinning_weights An example of data_thinning_weights
Description
Each species should have one weight, which is multiplied by the probability of being harvested when
thinning is applied
Usage
data_thinning_weights
Format
A data frame with 36 rows and 6 variables:
species species name as used in data_NFI
step_1 thinning weight applied in step 1
step_2 thinning weight applied in step 2
step_3 thinning weight applied in step 3
step_4 thinning weight applied in step 4
step_5 thinning weight applied in step 5 and all subsequent steps
data_tree_heights An example of data with individual tree and crown heights that can be
used as a fitting data frame for predicting tree and crown heights in
MLFS
Description
This is simulated data that resembles the national forest inventory data. We use it to show how to
run examples for some specific functions
Usage
data_tree_heights
Format
A data frame with 2741 rows and 8 variables:
plotID a unique identifier for plot
treeID a unique identifier for tree
year year in which plot was visited
speciesGroup identifier for species group
species species name
height tree height in meters
crownHeight crown height in meters
BA basal area of individual trees in m2
data_v1 An example of joined national forest inventory and site data that is
used within the MLFS
Description
This is simulated data that resembles joined national forest inventory and site data. We use it to
show how to run examples for some specific functions
Usage
data_v1
Format
A data frame with 11984 rows and 15 variables:
plotID a unique identifier for plot
treeID a unique identifier for tree
year year in which plot was visited
speciesGroup identifier for species group
code status of a tree: 0 (normal), 1(harvested), 2(dead), 3 (ingrowth)
species species name
height tree height in meters
crownHeight crown height in meters
protected logical, 1 if protected, otherwise 0
slope slope on a plot
elevation plot elevation
northness plot northness, 1 is north, 0 is south
siteIndex a proxy for site index, higher value represents more productive sites
BA basal area of individual trees in m2
weight upscale weight to calculate hectare values
data_v2 An example of joined national forest inventory and site data that is
used within the MLFS
Description
This is simulated data that resembles the national forest inventory data. We use it to show how to
run examples for tree and crown height predictions
Usage
data_v2
Format
A data frame with 6948 rows and 14 variables:
plotID a unique identifier for plot
treeID a unique identifier for tree
year year in which plot was visited
speciesGroup identifier for species group
code status of a tree: 0 (normal), 1(harvested), 2(dead), 3 (ingrowth)
species species name
height tree height in meters
crownHeight crown height in meters
BA basal area of individual trees in m2
weight upscale weight to calculate hectare values
p_BA basal area of individual trees in m2 from previous simulation step
p_weight upscale weight to calculate hectare values from previous simulation step
p_height tree height in meters from previous simulation step
p_crownHeight crown height in meters from previous simulation step
data_v3 An example of joined national forest inventory and site data that is
used within the MLFS
Description
This is simulated data that resembles the national forest inventory data. We use it to show how to
run examples for tree and crown height predictions. The difference between data_v2 and data_v3
is that in data_v3, tree heights are already predicted
Usage
data_v3
Format
A data frame with 6948 rows and 14 variables:
plotID a unique identifier for plot
treeID a unique identifier for tree
year year in which plot was visited
speciesGroup identifier for species group
code status of a tree: 0 (normal), 1(harvested), 2(dead), 3 (ingrowth)
species species name
height tree height in meters
crownHeight crown height in meters
BA basal area of individual trees in m2
weight upscale weight to calculate hectare values
p_BA basal area of individual trees in m2 from previous simulation step
p_height tree height in meters from previous simulation step
p_crownHeight crown height in meters from previous simulation step
p_weight upscale weight to calculate hectare values from previous simulation step
volume tree volume in m3
p_volume tree volume in m3 from previous simulation step
data_v4 An example of joined national forest inventory and site data that is
used within the MLFS
Description
This is simulated data that resembles the national forest inventory data. We use it to show how to
run examples for predicting tree mortality. Mortality occurs in the middle of a simulation step, so
all variables carry the suffix ’_mid’
Usage
data_v4
Format
A data frame with 6855 rows and 41 variables:
year year in which plot was visited
plotID a unique identifier for plot
treeID a unique identifier for tree
speciesGroup identifier for species group
code status of a tree: 0 (normal), 1(harvested), 2(dead), 3 (ingrowth)
species species name
slope slope on a plot
elevation plot elevation
northness plot northness, 1 is north, 0 is south
siteIndex a proxy for site index, higher value represents more productive sites
p_sum monthly precipitation sum
t_avg monthly mean temperature
BA_mid basal area of individual trees in m2 in the middle of a simulation step
BAI_mid basal area increment in the middle of a simulation step
weight_mid upscale weight to calculate hectare values in the middle of a simulation step
height_mid tree height in meters in the middle of a simulation step
crownHeight_mid crown height in meters in the middle of a simulation step
volume_mid tree volume in m3 in the middle of a simulation step
BAL_mid Basal Area in Large trees in the middle of a simulation step
stand_BA_mid Total stand basal area in the middle of a simulation step
stand_n_mid The number of trees in a stand in the middle of a simulation step
data_v5 An example of joined national forest inventory and site data that is
used within the MLFS
Description
This is simulated data that resembles the national forest inventory data. We use it to show how to
run examples for simulating harvesting.
Usage
data_v5
Format
A data frame with 5949 rows and 10 variables:
species species name
year year in which plot was visited
plotID a unique identifier for plot
treeID a unique identifier for tree
speciesGroup identifier for species group
code status of a tree: 0 (normal), 1(harvested), 2(dead), 3 (ingrowth)
volume_mid tree volume in m3 in the middle of a simulation step
weight_mid upscale weight to calculate hectare values in the middle of a simulation step
BA_mid basal area of individual trees in m2 in the middle of a simulation step
protected logical, 1 if protected, otherwise 0
data_v6 An example of joined national forest inventory and site data that is
used within the MLFS
Description
This is simulated data that resembles the national forest inventory data. We use it to show how to
run examples for simulating Basal Area Increments (BAI) and the ingrowth of new trees. To make
the examples run more quickly, we keep only one tree species: PINI
Usage
data_v6
Format
A data frame with 186 rows and 27 variables:
species species name
year year in which plot was visited
plotID a unique identifier for plot
treeID a unique identifier for tree
speciesGroup identifier for species group
code status of a tree: 0 (normal), 1(harvested), 2(dead), 3 (ingrowth)
height tree height in meters
crownHeight crown height in meters
protected logical, 1 if protected, otherwise 0
slope slope on a plot
elevation plot elevation
northness plot northness, 1 is north, 0 is south
siteIndex a proxy for site index, higher value represents more productive sites
BA basal area of individual trees in m2
weight upscale weight to calculate hectare values
stand_BA Total stand basal area
stand_n The number of trees in a stand
BAL Basal Area in Large trees
p_BA basal area of individual trees in m2 from previous simulation step
p_height tree height in meters from previous simulation step
p_crownHeight crown height in meters from previous simulation step
p_weight upscale weight to calculate hectare values from previous simulation step
BAI basal area increment
p_sum monthly precipitation sum
t_avg monthly mean temperature
volume tree volume in m3
p_volume tree volume in m3 from previous simulation step
df_volume_parameters An example table with parameters and equations for n-parametric vol-
ume functions
Description
Volume functions can be specified for each species and plot separately, and can also be limited to a
specific DBH interval. The factor variables (vol_factor, h_factor and d_factor) are used to control
the input and output units.
Usage
df_volume_parameters
Format
A data frame with 6 rows and 14 variables:
species species name as used in data_NFI. The category REST is used for all species without
specific equation
equation equation for selected volume function
vol_factor the calculated volume is multiplied by this factor
h_factor tree height is multiplied by this factor
d_factor tree DBH is divided by this factor
DBH_min lower interval threshold for considered trees
DBH_max upper interval threshold for considered trees
a parameter a for volume equation
b parameter b for volume equation
c parameter c for volume equation
d parameter d for volume equation
e parameter e for volume equation
f parameter f for volume equation
g parameter g for volume equation
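As a rough illustration of how one row of this table could be applied, the sketch below evaluates a
made-up equation string and applies the documented unit factors. The equation syntax, the variable
names DBH and H, and all parameter values are assumptions for illustration only, not the MLFS
internals.
# hypothetical one-row parameter table; equation and values are made up
pars <- data.frame(species = "REST", equation = "a * DBH^2 * H + b",
  vol_factor = 0.001, h_factor = 1, d_factor = 1,
  DBH_min = 10, DBH_max = 70, a = 0.05, b = 0.2)
DBH <- 32 / pars$d_factor # tree DBH is divided by d_factor
H <- 24 * pars$h_factor # tree height is multiplied by h_factor
vol <- eval(parse(text = pars$equation), envir = c(as.list(pars), list(DBH = DBH, H = H)))
vol * pars$vol_factor # volume after applying vol_factor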
form_factors An example table with form factors used to calculate tree volume
Description
Form factors can be specified per species, plot or per species and plot
Usage
form_factors
Format
A data frame with 1199 rows and 3 variables:
plotID a unique identifier for plot
species species name as used in data_NFI
form form factor used to calculate tree volume
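The conventional form-factor calculation of tree volume is V = form * BA * height, with BA in m2
and height in m giving volume in m3. The sketch below applies species-level form factors to a small
made-up data frame; the species codes, factor values and tree sizes are assumptions, and the exact
expression used inside volume_form_factors() is not reproduced here.
# made-up trees and species-level form factors
trees <- data.frame(species = c("PINI", "FASY"),
  BA = c(0.07, 0.12), # basal area in m2
  height = c(18, 24)) # tree height in m
ff <- data.frame(species = c("PINI", "FASY"), form = c(0.45, 0.42))
trees <- merge(trees, ff, by = "species")
trees$volume <- trees$form * trees$BA * trees$height
trees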
height_prediction height_prediction
Description
Height model
Usage
height_prediction(
df_fit,
df_predict,
species_n_threshold = 100,
height_model = "naslund",
BRNN_neurons = 3,
height_pred_level = 0,
eval_model_height = TRUE,
blocked_cv = TRUE,
k = 10
)
Arguments
df_fit data frame with tree heights and basal areas for individual trees
df_predict data frame which will be used for predictions
species_n_threshold
a positive integer defining the minimum number of observations required to treat
a species as an independent group
height_model character string defining the model to be used for height prediction. If ’brnn’,
then ANN method with Bayesian Regularization is applied. In addition, all 2-
and 3- parametric H-D models from lmfor R package are available.
BRNN_neurons positive integer defining the number of neurons to be used in the brnn method.
height_pred_level
integer with value 0 or 1 defining the level of prediction for height-diameter
(H-D) models. The value 1 defines a plot-level prediction, while the value 0
defines regional-level predictions. Default is 0. If using 1, make sure to have
representative plot-level data for each species.
eval_model_height
logical, should the height model be evaluated and returned as the output
blocked_cv logical, should the blocked cross-validation be used in the evaluation phase?
k the number of folds to be used in the k fold cross-validation
Value
a list with four elements:
1. $data_height_predictions - a data frame with imputed tree heights
2. $data_height_eval - a data frame with predicted and observed tree heights, or a character string
indicating that tree heights were not evaluated
3. $model_species - the output model for tree heights (species level)
4. $model_speciesGroups - the output model for tree heights (species group level)
Examples
library(MLFS)
data(data_tree_heights)
data(data_v2)
# A) Example with the BRNN method
h_predictions <- height_prediction(df_fit = data_tree_heights,
df_predict = data_v2,
species_n_threshold = 100,
height_pred_level = 0,
height_model = "brnn",
BRNN_neurons = 3,
eval_model_height = FALSE,
blocked_cv = TRUE, k = 10
)
predicted_df <- h_predictions$data_height_predictions # df with imputed heights
evaluation_df <- h_predictions$data_height_eval # df with evaluation results
ingrowth_parameter_list
An example data of ingrowth_parameter_list
Description
This is a list with two ingrowth levels: 3 (inner circle) and 15 (outer circle). In each list there
are deciles of DBH distributions that are used to simulate DBH for new trees, separately for each
ingrowth category
Usage
ingrowth_parameter_list
Format
A list with 2 elements:
3 deciles of DBH distribution for ingrowth category 3
15 deciles of DBH distribution for ingrowth category 15
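A list of this shape could, for example, be built from DBH values of previously observed ingrowth
trees using quantile(); the DBH vectors and the decile convention below are made up for
illustration only.
# made-up DBH observations for the two ingrowth categories
dbh_inner <- runif(200, min = 10, max = 14)
dbh_outer <- runif(200, min = 14, max = 25)
ingrowth_parameter_list <- list(
  "3" = quantile(dbh_inner, probs = seq(0.1, 0.9, by = 0.1)),
  "15" = quantile(dbh_outer, probs = seq(0.1, 0.9, by = 0.1))
)
str(ingrowth_parameter_list)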
ingrowth_table An example data of ingrowth_table
Description
Ingrowth table is used within the ingrowth sub model to correctly simulate different ingrowth levels
and associated upscale weights
Usage
ingrowth_table
Format
A data frame with 2 rows and 4 variables:
code ingrowth codes
DBH_threshold a DBH threshold for particular ingrowth category
DBH_max maximum DBH for a particular ingrowth category
weight the upscale weight for particular measurement category
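Such a table can be assembled directly as a data frame; the codes below follow the documented
ingrowth categories, while the DBH thresholds, maxima and weights are made-up values for
illustration only.
# illustrative values only
ingrowth_table <- data.frame(code = c(3, 15),
  DBH_threshold = c(10, 30),
  DBH_max = c(30, 45),
  weight = c(50, 12))
ingrowth_table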
max_size_data An example of data with maximum allowed BA that is used in the mor-
tality sub model
Description
This is simulated max_size_data used for examples in the mortality sub model
Usage
max_size_data
Format
A data frame with 36 rows and 2 variables:
species species name
BA_max The maximum allowed basal area (BA) for each individual species
measurement_thresholds
An example of measurement_thresholds table
Description
An example of measurement_thresholds table resulting from concentric plots as used in Slovenian
NFI
Usage
measurement_thresholds
Format
A data frame with 2 rows and 2 variables:
DBH_threshold a DBH threshold for particular measurement category
weight the upscale weight for particular measurement category
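On concentric sample plots the upscale weight is commonly taken as 10000 divided by the circle
area in m2, i.e. how many trees per hectare a single measured tree represents. The radii and DBH
thresholds below are made-up values and are not the actual Slovenian NFI design.
# illustrative radii (m) and DBH thresholds (cm)
r_inner <- 8
r_outer <- 13
measurement_thresholds <- data.frame(
  DBH_threshold = c(10, 30),
  weight = round(10000 / (pi * c(r_inner, r_outer)^2), 2)
)
measurement_thresholds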
MLFS MLFS
Description
Machine Learning Forest Simulator
Usage
MLFS(
data_NFI,
data_site,
data_tariffs = NULL,
data_climate = NULL,
df_volumeF_parameters = NULL,
thinning_weights_species = NULL,
final_cut_weights_species = NULL,
thinning_weights_plot = NULL,
final_cut_weights_plot = NULL,
form_factors = NULL,
form_factors_level = "species_plot",
uniform_form_factor = 0.42,
sim_steps,
volume_calculation = "volume_functions",
merchantable_whole_tree = "merchantable",
sim_harvesting = TRUE,
sim_mortality = TRUE,
sim_ingrowth = TRUE,
sim_crownHeight = TRUE,
harvesting_sum = NULL,
forest_area_ha = NULL,
harvest_sum_level = NULL,
plot_upscale_type = NULL,
plot_upscale_factor = NULL,
mortality_share = NA,
mortality_share_type = "volume",
mortality_model = "glm",
ingrowth_model = "ZIF_poiss",
BAI_rf_mtry = NULL,
ingrowth_rf_mtry = NULL,
mortality_rf_mtry = NULL,
nb_laplace = 0,
harvesting_type = "final_cut",
share_thinning = 0.8,
final_cut_weight = 10,
thinning_small_weight = 1,
species_n_threshold = 100,
height_model = "brnn",
crownHeight_model = "brnn",
BRNN_neurons_crownHeight = 1,
BRNN_neurons_height = 3,
height_pred_level = 0,
include_climate = FALSE,
select_months_climate = c(1, 12),
set_eval_mortality = TRUE,
set_eval_crownHeight = TRUE,
set_eval_height = TRUE,
set_eval_ingrowth = TRUE,
set_eval_BAI = TRUE,
k = 10,
blocked_cv = TRUE,
max_size = NULL,
max_size_increase_factor = 1,
ingrowth_codes = c(3),
ingrowth_max_DBH_percentile = 0.9,
measurement_thresholds = NULL,
area_correction = NULL,
export_csv = FALSE,
sim_export_mode = TRUE,
include_mortality_BAI = TRUE,
intermediate_print = FALSE
)
Arguments
data_NFI data frame with individual tree variables
data_site data frame with site descriptors. This data is related to data_NFI based on the
’plotID’ column
data_tariffs optional, but mandatory if volume is calculated using the one-parametric tariff
functions. Data frame with plotID, species and V45. See details.
data_climate data frame with climate data, covering the initial calibration period and all the
years which will be included in the simulation
df_volumeF_parameters
optional, data frame with species-specific volume function parameters
thinning_weights_species
data frame with thinning weights for each species. The first column represents
species code, each next column consists of species-specific thinning weights
applied in each simulation step
final_cut_weights_species
data frame with final cut weights for each species. The first column represents
species code, each next column consists of species-specific final cut weights
applied in each simulation step
thinning_weights_plot
data frame with harvesting weights related to plot IDs, used for thinning
final_cut_weights_plot
data frame with harvesting weights related to plot IDs, used for final cut
form_factors optional, data frame with species-specific form factors
form_factors_level
character, the level of specified form factors. It can be ’species’, ’plot’ or
’species_plot’
uniform_form_factor
numeric, uniform form factor to be used for all species and plots. Only if
form_factors are not provided
sim_steps The number of simulation steps
volume_calculation
character string defining the method for volume calculation: ’tariffs’, ’volume_functions’,
’form_factors’ or ’slo_2p_volume_functions’
merchantable_whole_tree
character, ’merchantable’ or ’whole_tree’. It indicates which type of volume
functions will be used. This parameter is used only for volume calculation using
the ’slo_2p_volume_functions’.
sim_harvesting logical, should harvesting be simulated?
sim_mortality logical, should mortality be simulated?
sim_ingrowth logical, should ingrowth be simulated?
sim_crownHeight
logical, should crown heights be simulated? If TRUE, a crownHeight column is
expected in data_NFI
harvesting_sum a value, or a vector of values defining the harvesting sums through the simulation
stage. If a single value, then it is used in all simulation steps. If a vector of
values, the first value is used in the first step, the second in the second step, etc.
forest_area_ha the total area of all forest which are subject of the simulation
harvest_sum_level
integer with value 0 or 1 defining the level of specified harvesting sum: 0 for
plot level and 1 for regional level
plot_upscale_type
character defining the upscale method of plot level values. It can be ’area’ or
’upscale factor’. If ’area’, provide the forest area represented by all plots in
hectares (forest_area_ha argument). If ’factor’, provide the fixed factor to up-
scale the area of all plots. Please note: forest_area_ha/plot_upscale_factor =
number of unique plots. This argument is important when harvesting sum is
defined on regional level.
plot_upscale_factor
numeric value to be used to upscale area of each plot
mortality_share
a value, or a vector of values defining the proportion of the volume which is to
be the subject of mortality. If a single value, then it is used in all simulation
steps. If a vector of values, the first value is used in the first step, the second in
the second step, and so on.
mortality_share_type
character, it can be ’volume’ or ’n_trees’. If ’volume’ then the mortality share
relates to total standing volume, if ’n_trees’ then mortality share relates to the
total number of standing trees
mortality_model
model to be used for mortality prediction: ’glm’ for generalized linear models;
’rf’ for random forest algorithm; ’naiveBayes’ for Naive Bayes algorithm
ingrowth_model model to be used for ingrowth predictions. ’glm’ for generalized linear models
(Poisson regression), ’ZIF_poiss’ for zero inflated Poisson regression and ’rf’
for random forest
BAI_rf_mtry a number of variables randomly sampled as candidates at each split of a ran-
dom forest model for predicting basal area increments (BAI). If NULL, default
settings are applied.
ingrowth_rf_mtry
a number of variables randomly sampled as candidates at each split of a random
forest model for predicting ingrowth. If NULL, default settings are applied
mortality_rf_mtry
a number of variables randomly sampled as candidates at each split of a random
forest model for predicting mortality. If NULL, default settings are applied
nb_laplace value used for Laplace smoothing (additive smoothing) in naive Bayes algo-
rithm. Defaults to 0 (no Laplace smoothing)
harvesting_type
character, it could be ’random’, ’final_cut’, ’thinning’ or ’combined’. The latter
combines ’final_cut’ and ’thinning’ options, where the share of each is specified
with the argument ’share_thinning’
share_thinning numeric, a number or a vector of numbers between 0 and 1 that specifies the
share of thinning in comparison to final_cut. Only used if harvesting_type is
’combined’
final_cut_weight
numeric value affecting the probability distribution of harvested trees. Greater
value increases the share of harvested trees having larger DBH. Default is 10.
thinning_small_weight
numeric value affecting the probability distribution of harvested trees. Greater
value increases the share of harvested trees having smaller DBH. Default is 1.
species_n_threshold
a positive integer defining the minimum number of observations required to treat
a species as an independent group
height_model character string defining the model to be used for height prediction. If brnn, then
ANN method with Bayesian Regularization is applied.
crownHeight_model
character string defining the model to be used for crown heights. Available are
ANN with Bayesian regularization (brnn) or linear regression (lm)
BRNN_neurons_crownHeight
a positive integer defining the number of neurons to be used in the brnn method
for predicting crown heights
BRNN_neurons_height
a positive integer defining the number of neurons to be used in the brnn method
for predicting tree heights
height_pred_level
integer with value 0 or 1 defining the level of prediction for height-diameter
(H-D) models. The value 1 defines a plot-level prediction, while the value 0
defines regional-level predictions. Default is 0. If using 1, make sure to have
representative plot-level data for each species.
include_climate
logical, should climate variables be included as predictors
select_months_climate
vector of subset months to be considered. Default is c(1,12), which uses all
months.
set_eval_mortality
logical, should the mortality model be evaluated and returned as the output
set_eval_crownHeight
logical, should the crownHeight model be evaluated and returned as the output
set_eval_height
logical, should the height model be evaluated and returned as the output
set_eval_ingrowth
logical, should the ingrowth model be evaluated and returned as the output
set_eval_BAI logical, should the BAI model be evaluated and returned as the output
k the number of folds to be used in the k fold cross-validation
blocked_cv logical, should the blocked cross-validation be used in the evaluation phase?
max_size a data frame with the maximum values of DBH for each species. If a tree ex-
ceeds this value, it dies. If not provided, the maximum is estimated from the
input data. Two columns must be present, i.e. ’species’ and ’DBH_max’
max_size_increase_factor
numeric value, which will be used to increase the max DBH for each species,
when the maximum is estimated from the input data. If the argument ’max_size’
is provided, the ’max_size_increase_factor’ is ignored. Default is 1. To increase
maximum for 10 percent, use 1.1.
ingrowth_codes numeric value or a vector of codes which refer to ingrowth trees
ingrowth_max_DBH_percentile
which percentile should be used to estimate the maximum simulated value of
ingrowth trees?
measurement_thresholds
data frame with two variables: 1) DBH_threshold and 2) weight. This informa-
tion is used to assign the correct weights in BAI and increment sub-model; and
to upscale plot-level data to hectares.
area_correction
optional data frame with three variables: 1) plotID and 2) DBH_threshold and
3) the correction factor to be multiplied by weight for this particular category.
export_csv logical, if TRUE, at each simulation step, the results are saved in the current
working directory as csv file
sim_export_mode
logical, if FALSE, the results of the individual simulation steps are not merged
into the final export table. Therefore, output element 1 ($sim_results) will be
empty. This was introduced to allow simulations when using larger data sets
and long term simulations that might exceed the available RAM. In such cases,
we recommend setting the argument export_csv = TRUE, which will export each
simulation step to the current working directory.
include_mortality_BAI
logical, should basal area increments (BAI) be used as an independent variable for
predicting individual tree mortality?
intermediate_print
logical, if TRUE intermediate steps will be printed while MLFS is running
Value
a list of class mlfs with at least 15 elements:
1. $sim_results - a data frame with the simulation results
2. $height_eval - a data frame with predicted and observed tree heights, or a character string
indicating that tree heights were not evaluated
3. $crownHeight_eval - a data frame with predicted and observed crown heights, or character
string indicating that crown heights were not evaluated
4. $mortality_eval - a data frame with predicted and observed probabilities of dying for all indi-
vidual trees, or character string indicating that mortality sub-model was not evaluated
5. $ingrowth_eval - a data frame with predicted and observed number of new ingrowth trees,
separately for each ingrowth level, or character string indicating that ingrowth model was not
evaluated
6. $BAI_eval - a data frame with predicted and observed basal area increments (BAI), or char-
acter string indicating that BAI model was not evaluated
7. $height_model_species - the output model for tree heights (species level)
8. $height_model_speciesGroups - the output model for tree heights (species group level)
9. $crownHeight_model_species - the output model for crown heights (species level)
10. $crownHeight_model_speciesGroups - the output model for crown heights (species group
level)
11. $mortality_model - the output model for mortality
12. $BAI_model_species - the output model for basal area increments (species level)
13. $BAI_model_speciesGroups - the output model for basal area increments (species group level)
14. $max_size - a data frame with maximum allowed diameter at breast height (DBH) for each
species
15. $ingrowth_model_3 - the output model for ingrowth (level 1) – the output name depends on
ingrowth codes
16. $ingrowth_model_15 - the output model for ingrowth (level 2) – optional and the output name
depends on ingrowth codes
Examples
library(MLFS)
# open example data
data(data_NFI)
data(data_site)
data(data_climate)
data(df_volume_parameters)
data(measurement_thresholds)
test_simulation <- MLFS(data_NFI = data_NFI,
data_site = data_site,
data_climate = data_climate,
df_volumeF_parameters = df_volume_parameters,
form_factors = volume_functions,
sim_steps = 2,
sim_harvesting = TRUE,
harvesting_sum = 100000,
harvest_sum_level = 1,
plot_upscale_type = "factor",
plot_upscale_factor = 1600,
measurement_thresholds = measurement_thresholds,
ingrowth_codes = c(3,15),
volume_calculation = "volume_functions",
select_months_climate = seq(6,8),
intermediate_print = FALSE
)
predict_ingrowth predict_ingrowth
Description
ingrowth model for predicting new trees within the MLFS
Usage
predict_ingrowth(
df_fit,
df_predict,
site_vars = site_vars,
include_climate = include_climate,
eval_model_ingrowth = TRUE,
k = 10,
blocked_cv = TRUE,
ingrowth_model = "glm",
rf_mtry = NULL,
ingrowth_table = NULL,
DBH_distribution_parameters = NULL
)
Arguments
df_fit a plot-level data with plotID, stand variables and site descriptors, and the two
target variables describing the number of ingrowth trees for inner (ingrowth_3)
and outer (ingrowth_15) circles
df_predict data frame which will be used for ingrowth predictions
site_vars a character vector of variable names which are used as site descriptors
include_climate
logical, should climate variables be included as predictors
eval_model_ingrowth
logical, should the ingrowth model be evaluated and returned as the output
k the number of folds to be used in the k fold cross-validation
blocked_cv logical, should the blocked cross-validation be used in the evaluation phase?
ingrowth_model model to be used for ingrowth predictions. ’glm’ for generalized linear models
(Poisson regression), ’ZIF_poiss’ for zero inflated Poisson regression and ’rf’
for random forest
rf_mtry a number of variables randomly sampled as candidates at each split of a random
forest model for predicting ingrowth. If NULL, default settings are applied.
ingrowth_table a data frame with 4 variables: (ingrowth) code, DBH_threshold, DBH_max and
weight. Ingrowth table is used within the ingrowth sub model to correctly sim-
ulate different ingrowth levels and associated upscale weights
DBH_distribution_parameters
A list with deciles of DBH distributions that are used to simulate DBH for new
trees, separately for each ingrowth category
Value
a list with four elements:
1. $predicted_ingrowth - a data frame with newly added trees based on the ingrowth predictions
2. $eval_ingrowth - a data frame with predicted and observed number of new trees, separately
for each ingrowth level, or character string indicating that ingrowth model was not evaluated
3. $mod_ing_3 - the output model for predicting the ingrowth of trees with code 3
4. $mod_ing_15 - the output model for predicting the ingrowth of trees with code 15 (the output
name depends on the code used for this particular ingrowth level)
Examples
library(MLFS)
data(data_v6)
data(data_ingrowth)
data(ingrowth_table)
data(ingrowth_parameter_list)
ingrowth_outputs <- predict_ingrowth(
df_fit = data_ingrowth,
df_predict = data_v6,
site_vars = c("slope", "elevation", "northness", "siteIndex"),
include_climate = TRUE,
eval_model_ingrowth = FALSE,
rf_mtry = 3,
k = 10, blocked_cv = TRUE,
ingrowth_model = 'rf',
ingrowth_table = ingrowth_table,
DBH_distribution_parameters = ingrowth_parameter_list)
predict_mortality predict_mortality
Description
This sub model first fits a binary model to derive the effects of individual tree, site and climate
variables on mortality, and afterwards predicts the probability of dying for each tree in df_predict
Usage
predict_mortality(
df_fit,
df_predict,
df_climate,
mortality_share = NA,
mortality_share_type = "volume",
include_climate,
site_vars,
select_months_climate = c(6, 8),
mortality_model = "rf",
nb_laplace = 0,
sim_crownHeight = FALSE,
k = 10,
eval_model_mortality = TRUE,
blocked_cv = TRUE,
sim_mortality = TRUE,
sim_step_years = 5,
rf_mtry = NULL,
df_max_size = NULL,
ingrowth_codes = 3,
include_mortality_BAI = TRUE,
intermediate_print = FALSE
)
Arguments
df_fit a data frame with individual tree data and site descriptors where code is used to
specify a status of each tree
df_predict data frame which will be used for mortality predictions
df_climate data frame with monthly climate data
mortality_share
a value defining the proportion of the volume which is to be the subject of mor-
tality
mortality_share_type
character, it can be ’volume’ or ’n_trees’. If ’volume’ then the mortality share
relates to total standing volume, if ’n_trees’ then mortality share relates to the
total number of standing trees
include_climate
logical, should climate variables be included as predictors
site_vars a character vector of variable names which are used as site descriptors
select_months_climate
vector of subset months to be considered. Default is c(1,12), which uses all
months.
mortality_model
model to be used for mortality prediction: ’glm’ for generalized linear models;
’rf’ for random forest algorithm; ’naiveBayes’ for Naive Bayes algorithm
nb_laplace value used for Laplace smoothing (additive smoothing) in naive Bayes algo-
rithm. Defaults to 0 (no Laplace smoothing).
sim_crownHeight
logical, should crown heights be considered as a predictor variable? If TRUE, a
crownHeight column is expected in data_NFI
k the number of folds to be used in the k fold cross-validation
eval_model_mortality
logical, should the mortality model be evaluated and returned as the output
blocked_cv logical, should the blocked cross-validation be used in the evaluation phase?
sim_mortality logical, should mortality be simulated?
sim_step_years the simulation step in years
rf_mtry number of variables randomly sampled as candidates at each split of a random
forest model. If NULL, default settings are applied.
df_max_size a data frame with the maximum BA values for each species. If a tree exceeds
this value, it dies.
ingrowth_codes numeric value or a vector of codes which refer to ingrowth trees
include_mortality_BAI
logical, should basal area increments (BAI) be used as an independent variable for
predicting individual tree mortality?
intermediate_print
logical, if TRUE intermediate steps will be printed while the mortality sub model
is running
Value
a list with three elements:
1. $predicted_mortality - a data frame with updated tree status (code) based on the predicted
mortality
2. $eval_mortality - a data frame with predicted and observed probabilities of dying for all indi-
vidual trees, or character string indicating that mortality sub-model was not evaluated
3. $model_output - the output model for mortality
Examples
data("data_v4")
data("data_mortality")
data("max_size_data")
mortality_outputs <- predict_mortality(
df_fit = data_mortality,
df_predict = data_v4,
mortality_share_type = 'volume',
df_climate = data_climate,
site_vars = c("slope", "elevation", "northness", "siteIndex"),
sim_mortality = TRUE,
mortality_model = 'naiveBayes',
nb_laplace = 0,
sim_crownHeight = TRUE,
mortality_share = 0.02,
include_climate = TRUE,
select_months_climate = c(6,7,8),
eval_model_mortality = TRUE,
k = 10, blocked_cv = TRUE,
sim_step_years = 6,
df_max_size = max_size_data,
ingrowth_codes = c(3,15),
include_mortality_BAI = TRUE)
df_predicted <- mortality_outputs$predicted_mortality
df_evaluation <- mortality_outputs$eval_mortality
# confusion matrix
table(df_evaluation$mortality, round(df_evaluation$mortality_pred, 0))
simulate_harvesting A sub model to simulate harvesting within the MLFS
Description
Harvesting is based on probability sampling, which depends on the selected parameters and the
size of a tree. Bigger trees have a higher probability of being harvested when final cut is applied,
while smaller trees have a higher probability of being sampled in the case of thinning.
Usage
simulate_harvesting(
df,
harvesting_sum,
df_thinning_weights_species = NULL,
df_final_cut_weights_species = NULL,
df_thinning_weights_plot = NULL,
df_final_cut_weights_plot = NULL,
harvesting_type = "random",
share_thinning = 0.8,
final_cut_weight = 1e+07,
thinning_small_weight = 1e+05,
harvest_sum_level = 1,
plot_upscale_type,
plot_upscale_factor,
forest_area_ha
)
Arguments
df a data frame with individual tree data, which include basal areas in the middle
of a simulation step, species name and code
harvesting_sum a value, or a vector of values defining the harvesting sums through the simulation
stage. If a single value, then it is used in all simulation steps. If a vector of
values, the first value is used in the first step, the second in the second step, etc.
df_thinning_weights_species
data frame with thinning weights for each species. The first column represents
species code, each next column consists of species-specific thinning weights
df_final_cut_weights_species
data frame with final cut weights for each species. The first column represents
species code, each next column consists of species-specific final cut weights
df_thinning_weights_plot
data frame with harvesting weights related to plot IDs, used for thinning
df_final_cut_weights_plot
data frame with harvesting weights related to plot IDs, used for final cut
harvesting_type
character, it could be ’random’, ’final_cut’, ’thinning’ or ’combined’. The latter
combines ’final_cut’ and ’thinning’ options, where the share of each is specified
with the argument ’share_thinning’
share_thinning numeric, a number between 0 and 1 that specifies the share of thinning in com-
parison to final_cut. Only used if harvesting_type is ’combined’
final_cut_weight
numeric value affecting the probability distribution of harvested trees. Greater
value increases the share of harvested trees having larger DBH. Default is 10.
thinning_small_weight
numeric value affecting the probability distribution of harvested trees. Greater
value increases the share of harvested trees having smaller DBH. Default is 1.
harvest_sum_level
integer with value 0 or 1 defining the level of specified harvesting sum: 0 for
plot level and 1 for regional level
plot_upscale_type
character defining the upscale method of plot level values. It can be ’area’ or
’upscale factor’. If ’area’, provide the forest area represented by all plots in
hectares (forest_area_ha argument). If ’factor’, provide the fixed factor to up-
scale the area of all plots. Please note: forest_area_ha/plot_upscale_factor =
number of unique plots. This argument is important when harvesting sum is
defined on regional level.
plot_upscale_factor
numeric value to be used to upscale area of each plot
forest_area_ha the total area of all forest which are subject of the simulation
Value
a data frame with updated status (code) of all individual trees based on the simulation of harvesting
Examples
library(MLFS)
data(data_v5)
data_v5 <- simulate_harvesting(df = data_v5,
harvesting_sum = 5500000,
harvesting_type = "combined",
share_thinning = 0.50,
harvest_sum_level = 1,
plot_upscale_type = "factor",
plot_upscale_factor = 1600,
final_cut_weight = 5,
thinning_small_weight = 1)
volume_form_factors volume_form_factors
Description
The calculation of individual tree volume using form factors, which can be defined per species, per
plot, or per species and per plot
Usage
volume_form_factors(
df,
form_factors = NULL,
form_factors_level = "species",
uniform_form_factor = 0.42
)
Arguments
df data frame with tree heights and basal areas for individual trees
form_factors data frame with form factors for species, plot or both
form_factors_level
character, the level of specified form factors. It can be ’species’, ’plot’ or
’species_plot’
uniform_form_factor
a uniform form factor to be applied to all trees. If specified, it overwrites the
argument ’form_factors’
Value
a data frame with calculated volume for all trees
Examples
library(MLFS)
data(data_v3)
data(form_factors)
data_v3 <- volume_form_factors(df = data_v3, form_factors = form_factors,
form_factors_level = "species_plot")
summary(data_v3)
volume_functions volume_functions
Description
The calculation of individual tree volume using the n-parameter volume functions for the MLFS
Usage
volume_functions(df, df_volumeF_parameters = NULL)
Arguments
df data frame with tree heights and basal areas for individual trees
df_volumeF_parameters
data frame with equations and parameters for n-parametric volume functions
Value
a data frame with calculated volume for all trees
Examples
library(MLFS)
data(data_v3)
data(df_volume_parameters)
data_v3 <- volume_functions(df = data_v3,
df_volumeF_parameters = df_volume_parameters)
volume_tariffs volume_tariffs
Description
One-parameter volume functions (tariffs) for the MLFS.
Usage
volume_tariffs(df, data_tariffs)
Arguments
df data frame with tree heights and basal areas for individual trees
data_tariffs data frame with plot- and species-specific parameters for the calculations of tree
volume
Value
a data frame with calculated volume for all trees
Examples
data(data_v3)
data(data_tariffs)
data_v3 <- volume_tariffs(df = data_v3, data_tariffs = data_tariffs) |
google.golang.org/grpc/gcp/observability | go | Go | None
Documentation
---
### Overview
* [Experimental](#hdr-Experimental)
Package observability implements the tracing, metrics, and logging data collection, and provides controlling knobs via a config file.
#### Experimental
Notice: This package is EXPERIMENTAL and may be changed or removed in a later release.
### Index
* [func End()](#End)
* [func Start(ctx context.Context) error](#Start)
### Constants
This section is empty.
### Variables
This section is empty.
### Functions
#### func [End](https://github.com/grpc/grpc-go/blob/gcp/observability/v1.0.0/gcp/observability/observability.go#L89)
```
func End()
```
End is the clean-up API for the gRPC Observability plugin. It is expected to be invoked in the main function of the application. The suggested usage is
"defer observability.End()". This function also flushes data upstream and cleans up resources.
Note: this method should only be invoked once.
#### func [Start](https://github.com/grpc/grpc-go/blob/gcp/observability/v1.0.0/gcp/observability/observability.go#L48)
```
func Start(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error)
```
Start is the opt-in API for the gRPC Observability plugin. This function should be invoked in the main function, before creating any gRPC clients or servers; otherwise, they might not be instrumented. At a high level, this module does the following:
* it loads observability config from environment;
* it registers default exporters if not disabled by the config;
* it sets up telemetry collectors (binary logging sink or StatsHandlers).
Note: this method should only be invoked once.
Note: handle the error
### Types
This section is empty. |
github.com/asim/go-micro/plugins/config/source/consul/v4 | go | Go | README
---
### Consul Source
The consul source reads config from consul key/values
#### Consul Format
The consul source expects keys under the default prefix `/micro/config`
Values are expected to be json
```
// set database
consul kv put micro/config/database '{"address": "10.0.0.1", "port": 3306}'
// set cache
consul kv put micro/config/cache '{"address": "10.0.0.2", "port": 6379}'
```
Keys are split on `/` so access becomes
```
conf.Get("micro", "config", "database")
```
#### New Source
Specify source with data
```
consulSource := consul.NewSource(
// optionally specify consul address; default to localhost:8500
consul.WithAddress("10.0.0.10:8500"),
// optionally specify prefix; defaults to /micro/config
consul.WithPrefix("/my/prefix"),
// optionally strip the provided prefix from the keys, defaults to false
consul.StripPrefix(true),
)
```
#### Load Source
Load the source into config
```
// Create new config
conf := config.NewConfig()
// Load consul source
conf.Load(consulSource)
```
Documentation
---
### Index
* [Variables](#pkg-variables)
* [func NewSource(opts ...source.Option) source.Source](#NewSource)
* [func StripPrefix(strip bool) source.Option](#StripPrefix)
* [func WithAddress(a string) source.Option](#WithAddress)
* [func WithConfig(c *api.Config) source.Option](#WithConfig)
* [func WithDatacenter(p string) source.Option](#WithDatacenter)
* [func WithPrefix(p string) source.Option](#WithPrefix)
* [func WithToken(p string) source.Option](#WithToken)
### Constants
This section is empty.
### Variables
```
var (
// DefaultPrefix is the prefix that consul keys will be assumed to have if you
// haven't specified one
DefaultPrefix = "/micro/config/"
)
```
### Functions
#### func [NewSource](https://github.com/asim/go-micro/blob/plugins/config/source/consul/v4.7.0/plugins/config/source/consul/consul.go#L76)
```
func NewSource(opts ...[source](/go-micro.dev/v4/config/source).[Option](/go-micro.dev/v4/config/source#Option)) [source](/go-micro.dev/v4/config/source).[Source](/go-micro.dev/v4/config/source#Source)
```
NewSource creates a new consul source
#### func [StripPrefix](https://github.com/asim/go-micro/blob/plugins/config/source/consul/v4.7.0/plugins/config/source/consul/options.go#L38)
```
func StripPrefix(strip [bool](/builtin#bool)) [source](/go-micro.dev/v4/config/source).[Option](/go-micro.dev/v4/config/source#Option)
```
StripPrefix indicates whether to remove the prefix from config entries, or leave it in place.
#### func [WithAddress](https://github.com/asim/go-micro/blob/plugins/config/source/consul/v4.7.0/plugins/config/source/consul/options.go#L18)
```
func WithAddress(a [string](/builtin#string)) [source](/go-micro.dev/v4/config/source).[Option](/go-micro.dev/v4/config/source#Option)
```
WithAddress sets the consul address
#### func [WithConfig](https://github.com/asim/go-micro/blob/plugins/config/source/consul/v4.7.0/plugins/config/source/consul/options.go#L68)
```
func WithConfig(c *[api](/github.com/hashicorp/consul/api).[Config](/github.com/hashicorp/consul/api#Config)) [source](/go-micro.dev/v4/config/source).[Option](/go-micro.dev/v4/config/source#Option)
```
WithConfig sets consul-specific options
#### func [WithDatacenter](https://github.com/asim/go-micro/blob/plugins/config/source/consul/v4.7.0/plugins/config/source/consul/options.go#L48)
```
func WithDatacenter(p [string](/builtin#string)) [source](/go-micro.dev/v4/config/source).[Option](/go-micro.dev/v4/config/source#Option)
```
#### func [WithPrefix](https://github.com/asim/go-micro/blob/plugins/config/source/consul/v4.7.0/plugins/config/source/consul/options.go#L28)
```
func WithPrefix(p [string](/builtin#string)) [source](/go-micro.dev/v4/config/source).[Option](/go-micro.dev/v4/config/source#Option)
```
WithPrefix sets the key prefix to use
#### func [WithToken](https://github.com/asim/go-micro/blob/plugins/config/source/consul/v4.7.0/plugins/config/source/consul/options.go#L58)
```
func WithToken(p [string](/builtin#string)) [source](/go-micro.dev/v4/config/source).[Option](/go-micro.dev/v4/config/source#Option)
```
WithToken sets the key token to use
### Types
This section is empty. |
@thi.ng/rstream-log | npm | JavaScript | This project is part of the
[@thi.ng/umbrella](https://github.com/thi-ng/umbrella/) monorepo and anti-framework.
* [About](#about)
* [Status](#status)
* [Support packages](#support-packages)
* [Related packages](#related-packages)
* [Installation](#installation)
* [Dependencies](#dependencies)
* [API](#api)
* [Authors](#authors)
* [License](#license)
[About](#about)
---
Structured, multilevel & hierarchical loggers based on [@thi.ng/rstream](https://github.com/thi-ng/umbrella/tree/develop/packages/rstream).
This package provides extensible, multi-level & multi-hierarchy logging infrastructure, with logged values transformable via
[@thi.ng/transducers](https://github.com/thi-ng/umbrella/tree/develop/packages/transducers).
Several built-in transformers are provided.
The `Logger` class provided by this package implements the
[@thi.ng/api](https://github.com/thi-ng/umbrella/tree/develop/packages/api)
`ILogger` interface and uses `LogLevel` enums to configure levels /
filtering.
[Status](#status)
---
**STABLE** - used in production
[Search or submit any issues for this package](https://github.com/thi-ng/umbrella/issues?q=%5Brstream-log%5D+in%3Atitle)
[Support packages](#support-packages)
---
* [@thi.ng/rstream-log-file](https://github.com/thi-ng/umbrella/tree/develop/packages/rstream-log-file) - File output handler for structured, multilevel & hierarchical loggers based on [@thi.ng/rstream](https://github.com/thi-ng/umbrella/tree/develop/packages/rstream)
[Related packages](#related-packages)
---
* [@thi.ng/logger](https://github.com/thi-ng/umbrella/tree/develop/packages/logger) - Types & basis infrastructure for arbitrary logging (w/ default impls)
[Installation](#installation)
---
```
yarn add @thi.ng/rstream-log
```
ES module import:
```
<script type="module" src="https://cdn.skypack.dev/@thi.ng/rstream-log"></script>
```
[Skypack documentation](https://docs.skypack.dev/)
For Node.js REPL:
```
const rstreamLog = await import("@thi.ng/rstream-log");
```
Package sizes (brotli'd, pre-treeshake): ESM: 765 bytes
[Dependencies](#dependencies)
---
* [@thi.ng/api](https://github.com/thi-ng/umbrella/tree/develop/packages/api)
* [@thi.ng/checks](https://github.com/thi-ng/umbrella/tree/develop/packages/checks)
* [@thi.ng/errors](https://github.com/thi-ng/umbrella/tree/develop/packages/errors)
* [@thi.ng/logger](https://github.com/thi-ng/umbrella/tree/develop/packages/logger)
* [@thi.ng/rstream](https://github.com/thi-ng/umbrella/tree/develop/packages/rstream)
* [@thi.ng/strings](https://github.com/thi-ng/umbrella/tree/develop/packages/strings)
* [@thi.ng/transducers](https://github.com/thi-ng/umbrella/tree/develop/packages/transducers)
[API](#api)
---
[Generated API docs](https://docs.thi.ng/umbrella/rstream-log/)
```
import { LogLevel } from "@thi.ng/api";
import * as log from "@thi.ng/rstream-log";
const logger = new log.Logger("main");
// or with a minimum level:
// const logger = new log.Logger("main", LogLevel.DEBUG);
// add console output w/ string formatter (a transducer)
logger.subscribe(log.writeConsole(), log.formatString());
logger.debug("hello world");
// [DEBUG] [main] 2018-01-20T09:04:05.198Z hello world
logger.warn("eek");
// [WARN] [main] 2018-01-20T09:04:16.913Z eek
// each logger instance is a rstream StreamMerge instance
// allowing to form logger hierarchies
const mod1 = new log.Logger("module-1", LogLevel.INFO);
// pipe mod1 into main logger
logger.add(mod1);
import { postWorker } from "@thi.ng/rstream";
// additionally send messages from this logger to worker
mod1.subscribe(postWorker("log-worker.js"));
mod1.info("hi from sub-module");
// only shown in console:
// [INFO] [module-1] 2018-01-20T09:05:21.198Z hi from sub-module
```
TODO
[Authors](#authors)
---
* [<NAME>](https://thi.ng)
If this project contributes to an academic publication, please cite it as:
```
@misc{thing-rstream-log,
title = "@thi.ng/rstream-log",
author = "<NAME>",
note = "https://thi.ng/rstream-log",
year = 2017
}
```
[License](#license)
---
© 2017 - 2023 <NAME> // Apache License 2.0
Readme
---
### Keywords
* datastructure
* logger
* multilevel
* multiplex
* pipeline
* rstream
* stream
* transducer
* typescript |
novapo-medusa-fulfillment-manual | npm | JavaScript | Manual Fulfillment
===
A minimal fulfillment provider that allows merchants to handle fulfillments manually.
[Medusa Website](https://medusajs.com) | [Medusa Repository](https://github.com/medusajs/medusa)
Features
---
* Provides a restriction-free fulfillment provider that can be used during checkout and while processing orders.
---
Prerequisites
---
* [Medusa backend](https://docs.medusajs.com/development/backend/install)
---
How to Install
---
1. Run the following command in the directory of the Medusa backend:
```
npm install medusa-fulfillment-manual
```
2. In `medusa-config.js` add the following at the end of the `plugins` array:
```
const plugins = [
// ...
`medusa-fulfillment-manual`
]
```
---
Test the Plugin
---
1. Run the following command in the directory of the Medusa backend to run the backend:
```
npm run start
```
2. Enable the fulfillment provider in the admin. You can refer to [this User Guide](https://docs.medusajs.com/user-guide/regions/providers) to learn how to do that. Alternatively, you can use the [Admin APIs](https://docs.medusajs.com/api/admin#tag/Region/operation/PostRegionsRegion).
3. Place an order using a storefront or the [Store APIs](https://docs.medusajs.com/api/store). You should be able to use the manual fulfillment provider during checkout.
Readme
---
### Keywords
* medusa-plugin
* medusa-plugin-fulfillment |
ascotraceR | cran | R | Package ‘ascotraceR’
October 12, 2022
Title Simulate the Spread of Ascochyta Blight in Chickpea
Version 0.0.1
Description A spatiotemporal model that simulates the spread of Ascochyta
blight in chickpea fields based on location-specific weather conditions.
This model is adapted from a model developed by Diggle et al. (2002)
<doi:10.1094/PHYTO.2002.92.10.1110> for simulating the spread of anthracnose
in a lupin field.
URL https://github.com/IhsanKhaliq/ascotraceR
BugReports https://github.com/IhsanKhaliq/ascotraceR/issues
License MIT + file LICENSE
Encoding UTF-8
RoxygenNote 7.1.2
Depends R (>= 3.5.0)
Imports data.table (>= 1.13.0), lubridate (>= 1.7.9.2), lutz (>=
0.3.1), circular (>= 0.4-93), purrr, stats, sf, terra
Suggests ggplot2, knitr, rmarkdown, spelling, testthat (>= 3.0.1)
VignetteBuilder knitr
Language en-US
X-schema.org-applicationCategory Tools
X-schema.org-keywords chickpea, botanical-epidemiology,
plant-disease-model, agricultural-modelling,
agricultural-modeling, crop-protection, agricultural-research,
model, modelling, modeling, Ascochyta-rabiei
NeedsCompilation no
Author <NAME> [aut] (<https://orcid.org/0000-0003-4171-0917>),
<NAME> [aut, trl, cre] (<https://orcid.org/0000-0003-4253-7167>),
<NAME> [aut, ccp] (<https://orcid.org/0000-0002-0061-8359>),
Grains Research and Development Corporation (GRDC) Project
USQ1903-003RTX [fnd, cph],
University of Southern Queensland [cph],
Western Australia Agriculture Authority (WAAA) [cph] (Supported the
development of ascotraceR through Adam H. Sparks' time.)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2021-12-20 15:50:05 UTC
R topics documented:
format_weather ... 2
summarise_trace ... 5
tidy_trace ... 7
trace_asco ... 8
format_weather Format weather data into an object suitable for use in ascotraceR
spore dispersal models
Description
Formats raw weather data into an object suitable for use in the trace_asco() function, ensuring
that the supplied weather data meet the requirements of the model.
Usage
format_weather(
x,
YYYY = NULL,
MM = NULL,
DD = NULL,
hh = NULL,
mm = NULL,
POSIXct_time = NULL,
time_zone = NULL,
temp,
rain,
ws,
wd,
wd_sd,
station,
lon = NULL,
lat = NULL,
r = NULL,
lonlat_file = NULL
)
Arguments
x A data.frame object of weather station data for formatting.
YYYY Column name or index in x that refers to the year when the weather was logged.
character.
MM Column name or index in x that refers to the month (numerical) when the
weather was logged. character.
DD Column name or index in x that refers to the day of month when the weather
was logged. character.
hh Column name or index in x that refers to the hour (24 hour) when the weather
was logged. character.
mm Column name or index in x that refers to the minute when the weather was
logged. character.
POSIXct_time Column name or index in x which contains a POSIXct formatted time. This can
be used instead of arguments YYYY, MM, DD, hh, mm. character.
time_zone Time zone (Olson time zone format) where the weather station is located. May
be in a column or supplied as a character string. Optional, see also r. character.
See details.
temp Column name or index in x that refers to temperature in degrees Celsius. character.
rain Column name or index in x that refers to rainfall in mm. character.
ws Column name or index in x that refers to wind speed in km / h. character.
wd Column name or index in x that refers to wind direction in degrees. character.
wd_sd Column name or index in x that refers to the standard deviation of wind direction.
character. This is only applicable if weather data are already summarised to
hourly increments. See details.
station Column name or index in x that refers to the weather station name or identifier.
character. See details.
lon Column name or index in x that refers to weather station’s longitude. character.
See details.
lat Column name or index in x that refers to weather station’s latitude. character.
See details.
r Spatial raster which is intended to be used with this weather data for use in the
ascotraceR model. Used to set time_zone if it is not supplied in data. character.
Optional, see also time_zone.
lonlat_file A file path to a CSV which included station name/id and longitude and latitude
coordinates if they are not supplied in the data. character. Optional, see also
lon and lat.
Details
time_zone All weather stations must fall within the same time zone. If the required stations are
located in differing time zones, separate ascotraceR.weather objects must be created for each
time zone. If a raster object, r, of previous crops is provided that spans time zones, an error will be
emitted.
wd_sd If weather data is provided in hourly increments, a column with the standard deviation of
the wind direction over the hour is required to be provided. If the weather data are sub-hourly, the
standard deviation will be calculated and returned automatically.
lon, lat and lonlat_file If x provides longitude and latitude values for station locations, these
may be specified in the lon and lat columns. If the coordinates are not relevant to the study
location NA can be specified and the function will drop these column variables. If these data are not
included, (NULL) a separate file may be provided that contains the longitude, latitude and matching
station name to provide station locations in the final ascotraceR.weather object that is created by
specifying the file path to a CSV file using lonlat_file.
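For illustration only (not from the package documentation): if the weather data lack coordinates, a
small station-location CSV can be written and passed via lonlat_file. The column names station,
lon and lat used below are assumptions; check the bundled stat_dat.csv for the exact names
format_weather() expects.
station_locations <- data.frame(
  station = "Newmarracarra",  # hypothetical station name matching the weather data
  lon = 115.0,                # hypothetical longitude in decimal degrees
  lat = -29.0                 # hypothetical latitude in decimal degrees
)
lonlat_path <- file.path(tempdir(), "station_locations.csv")
write.csv(station_locations, lonlat_path, row.names = FALSE)
# lonlat_path can then be supplied as format_weather(..., lonlat_file = lonlat_path)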
Value
An ascotraceR.weather object (an extension of data.table) containing the supplied weather
aggregated to each hour, in a suitable format for use with trace_asco(), with the following
columns:
times: Time in POSIXct format
rain: Rainfall in mm
ws: Wind speed in km / h
wd: Wind direction in compass degrees
wd_sd: Wind direction standard deviation in compass degrees
lon: Station longitude in decimal degrees
lat: Station latitude in decimal degrees
station: Unique station identifying name
YYYY: Year
MM: Month
DD: Day
hh: Hour
mm: Minute
Examples
# Weather data files for Newmarracarra for testing and examples have been
# included in ascotraceR. Both weather data files are of the same format,
# so they will be combined for formatting here.
Newmarracarra <- read.csv(
system.file("extdata",
"1998_Newmarracarra_weather_table.csv",
package = "ascotraceR")
)
station_data <- system.file("extdata",
"stat_dat.csv",
package = "ascotraceR")
weather <- format_weather(
x = Newmarracarra,
POSIXct_time = "Local.Time",
temp = "mean_daily_temp",
rain = "rain_mm",
ws = "ws",
wd = "wd",
wd_sd = "wd_sd",
station = "Location",
time_zone = "Australia/Perth",
lonlat_file = station_data
)
# Saving weather data and reimporting can lose the object class
# Reimported data can be quickly reformatted, adding the 'ascotraceR.weather' class
# with this same function
temp_file_path <- file.path(tempdir(), "weather_file.csv")
write.csv(weather, file = temp_file_path, row.names = FALSE)
weather_imported <- read.csv(temp_file_path)
weather <- format_weather(weather_imported,
time_zone = "Australia/Perth")
unlink(temp_file_path) # remove temporary weather file
summarise_trace Summarise a trace_asco output nested list as a single data.frame
object
Description
Creates a paddock-level summary data.table from the output of trace_asco() on a daily time-step
where each row represents one day for the entire paddock.
Usage
summarise_trace(trace)
summarize_trace(trace)
Arguments
trace a nested list output from trace_asco()
Value
A data.table summarising the model’s output for a paddock on a daily time-step, including the area
under the disease progress curve (AUDPC) at the paddock level for the simulation run, with the
following columns (see the plotting sketch after this list):
i_day: Model iteration day (day)
new_gp: New growing points on i_day (n)
susceptible_gp: Susceptible growing points on i_day (n)
exposed_gp: Exposed growing points on i_day (n)
i_date: Calendar date corresponding to model’s i_day
day: Julian day or numeric day of year (day)
cdd: Cumulative degree days (day)
cwh: Cumulative wet hours (h)
cr: Cumulative rainfall (mm)
gp_standard: standard growing points assuming growth is not impeded by infection on i_day (n)
AUDPC: Area under the disease progress curve (AUDPC) for the duration of the model’s run.
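As a quick plotting sketch (not part of the package documentation), the daily summary produced by
summarise_trace() can be visualised with ggplot2, which the package already suggests. The column
names i_date and exposed_gp are taken from the list above; summarised is assumed to be the object
created in the Examples below.
library(ggplot2)
ggplot(summarised, aes(x = i_date, y = exposed_gp)) +
  geom_line() +
  labs(x = "Date", y = "Exposed growing points (n)")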
See Also
trace_asco(), tidy_trace()
Examples
Newmarracarra <-
read.csv(system.file("extdata",
"1998_Newmarracarra_weather_table.csv", package = "ascotraceR"))
station_data <-
system.file("extdata", "stat_dat.csv", package = "ascotraceR")
weather_dat <- format_weather(
x = Newmarracarra,
POSIXct_time = "Local.Time",
temp = "mean_daily_temp",
ws = "ws",
wd_sd = "wd_sd",
rain = "rain_mm",
wd = "wd",
station = "Location",
time_zone = "Australia/Perth",
lonlat_file = station_data)
traced <- trace_asco(
weather = weather_dat,
paddock_length = 100,
paddock_width = 100,
initial_infection = "1998-06-10",
sowing_date = "1998-06-09",
harvest_date = "1998-06-30",
time_zone = "Australia/Perth",
primary_infection_foci = "centre")
summarised <- summarise_trace(traced)
Newmarracarra <-
read.csv(system.file("extdata",
"1998_Newmarracarra_weather_table.csv", package = "ascotraceR"))
station_data <-
system.file("extdata", "stat_dat.csv", package = "ascotraceR")
weather_dat <- format_weather(
x = Newmarracarra,
POSIXct_time = "Local.Time",
temp = "mean_daily_temp",
ws = "ws",
wd_sd = "wd_sd",
rain = "rain_mm",
wd = "wd",
station = "Location",
time_zone = "Australia/Perth",
lonlat_file = station_data)
traced <- trace_asco(
weather = weather_dat,
paddock_length = 100,
paddock_width = 100,
initial_infection = "1998-06-10",
sowing_date = as.POSIXct("1998-06-09"),
harvest_date = as.POSIXct("1998-06-09") + lubridate::ddays(100),
time_zone = "Australia/Perth",
primary_infection_foci = "centre")
summarised <- summarise_trace(traced)
tidy_trace Tidy up a trace_asco output nested list
Description
Creates a tidy data.table from the output of trace_asco().
Usage
tidy_trace(trace)
Arguments
trace a nested list output from trace_asco()
Value
A tidy data.table of trace_asco() output.
See Also
summarise_trace(), trace_asco()
Examples
Newmarracarra <-
read.csv(system.file("extdata",
"1998_Newmarracarra_weather_table.csv", package = "ascotraceR"))
station_data <-
system.file("extdata", "stat_dat.csv", package = "ascotraceR")
weather_dat <- format_weather(
x = Newmarracarra,
POSIXct_time = "Local.Time",
temp = "mean_daily_temp",
ws = "ws",
wd_sd = "wd_sd",
rain = "rain_mm",
wd = "wd",
station = "Location",
time_zone = "Australia/Perth",
lonlat_file = station_data)
traced <- trace_asco(
weather = weather_dat,
paddock_length = 20,
paddock_width = 20,
initial_infection = "1998-06-10",
sowing_date = as.POSIXct("1998-06-09"),
harvest_date = as.POSIXct("1998-06-09") + lubridate::ddays(100),
time_zone = "Australia/Perth",
primary_infection_foci = "centre")
tidied <- tidy_trace(traced)
# take a look at the infectious growing points on day 102
library(ggplot2)
ggplot(data = subset(tidied, i_day == 102),
aes(x = x, y = y, fill = infectious_gp)) +
geom_tile()
trace_asco Simulates the spread of Ascochyta blight in a chickpea field
Description
Simulate the spatiotemporal development of Ascochyta blight in a chickpea paddock over a growing
season. Both host and pathogen activities are simulated in one square metre cells.
Usage
trace_asco(
weather,
paddock_length,
paddock_width,
sowing_date,
harvest_date,
initial_infection,
seeding_rate = 40,
gp_rr = 0.0065,
max_gp_lim = 5000,
max_new_gp = 350,
latent_period_cdd = 150,
time_zone = "UTC",
primary_infection_foci = "random",
primary_inoculum_intensity = 1,
n_foci = 1,
spores_per_gp_per_wet_hour = 0.22,
splash_cauchy_parameter = 0.5,
wind_cauchy_multiplier = 0.015,
daily_rain_threshold = 2,
hourly_rain_threshold = 0.1,
susceptible_days = 2,
rainfall_multiplier = FALSE
)
Arguments
weather weather data for a representative chickpea paddock covering a complete chickpea
growing season, used to drive the model.
paddock_length length of a paddock in metres (y).
paddock_width width of a paddock in metres (x).
sowing_date a character string of a date value indicating sowing date of chickpea seed and
the start of the ‘ascotraceR’ model. Preferably in ISO8601 format
(YYYY-MM-DD), e.g. “2020-04-26”. Assumes there is sufficient soil moisture to induce
germination and start the crop growing season.
harvest_date a character string of a date value indicating harvest date of chickpea crop, which
is also the last day to run the ‘ascotraceR’ model. Preferably in ISO8601 format
(YYYY-MM-DD), e.g., “2020-04-26”.
initial_infection
a character string of a date value referring to the initial or primary infection on
seedlings, resulting in the production of infectious growing points.
seeding_rate indicate the rate at which chickpea seed is sown per square metre. Defaults to
40.
gp_rr refers to rate of increase in chickpea growing points per degree Celsius per day.
Defaults to 0.0065.
max_gp_lim maximum number of chickpea growing points (meristems) allowed per square
metre. Defaults to 5000.
max_new_gp Maximum number of new chickpea growing points (meristems), which develop
per day, per square metre. Defaults to 350.
latent_period_cdd
latent period in cumulative degree days (sum of daily temperature means) is
the period between infection and production of lesions on susceptible growing
points. Defaults to 150.
time_zone refers to time in Coordinated Universal Time (UTC).
primary_infection_foci
refers to the inoculated coordinates where the infection starts. Accepted inputs
are: centre/center or random (Default) or a data.frame with column names
‘x’, ‘y’ and ‘load’. The data.frame input informs the model of the specific grid
cell coordinates where the epidemic should begin. The ‘load’ column is optional
and can specify the primary_inoculum_intensity for each coordinate (see the
sketch after this arguments list).
primary_inoculum_intensity
Refers to the amount of primary infection, as lesions on chickpea plants, on the
date of initial_infection in the experiment. The sources of primary inoculum
can be infected seed, volunteer chickpea plants or infested stubble from the
previous seasons. Defaults to 1.
n_foci Quantifies the number of primary infection foci. The value is 1 when
primary_infection_foci = "centre" and can be greater than 1 if
primary_infection_foci = "random".
spores_per_gp_per_wet_hour
number of spores produced per infectious growing point during each wet hour.
Also known as the spore_rate. Value is dependent on the susceptibility of the
host genotype.
splash_cauchy_parameter
a parameter used in the Cauchy distribution which describes the median distance
spores travel due to rain splashes. Defaults to 0.5.
wind_cauchy_multiplier
a scaling parameter to estimate a Cauchy distribution which resembles the
possible distances a conidium travels due to wind driven rain. Defaults to 0.015.
daily_rain_threshold
minimum cumulative rainfall required in a day to allow hourly spore spread
events. See also hourly_rain_threshold. Defaults to 2.
hourly_rain_threshold
minimum rainfall in an hour to trigger a spore spread event in the same hour
(assuming daily_rain_threshold is already met). Defaults to 0.1.
susceptible_days
the number of days for which conidia remain viable on chickpea after dispersal.
Defaults to 2. Conidia remain viable on the plant for at least 48 hours after a
spread event.
rainfall_multiplier
logical values will turn on or off rainfall multiplier default method. The default
method increases the number of spores spread per growing point if the rainfall
in the spore spread event hour is greater than one. Numeric values will scale the
number of spores spread per growing point against the volume of rainfall in the
hour. Defaults to FALSE.
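To illustrate the data.frame form of primary_infection_foci described above, the following sketch
(an assumption-based illustration, not package documentation) starts the epidemic at two specific
one-metre cells; the coordinates and loads are hypothetical and must fall within the paddock
dimensions.
foci <- data.frame(
  x = c(10, 55),   # hypothetical x coordinates of the starting cells
  y = c(20, 60),   # hypothetical y coordinates of the starting cells
  load = c(5, 1)   # optional primary inoculum intensity for each focus
)
# traced <- trace_asco(weather = weather_dat, paddock_length = 100,
#                      paddock_width = 100, initial_infection = "1998-06-10",
#                      sowing_date = "1998-06-09", harvest_date = "1998-06-30",
#                      time_zone = "Australia/Perth",
#                      primary_infection_foci = foci)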
Value
a nested list object where each sub-list contains daily data for the day i_day (the model’s iteration
day) generated by the model, including (see the inspection sketch after this list):
* paddock, an ’x’ ’y’ data.table containing:
  * x, location of quadrat on x-axis in paddock
  * y, location of quadrat on y-axis in paddock
  * new_gp, new growing points produced in the last 24 hours
  * susceptible_gp, susceptible growing points in the last 24 hours
  * exposed_gp, exposed growing points in the last 24 hours
  * infectious_gp, infectious growing points in the last 24 hours
* i_day, model iteration day
* cumulative daily weather data, a data.table containing:
  * cdd, cumulative degree days
  * cwh, cumulative wet hours
  * cr, cumulative rainfall in mm
  * gp_standard, standard growing points assuming growth is not impeded by infection
* infected_coords, a data.table of only infectious growing point coordinates
* new_infections, a data.table of newly infected growing points
* exposed_gps, a data.table of exposed growing points in the latent period phase of infection
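A short inspection sketch (not from the package documentation), assuming the element names given in
the list above and the traced object built in the Examples below: extract the paddock data.table for
model day 23 and count cells holding at least one infectious growing point.
day23_paddock <- traced[[23]][["paddock"]]
sum(day23_paddock$infectious_gp > 0, na.rm = TRUE)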
See Also
tidy_trace(), summarise_trace()
Examples
# First weather data needs to be imported and formatted with `format_weather`
Newmarracarra <-
read.csv(system.file("extdata",
"1998_Newmarracarra_weather_table.csv", package = "ascotraceR"))
station_data <-
system.file("extdata", "stat_dat.csv", package = "ascotraceR")
weather_dat <- format_weather(
x = Newmarracarra,
POSIXct_time = "Local.Time",
temp = "mean_daily_temp",
ws = "ws",
wd_sd = "wd_sd",
rain = "rain_mm",
wd = "wd",
station = "Location",
time_zone = "Australia/Perth",
lonlat_file = station_data)
# Now the `trace_asco` function can be run to simulate disease spread
traced <- trace_asco(
weather = weather_dat,
paddock_length = 100,
paddock_width = 100,
initial_infection = "1998-06-10",
sowing_date = "1998-06-09",
harvest_date = "1998-06-30",
time_zone = "Australia/Perth",
gp_rr = 0.0065,
primary_inoculum_intensity = 40,
spores_per_gp_per_wet_hour = 0.22,
primary_infection_foci = "centre")
traced[[23]] # extracts the model output for day 23 |
PoissonMultinomial | cran | R | Package ‘PoissonMultinomial’
October 12, 2022
Type Package
Title The Poisson-Multinomial Distribution
Version 1.0
Date 2022-01-12
Maintainer <NAME> <<EMAIL>>
Description Implementation of the exact, normal approximation, and simulation-based methods
for computing the probability mass function (pmf) and cumulative distribution function (cdf)
of the Poisson-Multinomial distribution, together with a random number generator for the
distribution. The exact method is based on multi-dimensional fast Fourier transformation (FFT)
of the characteristic function of the Poisson-Multinomial distribution. The normal approximation
method uses a multivariate normal distribution to approximate the pmf of the distribution based
on central limit theorem. The simulation method is based on the law of large numbers.
Details about the methods are available in Lin, Wang, and Hong (2022) <arXiv:2201.04237>.
License GPL (>= 2)
Encoding UTF-8
Imports mvtnorm, Rcpp
LinkingTo Rcpp, RcppArmadillo
SystemRequirements fftw3(>=3.3)
RoxygenNote 7.1.1
NeedsCompilation yes
Author <NAME> [aut, cre],
<NAME> [aut, ctb],
<NAME> [aut, ctb],
<NAME> [aut, ctb]
Repository CRAN
Date/Publication 2022-01-13 20:22:42 UTC
R topics documented:
dpm... 2
ppm... 3
rpm... 4
dpmd Probability Mass Function of Poisson-Multinomial Distribution
Description
Computes the pmf of the Poisson-Multinomial distribution (PMD), specified by the success probability
matrix, using various methods. This function is capable of computing all probability mass points as
well as the pmf at certain point(s).
Usage
dpmd(pmat, xmat = NULL, method = "DFT-CF", B = 1000)
Arguments
pmat An n × m success probability matrix. Here, n is the number of independent
trials, and m is the number of categories. Each row of pmat describes the success
probability for the corresponding trial and it should add up to 1.
xmat A matrix with m columns that specifies where the pmf is to be computed. Each
row of the matrix should have the form x = (x1 , . . . , xm ), which is used for
computing P(X1 = x1 , . . . , Xm = xm ); the values of x should sum up to n. It
can be a vector of length m. If xmat is NULL, the pmf at all probability mass
points will be computed.
method Character string stands for the method selected by users to compute the pmf. The
method can only be one of the following three: "DFT-CF", "NA", "SIM".
B Number of repeats used in the simulation method. It is ignored for methods
other than the "SIM" method.
Details
Consider n independent trials where each trial leads to a success outcome for exactly one of the m
categories. Each category has varying success probabilities from different trials. The Poisson
multinomial distribution (PMD) gives the probability of any particular combination of numbers of
successes for the m categories. The success probabilities form an n × m matrix, which is called the
success probability matrix and denoted by pmat. For the methods we applied in dpmd, "DFT-CF" is
an exact method that computes all probability mass points of the distribution, using a
multi-dimensional FFT algorithm. When the dimension of pmat increases, the computation burden of
"DFT-CF" may challenge the capability of a computer because the method automatically computes all
probability mass points regardless of the input of xmat.
"SIM" is a simulation method that generates random samples from the distribution, and uses relative
frequency to estimate the pmf. Note that the accuracy and running time will be affected by user
choice of B. Usually B=1e5 or 1e6 will be accurate enough. Increasing B to larger than 1e8 will
heavily increase the computational burden of the computer.
"NA" is an approximation method that uses a multivariate normal distribution to approximate the
pmf at the points specified in xmat. This method requires an input of xmat.
Notice if xmat is not specified then it will be set as NULL. In this case, dpmd will compute the entire
pmf if the chosen method is "DFT-CF" or "SIM". If xmat is provided, only the pmf at the points
specified by xmat will be outputted.
Value
For a given xmat, dpmd returns the pmf at points specified by xmat.
If xmat is NULL, all probability mass points for the distribution specified by the success probability
matrix pmat will be computed, and the results are stored and outputted in a multi-dimensional array,
denoted by res. Note the dimension of pmat is n × m, thus res will be an (n + 1)^(m−1) array. Then
the value of the pmf P(X1 = x1 , . . . , Xm = xm ) can be extracted as res[x1 + 1, . . . , xm−1 + 1].
For example, for the pmat matrix in the example section, the array element res[1,2,1]=0.90 gives
the value of the pmf P(X1 = 0, X2 = 1, X3 = 0, X4 = 2) = 0.90.
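The indexing rule can be spelled out with the 3 × 4 matrix used in the Examples section below; this
is an illustrative sketch rather than package documentation. Here n = 3 and m = 4, so res is a
4 × 4 × 4 array.
library(PoissonMultinomial)
pp <- matrix(c(.1, .1, .1, .7, .1, .3, .3, .3, .5, .2, .1, .2),
             nrow = 3, byrow = TRUE)
res <- dpmd(pmat = pp)                  # full pmf as a (n + 1)^(m - 1) array
res[1, 2, 1]                            # P(X1 = 0, X2 = 1, X3 = 0, X4 = 2)
dpmd(pmat = pp, xmat = c(0, 1, 0, 2))   # should agree with the entry above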
References
<NAME>., <NAME>., and <NAME>. (2022). The Poisson Multinomial Distribution and Its Applications
in Voting Theory, Ecological Inference, and Machine Learning, arXiv:2201.04237.
Examples
pp <- matrix(c(.1, .1, .1, .7, .1, .3, .3, .3, .5, .2, .1, .2), nrow = 3, byrow = TRUE)
x <- c(0,0,1,2)
x1 <- matrix(c(0,0,1,2,2,1,0,0),nrow=2,byrow=TRUE)
dpmd(pmat = pp)
dpmd(pmat = pp, xmat = x1)
dpmd(pmat = pp, xmat = x)
dpmd(pmat = pp, xmat = x, method = "NA" )
dpmd(pmat = pp, xmat = x1, method = "NA" )
dpmd(pmat = pp, method = "SIM", B = 1e3)
dpmd(pmat = pp, xmat = x, method = "SIM", B = 1e3)
dpmd(pmat = pp, xmat = x1, method = "SIM", B = 1e3)
ppmd Cumulative Distribution Function of Poisson-Multinomial
Distribution
Description
Computes the cdf of Poisson-Multinomial distribution that is specified by the success probability
matrix, using various methods.
Usage
ppmd(pmat, xmat, method = "DFT-CF", B = 1000)
Arguments
pmat An n × m success probability matrix. Here, n is the number of independent
trials, and m is the number of categories. Each row of pmat describes the success
probability for the corresponding trial and it should add up to 1.
xmat A matrix with m columns. Each row has the form x = (x1 , . . . , xm ) for
computing the cdf at x, P(X1 ≤ x1 , . . . , Xm ≤ xm ). It can also be a vector with
length m.
method Character string stands for the method selected by users to compute the cdf. The
method can only be one of the following three: "DFT-CF", "NA", "SIM".
B Number of repeats used in the simulation method. It is ignored for methods
other than the "SIM" method.
Details
See Details in dpmd for the definition of the PMD, the introduction of notation, and the description
of the three methods ("DFT-CF", "NA", and "SIM"). ppmd computes the cdf by adding all probability
mass points within the hyper-dimensional region bounded above by x, as in the definition of the cdf.
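As a sanity check of this description (an illustrative sketch, not package documentation), the cdf
returned by ppmd should equal the sum of the exact pmf over all mass points dominated by x; with the
small matrix from the Examples this can be verified directly.
library(PoissonMultinomial)
pp <- matrix(c(.1, .1, .1, .7, .1, .3, .3, .3, .5, .2, .1, .2),
             nrow = 3, byrow = TRUE)
x <- c(3, 2, 1, 3)
cdf_direct <- ppmd(pmat = pp, xmat = x)
# enumerate all mass points (x1, ..., x4) with xj <= x[j] and x1 + ... + x4 = 3
pts <- expand.grid(x1 = 0:3, x2 = 0:3, x3 = 0:3)
pts$x4 <- 3 - rowSums(pts)
pts <- pts[pts$x4 >= 0 & pts$x1 <= x[1] & pts$x2 <= x[2] &
           pts$x3 <= x[3] & pts$x4 <= x[4], ]
cdf_by_sum <- sum(dpmd(pmat = pp, xmat = as.matrix(pts)))
all.equal(cdf_direct, cdf_by_sum)   # expected to agree up to numerical error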
Value
The value of cdf P(X1 ≤ x1 , . . . , Xm ≤ xm ) at x = (x1 , . . . , xm ).
Examples
pp <- matrix(c(.1, .1, .1, .7, .1, .3, .3, .3, .5, .2, .1, .2), nrow = 3, byrow = TRUE)
x <- c(3,2,1,3)
x1 <- matrix(c(0,0,1,2,2,1,0,0),nrow=2,byrow=TRUE)
ppmd(pmat = pp, xmat = x)
ppmd(pmat = pp, xmat = x1)
ppmd(pmat = pp, xmat = x, method = "NA")
ppmd(pmat = pp, xmat = x1, method = "NA")
ppmd(pmat = pp, xmat = x, method = "SIM", B = 1e3)
ppmd(pmat = pp, xmat = x1, method = "SIM", B = 1e3)
rpmd Poisson-Multinomial Distribution Random Number Generator
Description
Generates random samples from the PMD specified by the success probability matrix.
Usage
rpmd(pmat, s = 1)
Arguments
pmat An n × m success probability matrix, where n is the number of independent
trials and m is the number of categories. Each row of pmat contains the success
probabilities for the corresponding trial, and each row adds up to 1.
s The number of samples to be generated.
Value
An s×m matrix of samples, each row stands for one sample from the PMD with success probability
matrix pmat.
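To connect rpmd with the simulation ("SIM") method described for dpmd, a pmf value can be
approximated by the relative frequency of the corresponding outcome among random samples. This is an
illustrative sketch, not package documentation.
library(PoissonMultinomial)
pp <- matrix(c(.1, .1, .1, .7, .1, .3, .3, .3, .5, .2, .1, .2),
             nrow = 3, byrow = TRUE)
draws <- rpmd(pmat = pp, s = 10000)
target <- c(0, 0, 1, 2)
mean(apply(draws, 1, function(row) all(row == target)))  # simulated estimate
dpmd(pmat = pp, xmat = target)                           # exact value for comparison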
Examples
pp <- matrix(c(.1, .1, .1, .7, .1, .3, .3, .3, .5, .2, .1, .2), nrow = 3, byrow = TRUE)
rpmd(pmat = pp, s = 5) |
simhash | hex | Erlang | simhash v0.1.2
API Reference
===
* [Modules](#modules)
Modules
===
[Simhash](Simhash.html)
Provides simhash
simhash v0.1.2
Simhash
===
Provides simhash.
Examples
---
```
iex> Simhash.similarity("Universal Avenue", "Universe Avenue")
0.71875
iex> Simhash.similarity("hocus pocus", "pocus hocus")
0.8125
iex> Simhash.similarity("Sankt Eriksgatan 1", "S:t Eriksgatan 1")
0.8125
iex> Simhash.similarity("Purple flowers", "Green grass")
0.5625
iex> Simhash.similarity("Peanut butter", "Strawberry cocktail")
0.4375
```
By default trigrams (N-gram of size 3) are used as language features, but you can set a different N-gram size:
```
iex> Simhash.similarity("hocus pocus", "pocus hocus", 1)
1.0
iex> Simhash.similarity("Sankt Eriksgatan 1", "S:t Eriksgatan 1", 6)
0.859375
iex> Simhash.similarity("Purple flowers", "Green grass", 6)
0.546875
```
Algorithm description: <http://matpalm.com/resemblance/simhash/>
Summary
===
[Functions](#functions)
---
[feature_hashes(subject, n)](#feature_hashes/2)
Returns list of lists of bits of 64bit Siphashes for each shingle
[hamming_distance(left, right)](#hamming_distance/2)
Hamming distance between the left and right hash, given as lists of bits
[hash(subject, n \\ 3)](#hash/2)
Generates the hash for the given subject. The feature hashes are N-grams, where N is given by the parameter n
[hash_similarity(left, right)](#hash_similarity/2)
Calculate the similarity between the left and right hash, using Simhash
[n_grams(str, n \\ 3)](#n_grams/2)
Returns N-grams of input str
[similarity(left, right, n \\ 3)](#similarity/3)
Calculates the similarity between the left and right string, using Simhash
[siphash(str)](#siphash/1)
Returns the 64bit Siphash for input str as bitstring
[vector_addition(lists)](#vector_addition/1)
Reduce list of lists to list of integers, following vector addition
Functions
===
feature_hashes(subject, n)
Returns list of lists of bits of 64bit Siphashes for each shingle
hamming_distance(left, right)
Hamming distance between the left and right hash, given as lists of bits.
```
iex> Simhash.hamming_distance([1, 1, 0, 1, 0], [0, 1, 1, 1, 0])
2
```
hash(subject, n \\ 3)
Generates the hash for the given subject. The feature hashes are N-grams, where N is given by the parameter n.
hash_similarity(left, right)
Calculate the similarity between the left and right hash, using Simhash.
n_grams(str, n \\ 3)
Returns N-grams of input str.
```
iex> Simhash.n_grams("Universal")
["Uni", "niv", "ive", "ver", "ers", "rsa", "sal"]
```
[More about N-gram](https://en.wikipedia.org/wiki/N-gram#Applications_and_considerations)
similarity(left, right, n \\ 3)
Calculates the similarity between the left and right string, using Simhash.
siphash(str)
Returns the 64bit Siphash for input str as bitstring.
```
iex> Simhash.siphash("abc")
<<249, 236, 145, 130, 66, 18, 3, 247>>
iex> byte_size(Simhash.siphash("abc"))
8
```
vector_addition(lists)
Reduce list of lists to list of integers, following vector addition.
Example:
```
iex> Simhash.vector_addition([[1, 3, 2, 1], [0, 1, -1, 2], [2, 0, 0, 0]])
[3, 4, 1, 3]
``` |
@solana/rpc-graphql | npm | JavaScript | [@solana/rpc-graphql](#solanarpc-graphql)
===
This package defines a GraphQL client resolver built on top of the
[Solana JSON-RPC](https://docs.solana.com/api/http).
[Solana & GraphQL](#solana--graphql)
===
GraphQL is a query language for your API, and a server-side runtime for executing queries using a type system you define for your data.
[**GraphQL**](https://graphql.org/learn/)
This library attempts to define a type system for Solana. With the proper type system, developers can take advantage of the best features of GraphQL to make interacting with Solana via RPC smoother, faster, and more reliable,
while involving less code.
[Design](#design)
---
⚠️ **In Development:** The API's query/schema structure may change as the API matures.
With the exception of many familiar RPC methods for obtaining information about a validator or cluster, the majority of Solana data required by various applications revolves around two components:
* Accounts
* Blocks
One can add a third component found within a block:
* Transactions
With these three main components in mind, consider a GraphQL type system that revolves around accounts, blocks, and transactions.
### [Types](#types)
Coming soon!
#### [Account](#account)
Coming soon!
### [Queries](#queries)
Coming soon!
#### [Accounts](#accounts)
Coming soon!
#### [Program Accounts](#program-accounts)
Coming soon!
#### [Blocks](#blocks)
Coming soon!
#### [Transactions](#transactions)
Coming soon!
Readme
---
### Keywords
* blockchain
* solana
* web3 |
sozu-command-lib | rust | Rust | Crate sozu_command_lib
===
Tools and types used to communicate with Sōzu.
Modules
---
* buffer: Custom buffer used for parsing within the Sōzu codebase.
* certificate: TLS certificates
* channel: channels used for communication between main process and workers
* config: parse TOML config and generate requests from it
* logging: custom made logging macros
* parser: parse Requests
* proto: Contains Rust types generated by `prost`
using the protobuf definition in `command.proto`.
* ready: File descriptor readiness
* request: Helper functions around types received by Sōzu
* response: Helper functions around types sent by Sōzu
* scm_socket: sockets used to pass file descriptors
* state: A representation of Sōzu’s state
* writer: A writer used for logging
Macros
---
* debug: log a debug with Sōzu’s custom log stack
* error: log an error with Sōzu’s custom log stack
* error_access: log a failure concerning an HTTP or TCP request
* fixme: write a log with a “FIXME” prefix on an info level
* info: log an info with Sōzu’s custom log stack
* info_access: log the success of an HTTP or TCP request
* log: write a log with the custom logger (used in other macros, do not use directly)
* log_access: log a failure concerning an HTTP or TCP request
* setup_test_logger: start a logger used in test environment
* trace: log a trace with Sōzu’s custom log stack
* warn: log a warning with Sōzu’s custom log stack
Enums
---
* ObjectKind: Used only when returning errors
Module sozu_command_lib::buffer
===
Custom buffer used for parsing within the Sōzu codebase.
Modules
---
* fixed
* growable
Module sozu_command_lib::certificate
===
TLS certificates
Structs
---
* Fingerprint: A TLS certificate, encoded in bytes
Enums
---
* CertificateError
Functions
---
* calculate_fingerprintCompute fingerprint from a certificate that is encoded in pem format
* calculate_fingerprint_from_derCompute fingerprint from decoded pem as binary value
* get_cn_and_san_attributesRetrieve from the `Pem` structure the common name (a.k.a `CN`) and the subject alternate names (a.k.a `SAN`)
* parseparse a pem file encoded as binary and convert it into the right structure
(a.k.a `Pem`)
* split_certificate_chain
Module sozu_command_lib::channel
===
channels used for communication between main process and workers
Structs
---
* ChannelA wrapper around unix socket using the mio crate.
Used in pairs to communicate, in a blocking or non-blocking way.
Enums
---
* ChannelError
Module sozu_command_lib::config
===
parse TOML config and generate requests from it
Sōzu’s configuration
---
This module is responsible for parsing the `config.toml` provided by the flag `--config`
when starting Sōzu.
Here is the workflow for generating a working config:
```
config.toml -> FileConfig -> ConfigBuilder -> Config
```
`config.toml` is parsed to `FileConfig`, a structure that itself contains a lot of substructures whose names start with `File-` and end with `-Config`, like `FileHttpFrontendConfig` for instance.
The instance of `FileConfig` is then passed to a `ConfigBuilder` that populates a final `Config`
with listeners and clusters.
To illustrate:
```
use sozu_command_lib::config::{FileConfig, ConfigBuilder};
let file_config = FileConfig::load_from_path("../config.toml")
.expect("Could not load config.toml");
let config = ConfigBuilder::new(file_config, "../assets/config.toml")
.into_config()
.expect("Could not build config");
```
Note that the path to `config.toml` is used twice: the first time, to parse the file,
the second time, to keep the path in the config for later use.
However, there is a simpler way that combines all this:
```
use sozu_command_lib::config::Config;
let config = Config::load_from_path("../assets/config.toml")
.expect("Could not build config from the path");
```
### How values are chosen
Values are chosen in this order of priority:
1. values defined in a section of the TOML file, for instance, timeouts for a specific listener 2. values defined globally in the TOML file, like timeouts or buffer size 3. if a variable has not been set in the TOML file, it will be set to a default defined here
Structs
---
* BackendConfig
* ConfigSōzu configuration, populated with clusters and listeners.
* ConfigBuilderA builder that converts FileConfig to Config
* FileClusterConfig
* FileClusterFrontendConfig
* FileConfigParsed from the TOML config provided by the user.
* HttpClusterConfig
* HttpFrontendConfig
* ListenerBuilderAn HTTP, HTTPS or TCP listener as parsed from the `Listeners` section in the toml
* MetricsConfig
* TcpClusterConfig
* TcpFrontendConfig
Enums
---
* ClusterConfig
* ConfigError
* FileClusterProtocolConfig
* IncompatibilityKind
* ListenerProtocol
* MissingKind
* PathRuleType
Constants
---
* DEFAULT_ACCEPT_QUEUE_TIMEOUTtimeout to accept connection events in the accept queue (60 seconds)
* DEFAULT_AUTOMATIC_STATE_SAVEwether to save the state automatically (false)
* DEFAULT_BACK_TIMEOUTmaximum time of inactivity for a backend socket (30 seconds)
* DEFAULT_BUFFER_SIZEsize of the buffers, in bytes (16 KB)
* DEFAULT_CIPHER_SUITES
* DEFAULT_COMMAND_BUFFER_SIZEsize of the buffer for the channels, in bytes. Must be bigger than the size of the data received. (1 MB)
* DEFAULT_CONNECT_TIMEOUTmaximum time to connect to a backend server (3 seconds)
* DEFAULT_FRONT_TIMEOUTmaximum time of inactivity for a frontend socket (60 seconds)
* DEFAULT_GROUPS_LIST
* DEFAULT_MAX_BUFFERSmaximum number of buffers (1 000)
* DEFAULT_MAX_COMMAND_BUFFER_SIZEmaximum size of the buffer for the channels, in bytes. (2 MB)
* DEFAULT_MAX_CONNECTIONSmaximum number of simultaneous connections (10 000)
* DEFAULT_REQUEST_TIMEOUTmaximum time to receive a request since the connection started (10 seconds)
* DEFAULT_RUSTLS_CIPHER_LISTprovides all supported cipher suites exported by Rustls TLS provider as it support only strongly secure ones.
* DEFAULT_SIGNATURE_ALGORITHMS
* DEFAULT_STICKY_NAMEa name applied to sticky sessions (“SOZUBALANCEID”)
* DEFAULT_WORKER_AUTOMATIC_RESTARTwether a worker is automatically restarted when it crashes (true)
* DEFAULT_WORKER_COUNTnumber of workers, i.e. Sōzu processes that scale horizontally (2)
* DEFAULT_ZOMBIE_CHECK_INTERVALInterval between checking for zombie sessions, (30 minutes)
Functions
---
* default_sticky_name
Module sozu_command_lib::logging
===
custom made logging macros
Structs
---
* CompatLogger
* LogDirective
* Logger
* MetadataMetadata about a log message.
* Rfc3339Time
Enums
---
* LogLevel
* LogLevelFilter
* LoggerBackend
Constants
---
* LOGGER
* TAG
Statics
---
* COMPAT_LOGGER
Functions
---
* now
* parse_logging_spec
* target_to_backend
Module sozu_command_lib::parser
===
parse Requests
Structs
---
* ParseError
Functions
---
* parse_one_requestthis is to propagate the serde_json error
* parse_several_requests
Module sozu_command_lib::proto
===
Contains Rust types generated by `prost`
using the protobuf definition in `command.proto`.
The two main types are `Request` and `Response`.
Because these types were originally defined in Rust, and later adapted for protobuf,
the Rust translation of the protobuf definition looks a bit weird. Instead of having
`Request` being a simple enum with Payload like this:
```
pub enum Request {
SaveState(String),
LoadState(String),
// ...
}
```
It is defined with `oneof` in protobuf. and looks like this in Rust:
```
pub struct Request {
pub request_type: ::core::option::Option<request::RequestType>,
}
pub mod request {
pub enum RequestType {
#[prost(string, tag = "1")]
SaveState(::prost::alloc::string::String),
/// load a state file, given its path
#[prost(string, tag = "2")]
LoadState(::prost::alloc::string::String),
}
}
```
but can be instantiated this way:
```
let load_state_request: Request = RequestType::LoadState(path).into();
```
A bit cumbersome, but it is the only way to benefit from protobuf in Rust.
Modules
---
* commandContains all types received by and sent from Sōzu
* displayImplementation of fmt::Display for the protobuf types, used in the CLI
Module sozu_command_lib::ready
===
File descriptor readiness
Structs
---
* ReadyBinary representation of a file descriptor readiness (obtained through epoll)
Module sozu_command_lib::request
===
Helper functions around types received by Sōzu
Structs
---
* ParseErrorLoadBalancing
* ProxyDestinations
* WorkerRequestThis is sent only from Sōzu to Sōzu
Enums
---
* RequestError
Module sozu_command_lib::response
===
Helper functions around types sent by Sōzu
Structs
---
* BackendA backend, as used *within* Sōzu
* HttpFrontendAn HTTP or HTTPS frontend, as used *within* Sōzu
* TcpFrontendA TCP frontend, as used *within* Sōzu
* WorkerResponseA response as sent by a worker
Functions
---
* is_default_path_rule
Type Aliases
---
* MessageId
Module sozu_command_lib::scm_socket
===
sockets used to pass file descriptors
Structs
---
* ListenersSocket addresses and file descriptors needed by a Proxy to start listening
* ScmSocketA unix socket specialized for file descriptor passing
Enums
---
* ScmSocketError
Constants
---
* MAX_BYTES_OUT
* MAX_FDS_OUT
Module sozu_command_lib::state
===
A representation of Sōzu’s state
Structs
---
* ConfigStateThe `ConfigState` represents the state of Sōzu’s business, which is to forward traffic from frontends to backends. Hence, it contains all details about:
Enums
---
* StateError
Type Aliases
---
* ClusterIdTo use throughout Sōzu
Module sozu_command_lib::writer
===
A writer used for logging
Structs
---
* MultiLineWriterA multiline writer used for logging
Macro sozu_command_lib::debug
===
```
macro_rules! debug {
($format:expr, $($arg:tt)*) => { ... };
($format:expr) => { ... };
}
```
log a debug with Sōzu’s custom log stack
Macro sozu_command_lib::error
===
```
macro_rules! error {
($format:expr, $($arg:tt)*) => { ... };
($format:expr) => { ... };
}
```
log an error with Sōzu’s custom log stack
Macro sozu_command_lib::error_access
===
```
macro_rules! error_access {
($format:expr, $($arg:tt)*) => { ... };
($format:expr) => { ... };
}
```
log a failure concerning an HTTP or TCP request
Macro sozu_command_lib::fixme
===
```
macro_rules! fixme {
() => { ... };
($($arg:tt)*) => { ... };
}
```
write a log with a “FIXME” prefix on an info level
Macro sozu_command_lib::info
===
```
macro_rules! info {
($format:expr, $($arg:tt)*) => { ... };
($format:expr) => { ... };
}
```
log an info with Sōzu’s custom log stack
Macro sozu_command_lib::info_access
===
```
macro_rules! info_access {
($format:expr, $($arg:tt)*) => { ... };
($format:expr) => { ... };
}
```
log the success of an HTTP or TCP request
Macro sozu_command_lib::log
===
```
macro_rules! log {
(__inner__ $target:expr, $lvl:expr, $format:expr, $level_tag:expr,
[$($transformed_args:ident),*], [$first_ident:ident $(, $other_idents:ident)*], $first_arg:expr $(, $other_args:expr)*) => { ... };
(__inner__ $target:expr, $lvl:expr, $format:expr, $level_tag:expr,
[$($final_args:ident),*], [$($idents:ident),*]) => { ... };
($lvl:expr, $format:expr, $level_tag:expr $(, $args:expr)+) => { ... };
($lvl:expr, $format:expr, $level_tag:expr) => { ... };
}
```
write a log with the custom logger (used in other macros, do not use directly)
Macro sozu_command_lib::log_access
===
```
macro_rules! log_access {
(__inner__ $target:expr, $lvl:expr, $format:expr, $level_tag:expr,
[$($transformed_args:ident),*], [$first_ident:ident $(, $other_idents:ident)*], $first_arg:expr $(, $other_args:expr)*) => { ... };
(__inner__ $target:expr, $lvl:expr, $format:expr, $level_tag:expr,
[$($final_args:ident),*], [$($idents:ident),*]) => { ... };
($lvl:expr, $format:expr, $level_tag:expr $(, $args:expr)+) => { ... };
($lvl:expr, $format:expr, $level_tag:expr) => { ... };
}
```
log a failure concerning an HTTP or TCP request
Macro sozu_command_lib::setup_test_logger
===
```
macro_rules! setup_test_logger {
() => { ... };
}
```
start a logger used in test environment
Macro sozu_command_lib::trace
===
```
macro_rules! trace {
($format:expr, $($arg:tt)*) => { ... };
($format:expr) => { ... };
}
```
log a trace with Sōzu’s custom log stack
Macro sozu_command_lib::warn
===
```
macro_rules! warn {
($format:expr, $($arg:tt)*) => { ... };
($format:expr) => { ... };
}
```
log a warning with Sōzu’s custom log stack
Enum sozu_command_lib::ObjectKind
===
```
pub enum ObjectKind {
Backend,
Certificate,
Cluster,
HttpFrontend,
HttpsFrontend,
HttpListener,
HttpsListener,
Listener,
TcpCluster,
TcpListener,
TcpFrontend,
}
```
Used only when returning errors
Variants
---
### Backend
### Certificate
### Cluster
### HttpFrontend
### HttpsFrontend
### HttpListener
### HttpsListener
### Listener
### TcpCluster
### TcpListener
### TcpFrontend
Trait Implementations
---
### impl Debug for ObjectKind
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for ObjectKind
### impl Send for ObjectKind
### impl Sync for ObjectKind
### impl Unpin for ObjectKind
### impl UnwindSafe for ObjectKind
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<'a, T, E> AsTaggedExplicit<'a, E> for T where T: 'a
#### fn explicit(self, class: Class, tag: u32) -> TaggedParser<'a, Explicit, Self, E>
### impl<'a, T, E> AsTaggedImplicit<'a, E> for T where T: 'a
#### fn implicit(self, class: Class, constructed: bool, tag: u32) -> TaggedParser<'a, Implicit, Self, E>
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for T where V: MultiLane<T>
#### fn vzip(self) -> V
ABCalendarPicker | cocoapods | Objective-C | abcalendarpicker
===
Overview
---
The ABCalendarPicker is a versatile and flexible calendar picker component for iOS applications. It allows users to easily select dates and navigate through months with a smooth and intuitive interface.
---
Installation
---
### Using Cocoapods
1. Add the following line to your Podfile:
```
pod 'ABCalendarPicker'
```
3. Run the command:
```
pod install
```
5. Import the ABCalendarPicker module in your Swift or Objective-C file:
```
import ABCalendarPicker
```
### Manual Installation
1. Download the latest version of ABCalendarPicker from the [GitHub repository](https://github.com/example/abcalendarpicker).
2. Drag and drop the `ABCalendarPicker.xcodeproj` into your Xcode project.
3. In your project’s Build Phases, add `ABCalendarPicker.framework` under “Link Binary With Libraries”.
---
Getting Started
---
To start using the ABCalendarPicker, follow these steps:
1. Create an instance of `ABCalendarPickerViewController`.
```
let calendarPicker = ABCalendarPickerViewController()
```
3. Configure any desired customization options:
```
calendarPicker.calendarDelegate = self
calendarPicker.selectedDates = [Date()]
// Additional customization options...
```
5. Present the calendar picker:
```
present(calendarPicker, animated: true, completion: nil)
```
---
Delegate Methods
---
The ABCalendarPicker provides delegate methods to handle user interactions and retrieve selected dates.
### `func calendarPickerDidCancel(_ calendarPicker: ABCalendarPickerViewController)`
Called when the user cancels the calendar picker.
### `func calendarPicker(_ calendarPicker: ABCalendarPickerViewController, didSelectDates dates: [Date])`
Called when the user selects one or more dates.
### Example Implementation
```
extension YourViewController: ABCalendarPickerDelegate {
func calendarPickerDidCancel(_ calendarPicker: ABCalendarPickerViewController) {
dismiss(animated: true, completion: nil)
}
func calendarPicker(_ calendarPicker: ABCalendarPickerViewController, didSelectDates dates: [Date]) {
// Handle selected dates
}
}
```
---
Customization
---
The ABCalendarPicker allows for various customization options to suit your application’s style.
### Properties
Customize the behavior and appearance using the following properties:
* `selectedDates: [Date]` – Sets the initially selected dates.
* `minimumDate: Date?` – Sets the minimum selectable date.
* `maximumDate: Date?` – Sets the maximum selectable date.
* `showsCancelButton: Bool` – Determines whether to show the cancel button.
* `cancelButtonTitle: String?` – Sets the title text for the cancel button.
### Delegate Methods
Use these delegate methods to further customize or handle events:
* `func calendarPicker(_ calendarPicker: ABCalendarPickerViewController, backgroundColorForDate date: Date) -> UIColor?` – Called when determining the background color for a specific date.
* `func calendarPicker(_ calendarPicker: ABCalendarPickerViewController, didSelectDate date: Date)` – Called when the user selects a single date only.
* `func calendarPicker(_ calendarPicker: ABCalendarPickerViewController, shouldEnableDate date: Date) -> Bool` – Called when determining whether a specific date should be enabled for selection.
---
Support
---
For bug reports, feature requests, or any other support, please visit the [GitHub repository](https://github.com/example/abcalendarpicker/issues). |
blockrand | cran | R | Package ‘blockrand’
October 12, 2022
Type Package
Title Randomization for Block Random Clinical Trials
Version 1.5
Date 2020-04-01
Author <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Description Create randomizations for block random clinical trials. Can also
produce a pdf file of randomization cards.
License GPL-2
Repository CRAN
Repository/R-Forge/Project blockrand
Repository/R-Forge/Revision 14
Repository/R-Forge/DateTimeStamp 2020-04-01 21:50:26
Date/Publication 2020-04-06 10:02:15 UTC
NeedsCompilation no
R topics documented:
blockrand-packag... 2
blockran... 3
plotblockran... 5
blockrand-package Generate block randomizations for clinical trials.
Description
This package will create a block randomization for clinical trials and help with creating the
randomization cards that the study coordinator can use to assign new subjects to their treatment.
Details
Package: blockrand
Type: Package
Version: 1.1
Date: 2008-02-02
License: Gnu Public License Ver. 2
Copyright: <NAME> and Intermountain Healthcare
Currently there are 2 main functions and an optional list. The blockrand function is used to create
a data frame with the block sequential treatment randomizations. When doing a stratified study you
should run blockrand once for each stratum then optionally combine the different data frames with
rbind. Save the data frame(s) and when the study is completed the data can be added to the data
frame for analysis.
The plotblockrand function is used to create the randomization cards to be used when assigning
subjects to treatment. The cards are printed out and sealed in envelopes, then when a new subject is
enrolled the next envelope is opened and the subject assigned to the corresponding treatment.
You can optionally create a list named blockrand.text with optional elements top, middle, and
bottom. If this list exists and you run plotblockrand without specifying these arguments, then the
element of the blockrand.text list will be used instead.
Author(s)
<NAME> <<EMAIL>>
Maintainer: <NAME> <<EMAIL>>
Examples
## stratified by sex, 100 in stratum, 2 treatments
male <- blockrand(n=100, id.prefix='M', block.prefix='M',stratum='Male')
female <- blockrand(n=100, id.prefix='F', block.prefix='F',stratum='Female')
my.study <- rbind(male,female)
## Not run:
plotblockrand(my.study,'mystudy.pdf',
top=list(text=c('My Study','Patient: %ID%','Treatment: %TREAT%'),
col=c('black','black','red'),font=c(1,1,4)),
middle=list(text=c("My Study","Sex: %STRAT%","Patient: %ID%"),
col=c('black','blue','green'),font=c(1,2,3)),
bottom="Call 123-4567 to report patient entry",
cut.marks=TRUE)
### or
blockrand.text <- list(
top=list(text=c('My Study','Patient: %ID%','Treatment: %TREAT%'),
col=c('black','black','red'),font=c(1,1,4)),
middle=list(text=c("My Study","Sex: %STRAT%","Patient: %ID%"),
col=c('black','blue','green'),font=c(1,2,3)),
bottom="Call 123-4567 to report patient entry")
plotblockrand(my.study, 'mystudy.pdf', cut.marks=TRUE)
## End(Not run)
blockrand Generate a block randomization for a clinical trial
Description
This function creates random assignments for clinical trials (or any experiment where the subjects
come one at a time). The randomization is done within blocks so that the balance between
treatments stays close to equal throughout the trial.
Usage
blockrand(n, num.levels = 2, levels = LETTERS[seq(length = num.levels)],
id.prefix, stratum, block.sizes = 1:4, block.prefix,
uneq.beg=FALSE, uneq.mid=FALSE, uneq.min=0, uneq.maxit=10)
Arguments
n The minimum number of subjects to randomize
num.levels The number of treatments or factor levels to randomize between
levels A character vector of labels for the different treatments or factor levels
id.prefix Optional integer or character string to prefix the id column values
stratum Optional character string specifying the stratum being generated
block.sizes Vector of integers specifying the sizes of blocks to use
block.prefix Optional integer or character string to prefix the block.id column
uneq.beg Should an unequal block be used at the beginning of the randomization
uneq.mid Should an unequal block be used in the middle
uneq.min what is the minimum difference between the most and least common levels in
an unequal block
uneq.maxit maximum number of tries to get uneq.min difference
Details
This function will randomize subjects to the specified treatments within sequential blocks. The total
number of randomizations may end up being more than n.
The final block sizes will actually be the product of num.levels and block.sizes (e.g. if there
are 2 levels and the default block sizes are used (1:4) then the actual block sizes will be randomly
chosen from the set (2,4,6,8)).
If id.prefix is not specified then the id column of the output will be a sequence of integers from 1
to the number of rows. If id.prefix is numeric then the id column of the output will be a sequence
of integers starting at the value of id.prefix. If id.prefix is a character string then the numbers
will be converted to strings (zero padded) and have the prefix prepended.
The block.prefix will be treated in the same way as the id.prefix for identifying the blocks.
The one difference being that the block.id will be converted to a factor in the final data frame.
If uneq.beg and/or uneq.mid are true then an additional block will be used at the beginning and/or
inserted in the middle that is not balanced, this means that the final totals in each group may not be
exactly equal (but still similar). This makes it more difficult to anticipate future assignments as the
numbers will not return to equality at the end of each block.
For stratified studies the blockrand function should be run once for each stratum, using the
stratum argument to specify the current stratum (and using id.prefix and block.prefix to keep
the id’s unique). The separate data frames can then be combined using rbind if desired.
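For example (a sketch built from the arguments described above, not taken from the package’s own
examples), unequal blocks can be added at the beginning and in the middle of the sequence to make
upcoming assignments harder to guess:
library(blockrand)
set.seed(42)   # hypothetical seed, only to make the sketch reproducible
uneq <- blockrand(n = 60, num.levels = 2, levels = c("Active", "Placebo"),
                  id.prefix = "S", block.prefix = "B",
                  uneq.beg = TRUE, uneq.mid = TRUE, uneq.min = 2)
table(uneq$treatment)   # group totals stay similar but need not be exactly equal
head(uneq)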
Value
A data frame with the following columns:
id: A unique identifier (number or character string) for each row
stratum: Optional, if the stratum argument is specified it will be replicated in this column
block.id: An identifier for each block of the randomization, this column will be a factor
block.size The size of each block
treatment The treatment assignment for each subject
Author(s)
<NAME> <<EMAIL>>
References
<NAME>. and <NAME>. (2002): Unequal group sizes in randomized trials: guarding against
guessing, The Lancet, 359, pp 966–970.
See Also
plotblockrand, sample, rbind
Examples
## stratified by sex, 100 in stratum, 2 treatments
male <- blockrand(n=100, id.prefix='M', block.prefix='M',stratum='Male')
female <- blockrand(n=100, id.prefix='F', block.prefix='F',stratum='Female')
my.study <- rbind(male,female)
## Not run:
plotblockrand(my.study,'mystudy.pdf',
top=list(text=c('My Study','Patient: %ID%','Treatment: %TREAT%'),
col=c('black','black','red'),font=c(1,1,4)),
middle=list(text=c("My Study","Sex: %STRAT%","Patient: %ID%"),
col=c('black','blue','green'),font=c(1,2,3)),
bottom="Call 123-4567 to report patient entry",
cut.marks=TRUE)
## End(Not run)
plotblockrand Create a pdf file of randomization cards
Description
Creates a pdf file of randomization cards based on the output from blockrand. This file can then
be printed and the cards put into envelopes for use by a study coordinator for assigning subjects to
treatment.
Usage
plotblockrand(x, file = "blockrand.pdf", top, middle, bottom,
blockrand.text, width = 11, height = 8.5, par.args, id.col = "id",
stratum.col = "stratum",
treat.col = "treatment", cut.marks = FALSE, top.ho, top.vo, middle.ho,
middle.vo, bottom.ho, bottom.vo, nrow=2, ncol=2, ...)
Arguments
x A data frame, usually the output from blockrand
file The name of the pdf file to create (include the .pdf in the name)
top A character vector or list (see details) with the template to be printed at the top
of each card
middle A character vector or list (see details) with the template to be printed in the
middle of each card (positioned to be visible through the window of an envelope)
bottom A single character string to be printed at the bottom of each card
blockrand.text A list with default values to use for other options
width Passed to pdf
height Passed to pdf
par.args A list containing additional arguments to par before plotting the text
id.col Name or number of the column in x that contains the id’s of the subjects
stratum.col Name or number of the column in x that contains the names of the strata
treat.col Name or number of the column in x that contains the treatment assignments
cut.marks Logical, should cut marks be plotted as well (useful if printing on plain paper
then cutting apart)
top.ho Shift top text to the right(left)
top.vo Shift top text up(down)
middle.ho Shift middle text to the right(left)
middle.vo Shift middle text up(down)
bottom.ho Shift bottom text to the right(left)
bottom.vo Shift bottom text up(down)
nrow Number of rows of cards to print
ncol Number of columns of cards to print
... Optional arguments passed to pdf
Details
This function creates a pdf file with randomization "cards". It puts 4 cards per page. You can either
print the file onto perforated cards (Avery 8387) or onto regular paper then cut the cards apart. The
top of each card can then be folded over (extra protection from someone trying to read the upcoming
treatments) and the card placed in an envelope (letter size) with a window and sealed. The envelopes
are then used by a study coordinator to assign subjects to treatments as they are enrolled into the
trial.
Each card is split into 3 parts, top, middle, and bottom.
The top part is printed flush left and is the part that will be folded over for better security.
Information on the treatment assignment goes here along with any other information you want.
The middle part is printed centered so that it will appear through the window of the envelope. The
subject ID number and stratification information should go here.
The bottom part is limited to a single line that will be printed flush right at the bottom of the card.
This can be used for additional instructions to the study coordinator (e.g. call the statistician at
123-4567 to record assignment).
The top, middle, and bottom templates can be vectors or lists. If the vectors have length greater
than 1, then each element of the vector will be printed on a separate line (if there are 3 elements
in top then there will be 3 lines at the top, etc.), bottom should only have a single element. If
top, middle, or bottom are lists then they should have an element named "text" that consists of
a character vector containing the template. The lists can then also have optional elements named
"font" and "col", these vectors should be the same length as the "text" vector and represent the fonts
and colors to use for the corresponding lines of text (for example if font is c(1,2,1) then the 2nd
line will be printed bold).
If the template in top or middle contains "%ID%" (not including the quotes, but including the
percent signs) then this string will be replaced with the contents of the ID column for each card. If
they contain "%STRAT%" then it will be replaced with the contents of the stratum column. If top
contains "%TREAT%" then it will be replaced with the contents of the treatment column (note that
this is not available in the middle template).
If any of the arguments top, middle, or bottom are missing then the function will look for a
corresponding element in the blockrand.text argument (a list) to use as the template. If the list
does not exist, or the list does not have a corresponding element, then that portion of the card will
be blank. Specifying the argument when calling the function will override the blockrand.text
list.
The arguments top.ho, middle.ho, and bottom.ho move the corresponding parts to the right (left
if negative). The units are approximately strwidth("W") so specifying a value of 0.5 will move the
section about half a character to the right. The arguments top.vo, middle.vo, and bottom.vo move
the corresponding parts up (down if negative). The units are approximately 1.5*strheight("Wj").
If any of the offset arguments are not specified then the corresponding element of the list "blockrand.text" is used if it exists, otherwise they are 0.
The idea of the "blockrand.text" list is to set common defaults for your system (the default positions
work for me, but you may want to tweak things for your system) including templates that are
commonly used in your institution. Individual pieces can then be overridden with the function
arguments. You can have a list saved with your defaults and pass that list to the blockrand.text
argument.
Value
This function does not return anything useful; it is run for the side effect of creating a pdf file. The
pdf file will have 4 cards per page and 1 card for each line of x.
Note
Adobe Acrobat (and possibly other pdf viewers) will often try to rescale the page when printing; for best results turn this feature off before printing.
Author(s)
<NAME> <<EMAIL> >
Examples
## stratified by sex, 100 in stratum, 2 treatments
male <- blockrand(n=100, id.prefix='M', block.prefix='M',stratum='Male')
female <- blockrand(n=100, id.prefix='F', block.prefix='F',stratum='Female')
my.study <- rbind(male,female)
## Not run:
plotblockrand(my.study,'mystudy.pdf',
top=list(text=c('My Study','Patient: %ID%','Treatment: %TREAT%'),
col=c('black','black','red'),font=c(1,1,4)),
middle=list(text=c("My Study","Sex: %STRAT%","Patient: %ID%"),
col=c('black','blue','green'),font=c(1,2,3)),
bottom="Call 123-4567 to report patient entry",
cut.marks=TRUE)
### or
my.blockrand.text <- list(
top=list(text=c('My Study','Patient: %ID%','Treatment: %TREAT%'),
col=c('black','black','red'),font=c(1,1,4)),
middle=list(text=c("My Study","Sex: %STRAT%","Patient: %ID%"),
col=c('black','blue','green'),font=c(1,2,3)),
bottom="Call 123-4567 to report patient entry")
plotblockrand(my.study, 'mystudy.pdf', blockrand.text=my.blockrand.text,
cut.marks=TRUE)
## End(Not run)
Package ‘ggparty’
October 13, 2022
Title 'ggplot' Visualizations for the 'partykit' Package
Version 1.0.0
Copyright file inst/COPYRIGHTS
Description Extends 'ggplot2' functionality to the 'partykit' package. 'ggparty' provides the necessary tools to create clearly structured and highly customizable visualizations for tree-objects of the class 'party'.
Maintainer <NAME> <<EMAIL>>
Depends R (>= 3.4.0), ggplot2, partykit
Imports grid, gtable, utils, checkmate, methods, survival, rlang
Suggests testthat, mlbench, AER, coin, vdiffr, knitr, rmarkdown,
pander, MASS, TH.data
License GPL-2 | GPL-3
URL https://github.com/martin-borkovec/ggparty
BugReports https://github.com/martin-borkovec/ggparty/issues
Encoding UTF-8
LazyData true
RoxygenNote 6.1.1
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre],
<NAME> [aut],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb]
Repository CRAN
Date/Publication 2019-07-18 10:54:06 UTC
R topics documented:
autoplot.party
geom_edge
geom_edge_label
geom_node_label
geom_node_plot
get_predictions
ggparty
makeContent.nodeplotgrob
autoplot.party autoplot methods for party objects
Description
autoplot methods for party objects
Usage
## S3 method for class 'party'
autoplot(object, ...)
## S3 method for class 'constparty'
autoplot(object, ...)
## S3 method for class 'modelparty'
autoplot(object, plot_var = NULL, ...)
## S3 method for class 'lmtree'
autoplot(object, plot_var = NULL, show_fit = TRUE,
...)
Arguments
object object of class party.
... additional parameters
plot_var Which covariate to plot against response. Defaults to second column in data of
tree.
show_fit If TRUE fitted_values are drawn.
Examples
library(ggparty)
data("WeatherPlay", package = "partykit")
sp_o <- partysplit(1L, index = 1:3)
sp_h <- partysplit(3L, breaks = 75)
sp_w <- partysplit(4L, index = 1:2)
pn <- partynode(1L, split = sp_o, kids = list(
partynode(2L, split = sp_h, kids = list(
partynode(3L, info = "yes"),
partynode(4L, info = "no"))),
partynode(5L, info = "yes"),
partynode(6L, split = sp_w, kids = list(
partynode(7L, info = "yes"),
partynode(8L, info = "no")))))
py <- party(pn, WeatherPlay)
autoplot(py)
geom_edge Draw edges
Description
Draws edges between children and parent nodes. Wrapper for ggplot2::geom_segment()
Usage
geom_edge(mapping = NULL, nudge_x = 0, nudge_y = 0, ids = NULL,
show.legend = NA, ...)
Arguments
mapping Mapping of x, y, xend and yend defaults to ids’ and their parent’s coordinates.
Other mappings can be added here as aes().
nudge_x, nudge_y
Nudge labels.
ids Choose which edges to draw by their children’s ids.
show.legend logical See layer().
... Additional arguments for geom_segment().
See Also
ggparty(), geom_edge()
Examples
library(ggparty)
data("WeatherPlay", package = "partykit")
sp_o <- partysplit(1L, index = 1:3)
sp_h <- partysplit(3L, breaks = 75)
sp_w <- partysplit(4L, index = 1:2)
pn <- partynode(1L, split = sp_o, kids = list(
partynode(2L, split = sp_h, kids = list(
partynode(3L, info = "yes"),
partynode(4L, info = "no"))),
partynode(5L, info = "yes"),
partynode(6L, split = sp_w, kids = list(
partynode(7L, info = "yes"),
partynode(8L, info = "no")))))
py <- party(pn, WeatherPlay)
ggparty(py) +
geom_edge() +
geom_edge_label() +
geom_node_label(aes(label = splitvar),
ids = "inner") +
geom_node_label(aes(label = info),
ids = "terminal")
geom_edge_label Draw edge labels
Description
Label edges with corresponding split breaks
Usage
geom_edge_label(mapping = NULL, nudge_x = 0, nudge_y = 0,
ids = NULL, shift = 0.5, label.size = 0,
splitlevels = seq_len(100), max_length = NULL, parse_all = FALSE,
parse = TRUE, ...)
Arguments
mapping Mapping of label label defaults to breaks_label. Other mappings can be added
here as aes().
nudge_x, nudge_y
Nudge label.
ids Choose which splitbreaks to label by their children’s ids.
shift Value in (0,1). Moves label along corresponding edge.
label.size See geom_label().
splitlevels Which levels of split to plot. This may be useful in the presence of many factor
levels for one split break.
max_length If provided breaks_label levels will be truncated to the specified length.
parse_all Defaults to FALSE, in which case everything but the inequality signs of breaks_label
are deparsed. If TRUE complete breaks_label are parsed.
parse Needs to be true in order to parse inequality signs of breaks_label.
... Additional arguments for geom_label().
See Also
ggparty()
Examples
library(ggparty)
data("WeatherPlay", package = "partykit")
sp_o <- partysplit(1L, index = 1:3)
sp_h <- partysplit(3L, breaks = 75)
sp_w <- partysplit(4L, index = 1:2)
pn <- partynode(1L, split = sp_o, kids = list(
partynode(2L, split = sp_h, kids = list(
partynode(3L, info = "yes"),
partynode(4L, info = "no"))),
partynode(5L, info = "yes"),
partynode(6L, split = sp_w, kids = list(
partynode(7L, info = "yes"),
partynode(8L, info = "no")))))
py <- party(pn, WeatherPlay)
ggparty(py) +
geom_edge() +
geom_edge_label() +
geom_node_label(aes(label = splitvar),
ids = "inner") +
geom_node_label(aes(label = info),
ids = "terminal")
geom_node_label Draw (multi-line) labels at nodes
Description
geom_node_splitvar() and geom_node_info() are simplified versions of geom_node_label()
with the respective defaults to either label the split variables for all inner nodes or the info for all
terminal nodes.
Usage
geom_node_label(mapping = NULL, data = NULL, line_list = NULL,
line_gpar = NULL, ids = NULL, position = "identity", ...,
parse = FALSE, nudge_x = 0, nudge_y = 0,
label.padding = unit(0.25, "lines"), label.r = unit(0.15, "lines"),
label.size = 0.25, label.col = NULL, label.fill = NULL,
na.rm = FALSE, show.legend = NA, inherit.aes = TRUE)
geom_node_info(mapping = NULL, nudge_x = 0, nudge_y = 0,
ids = NULL, label.padding = unit(0.5, "lines"), ...)
geom_node_splitvar(mapping = NULL, nudge_x = 0, nudge_y = 0,
label.padding = unit(0.5, "lines"), ids = NULL, ...)
Arguments
mapping x and y are mapped by default to the node’s coordinates. If you don’t want to
set line specific graphical parameters, you can also map label here. Otherwise
set labels in line_list.
data The data to be displayed in this layer. There are three options:
If NULL, the default, the data is inherited from the plot data as specified in the
call to ggplot().
A data.frame, or other object, will override the plot data. All objects will be
fortified to produce a data frame. See fortify() for which variables will be
created.
A function will be called with a single argument, the plot data. The return
value must be a data.frame, and will be used as the layer data. A function
can be created from a formula (e.g. ~ head(.x, 10)).
line_list Use this only if you want a multi-line label with the possibility to override the
aesthetics mapping for each line specifically with fixed graphical parameters. In
this case, don’t map anything to label in the aes() supplied to mapping, but
instead pass here a list of aes() with the only mapped variable in each being
label. Other aesthetic mappings still can be passed to mapping and will apply
to all lines and the border, unless overwritten by line_gpar. The order of the
list represents the order of the plotted lines.
line_gpar List of lists containing line-specific graphical parameters. Only use in conjunc-
tion with line_list. Has to contain the same number of lists as are aes() in
line_list. First list applies to first line, and so on.
ids Select for which nodes to draw a label. Can be "inner", "terminal", "all" or
numeric vector of ids.
position Position adjustment, either as a string, or the result of a call to a position adjust-
ment function.
... Additional arguments to layer.
parse If TRUE, the labels will be parsed into expressions. Can also be specified per line
via line_gpar.
nudge_x, nudge_y
Adjust position of label.
label.padding Amount of padding around label. Defaults to 0.25 lines.
label.r Radius of rounded corners. Defaults to 0.15 lines.
label.size Size of label border, in mm.
label.col Border colour.
label.fill Background colour.
na.rm If FALSE, the default, missing values are removed with a warning. If TRUE,
missing values are silently removed.
show.legend logical. Should this layer be included in the legends? NA, the default, includes if
any aesthetics are mapped. FALSE never includes, and TRUE always includes. It
can also be a named logical vector to finely select the aesthetics to display.
inherit.aes If FALSE, overrides the default aesthetics, rather than combining with them.
This is most useful for helper functions that define both data and aesthetics and
shouldn’t inherit behaviour from the default plot specification, e.g. borders().
Details
geom_node_label() is a modified version of ggplot2::geom_label(). This modification allows
for labels with multiple lines and line specific graphical parameters.
See Also
ggparty()
Examples
library(ggparty)
data("WeatherPlay", package = "partykit")
sp_o <- partysplit(1L, index = 1:3)
sp_h <- partysplit(3L, breaks = 75)
sp_w <- partysplit(4L, index = 1:2)
pn <- partynode(1L, split = sp_o, kids = list(
partynode(2L, split = sp_h, kids = list(
partynode(3L, info = "yes"),
partynode(4L, info = "no"))),
partynode(5L, info = "yes"),
partynode(6L, split = sp_w, kids = list(
partynode(7L, info = "yes"),
partynode(8L, info = "no")))))
py <- party(pn, WeatherPlay)
ggparty(py) +
geom_edge() +
geom_edge_label() +
geom_node_label(aes(label = splitvar),
ids = "inner") +
geom_node_label(aes(label = info),
ids = "terminal")
######################################
data("TeachingRatings", package = "AER")
tr <- subset(TeachingRatings, credits == "more")
tr_tree <- lmtree(eval ~ beauty | minority + age + gender + division + native +
tenure, data = tr, weights = students, caseweights = FALSE)
data("TeachingRatings", package = "AER")
tr <- subset(TeachingRatings, credits == "more")
tr_tree <- lmtree(eval ~ beauty | minority + age + gender + division + native +
tenure, data = tr, weights = students, caseweights = FALSE)
ggparty(tr_tree,
terminal_space = 0.5,
add_vars = list(p.value = "$node$info$p.value")) +
geom_edge(size = 1.5) +
geom_edge_label(colour = "grey", size = 6) +
geom_node_plot(gglist = list(geom_point(aes(x = beauty,
y = eval,
col = tenure,
shape = minority),
alpha = 0.8),
theme_bw(base_size = 15)),
scales = "fixed",
id = "terminal",
shared_axis_labels = TRUE,
shared_legend = TRUE,
legend_separator = TRUE,
predict = "beauty",
predict_gpar = list(col = "blue",
size = 1.2)
) +
geom_node_label(aes(col = splitvar),
line_list = list(aes(label = paste("Node", id)),
aes(label = splitvar),
aes(label = paste("p =", formatC(p.value,
format = "e", digits = 2)))),
line_gpar = list(list(size = 12, col = "black", fontface = "bold"),
list(size = 20),
list(size = 12)),
ids = "inner") +
geom_node_label(aes(label = paste0("Node ", id, ", N = ", nodesize)),
fontface = "bold",
ids = "terminal",
size = 5,
nudge_y = 0.01) +
theme(legend.position = "none")
geom_node_plot Draw plots at nodes
Description
Additional component for a ggparty() that allows creating a ggplot with its data in each node.
Usage
geom_node_plot(plot_call = "ggplot", gglist = NULL, width = 1,
height = 1, size = 1, ids = "terminal", scales = "fixed",
nudge_x = 0, nudge_y = 0, shared_axis_labels = FALSE,
shared_legend = TRUE, predict = NULL, predict_gpar = NULL,
legend_separator = FALSE)
Arguments
plot_call Any function that generates a ggplot2 object.
gglist List of additional gg components. Columns of data of nodes can be mapped.
Additionally fitted_values and residuals can be mapped if present in the party object of ggparty().
width Expansion factor for viewport’s width.
height Expansion factor for viewport’s height.
size Expansion factor for viewport’s size.
ids Id’s to plot. Numeric, "terminal", "inner" or "all". Defaults to "terminal".
scales See facet_wrap()
nudge_x, nudge_y
Nudges node plot.
shared_axis_labels
If TRUE only one pair of axes labels is plotted in the terminal space. Only
recommended if ids "terminal" or "all".
shared_legend If TRUE one shared legend is plotted at the bottom of the tree.
predict Character string specifying variable for which predictions should be plotted.
predict_gpar Named list containing arguments to be passed to the geom_line() call of pre-
dicted values.
legend_separator
If TRUE line between legend and tree is drawn.
See Also
ggparty()
Examples
library(ggparty)
airq <- subset(airquality, !is.na(Ozone))
airct <- ctree(Ozone ~ ., data = airq)
ggparty(airct, horizontal = TRUE, terminal_space = 0.6) +
geom_edge() +
geom_edge_label() +
geom_node_splitvar() +
geom_node_plot(gglist = list(
geom_density(aes(x = Ozone))),
shared_axis_labels = TRUE)
#############################################################
## Plot with ggparty
## Demand for economics journals data
data("Journals", package = "AER")
Journals <- transform(Journals,
age = 2000 - foundingyear,
chars = charpp * pages)
## linear regression tree (OLS)
j_tree <- lmtree(log(subs) ~ log(price/citations) | price + citations +
age + chars + society, data = Journals, minsize = 10, verbose = TRUE)
pred_df <- get_predictions(j_tree, ids = "terminal", newdata = function(x) {
data.frame(
citations = 1,
price = exp(seq(from = min(x$`log(price/citations)`),
to = max(x$`log(price/citations)`),
length.out = 100)))
})
ggparty(j_tree, terminal_space = 0.8) +
geom_edge() +
geom_edge_label() +
geom_node_splitvar() +
geom_node_plot(gglist =
list(aes(x = `log(price/citations)`, y = `log(subs)`),
geom_point(),
geom_line(data = pred_df,
aes(x = log(price/citations),
y = prediction),
col = "red")))
get_predictions Create data.frame with predictions for each node
Description
Create data.frame with predictions for each node
Usage
get_predictions(party_object, ids, newdata_fun, predict_arg = NULL)
Arguments
party_object object of class party
ids Id’s to plot. Numeric, "terminal", "inner" or "all". MUST be identical to ids of
geom_node_plot() used to plot this data.
newdata_fun function which takes data of node and returns newdata for predict()
predict_arg list of additional arguments passed to predict()
ggparty Create a new ggparty plot
Description
ggplot2 extension for objects of class party. Creates a data.frame from an object of class party
and calls ggplot()
Usage
ggparty(party, horizontal = FALSE, terminal_space, layout = NULL,
add_vars = NULL)
Arguments
party Object of class party.
horizontal If TRUE plot will be horizontal.
terminal_space Proportion of the plot that should be reserved for the terminal nodeplots. Defaults to 2 / (depth(party) + 2).
layout Optional layout adjustment. Overwrites the coordinates of the specified nodes.
Must be data.frame containing the columns id, x and y. With x and y values
between 0 and 1.
add_vars Named list containing either string(s) specifying the locations of elements to be
extracted from each node of party or function(s) of corresponding row of plot
data and node. In either case returned object has to be of length 1. If the data
is supposed to be accessible by geom_node_plot() the respective list entry has
to be named with the prefix "nodedata_" and be a function returning a list of
same length as nodesize.
Details
ggparty can be called directly with an object of class party, which will be converted to a suitable data.frame and passed to a call to ggplot as the data argument. As usual, additional components can then be added with +.
The nodes will be spaced equally in the unit square. Specifying terminal_space allows increasing or decreasing the area reserved for the plots of the terminal nodes.
If one of the list entries supplied to add_vars is a function, it has to take exactly two arguments,
namely data (the corresponding row of the plot_data data frame) and node (the corresponding
node, i.e. party_object[i])
See Also
geom_edge(), geom_edge_label(), geom_node_label(), autoplot.party(), geom_node_plot()
Examples
library(ggparty)
data("WeatherPlay", package = "partykit")
sp_o <- partysplit(1L, index = 1:3)
sp_h <- partysplit(3L, breaks = 75)
sp_w <- partysplit(4L, index = 1:2)
pn <- partynode(1L, split = sp_o, kids = list(
partynode(2L, split = sp_h, kids = list(
partynode(3L, info = "yes"),
partynode(4L, info = "no"))),
partynode(5L, info = "yes"),
partynode(6L, split = sp_w, kids = list(
partynode(7L, info = "yes"),
partynode(8L, info = "no")))))
py <- party(pn, WeatherPlay)
ggparty(py) +
geom_edge() +
geom_edge_label() +
geom_node_label(aes(label = splitvar),
ids = "inner") +
geom_node_label(aes(label = info),
ids = "terminal")
makeContent.nodeplotgrob
apparently needs to be exported
Description
apparently needs to be exported
Usage
## S3 method for class 'nodeplotgrob'
makeContent(x)
Arguments
x nodeplotgrob
AWS SDK for pandas 3.4.0 documentation
AWS Data Wrangler is now **AWS SDK for pandas (awswrangler)**. We’re changing the name we use when we talk about the library, but everything else will stay the same. You’ll still be able to install using `pip install awswrangler` and you won’t need to change any of your code. As part of this change, we’ve moved the library from AWS Labs to the main AWS GitHub organisation but, thanks to GitHub’s redirect feature, you’ll still be able to access the project by its old URLs until you update your bookmarks. Our documentation has also moved to [aws-sdk-pandas.readthedocs.io](https://aws-sdk-pandas.readthedocs.io), but old bookmarks will redirect to the new site.
Quick Start[¶](#quick-start)
===
```
>>> pip install awswrangler
```
```
>>> # Optional modules are installed with:
>>> pip install 'awswrangler[redshift]'
```
```
import awswrangler as wr
import pandas as pd
from datetime import datetime

df = pd.DataFrame({"id": [1, 2], "value": ["foo", "boo"]})

# Storing data on Data Lake
wr.s3.to_parquet(
    df=df,
    path="s3://bucket/dataset/",
    dataset=True,
    database="my_db",
    table="my_table"
)

# Retrieving the data directly from Amazon S3
df = wr.s3.read_parquet("s3://bucket/dataset/", dataset=True)

# Retrieving the data from Amazon Athena
df = wr.athena.read_sql_query("SELECT * FROM my_table", database="my_db")

# Get a Redshift connection from Glue Catalog and retrieving data from Redshift Spectrum
con = wr.redshift.connect("my-glue-connection")
df = wr.redshift.read_sql_query("SELECT * FROM external_schema.my_table", con=con)
con.close()

# Amazon Timestream Write
df = pd.DataFrame({
    "time": [datetime.now(), datetime.now()],
    "my_dimension": ["foo", "boo"],
    "measure": [1.0, 1.1],
})
rejected_records = wr.timestream.write(df,
    database="sampleDB",
    table="sampleTable",
    time_col="time",
    measure_col="measure",
    dimensions_cols=["my_dimension"],
)

# Amazon Timestream Query
wr.timestream.query("""
SELECT time, measure_value::double, my_dimension
FROM "sampleDB"."sampleTable"
ORDER BY time DESC
LIMIT 3
""")
```
Read The Docs[¶](#read-the-docs)
===
What is AWS SDK for pandas?[¶](#what-is-aws-sdk-for-pandas)
---
An [AWS Professional Service](https://aws.amazon.com/professional-services) [open source](https://github.com/aws/aws-sdk-pandas) python initiative that extends the power of the [pandas](https://github.com/pandas-dev/pandas) library to AWS, connecting **DataFrames** and AWS data & analytics services.
Easy integration with Athena, Glue, Redshift, Timestream, OpenSearch, Neptune, QuickSight, Chime, CloudWatchLogs,
DynamoDB, EMR, SecretManager, PostgreSQL, MySQL, SQLServer and S3 (Parquet, CSV, JSON and EXCEL).
Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute your usual ETL tasks like loading/unloading data from **Data Lakes**, **Data Warehouses** and **Databases**, even [at scale](https://aws-sdk-pandas.readthedocs.io/en/stable/scale.html).
Check our [tutorials](https://github.com/aws/aws-sdk-pandas/tree/main/tutorials) or the [list of functionalities](https://aws-sdk-pandas.readthedocs.io/en/stable/api.html).
Install[¶](#install)
---
**AWS SDK for pandas** runs on Python `3.8`, `3.9`, `3.10` and `3.11`,
and on several platforms (AWS Lambda, AWS Glue Python Shell, EMR, EC2,
on-premises, Amazon SageMaker, local, etc).
Some good practices to follow for options below are:
* Use new and isolated Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)).
* On Notebooks, always restart your kernel after installations.
### PyPI (pip)[¶](#pypi-pip)
```
>>> pip install awswrangler
```
```
>>> # Optional modules are installed with:
>>> pip install 'awswrangler[redshift]'
```
### Conda[¶](#conda)
```
>>> conda install -c conda-forge awswrangler
```
### At scale[¶](#at-scale)
AWS SDK for pandas can also run your workflows at scale by leveraging [modin](https://modin.readthedocs.io/en/stable/) and [ray](https://www.ray.io/).
```
>>> pip install "awswrangler[modin,ray]"
```
As a result existing scripts can run on significantly larger datasets with no code rewrite.
### Optional dependencies[¶](#optional-dependencies)
Starting version 3.0, some `awswrangler` modules are optional and must be installed explicitly using:
```
>>> pip install 'awswrangler[optional-module1, optional-module2]'
```
The optional modules are:
* redshift
* mysql
* postgres
* sqlserver
* oracle
* gremlin
* sparql
* opencypher
* openpyxl
* opensearch
* deltalake
Calling these modules without the required dependencies raises an error prompting you to install the missing package.
### AWS Lambda Layer[¶](#aws-lambda-layer)
#### Managed Layer[¶](#managed-layer)
Note
There is a one week minimum delay between version release and layers being available in the AWS Lambda console.
Warning
For Lambda Functions using the layer, a memory size of less than 512MB may be insufficient for some workloads.
AWS SDK for pandas is available as an AWS Lambda Managed layer in all AWS commercial regions.
It can be accessed in the AWS Lambda console directly:
Or via its ARN: `arn:aws:lambda:<region>:336392948345:layer:AWSSDKPandas-Python<python-version>:<layer-version>`.
For example: `arn:aws:lambda:us-east-1:336392948345:layer:AWSSDKPandas-Python38:1`.
The full list of ARNs is available [here](index.html#document-layers).
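As a rough sketch (not from the official docs), the managed layer can be attached to a function by its ARN in a CDK app, mirroring the SAR example further below. The region, Python version and layer version in the ARN are taken from the example above, and the handler location is hypothetical; adjust both for your deployment:
```
from aws_cdk import core, aws_lambda

class AWSSDKPandasManagedLayerApp(core.Construct):
    def __init__(self, scope: core.Construct, id_: str):
        super().__init__(scope, id_)
        # Reference the managed layer directly by its ARN
        # (us-east-1, Python 3.8, layer version 1 -- adjust as needed).
        pandas_layer = aws_lambda.LayerVersion.from_layer_version_arn(
            self,
            "awssdkpandas-managed-layer",
            "arn:aws:lambda:us-east-1:336392948345:layer:AWSSDKPandas-Python38:1",
        )
        aws_lambda.Function(
            self,
            "awssdkpandas-function",
            runtime=aws_lambda.Runtime.PYTHON_3_8,
            # Hypothetical handler location, for illustration only.
            code=aws_lambda.Code.from_asset("./src/awssdk-pandas-lambda"),
            handler="lambda_function.lambda_handler",
            layers=[pandas_layer],
        )
```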
#### Custom Layer[¶](#custom-layer)
You can also create your own Lambda layer with these instructions:
1 - Go to [GitHub’s release section](https://github.com/aws/aws-sdk-pandas/releases)
and download the zipped layer for the desired version. Alternatively, you can download the zip from the [public artifacts bucket](https://aws-sdk-pandas.readthedocs.io/en/latest/install.html#public-artifacts).
2 - Go to the AWS Lambda console, open the layer section (left side)
and click **create layer**.
3 - Set name and python version, upload your downloaded zip file and press **create**.
4 - Go to your Lambda function and select your new layer!
#### Serverless Application Repository (SAR)[¶](#serverless-application-repository-sar)
AWS SDK for pandas layers are also available in the [AWS Serverless Application Repository](https://serverlessrepo.aws.amazon.com/applications) (SAR).
The app deploys the Lambda layer version in your own AWS account and region via a CloudFormation stack.
This option provides the ability to use semantic versions (i.e. library version) instead of Lambda layer versions.
AWS SDK for pandas Layer Apps[¶](#id3)
| App | ARN | Description |
| --- | --- | --- |
| aws-sdk-pandas-layer-py3-8 | arn:aws:serverlessrepo:us-east-1:336392948345:applications/aws-sdk-pandas-layer-py3-8 | Layer for `Python 3.8.x` runtimes |
| aws-sdk-pandas-layer-py3-9 | arn:aws:serverlessrepo:us-east-1:336392948345:applications/aws-sdk-pandas-layer-py3-9 | Layer for `Python 3.9.x` runtimes |
| aws-sdk-pandas-layer-py3-10 | arn:aws:serverlessrepo:us-east-1:336392948345:applications/aws-sdk-pandas-layer-py3-10 | Layer for `Python 3.10.x` runtimes |
| aws-sdk-pandas-layer-py3-11 | arn:aws:serverlessrepo:us-east-1:336392948345:applications/aws-sdk-pandas-layer-py3-11 | Layer for `Python 3.11.x` runtimes |
Here is an example of how to create and use the AWS SDK for pandas Lambda layer in your CDK app:
```
from aws_cdk import core, aws_sam as sam, aws_lambda
class AWSSDKPandasApp(core.Construct):
def __init__(self, scope: core.Construct, id_: str):
super().__init__(scope, id_)
aws_sdk_pandas_layer = sam.CfnApplication(
self,
"awssdkpandas-layer",
location=sam.CfnApplication.ApplicationLocationProperty(
application_id="arn:aws:serverlessrepo:us-east-1:336392948345:applications/aws-sdk-pandas-layer-py3-8",
semantic_version="3.0.0", # Get the latest version from https://serverlessrepo.aws.amazon.com/applications
),
)
aws_sdk_pandas_layer_arn = aws_sdk_pandas_layer.get_att("Outputs.WranglerLayer38Arn").to_string()
aws_sdk_pandas_layer_version = aws_lambda.LayerVersion.from_layer_version_arn(self, "awssdkpandas-layer-version", aws_sdk_pandas_layer_arn)
aws_lambda.Function(
self,
"awssdkpandas-function",
runtime=aws_lambda.Runtime.PYTHON_3_8,
function_name="sample-awssdk-pandas-lambda-function",
code=aws_lambda.Code.from_asset("./src/awssdk-pandas-lambda"),
handler='lambda_function.lambda_handler',
layers=[aws_sdk_pandas_layer_version]
)
```
### AWS Glue Python Shell Jobs[¶](#aws-glue-python-shell-jobs)
Note
Glue Python Shell Python3.9 has version 2.15.1 of awswrangler [baked in](https://aws.amazon.com/blogs/big-data/aws-glue-python-shell-now-supports-python-3-9-with-a-flexible-pre-loaded-environment-and-support-to-install-additional-libraries/). If you need a different version, follow instructions below:
1 - Go to [GitHub’s release page](https://github.com/aws/aws-sdk-pandas/releases) and download the wheel file
(.whl) related to the desired version. Alternatively, you can download the wheel from the [public artifacts bucket](https://aws-sdk-pandas.readthedocs.io/en/latest/install.html#public-artifacts).
2 - Upload the wheel file to the Amazon S3 location of your choice.
3 - Go to your Glue Python Shell job and point to the S3 wheel file in the *Python library path* field.
[Official Glue Python Shell Reference](https://docs.aws.amazon.com/glue/latest/dg/add-job-python.html#create-python-extra-library)
### AWS Glue for Ray Jobs[¶](#aws-glue-for-ray-jobs)
Go to your Glue for Ray job and create a new *Job parameters* key/value:
* Key: `--pip-install`
* Value: `awswrangler[modin]`
[Official Glue for Ray Reference](https://docs.aws.amazon.com/glue/latest/dg/author-job-ray-python-libraries.html)
### AWS Glue PySpark Jobs[¶](#aws-glue-pyspark-jobs)
Note
AWS SDK for pandas has compiled dependencies (C/C++) so support is only available for `Glue PySpark Jobs >= 2.0`.
Go to your Glue PySpark job and create a new *Job parameters* key/value:
* Key: `--additional-python-modules`
* Value: `pyarrow==7,awswrangler`
To install a specific version, set the value for the above Job parameter as follows:
* Value: `pyarrow==7,pandas==1.5.3,awswrangler==3.4.0`
[Official Glue PySpark Reference](https://docs.aws.amazon.com/glue/latest/dg/reduced-start-times-spark-etl-jobs.html#reduced-start-times-new-features)
### Public Artifacts[¶](#public-artifacts)
Lambda zipped layers and Python wheels are stored in a publicly accessible S3 bucket for all versions.
* Bucket: `aws-data-wrangler-public-artifacts`
* Prefix: `releases/<version>/`
+ Lambda layer: `awswrangler-layer-<version>-py<py-version>.zip`
+ Python wheel: `awswrangler-<version>-py3-none-any.whl`
For example: `s3://aws-data-wrangler-public-artifacts/releases/3.0.0/awswrangler-layer-3.0.0-py3.8.zip`
You can check the bucket to find the latest version.
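Since the bucket is public, an artifact can also be fetched programmatically. A minimal sketch with boto3 using anonymous (unsigned) access and the example key above:
```
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous access to the public artifacts bucket; the key follows the
# example path above (version 3.0.0, Python 3.8 layer zip).
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
s3.download_file(
    "aws-data-wrangler-public-artifacts",
    "releases/3.0.0/awswrangler-layer-3.0.0-py3.8.zip",
    "awswrangler-layer-3.0.0-py3.8.zip",
)
```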
### Amazon SageMaker Notebook[¶](#amazon-sagemaker-notebook)
Run this command in any Python 3 notebook cell and then make sure to
**restart the kernel** before importing the **awswrangler** package.
```
>>> !pip install awswrangler
```
### Amazon SageMaker Notebook Lifecycle[¶](#amazon-sagemaker-notebook-lifecycle)
Open the AWS SageMaker console, go to the lifecycle section and use the below snippet to configure AWS SDK for pandas for all compatible SageMaker kernels ([Reference](https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/blob/master/scripts/install-pip-package-all-environments/on-start.sh)).
```
#!/bin/bash
set -e
# OVERVIEW
# This script installs a single pip package in all SageMaker conda environments, apart from the JupyterSystemEnv which
# is a system environment reserved for Jupyter.
# Note this may timeout if the package installations in all environments take longer than 5 mins, consider using
# "nohup" to run this as a background process in that case.
sudo -u ec2-user -i <<'EOF'
# PARAMETERS
PACKAGE=awswrangler
# Note that "base" is special environment name, include it there as well.
for env in base /home/ec2-user/anaconda3/envs/*; do
source /home/ec2-user/anaconda3/bin/activate $(basename "$env")
if [ $env = 'JupyterSystemEnv' ]; then
continue
fi
nohup pip install --upgrade "$PACKAGE" &
source /home/ec2-user/anaconda3/bin/deactivate
done
EOF
```
### EMR Cluster[¶](#emr-cluster)
Despite not being a distributed library, AWS SDK for pandas could be used to complement Big Data pipelines.
* Configure Python 3 as the default interpreter for PySpark on your cluster configuration [ONLY REQUIRED FOR EMR < 6]
> ```
> [
> {
> "Classification": "spark-env",
> "Configurations": [
> {
> "Classification": "export",
> "Properties": {
> "PYSPARK_PYTHON": "/usr/bin/python3"
> }
> }
> ]
> }
> ]
> ```
>
* Keep the bootstrap script above on S3 and reference it on your cluster.
+ For EMR Release < 6
```
#!/usr/bin/env bash
set -ex
sudo pip-3.6 install pyarrow==2 awswrangler
```
+ For EMR Release >= 6
```
#!/usr/bin/env bash
set -ex
sudo pip install awswrangler
```
### From Source[¶](#from-source)
```
>>> git clone https://github.com/aws/aws-sdk-pandas.git
>>> cd aws-sdk-pandas
>>> pip install .
```
### Notes for Microsoft SQL Server[¶](#notes-for-microsoft-sql-server)
`awswrangler` uses [pyodbc](https://github.com/mkleehammer/pyodbc)
for interacting with Microsoft SQL Server. To install this package you need the ODBC header files,
which can be installed with the following commands:
```
>>> sudo apt install unixodbc-dev
>>> yum install unixODBC-devel
```
After installing these header files you can either just install `pyodbc` or
`awswrangler` with the `sqlserver` extra, which will also install `pyodbc`:
```
>>> pip install pyodbc
>>> pip install 'awswrangler[sqlserver]'
```
Finally you also need the correct ODBC Driver for SQL Server. You can have a look at the
[documentation from Microsoft](https://docs.microsoft.com/sql/connect/odbc/microsoft-odbc-driver-for-sql-server?view=sql-server-ver15)
to see how they can be installed in your environment.
If you want to connect to Microsoft SQL Server from AWS Lambda, you can build a separate Layer including the needed ODBC drivers and pyodbc.
If you maintain your own environment, you need to take care of the above steps.
Because of this limitation usage in combination with Glue jobs is limited and you need to rely on the provided [functionality inside Glue itself](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect.html#aws-glue-programming-etl-connect-jdbc).
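Once the driver and `pyodbc` are in place, usage follows the same pattern as the other database modules. A minimal sketch, assuming a Glue Catalog connection named `my-sqlserver-connection` and a table `dbo.my_table` (both hypothetical):
```
import awswrangler as wr

# Connect through a (hypothetical) Glue Catalog connection and read a few rows.
con = wr.sqlserver.connect("my-sqlserver-connection")
df = wr.sqlserver.read_sql_query("SELECT TOP 10 * FROM dbo.my_table", con=con)
con.close()
```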
### Notes for Oracle Database[¶](#notes-for-oracle-database)
`awswrangler` uses the [oracledb](https://github.com/oracle/python-oracledb) package
for interacting with Oracle Database. To install this package you do not need the Oracle Client libraries unless you want to use the Thick mode.
You can have a look at the [documentation from Oracle](https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html#oracle-client-and-oracle-database-interoperability)
to see how they can be installed in your environment.
After installing these client libraries you can either just install `oracledb` or
`awswrangler` with the `oracle` extra, which will also install `oracledb`:
```
>>> pip install oracledb
>>> pip install 'awswrangler[oracle]'
```
If you maintain your own environment, you need to take care of the above steps.
Because of this limitation usage in combination with Glue jobs is limited and you need to rely on the provided [functionality inside Glue itself](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect.html#aws-glue-programming-etl-connect-jdbc).
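As with SQL Server, a minimal sketch of reading from Oracle, assuming a Glue Catalog connection named `my-oracle-connection` and a table `my_table` (both hypothetical):
```
import awswrangler as wr

# Connect through a (hypothetical) Glue Catalog connection and read a few rows.
con = wr.oracle.connect("my-oracle-connection")
df = wr.oracle.read_sql_query("SELECT * FROM my_table WHERE ROWNUM <= 10", con=con)
con.close()
```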
At scale[¶](#at-scale)
---
AWS SDK for pandas supports [Ray](https://www.ray.io/) and [Modin](https://modin.readthedocs.io/en/stable/), enabling you to scale your pandas workflows from a single machine to a multi-node environment, with no code changes.
The simplest way to try this is with [AWS Glue for Ray](https://aws.amazon.com/blogs/big-data/introducing-aws-glue-for-ray-scaling-your-data-integration-workloads-using-python/), the new serverless option to run distributed Python code announced at AWS re:Invent 2022. AWS SDK for pandas also supports self-managed Ray on [Amazon Elastic Compute Cloud (Amazon EC2)](https://github.com/aws/aws-sdk-pandas/blob/main/tutorials/035%20-%20Distributing%20Calls%20on%20Ray%20Remote%20Cluster.ipynb).
### Getting Started[¶](#getting-started)
Install the library with these two optional dependencies to enable distributed mode:
```
>>> pip install "awswrangler[ray,modin]"
```
Once installed, you can use the library in your code as usual:
```
>>> import awswrangler as wr
```
At import, SDK for pandas checks if `ray` and `modin` are in the installation path and enables distributed mode.
To confirm that you are in distributed mode, run:
```
>>> print(f"Execution Engine: {wr.engine.get()}")
>>> print(f"Memory Format: {wr.memory_format.get()}")
```
which should show that both Ray and Modin are enabled as the execution engine and memory format, respectively.
You can switch back to non-distributed mode at any point (See [Switching modes](#switching-modes) below).
Initialization of the Ray cluster is lazy and only triggered when the first distributed API is executed.
At that point, SDK for pandas looks for an environment variable called `WR_ADDRESS`.
If found, it is used to send commands to a remote cluster.
If not found, a local Ray runtime is initialized on your machine instead.
Alternatively, you can trigger Ray initialization with:
```
>>> wr.engine.initialize()
```
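For example, a minimal sketch of pointing the library at a remote cluster via `WR_ADDRESS`; the address below is a placeholder, not a real endpoint:
```
import os

# Placeholder Ray head-node address; set before the first distributed call.
os.environ["WR_ADDRESS"] = "ray://10.0.0.5:10001"

import awswrangler as wr

wr.engine.initialize()  # connects to the remote cluster instead of starting a local runtime
```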
In distributed mode, the same `awswrangler` APIs can now handle much larger datasets:
```
# Read 1.6 Gb Parquet data
df = wr.s3.read_parquet(path="s3://ursa-labs-taxi-data/2017/")

# Drop vendor_id column
df.drop("vendor_id", axis=1, inplace=True)

# Filter trips over 1 mile
df1 = df[df["trip_distance"] > 1]
```
In the example above, New York City Taxi data is read from Amazon S3 into a distributed [Modin data frame](https://modin.readthedocs.io/en/stable/getting_started/why_modin/pandas.html).
Modin is a drop-in replacement for Pandas. It exposes the same APIs but enables you to use all of the cores on your machine, or all of the workers in an entire cluster, leading to improved performance and scale.
To use it, make sure to replace your pandas import statement with modin:
```
>>> import modin.pandas as pd # instead of import pandas as pd
```
Failing to do so means that all operations run on a single thread instead of leveraging the entire cluster resources.
Note that in distributed mode, all `awswrangler` APIs return and accept Modin data frames, not pandas.
### Supported APIs[¶](#supported-apis)
This table lists the `awswrangler` APIs available in distributed mode (i.e. that can run at scale):
| Service | API | Implementation |
| --- | --- | --- |
| `S3` | `read_parquet` | ✅ |
| | `read_parquet_metadata` | ✅ |
| | `read_parquet_table` | ✅ |
| | `read_csv` | ✅ |
| | `read_json` | ✅ |
| | `read_fwf` | ✅ |
| | `to_parquet` | ✅ |
| | `to_csv` | ✅ |
| | `to_json` | ✅ |
| | `select_query` | ✅ |
| | `store_parquet_metadata` | ✅ |
| | `delete_objects` | ✅ |
| | `describe_objects` | ✅ |
| | `size_objects` | ✅ |
| | `wait_objects_exist` | ✅ |
| | `wait_objects_not_exist` | ✅ |
| | `merge_datasets` | ✅ |
| | `copy_objects` | ✅ |
| `Redshift` | `copy` | ✅ |
| | `unload` | ✅ |
| `Athena` | `describe_table` | ✅ |
| | `get_query_results` | ✅ |
| | `read_sql_query` | ✅ |
| | `read_sql_table` | ✅ |
| | `show_create_table` | ✅ |
| | `to_iceberg` | ✅ |
| `DynamoDB` | `read_items` | ✅ |
| | `put_df` | ✅ |
| | `put_csv` | ✅ |
| | `put_json` | ✅ |
| | `put_items` | ✅ |
| `Lake Formation` | `read_sql_query` | ✅ |
| | `read_sql_table` | ✅ |
| `Neptune` | `bulk_load` | ✅ |
| `Timestream` | `batch_load` | ✅ |
| | `write` | ✅ |
| | `unload` | ✅ |
### Switching modes[¶](#switching-modes)
The following commands showcase how to switch between distributed and non-distributed modes:
```
# Switch to non-distributed
wr.engine.set("python")
wr.memory_format.set("pandas")

# Switch to distributed
wr.engine.set("ray")
wr.memory_format.set("modin")
```
Similarly, you can set the `WR_ENGINE` and `WR_MEMORY_FORMAT` environment variables to the desired engine and memory format, respectively.
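A minimal sketch of the environment-variable route, assuming the variables are read when the module is imported:
```
import os

# Select the non-distributed engine and memory format via environment variables
# (assumption: set them before awswrangler is imported).
os.environ["WR_ENGINE"] = "python"
os.environ["WR_MEMORY_FORMAT"] = "pandas"

import awswrangler as wr

print(wr.engine.get(), wr.memory_format.get())
```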
### Caveats[¶](#caveats)
#### S3FS Filesystem[¶](#s3fs-filesystem)
When Ray is chosen as an engine, [S3Fs](https://s3fs.readthedocs.io/en/latest/) is used instead of boto3 for certain API calls.
These include listing a large number of S3 objects for example.
This choice was made for performance reasons as a boto3 implementation can be much slower in some cases.
As a side effect,
users won’t be able to use the `s3_additional_kwargs` input parameter as it’s currently not supported by S3Fs.
#### Unsupported kwargs[¶](#unsupported-kwargs)
Most AWS SDK for pandas calls support passing the `boto3_session` argument.
While this is acceptable for an application running in a single process,
distributed applications require the session to be serialized and passed to the worker nodes in the cluster.
This constitutes a security risk.
As a result, passing `boto3_session` when using the Ray runtime is not supported.
### To learn more[¶](#to-learn-more)
Read our blog posts [(1)](https://aws.amazon.com/blogs/big-data/scale-aws-sdk-for-pandas-workloads-with-aws-glue-for-ray/) and [(2)](https://aws.amazon.com/blogs/big-data/advanced-patterns-with-aws-sdk-for-pandas-on-aws-glue-for-ray/), then head to our latest [tutorials](https://aws-sdk-pandas.readthedocs.io/en/stable/tutorials.html) to discover even more features.
A runbook with common errors when running the library with Ray is available [here](https://github.com/aws/aws-sdk-pandas/discussions/1815).
Tutorials[¶](#tutorials)
---
Note
You can also find all Tutorial Notebooks on [GitHub](https://github.com/aws/aws-sdk-pandas/tree/main/tutorials).
### 1 - Introduction[¶](#1---Introduction)
#### What is AWS SDK for pandas?[¶](#What-is-AWS-SDK-for-pandas?)
An [open-source](https://github.com/aws/aws-sdk-pandas) Python package that extends the power of the [Pandas](https://github.com/pandas-dev/pandas) library to AWS, connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).
Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.
Check our [list of functionalities](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/api.html).
#### How to install?[¶](#How-to-install?)
awswrangler runs almost anywhere over Python 3.8, 3.9 and 3.10, so there are several different ways to install it in the desired environment.
* [PyPi (pip)](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/install.html#pypi-pip)
* [Conda](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/install.html#conda)
* [AWS Lambda Layer](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/install.html#aws-lambda-layer)
* [AWS Glue Python Shell Jobs](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/install.html#aws-glue-python-shell-jobs)
* [AWS Glue PySpark Jobs](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/install.html#aws-glue-pyspark-jobs)
* [Amazon SageMaker Notebook](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/install.html#amazon-sagemaker-notebook)
* [Amazon SageMaker Notebook Lifecycle](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/install.html#amazon-sagemaker-notebook-lifecycle)
* [EMR Cluster](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/install.html#emr-cluster)
* [From source](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/install.html#from-source)
Some good practices for most of the above methods are:
* Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html))
* On Notebooks, always restart your kernel after installations.
#### Let’s Install it![¶](#Let's-Install-it!)
```
[ ]:
```
```
!pip install awswrangler
```
> Restart your kernel after the installation!
```
[1]:
```
```
import awswrangler as wr
wr.__version__
```
```
[1]:
```
```
'2.0.0'
```
### 2 - Sessions[¶](#2---Sessions)
#### How awswrangler handles Sessions and AWS credentials?[¶](#How-awswrangler-handles-Sessions-and-AWS-credentials?)
After version 1.0.0 awswrangler relies on [Boto3.Session()](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html) to manage AWS credentials and configurations.
awswrangler will not store any kind of state internally. Users are in charge of managing Sessions.
Most awswrangler functions receive the optional `boto3_session` argument. If None is received, the default boto3 Session will be used.
```
[1]:
```
```
import awswrangler as wr
import boto3
```
#### Using the default Boto3 Session[¶](#Using-the-default-Boto3-Session)
```
[2]:
```
```
wr.s3.does_object_exist("s3://noaa-ghcn-pds/fake")
```
```
[2]:
```
```
False
```
#### Customizing and using the default Boto3 Session[¶](#Customizing-and-using-the-default-Boto3-Session)
```
[3]:
```
```
boto3.setup_default_session(region_name="us-east-2")
wr.s3.does_object_exist("s3://noaa-ghcn-pds/fake")
```
```
[3]:
```
```
False
```
#### Using a new custom Boto3 Session[¶](#Using-a-new-custom-Boto3-Session)
```
[4]:
```
```
my_session = boto3.Session(region_name="us-east-2")
wr.s3.does_object_exist("s3://noaa-ghcn-pds/fake", boto3_session=my_session)
```
```
[4]:
```
```
False
```
### 3 - Amazon S3[¶](#3---Amazon-S3)
#### Table of Contents[¶](#Table-of-Contents)
* [1. CSV files](#1.-CSV-files)
+ [1.1 Writing CSV files](#1.1-Writing-CSV-files)
+ [1.2 Reading single CSV file](#1.2-Reading-single-CSV-file)
+ [1.3 Reading multiple CSV files](#1.3-Reading-multiple-CSV-files)
- [1.3.1 Reading CSV by list](#1.3.1-Reading-CSV-by-list)
- [1.3.2 Reading CSV by prefix](#1.3.2-Reading-CSV-by-prefix)
* [2. JSON files](#2.-JSON-files)
+ [2.1 Writing JSON files](#2.1-Writing-JSON-files)
+ [2.2 Reading single JSON file](#2.2-Reading-single-JSON-file)
+ [2.3 Reading multiple JSON files](#2.3-Reading-multiple-JSON-files)
- [2.3.1 Reading JSON by list](#2.3.1-Reading-JSON-by-list)
- [2.3.2 Reading JSON by prefix](#2.3.2-Reading-JSON-by-prefix)
* [3. Parquet files](#3.-Parquet-files)
+ [3.1 Writing Parquet files](#3.1-Writing-Parquet-files)
+ [3.2 Reading single Parquet file](#3.2-Reading-single-Parquet-file)
+ [3.3 Reading multiple Parquet files](#3.3-Reading-multiple-Parquet-files)
- [3.3.1 Reading Parquet by list](#3.3.1-Reading-Parquet-by-list)
- [3.3.2 Reading Parquet by prefix](#3.3.2-Reading-Parquet-by-prefix)
* 4. Fixed-width formatted files (only read)
+ [4.1 Reading single FWF file](#4.1-Reading-single-FWF-file)
+ [4.2 Reading multiple FWF files](#4.2-Reading-multiple-FWF-files)
- [4.2.1 Reading FWF by list](#4.2.1-Reading-FWF-by-list)
- [4.2.2 Reading FWF by prefix](#4.2.2-Reading-FWF-by-prefix)
* [5. Excel files](#5.-Excel-files)
+ [5.1 Writing Excel file](#5.1-Writing-Excel-file)
+ [5.2 Reading Excel file](#5.2-Reading-Excel-file)
* [6. Reading with lastModified filter](#6.-Reading-with-lastModified-filter)
+ [6.1 Define the Date time with UTC Timezone](#6.1-Define-the-Date-time-with-UTC-Timezone)
+ [6.2 Define the Date time and specify the Timezone](#6.2-Define-the-Date-time-and-specify-the-Timezone)
+ [6.3 Read json using the LastModified filters](#6.3-Read-json-using-the-LastModified-filters)
* [7. Download Objects](#7.-Download-objects)
+ [7.1 Download object to a file path](#7.1-Download-object-to-a-file-path)
+ [7.2 Download object to a file-like object in binary mode](#7.2-Download-object-to-a-file-like-object-in-binary-mode)
* [8. Upload Objects](#8.-Upload-objects)
+ [8.1 Upload object from a file path](#8.1-Upload-object-from-a-file-path)
+ [8.2 Upload object from a file-like object in binary mode](#8.2-Upload-object-from-a-file-like-object-in-binary-mode)
* 9. Delete objects
```
[1]:
```
```
import awswrangler as wr
import pandas as pd
import boto3
import pytz
from datetime import datetime

df1 = pd.DataFrame({
"id": [1, 2],
"name": ["foo", "boo"]
})
df2 = pd.DataFrame({
"id": [3],
"name": ["bar"]
})
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass

bucket = getpass.getpass()
```
#### 1. CSV files[¶](#1.-CSV-files)
##### 1.1 Writing CSV files[¶](#1.1-Writing-CSV-files)
```
[3]:
```
```
path1 = f"s3://{bucket}/csv/file1.csv"
path2 = f"s3://{bucket}/csv/file2.csv"
wr.s3.to_csv(df1, path1, index=False)
wr.s3.to_csv(df2, path2, index=False)
```
##### 1.2 Reading single CSV file[¶](#1.2-Reading-single-CSV-file)
```
[4]:
```
```
wr.s3.read_csv([path1])
```
```
[4]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
##### 1.3 Reading multiple CSV files[¶](#1.3-Reading-multiple-CSV-files)
###### 1.3.1 Reading CSV by list[¶](#1.3.1-Reading-CSV-by-list)
```
[5]:
```
```
wr.s3.read_csv([path1, path2])
```
```
[5]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
| 2 | 3 | bar |
###### 1.3.2 Reading CSV by prefix[¶](#1.3.2-Reading-CSV-by-prefix)
```
[6]:
```
```
wr.s3.read_csv(f"s3://{bucket}/csv/")
```
```
[6]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
| 2 | 3 | bar |
#### 2. JSON files[¶](#2.-JSON-files)
##### 2.1 Writing JSON files[¶](#2.1-Writing-JSON-files)
```
[7]:
```
```
path1 = f"s3://{bucket}/json/file1.json"
path2 = f"s3://{bucket}/json/file2.json"
wr.s3.to_json(df1, path1)
wr.s3.to_json(df2, path2)
```
```
[7]:
```
```
['s3://woodadw-test/json/file2.json']
```
##### 2.2 Reading single JSON file[¶](#2.2-Reading-single-JSON-file)
```
[8]:
```
```
wr.s3.read_json([path1])
```
```
[8]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
##### 2.3 Reading multiple JSON files[¶](#2.3-Reading-multiple-JSON-files)
###### 2.3.1 Reading JSON by list[¶](#2.3.1-Reading-JSON-by-list)
```
[9]:
```
```
wr.s3.read_json([path1, path2])
```
```
[9]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
| 0 | 3 | bar |
###### 2.3.2 Reading JSON by prefix[¶](#2.3.2-Reading-JSON-by-prefix)
```
[10]:
```
```
wr.s3.read_json(f"s3://{bucket}/json/")
```
```
[10]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
| 0 | 3 | bar |
#### 3. Parquet files[¶](#3.-Parquet-files)
For more complex features related to Parquet Datasets, check tutorial number 4.
##### 3.1 Writing Parquet files[¶](#3.1-Writing-Parquet-files)
```
[11]:
```
```
path1 = f"s3://{bucket}/parquet/file1.parquet"
path2 = f"s3://{bucket}/parquet/file2.parquet"
wr.s3.to_parquet(df1, path1)
wr.s3.to_parquet(df2, path2)
```
##### 3.2 Reading single Parquet file[¶](#3.2-Reading-single-Parquet-file)
```
[12]:
```
```
wr.s3.read_parquet([path1])
```
```
[12]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
##### 3.3 Reading multiple Parquet files[¶](#3.3-Reading-multiple-Parquet-files)
###### 3.3.1 Reading Parquet by list[¶](#3.3.1-Reading-Parquet-by-list)
```
[13]:
```
```
wr.s3.read_parquet([path1, path2])
```
```
[13]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
| 2 | 3 | bar |
###### 3.3.2 Reading Parquet by prefix[¶](#3.3.2-Reading-Parquet-by-prefix)
```
[14]:
```
```
wr.s3.read_parquet(f"s3://{bucket}/parquet/")
```
```
[14]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
| 2 | 3 | bar |
#### 4. Fixed-width formatted files (only read)[¶](#4.-Fixed-width-formatted-files-(only-read))
As of today, Pandas doesn’t implement a `to_fwf` functionality, so let’s manually write two files:
```
[15]:
```
```
content = "1 Herfelingen 27-12-18\n"\
"2 Lambusart 14-06-18\n"\
"3 Spormaggiore 15-04-18"
boto3.client("s3").put_object(Body=content, Bucket=bucket, Key="fwf/file1.txt")
content = "4 Buizingen 05-09-19\n"\
"5 <NAME> 04-09-19"
boto3.client("s3").put_object(Body=content, Bucket=bucket, Key="fwf/file2.txt")
path1 = f"s3://{bucket}/fwf/file1.txt"
path2 = f"s3://{bucket}/fwf/file2.txt"
```
##### 4.1 Reading single FWF file[¶](#4.1-Reading-single-FWF-file)
```
[16]:
```
```
wr.s3.read_fwf([path1], names=["id", "name", "date"])
```
```
[16]:
```
| | id | name | date |
| --- | --- | --- | --- |
| 0 | 1 | Herfelingen | 27-12-18 |
| 1 | 2 | Lambusart | 14-06-18 |
| 2 | 3 | Spormaggiore | 15-04-18 |
##### 4.2 Reading multiple FWF files[¶](#4.2-Reading-multiple-FWF-files)
###### 4.2.1 Reading FWF by list[¶](#4.2.1-Reading-FWF-by-list)
```
[17]:
```
```
wr.s3.read_fwf([path1, path2], names=["id", "name", "date"])
```
```
[17]:
```
| | id | name | date |
| --- | --- | --- | --- |
| 0 | 1 | Herfelingen | 27-12-18 |
| 1 | 2 | Lambusart | 14-06-18 |
| 2 | 3 | Spormaggiore | 15-04-18 |
| 3 | 4 | Buizingen | 05-09-19 |
| 4 | 5 | <NAME> | 04-09-19 |
###### 4.2.2 Reading FWF by prefix[¶](#4.2.2-Reading-FWF-by-prefix)
```
[18]:
```
```
wr.s3.read_fwf(f"s3://{bucket}/fwf/", names=["id", "name", "date"])
```
```
[18]:
```
| | id | name | date |
| --- | --- | --- | --- |
| 0 | 1 | Herfelingen | 27-12-18 |
| 1 | 2 | Lambusart | 14-06-18 |
| 2 | 3 | Spormaggiore | 15-04-18 |
| 3 | 4 | Buizingen | 05-09-19 |
| 4 | 5 | <NAME> | 04-09-19 |
#### 5. Excel files[¶](#5.-Excel-files)
##### 5.1 Writing Excel file[¶](#5.1-Writing-Excel-file)
```
[19]:
```
```
path = f"s3://{bucket}/file0.xlsx"
wr.s3.to_excel(df1, path, index=False)
```
```
[19]:
```
```
's3://woodadw-test/file0.xlsx'
```
##### 5.2 Reading Excel file[¶](#5.2-Reading-Excel-file)
```
[20]:
```
```
wr.s3.read_excel(path)
```
```
[20]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
#### 6. Reading with lastModified filter[¶](#6.-Reading-with-lastModified-filter)
Specify the filter by LastModified Date.
The filter needs to be specified as a datetime with a time zone.
Internally the path needs to be listed; after that the filter is applied.
The filter compares the S3 content with the variables lastModified_begin and lastModified_end.
<https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html>
##### 6.1 Define the Date time with UTC Timezone[¶](#6.1-Define-the-Date-time-with-UTC-Timezone)
```
[21]:
```
```
begin = datetime.strptime("20-07-31 20:30", "%y-%m-%d %H:%M")
end = datetime.strptime("21-07-31 20:30", "%y-%m-%d %H:%M")
begin_utc = pytz.utc.localize(begin)
end_utc = pytz.utc.localize(end)
```
##### 6.2 Define the Date time and specify the Timezone[¶](#6.2-Define-the-Date-time-and-specify-the-Timezone)
```
[22]:
```
```
begin = datetime.strptime("20-07-31 20:30", "%y-%m-%d %H:%M")
end = datetime.strptime("21-07-31 20:30", "%y-%m-%d %H:%M")
timezone = pytz.timezone("America/Los_Angeles")
begin_Los_Angeles = timezone.localize(begin)
end_Los_Angeles = timezone.localize(end)
```
##### 6.3 Read json using the LastModified filters[¶](#6.3-Read-json-using-the-LastModified-filters)
```
[23]:
```
```
wr.s3.read_fwf(f"s3://{bucket}/fwf/", names=["id", "name", "date"], last_modified_begin=begin_utc, last_modified_end=end_utc)
wr.s3.read_json(f"s3://{bucket}/json/", last_modified_begin=begin_utc, last_modified_end=end_utc)
wr.s3.read_csv(f"s3://{bucket}/csv/", last_modified_begin=begin_utc, last_modified_end=end_utc)
wr.s3.read_parquet(f"s3://{bucket}/parquet/", last_modified_begin=begin_utc, last_modified_end=end_utc)
```
#### 7. Download objects[¶](#7.-Download-objects)
Objects can be downloaded from S3 using either a path to a local file or a file-like object in binary mode.
##### 7.1 Download object to a file path[¶](#7.1-Download-object-to-a-file-path)
```
[24]:
```
```
local_file_dir = getpass.getpass()
```
```
[25]:
```
```
import os
path1 = f"s3://{bucket}/csv/file1.csv"
local_file = os.path.join(local_file_dir, "file1.csv")
wr.s3.download(path=path1, local_file=local_file)
pd.read_csv(local_file)
```
```
[25]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
##### 7.2 Download object to a file-like object in binary mode[¶](#7.2-Download-object-to-a-file-like-object-in-binary-mode)
```
[26]:
```
```
path2 = f"s3://{bucket}/csv/file2.csv"
local_file = os.path.join(local_file_dir, "file2.csv")
with open(local_file, mode="wb") as local_f:
wr.s3.download(path=path2, local_file=local_f)
pd.read_csv(local_file)
```
```
[26]:
```
| | id | name |
| --- | --- | --- |
| 0 | 3 | bar |
#### 8. Upload objects[¶](#8.-Upload-objects)
Objects can be uploaded to S3 using either a path to a local file or a file-like object in binary mode.
##### 8.1 Upload object from a file path[¶](#8.1-Upload-object-from-a-file-path)
```
[27]:
```
```
local_file = os.path.join(local_file_dir, "file1.csv")
wr.s3.upload(local_file=local_file, path=path1)
wr.s3.read_csv(path1)
```
```
[27]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
##### 8.2 Upload object from a file-like object in binary mode[¶](#8.2-Upload-object-from-a-file-like-object-in-binary-mode)
```
[28]:
```
```
local_file = os.path.join(local_file_dir, "file2.csv")
with open(local_file, "rb") as local_f:
wr.s3.upload(local_file=local_f, path=path2)
wr.s3.read_csv(path2)
```
```
[28]:
```
| | id | name |
| --- | --- | --- |
| 0 | 3 | bar |
#### 9. Delete objects[¶](#9.-Delete-objects)
```
[29]:
```
```
wr.s3.delete_objects(f"s3://{bucket}/")
```
### 4 - Parquet Datasets[¶](#4---Parquet-Datasets)
awswrangler has 3 different write modes to store Parquet Datasets on Amazon S3.
* **append** (Default)
Only adds new files without any delete.
* **overwrite**
Deletes everything in the target directory and then adds new files. If writing new files fails for any reason, old files are *not* restored.
* **overwrite_partitions** (Partition Upsert)
Only deletes the paths of partitions that should be updated and then writes the new partition files. It’s like a “partition Upsert”.
```
[1]:
```
```
from datetime import date

import awswrangler as wr
import pandas as pd
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass

bucket = getpass.getpass()
path = f"s3://{bucket}/dataset/"
```
```
············
```
#### Creating the Dataset[¶](#Creating-the-Dataset)
```
[3]:
```
```
df = pd.DataFrame({
"id": [1, 2],
"value": ["foo", "boo"],
"date": [date(2020, 1, 1), date(2020, 1, 2)]
})
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="overwrite"
)
wr.s3.read_parquet(path, dataset=True)
```
```
[3]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 1 | foo | 2020-01-01 |
| 1 | 2 | boo | 2020-01-02 |
#### Appending[¶](#Appending)
```
[4]:
```
```
df = pd.DataFrame({
"id": [3],
"value": ["bar"],
"date": [date(2020, 1, 3)]
})
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="append"
)
wr.s3.read_parquet(path, dataset=True)
```
```
[4]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 3 | bar | 2020-01-03 |
| 1 | 1 | foo | 2020-01-01 |
| 2 | 2 | boo | 2020-01-02 |
#### Overwriting[¶](#Overwriting)
```
[5]:
```
```
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="overwrite"
)
wr.s3.read_parquet(path, dataset=True)
```
```
[5]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 3 | bar | 2020-01-03 |
#### Creating a **Partitioned** Dataset[¶](#Creating-a-Partitioned-Dataset)
```
[6]:
```
```
df = pd.DataFrame({
"id": [1, 2],
"value": ["foo", "boo"],
"date": [date(2020, 1, 1), date(2020, 1, 2)]
})
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="overwrite",
partition_cols=["date"]
)
wr.s3.read_parquet(path, dataset=True)
```
```
[6]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 1 | foo | 2020-01-01 |
| 1 | 2 | boo | 2020-01-02 |
#### Upserting partitions (overwrite_partitions)[¶](#Upserting-partitions-(overwrite_partitions))
```
[7]:
```
```
df = pd.DataFrame({
"id": [2, 3],
"value": ["xoo", "bar"],
"date": [date(2020, 1, 2), date(2020, 1, 3)]
})
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="overwrite_partitions",
partition_cols=["date"]
)
wr.s3.read_parquet(path, dataset=True)
```
```
[7]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 1 | foo | 2020-01-01 |
| 1 | 2 | xoo | 2020-01-02 |
| 2 | 3 | bar | 2020-01-03 |
#### BONUS - Glue/Athena integration[¶](#BONUS---Glue/Athena-integration)
```
[8]:
```
```
df = pd.DataFrame({
"id": [1, 2],
"value": ["foo", "boo"],
"date": [date(2020, 1, 1), date(2020, 1, 2)]
})
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="overwrite",
database="aws_sdk_pandas",
table="my_table"
)
wr.athena.read_sql_query("SELECT * FROM my_table", database="aws_sdk_pandas")
```
```
[8]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 1 | foo | 2020-01-01 |
| 1 | 2 | boo | 2020-01-02 |
### 5 - Glue Catalog[¶](#5---Glue-Catalog)
[awswrangler](https://github.com/aws/aws-sdk-pandas) makes heavy use of [Glue Catalog](https://aws.amazon.com/glue/) to store metadata of tables and connections.
```
[1]:
```
```
import awswrangler as wr
import pandas as pd
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass

bucket = getpass.getpass()
path = f"s3://{bucket}/data/"
```
```
············
```
##### Creating a Pandas DataFrame[¶](#Creating-a-Pandas-DataFrame)
```
[3]:
```
```
df = pd.DataFrame({
"id": [1, 2, 3],
"name": ["shoes", "tshirt", "ball"],
"price": [50.3, 10.5, 20.0],
"in_stock": [True, True, False]
})
df
```
```
[3]:
```
| | id | name | price | in_stock |
| --- | --- | --- | --- | --- |
| 0 | 1 | shoes | 50.3 | True |
| 1 | 2 | tshirt | 10.5 | True |
| 2 | 3 | ball | 20.0 | False |
#### Checking Glue Catalog Databases[¶](#Checking-Glue-Catalog-Databases)
```
[4]:
```
```
databases = wr.catalog.databases()
print(databases)
```
```
         Database                                     Description
0  aws_sdk_pandas  AWS SDK for pandas Test Arena - Glue Database
1         default                          Default Hive database
```
##### Create the database awswrangler_test if not exists[¶](#Create-the-database-awswrangler_test-if-not-exists)
```
[5]:
```
```
if "awswrangler_test" not in databases.values:
wr.catalog.create_database("awswrangler_test")
print(wr.catalog.databases())
else:
print("Database awswrangler_test already exists")
```
```
           Database                                     Description
0    aws_sdk_pandas  AWS SDK for pandas Test Arena - Glue Database
1  awswrangler_test
2           default                          Default Hive database
```
#### Checking the empty database[¶](#Checking-the-empty-database)
```
[6]:
```
```
wr.catalog.tables(database="awswrangler_test")
```
```
[6]:
```
| | Database | Table | Description | Columns | Partitions |
| --- | --- | --- | --- | --- | --- |
##### Writing DataFrames to Data Lake (S3 + Parquet + Glue Catalog)[¶](#Writing-DataFrames-to-Data-Lake-(S3-+-Parquet-+-Glue-Catalog))
```
[7]:
```
```
desc = "This is my product table."
param = {
"source": "Product Web Service",
"class": "e-commerce"
}
comments = {
"id": "Unique product ID.",
"name": "Product name",
"price": "Product price (dollar)",
"in_stock": "Is this product availaible in the stock?"
}
res = wr.s3.to_parquet(
df=df,
path=f"s3://{bucket}/products/",
dataset=True,
database="awswrangler_test",
table="products",
mode="overwrite",
glue_table_settings=wr.typing.GlueTableSettings(
description=desc,
parameters=param,
columns_comments=comments
),
)
```
##### Checking Glue Catalog (AWS Console)[¶](#Checking-Glue-Catalog-(AWS-Console))
##### Looking Up for the new table![¶](#Looking-Up-for-the-new-table!)
```
[8]:
```
```
wr.catalog.tables(name_contains="roduc")
```
```
[8]:
```
| | Database | Table | Description | Columns | Partitions |
| --- | --- | --- | --- | --- | --- |
| 0 | awswrangler_test | products | This is my product table. | id, name, price, in_stock | |
```
[9]:
```
```
wr.catalog.tables(name_prefix="pro")
```
```
[9]:
```
| | Database | Table | Description | Columns | Partitions |
| --- | --- | --- | --- | --- | --- |
| 0 | awswrangler_test | products | This is my product table. | id, name, price, in_stock | |
```
[10]:
```
```
wr.catalog.tables(name_suffix="ts")
```
```
[10]:
```
| | Database | Table | Description | Columns | Partitions |
| --- | --- | --- | --- | --- | --- |
| 0 | awswrangler_test | products | This is my product table. | id, name, price, in_stock | |
```
[11]:
```
```
wr.catalog.tables(search_text="This is my")
```
```
[11]:
```
| | Database | Table | Description | Columns | Partitions |
| --- | --- | --- | --- | --- | --- |
| 0 | awswrangler_test | products | This is my product table. | id, name, price, in_stock | |
##### Getting tables details[¶](#Getting-tables-details)
```
[12]:
```
```
wr.catalog.table(database="awswrangler_test", table="products")
```
```
[12]:
```
| | Column Name | Type | Partition | Comment |
| --- | --- | --- | --- | --- |
| 0 | id | bigint | False | Unique product ID. |
| 1 | name | string | False | Product name |
| 2 | price | double | False | Product price (dollar) |
| 3 | in_stock | boolean | False | Is this product available in the stock? |
#### Cleaning Up the Database[¶](#Cleaning-Up-the-Database)
```
[13]:
```
```
for table in wr.catalog.get_tables(database="awswrangler_test"):
wr.catalog.delete_table_if_exists(database="awswrangler_test", table=table["Name"])
```
##### Delete Database[¶](#Delete-Database)
```
[14]:
```
```
wr.catalog.delete_database('awswrangler_test')
```
### 6 - Amazon Athena[¶](#6---Amazon-Athena)
[awswrangler](https://github.com/aws/aws-sdk-pandas) has three ways to run queries on Athena and fetch the result as a DataFrame:
* **ctas_approach=True** (Default)
Wraps the query with a CTAS and then reads the table data as parquet directly from s3.
+ `PROS`:
- Faster for mid and big result sizes.
- Can handle some level of nested types.
+ `CONS`:
- Requires create/delete table permissions on Glue.
- Does not support timestamp with time zone
- Does not support columns with repeated names.
- Does not support columns with undefined data types.
- A temporary table will be created and then deleted immediately.
- Does not support custom data_source/catalog_id.
* **unload_approach=True and ctas_approach=False**
Does an UNLOAD query on Athena and parses the Parquet result on s3.
+ `PROS`:
- Faster for mid and big result sizes.
- Can handle some level of nested types.
- Does not modify Glue Data Catalog.
+ `CONS`:
- Output S3 path must be empty.
- Does not support timestamp with time zone
- Does not support columns with repeated names.
- Does not support columns with undefined data types.
* **ctas_approach=False**
Does a regular query on Athena and parses the regular CSV result on s3.
+ `PROS`:
- Faster for small result sizes (less latency).
- Does not require create/delete table permissions on Glue
- Supports timestamp with time zone.
- Support custom data_source/catalog_id.
+ `CONS`:
- Slower (but still faster than other libraries that use the regular Athena API)
- Does not handle nested types at all.
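A minimal sketch of how each of the three approaches above is selected through keyword arguments (the `noaa` table and the `bucket` variable are the ones created/defined later in this tutorial; the `s3_output` prefix must be empty for the UNLOAD case):
```
import awswrangler as wr
sql = "SELECT * FROM noaa"
# 1. CTAS approach (default)
df_ctas = wr.athena.read_sql_query(sql, database="awswrangler_test")
# 2. UNLOAD approach
df_unload = wr.athena.read_sql_query(
sql,
database="awswrangler_test",
ctas_approach=False,
unload_approach=True,
s3_output=f"s3://{bucket}/unload/",
)
# 3. Regular Athena query (CSV result)
df_regular = wr.athena.read_sql_query(sql, database="awswrangler_test", ctas_approach=False)
```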
```
[1]:
```
```
import awswrangler as wr
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass

bucket = getpass.getpass()
path = f"s3://{bucket}/data/"
```
#### Checking/Creating Glue Catalog Databases[¶](#Checking/Creating-Glue-Catalog-Databases)
```
[3]:
```
```
if "awswrangler_test" not in wr.catalog.databases().values:
wr.catalog.create_database("awswrangler_test")
```
##### Creating a Parquet Table from the NOAA’s CSV files[¶](#Creating-a-Parquet-Table-from-the-NOAA's-CSV-files)
[Reference](https://registry.opendata.aws/noaa-ghcn/)
```
[ ]:
```
```
cols = ["id", "dt", "element", "value", "m_flag", "q_flag", "s_flag", "obs_time"]
df = wr.s3.read_csv(
path="s3://noaa-ghcn-pds/csv/by_year/189",
names=cols,
parse_dates=["dt", "obs_time"]) # Read 10 files from the 1890 decade (~1GB)
df
```
```
[ ]:
```
```
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="overwrite",
database="awswrangler_test",
table="noaa"
)
```
```
[ ]:
```
```
wr.catalog.table(database="awswrangler_test", table="noaa")
```
#### Reading with ctas_approach=False[¶](#Reading-with-ctas_approach=False)
```
[ ]:
```
```
%%time
wr.athena.read_sql_query("SELECT * FROM noaa", database="awswrangler_test", ctas_approach=False)
```
#### Default with ctas_approach=True - 13x faster (default)[¶](#Default-with-ctas_approach=True---13x-faster-(default))
```
[ ]:
```
```
%%time
wr.athena.read_sql_query("SELECT * FROM noaa", database="awswrangler_test")
```
#### Using categories to speed up and save memory - 24x faster[¶](#Using-categories-to-speed-up-and-save-memory---24x-faster)
```
[ ]:
```
```
%%time
wr.athena.read_sql_query("SELECT * FROM noaa", database="awswrangler_test", categories=["id", "dt", "element", "value", "m_flag", "q_flag", "s_flag", "obs_time"])
```
#### Reading with unload_approach=True[¶](#Reading-with-unload_approach=True)
```
[ ]:
```
```
%%time
wr.athena.read_sql_query("SELECT * FROM noaa", database="awswrangler_test", ctas_approach=False, unload_approach=True, s3_output=f"s3://{bucket}/unload/")
```
#### Batching (Good for restricted memory environments)[¶](#Batching-(Good-for-restricted-memory-environments))
```
[ ]:
```
```
%%time
dfs = wr.athena.read_sql_query(
"SELECT * FROM noaa",
database="awswrangler_test",
chunksize=True # Chunksize calculated automatically for ctas_approach.
)
for df in dfs: # Batching
print(len(df.index))
```
```
[ ]:
```
```
%%time
dfs = wr.athena.read_sql_query(
"SELECT * FROM noaa",
database="awswrangler_test",
chunksize=100_000_000
)
for df in dfs: # Batching
print(len(df.index))
```
#### Parameterized queries[¶](#Parameterized-queries)
##### Client-side parameter resolution[¶](#Client-side-parameter-resolution)
The `params` parameter allows client-side resolution of parameters, which are specified with `:col_name`, when `paramstyle` is set to `named`. Additionally, Python types will map to the appropriate Athena definitions. For example, the value `dt.date(2023, 1, 1)` will resolve to `DATE '2023-01-01'`.
For the example below, the following query will be sent to Athena:
```
SELECT * FROM noaa WHERE S_FLAG = 'E'
```
```
[ ]:
```
```
%%time
wr.athena.read_sql_query(
"SELECT * FROM noaa WHERE S_FLAG = :flag_value",
database="awswrangler_test",
params={
"flag_value": "E",
},
)
```
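To make the type mapping above concrete, a sketch that passes a Python `date` through the client-side formatter (a hypothetical filter on the `dt` column of the `noaa` table; the date object is rendered as a `DATE` literal before the query is sent):
```
import datetime
# datetime.date(1890, 1, 2) resolves client-side to DATE '1890-01-02'
wr.athena.read_sql_query(
"SELECT * FROM noaa WHERE dt = :day",
database="awswrangler_test",
params={"day": datetime.date(1890, 1, 2)},
paramstyle="named",
)
```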
##### Server-side parameter resolution[¶](#Server-side-parameter-resolution)
Alternatively, Athena supports server-side parameter resolution when `paramstyle` is defined as `qmark`. The SQL statement sent to Athena will not contain the values passed in `params`. Instead, they will be passed as part of a separate `params` parameter in `boto3`.
The downside of using this approach is that types aren’t automatically resolved. The values sent to `params` must be strings. Therefore, if one of the values is a date, the value passed in `params` has to be `DATE 'XXXX-XX-XX'`.
The upside, however, is that these parameters can be used with prepared statements.
For more information, see “[Using parameterized queries](https://docs.aws.amazon.com/athena/latest/ug/querying-with-prepared-statements.html)”.
```
[ ]:
```
```
%%time
wr.athena.read_sql_query(
"SELECT * FROM noaa WHERE S_FLAG = ?",
database="awswrangler_test",
params=["E"],
paramstyle="qmark",
)
```
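The example above passes a plain string; if one of the values were a date, the literal would have to be spelled out explicitly, as noted above (a sketch; `dt` is the same hypothetical date column of the `noaa` table used earlier):
```
# With qmark, values are passed to Athena as strings, so the DATE literal is explicit
wr.athena.read_sql_query(
"SELECT * FROM noaa WHERE dt = ?",
database="awswrangler_test",
params=["DATE '1890-01-02'"],
paramstyle="qmark",
)
```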
#### Prepared statements[¶](#Prepared-statements)
```
[ ]:
```
```
wr.athena.create_prepared_statement(
sql="SELECT * FROM noaa WHERE S_FLAG = ?",
statement_name="statement",
)
# Resolve parameter using Athena execution parameters
wr.athena.read_sql_query(
sql="EXECUTE statement",
database="awswrangler_test",
params=["E"],
paramstyle="qmark",
)
# Resolve parameter using Athena execution parameters (same effect as above)
wr.athena.read_sql_query(
sql="EXECUTE statement USING ?",
database="awswrangler_test",
params=["E"],
paramstyle="qmark",
)
# Resolve parameter using client-side formatter
wr.athena.read_sql_query(
sql="EXECUTE statement USING :flag_value",
database="awswrangler_test",
params={
"flag_value": "E",
},
paramstyle="named",
)
```
```
[ ]:
```
```
# Clean up prepared statement
wr.athena.delete_prepared_statement(statement_name="statement")
```
#### Cleaning Up S3[¶](#Cleaning-Up-S3)
```
[ ]:
```
```
wr.s3.delete_objects(path)
```
#### Delete table[¶](#Delete-table)
```
[ ]:
```
```
wr.catalog.delete_table_if_exists(database="awswrangler_test", table="noaa")
```
#### Delete Database[¶](#Delete-Database)
```
[ ]:
```
```
wr.catalog.delete_database('awswrangler_test')
```
### 7 - Redshift, MySQL, PostgreSQL, SQL Server and Oracle[¶](#7---Redshift,-MySQL,-PostgreSQL,-SQL-Server-and-Oracle)
[awswrangler](https://github.com/aws/aws-sdk-pandas)’s Redshift, MySQL, PostgreSQL, SQL Server and Oracle modules have two basic functions in common that try to follow Pandas conventions, but add more data type consistency.
* [wr.redshift.to_sql()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.redshift.to_sql.html)
* [wr.redshift.read_sql_query()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.redshift.read_sql_query.html)
* [wr.mysql.to_sql()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.mysql.to_sql.html)
* [wr.mysql.read_sql_query()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.mysql.read_sql_query.html)
* [wr.postgresql.to_sql()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.postgresql.to_sql.html)
* [wr.postgresql.read_sql_query()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.postgresql.read_sql_query.html)
* [wr.sqlserver.to_sql()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.sqlserver.to_sql.html)
* [wr.sqlserver.read_sql_query()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.sqlserver.read_sql_query.html)
* [wr.oracle.to_sql()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.oracle.to_sql.html)
* [wr.oracle.read_sql_query()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.oracle.read_sql_query.html)
```
[ ]:
```
```
# Install the optional modules first
!pip install 'awswrangler[redshift, postgres, mysql, sqlserver, oracle]'
```
```
[1]:
```
```
import awswrangler as wr
import pandas as pd

df = pd.DataFrame({
"id": [1, 2],
"name": ["foo", "boo"]
})
```
#### Connect using the Glue Catalog Connections[¶](#Connect-using-the-Glue-Catalog-Connections)
* [wr.redshift.connect()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.redshift.connect.html)
* [wr.mysql.connect()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.mysql.connect.html)
* [wr.postgresql.connect()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.postgresql.connect.html)
* [wr.sqlserver.connect()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.sqlserver.connect.html)
* [wr.oracle.connect()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.oracle.connect.html)
```
[2]:
```
```
con_redshift = wr.redshift.connect("aws-sdk-pandas-redshift")
con_mysql = wr.mysql.connect("aws-sdk-pandas-mysql")
con_postgresql = wr.postgresql.connect("aws-sdk-pandas-postgresql")
con_sqlserver = wr.sqlserver.connect("aws-sdk-pandas-sqlserver")
con_oracle = wr.oracle.connect("aws-sdk-pandas-oracle")
```
#### Raw SQL queries (No Pandas)[¶](#Raw-SQL-queries-(No-Pandas))
```
[3]:
```
```
with con_redshift.cursor() as cursor:
for row in cursor.execute("SELECT 1"):
print(row)
```
```
[1]
```
#### Loading data to Database[¶](#Loading-data-to-Database)
```
[4]:
```
```
wr.redshift.to_sql(df, con_redshift, schema="public", table="tutorial", mode="overwrite")
wr.mysql.to_sql(df, con_mysql, schema="test", table="tutorial", mode="overwrite")
wr.postgresql.to_sql(df, con_postgresql, schema="public", table="tutorial", mode="overwrite")
wr.sqlserver.to_sql(df, con_sqlserver, schema="dbo", table="tutorial", mode="overwrite")
wr.oracle.to_sql(df, con_oracle, schema="test", table="tutorial", mode="overwrite")
```
#### Unloading data from Database[¶](#Unloading-data-from-Database)
```
[5]:
```
```
wr.redshift.read_sql_query("SELECT * FROM public.tutorial", con=con_redshift)
wr.mysql.read_sql_query("SELECT * FROM test.tutorial", con=con_mysql)
wr.postgresql.read_sql_query("SELECT * FROM public.tutorial", con=con_postgresql)
wr.sqlserver.read_sql_query("SELECT * FROM dbo.tutorial", con=con_sqlserver)
wr.oracle.read_sql_query("SELECT * FROM test.tutorial", con=con_oracle)
```
```
[5]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
```
[6]:
```
```
con_redshift.close()
con_mysql.close()
con_postgresql.close()
con_sqlserver.close()
con_oracle.close()
```
### 8 - Redshift - COPY & UNLOAD[¶](#8---Redshift---COPY-&-UNLOAD)
`Amazon Redshift` has two SQL commands that help to load and unload large amounts of data, staging it on `Amazon S3`:
1 - [COPY](https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html)
2 - [UNLOAD](https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html)
Let’s take a look at how awswrangler can use them.
```
[ ]:
```
```
# Install the optional modules first
!pip install 'awswrangler[redshift]'
```
```
[1]:
```
```
import awswrangler as wr
con = wr.redshift.connect("aws-sdk-pandas-redshift")
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass

bucket = getpass.getpass()
path = f"s3://{bucket}/stage/"
```
```
···········································
```
#### Enter your IAM ROLE ARN:[¶](#Enter-your-IAM-ROLE-ARN:)
```
[3]:
```
```
iam_role = getpass.getpass()
```
```
····················································································
```
##### Creating a DataFrame from the NOAA’s CSV files[¶](#Creating-a-DataFrame-from-the-NOAA's-CSV-files)
[Reference](https://registry.opendata.aws/noaa-ghcn/)
```
[4]:
```
```
cols = ["id", "dt", "element", "value", "m_flag", "q_flag", "s_flag", "obs_time"]
df = wr.s3.read_csv(
path="s3://noaa-ghcn-pds/csv/by_year/1897.csv",
names=cols,
parse_dates=["dt", "obs_time"]) # ~127MB, ~4MM rows
df
```
```
[4]:
```
| | id | dt | element | value | m_flag | q_flag | s_flag | obs_time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | AG000060590 | 1897-01-01 | TMAX | 170 | NaN | NaN | E | NaN |
| 1 | AG000060590 | 1897-01-01 | TMIN | -14 | NaN | NaN | E | NaN |
| 2 | AG000060590 | 1897-01-01 | PRCP | 0 | NaN | NaN | E | NaN |
| 3 | AGE00135039 | 1897-01-01 | TMAX | 140 | NaN | NaN | E | NaN |
| 4 | AGE00135039 | 1897-01-01 | TMIN | 40 | NaN | NaN | E | NaN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 3923594 | UZM00038457 | 1897-12-31 | TMIN | -145 | NaN | NaN | r | NaN |
| 3923595 | UZM00038457 | 1897-12-31 | PRCP | 4 | NaN | NaN | r | NaN |
| 3923596 | UZM00038457 | 1897-12-31 | TAVG | -95 | NaN | NaN | r | NaN |
| 3923597 | UZM00038618 | 1897-12-31 | PRCP | 66 | NaN | NaN | r | NaN |
| 3923598 | UZM00038618 | 1897-12-31 | TAVG | -45 | NaN | NaN | r | NaN |
3923599 rows × 8 columns
#### Load and Unload with COPY and UNLOAD commands[¶](#Load-and-Unload-with-COPY-and-UNLOAD-commands)
> Note: Please use an empty S3 path for the COPY command.
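One simple way to guarantee that is to clean the staging prefix first (a sketch using the same `wr.s3.delete_objects` call shown in the other tutorials here):
```
# Make sure the staging prefix is empty before running the COPY command
wr.s3.delete_objects(path)
```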
```
[5]:
```
```
%%time
wr.redshift.copy(
df=df,
path=path,
con=con,
schema="public",
table="commands",
mode="overwrite",
iam_role=iam_role,
)
```
```
CPU times: user 2.78 s, sys: 293 ms, total: 3.08 s
Wall time: 20.7 s
```
```
[6]:
```
```
%%time
wr.redshift.unload(
sql="SELECT * FROM public.commands",
con=con,
iam_role=iam_role,
path=path,
keep_files=True,
)
```
```
CPU times: user 10 s, sys: 1.14 s, total: 11.2 s
Wall time: 27.5 s
```
```
[6]:
```
| | id | dt | element | value | m_flag | q_flag | s_flag | obs_time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | AG000060590 | 1897-01-01 | TMAX | 170 | <NA> | <NA> | E | <NA> |
| 1 | AG000060590 | 1897-01-01 | PRCP | 0 | <NA> | <NA> | E | <NA> |
| 2 | AGE00135039 | 1897-01-01 | TMIN | 40 | <NA> | <NA> | E | <NA> |
| 3 | AGE00147705 | 1897-01-01 | TMAX | 164 | <NA> | <NA> | E | <NA> |
| 4 | AGE00147705 | 1897-01-01 | PRCP | 0 | <NA> | <NA> | E | <NA> |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 3923594 | USW00094967 | 1897-12-31 | TMAX | -144 | <NA> | <NA> | 6 | <NA> |
| 3923595 | USW00094967 | 1897-12-31 | PRCP | 0 | P | <NA> | 6 | <NA> |
| 3923596 | UZM00038457 | 1897-12-31 | TMAX | -49 | <NA> | <NA> | r | <NA> |
| 3923597 | UZM00038457 | 1897-12-31 | PRCP | 4 | <NA> | <NA> | r | <NA> |
| 3923598 | UZM00038618 | 1897-12-31 | PRCP | 66 | <NA> | <NA> | r | <NA> |
7847198 rows × 8 columns
```
[7]:
```
```
con.close()
```
### 9 - Redshift - Append, Overwrite and Upsert[¶](#9---Redshift---Append,-Overwrite-and-Upsert)
awswrangler’s `copy/to_sql` function has three different `mode` options for Redshift.
1 - `append`
2 - `overwrite`
3 - `upsert`
```
[ ]:
```
```
# Install the optional modules first
!pip install 'awswrangler[redshift]'
```
```
[2]:
```
```
import awswrangler as wr
import pandas as pd
from datetime import date

con = wr.redshift.connect("aws-sdk-pandas-redshift")
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[3]:
```
```
import getpass

bucket = getpass.getpass()
path = f"s3://{bucket}/stage/"
```
```
···········································
```
#### Enter your IAM ROLE ARN:[¶](#Enter-your-IAM-ROLE-ARN:)
```
[4]:
```
```
iam_role = getpass.getpass()
```
```
····················································································
```
##### Creating the table (Overwriting if it exists)[¶](#Creating-the-table-(Overwriting-if-it-exists))
```
[10]:
```
```
df = pd.DataFrame({
"id": [1, 2],
"value": ["foo", "boo"],
"date": [date(2020, 1, 1), date(2020, 1, 2)]
})
wr.redshift.copy(
df=df,
path=path,
con=con,
schema="public",
table="my_table",
mode="overwrite",
iam_role=iam_role,
primary_keys=["id"]
)
wr.redshift.read_sql_table(table="my_table", schema="public", con=con)
```
```
[10]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 2 | boo | 2020-01-02 |
| 1 | 1 | foo | 2020-01-01 |
#### Appending[¶](#Appending)
```
[11]:
```
```
df = pd.DataFrame({
"id": [3],
"value": ["bar"],
"date": [date(2020, 1, 3)]
})
wr.redshift.copy(
df=df,
path=path,
con=con,
schema="public",
table="my_table",
mode="append",
iam_role=iam_role,
primary_keys=["id"]
)
wr.redshift.read_sql_table(table="my_table", schema="public", con=con)
```
```
[11]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 1 | foo | 2020-01-01 |
| 1 | 2 | boo | 2020-01-02 |
| 2 | 3 | bar | 2020-01-03 |
#### Upserting[¶](#Upserting)
```
[12]:
```
```
df = pd.DataFrame({
"id": [2, 3],
"value": ["xoo", "bar"],
"date": [date(2020, 1, 2), date(2020, 1, 3)]
})
wr.redshift.copy(
df=df,
path=path,
con=con,
schema="public",
table="my_table",
mode="upsert",
iam_role=iam_role,
primary_keys=["id"]
)
wr.redshift.read_sql_table(table="my_table", schema="public", con=con)
```
```
[12]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 1 | foo | 2020-01-01 |
| 1 | 2 | xoo | 2020-01-02 |
| 2 | 3 | bar | 2020-01-03 |
#### Cleaning Up[¶](#Cleaning-Up)
```
[13]:
```
```
with con.cursor() as cursor:
cursor.execute("DROP TABLE public.my_table")
con.close()
```
### 10 - Parquet Crawler[¶](#10---Parquet-Crawler)
[awswrangler](https://github.com/aws/aws-sdk-pandas) can extract only the metadata from Parquet files and Partitions and then add it to the Glue Catalog.
```
[1]:
```
```
import awswrangler as wr
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass

bucket = getpass.getpass()
path = f"s3://{bucket}/data/"
```
```
············
```
##### Creating a Parquet Table from the NOAA’s CSV files[¶](#Creating-a-Parquet-Table-from-the-NOAA's-CSV-files)
[Reference](https://registry.opendata.aws/noaa-ghcn/)
```
[3]:
```
```
cols = ["id", "dt", "element", "value", "m_flag", "q_flag", "s_flag", "obs_time"]
df = wr.s3.read_csv(
path="s3://noaa-ghcn-pds/csv/by_year/189",
names=cols,
parse_dates=["dt", "obs_time"]) # Read 10 files from the 1890 decade (~1GB)
df
```
```
[3]:
```
| | id | dt | element | value | m_flag | q_flag | s_flag | obs_time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | AGE00135039 | 1890-01-01 | TMAX | 160 | NaN | NaN | E | NaN |
| 1 | AGE00135039 | 1890-01-01 | TMIN | 30 | NaN | NaN | E | NaN |
| 2 | AGE00135039 | 1890-01-01 | PRCP | 45 | NaN | NaN | E | NaN |
| 3 | AGE00147705 | 1890-01-01 | TMAX | 140 | NaN | NaN | E | NaN |
| 4 | AGE00147705 | 1890-01-01 | TMIN | 74 | NaN | NaN | E | NaN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 29249753 | UZM00038457 | 1899-12-31 | PRCP | 16 | NaN | NaN | r | NaN |
| 29249754 | UZM00038457 | 1899-12-31 | TAVG | -73 | NaN | NaN | r | NaN |
| 29249755 | UZM00038618 | 1899-12-31 | TMIN | -76 | NaN | NaN | r | NaN |
| 29249756 | UZM00038618 | 1899-12-31 | PRCP | 0 | NaN | NaN | r | NaN |
| 29249757 | UZM00038618 | 1899-12-31 | TAVG | -60 | NaN | NaN | r | NaN |
29249758 rows × 8 columns
```
[4]:
```
```
df["year"] = df["dt"].dt.year
df.head(3)
```
```
[4]:
```
| | id | dt | element | value | m_flag | q_flag | s_flag | obs_time | year |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | AGE00135039 | 1890-01-01 | TMAX | 160 | NaN | NaN | E | NaN | 1890 |
| 1 | AGE00135039 | 1890-01-01 | TMIN | 30 | NaN | NaN | E | NaN | 1890 |
| 2 | AGE00135039 | 1890-01-01 | PRCP | 45 | NaN | NaN | E | NaN | 1890 |
```
[5]:
```
```
res = wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="overwrite",
partition_cols=["year"],
)
```
```
[6]:
```
```
[ x.split("data/", 1)[1] for x in wr.s3.list_objects(path)]
```
```
[6]:
```
```
['year=1890/06a519afcf8e48c9b08c8908f30adcfe.snappy.parquet',
'year=1891/5a99c28dbef54008bfc770c946099e02.snappy.parquet',
'year=1892/9b1ea5d1cfad40f78c920f93540ca8ec.snappy.parquet',
'year=1893/92259b49c134401eaf772506ee802af6.snappy.parquet',
'year=1894/c734469ffff944f69dc277c630064a16.snappy.parquet',
'year=1895/cf7ccde86aaf4d138f86c379c0817aa6.snappy.parquet',
'year=1896/ce02f4c2c554438786b766b33db451b6.snappy.parquet',
'year=1897/e04de04ad3c444deadcc9c410ab97ca1.snappy.parquet',
'year=1898/acb0e02878f04b56a6200f4b5a97be0e.snappy.parquet',
'year=1899/a269bdbb0f6a48faac55f3bcfef7df7a.snappy.parquet']
```
#### Crawling![¶](#Crawling!)
```
[7]:
```
```
%%time
res = wr.s3.store_parquet_metadata(
path=path,
database="awswrangler_test",
table="crawler",
dataset=True,
mode="overwrite",
dtype={"year": "int"}
)
```
```
CPU times: user 1.81 s, sys: 528 ms, total: 2.33 s
Wall time: 3.21 s
```
#### Checking[¶](#Checking)
```
[8]:
```
```
wr.catalog.table(database="awswrangler_test", table="crawler")
```
```
[8]:
```
| | Column Name | Type | Partition | Comment |
| --- | --- | --- | --- | --- |
| 0 | id | string | False | |
| 1 | dt | timestamp | False | |
| 2 | element | string | False | |
| 3 | value | bigint | False | |
| 4 | m_flag | string | False | |
| 5 | q_flag | string | False | |
| 6 | s_flag | string | False | |
| 7 | obs_time | string | False | |
| 8 | year | int | True | |
```
[9]:
```
```
%%time
wr.athena.read_sql_query("SELECT * FROM crawler WHERE year=1890", database="awswrangler_test")
```
```
CPU times: user 3.52 s, sys: 811 ms, total: 4.33 s
Wall time: 9.6 s
```
```
[9]:
```
| | id | dt | element | value | m_flag | q_flag | s_flag | obs_time | year |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | USC00195145 | 1890-01-01 | TMIN | -28 | <NA> | <NA> | 6 | <NA> | 1890 |
| 1 | USC00196770 | 1890-01-01 | PRCP | 0 | P | <NA> | 6 | <NA> | 1890 |
| 2 | USC00196770 | 1890-01-01 | SNOW | 0 | <NA> | <NA> | 6 | <NA> | 1890 |
| 3 | USC00196915 | 1890-01-01 | PRCP | 0 | P | <NA> | 6 | <NA> | 1890 |
| 4 | USC00196915 | 1890-01-01 | SNOW | 0 | <NA> | <NA> | 6 | <NA> | 1890 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 6139 | ASN00022006 | 1890-12-03 | PRCP | 0 | <NA> | <NA> | a | <NA> | 1890 |
| 6140 | ASN00022007 | 1890-12-03 | PRCP | 0 | <NA> | <NA> | a | <NA> | 1890 |
| 6141 | ASN00022008 | 1890-12-03 | PRCP | 0 | <NA> | <NA> | a | <NA> | 1890 |
| 6142 | ASN00022009 | 1890-12-03 | PRCP | 0 | <NA> | <NA> | a | <NA> | 1890 |
| 6143 | ASN00022011 | 1890-12-03 | PRCP | 0 | <NA> | <NA> | a | <NA> | 1890 |
1276246 rows × 9 columns
#### Cleaning Up S3[¶](#Cleaning-Up-S3)
```
[10]:
```
```
wr.s3.delete_objects(path)
```
#### Cleaning Up the Database[¶](#Cleaning-Up-the-Database)
```
[11]:
```
```
for table in wr.catalog.get_tables(database="awswrangler_test"):
wr.catalog.delete_table_if_exists(database="awswrangler_test", table=table["Name"])
```
### 11 - CSV Datasets[¶](#11---CSV-Datasets)
awswrangler has 3 different write modes to store CSV Datasets on Amazon S3.
* **append** (Default)
Only adds new files without any delete.
* **overwrite**
Deletes everything in the target directory and then adds new files.
* **overwrite_partitions** (Partition Upsert)
Only deletes the paths of partitions that should be updated and then writes the new partition files. It’s like a “partition Upsert”.
```
[1]:
```
```
from datetime import date

import awswrangler as wr
import pandas as pd
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass

bucket = getpass.getpass()
path = f"s3://{bucket}/dataset/"
```
```
············
```
#### Checking/Creating Glue Catalog Databases[¶](#Checking/Creating-Glue-Catalog-Databases)
```
[3]:
```
```
if "awswrangler_test" not in wr.catalog.databases().values:
wr.catalog.create_database("awswrangler_test")
```
#### Creating the Dataset[¶](#Creating-the-Dataset)
```
[4]:
```
```
df = pd.DataFrame({
"id": [1, 2],
"value": ["foo", "boo"],
"date": [date(2020, 1, 1), date(2020, 1, 2)]
})
wr.s3.to_csv(
df=df,
path=path,
index=False,
dataset=True,
mode="overwrite",
database="awswrangler_test",
table="csv_dataset"
)
wr.athena.read_sql_table(database="awswrangler_test", table="csv_dataset")
```
```
[4]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 1 | foo | 2020-01-01 |
| 1 | 2 | boo | 2020-01-02 |
#### Appending[¶](#Appending)
```
[5]:
```
```
df = pd.DataFrame({
"id": [3],
"value": ["bar"],
"date": [date(2020, 1, 3)]
})
wr.s3.to_csv(
df=df,
path=path,
index=False,
dataset=True,
mode="append",
database="awswrangler_test",
table="csv_dataset"
)
wr.athena.read_sql_table(database="awswrangler_test", table="csv_dataset")
```
```
[5]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 3 | bar | 2020-01-03 |
| 1 | 1 | foo | 2020-01-01 |
| 2 | 2 | boo | 2020-01-02 |
#### Overwriting[¶](#Overwriting)
```
[6]:
```
```
wr.s3.to_csv(
df=df,
path=path,
index=False,
dataset=True,
mode="overwrite",
database="awswrangler_test",
table="csv_dataset"
)
wr.athena.read_sql_table(database="awswrangler_test", table="csv_dataset")
```
```
[6]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 3 | bar | 2020-01-03 |
#### Creating a **Partitioned** Dataset[¶](#Creating-a-Partitioned-Dataset)
```
[7]:
```
```
df = pd.DataFrame({
"id": [1, 2],
"value": ["foo", "boo"],
"date": [date(2020, 1, 1), date(2020, 1, 2)]
})
wr.s3.to_csv(
df=df,
path=path,
index=False,
dataset=True,
mode="overwrite",
database="awswrangler_test",
table="csv_dataset",
partition_cols=["date"]
)
wr.athena.read_sql_table(database="awswrangler_test", table="csv_dataset")
```
```
[7]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 2 | boo | 2020-01-02 |
| 1 | 1 | foo | 2020-01-01 |
#### Upserting partitions (overwrite_partitions)[¶](#Upserting-partitions-(overwrite_partitions))
```
[8]:
```
```
df = pd.DataFrame({
"id": [2, 3],
"value": ["xoo", "bar"],
"date": [date(2020, 1, 2), date(2020, 1, 3)]
})
wr.s3.to_csv(
df=df,
path=path,
index=False,
dataset=True,
mode="overwrite_partitions",
database="awswrangler_test",
table="csv_dataset",
partition_cols=["date"]
)
wr.athena.read_sql_table(database="awswrangler_test", table="csv_dataset")
```
```
[8]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 1 | foo | 2020-01-01 |
| 1 | 2 | xoo | 2020-01-02 |
| 0 | 3 | bar | 2020-01-03 |
#### BONUS - Glue/Athena integration[¶](#BONUS---Glue/Athena-integration)
```
[9]:
```
```
df = pd.DataFrame({
"id": [1, 2],
"value": ["foo", "boo"],
"date": [date(2020, 1, 1), date(2020, 1, 2)]
})
wr.s3.to_csv(
df=df,
path=path,
dataset=True,
index=False,
mode="overwrite",
database="aws_sdk_pandas",
table="my_table",
compression="gzip"
)
wr.athena.read_sql_query("SELECT * FROM my_table", database="aws_sdk_pandas")
```
```
[9]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 1 | foo | 2020-01-01 |
| 1 | 2 | boo | 2020-01-02 |
### 12 - CSV Crawler[¶](#12---CSV-Crawler)
[awswrangler](https://github.com/aws/aws-sdk-pandas) can extract only the metadata from a Pandas DataFrame and then add it to the Glue Catalog as a table.
```
[1]:
```
```
import awswrangler as wr
from datetime import datetime
import pandas as pd
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass

bucket = getpass.getpass()
path = f"s3://{bucket}/csv_crawler/"
```
```
············
```
##### Creating a Pandas DataFrame[¶](#Creating-a-Pandas-DataFrame)
```
[3]:
```
```
ts = lambda x: datetime.strptime(x, "%Y-%m-%d %H:%M:%S.%f")  # noqa
dt = lambda x: datetime.strptime(x, "%Y-%m-%d").date()  # noqa
df = pd.DataFrame(
{
"id": [1, 2, 3],
"string": ["foo", None, "boo"],
"float": [1.0, None, 2.0],
"date": [dt("2020-01-01"), None, dt("2020-01-02")],
"timestamp": [ts("2020-01-01 00:00:00.0"), None, ts("2020-01-02 00:00:01.0")],
"bool": [True, None, False],
"par0": [1, 1, 2],
"par1": ["a", "b", "b"],
}
)
df
```
```
[3]:
```
| | id | string | float | date | timestamp | bool | par0 | par1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | foo | 1.0 | 2020-01-01 | 2020-01-01 00:00:00 | True | 1 | a |
| 1 | 2 | None | NaN | None | NaT | None | 1 | b |
| 2 | 3 | boo | 2.0 | 2020-01-02 | 2020-01-02 00:00:01 | False | 2 | b |
##### Extracting the metadata[¶](#Extracting-the-metadata)
```
[4]:
```
```
columns_types, partitions_types = wr.catalog.extract_athena_types(
df=df,
file_format="csv",
index=False,
partition_cols=["par0", "par1"]
)
```
```
[5]:
```
```
columns_types
```
```
[5]:
```
```
{'id': 'bigint',
'string': 'string',
'float': 'double',
'date': 'date',
'timestamp': 'timestamp',
'bool': 'boolean'}
```
```
[6]:
```
```
partitions_types
```
```
[6]:
```
```
{'par0': 'bigint', 'par1': 'string'}
```
#### Creating the table[¶](#Creating-the-table)
```
[7]:
```
```
wr.catalog.create_csv_table(
table="csv_crawler",
database="awswrangler_test",
path=path,
partitions_types=partitions_types,
columns_types=columns_types,
)
```
#### Checking[¶](#Checking)
```
[8]:
```
```
wr.catalog.table(database="awswrangler_test", table="csv_crawler")
```
```
[8]:
```
| | Column Name | Type | Partition | Comment |
| --- | --- | --- | --- | --- |
| 0 | id | bigint | False | |
| 1 | string | string | False | |
| 2 | float | double | False | |
| 3 | date | date | False | |
| 4 | timestamp | timestamp | False | |
| 5 | bool | boolean | False | |
| 6 | par0 | bigint | True | |
| 7 | par1 | string | True | |
#### We can still use the extracted metadata to ensure data type consistency for new data[¶](#We-can-still-use-the-extracted-metadata-to-ensure-data-type-consistency-for-new-data)
```
[9]:
```
```
df = pd.DataFrame(
{
"id": [1],
"string": ["1"],
"float": [1],
"date": [ts("2020-01-01 00:00:00.0")],
"timestamp": [dt("2020-01-02")],
"bool": [1],
"par0": [1],
"par1": ["a"],
}
)
df
```
```
[9]:
```
| | id | string | float | date | timestamp | bool | par0 | par1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 1 | 1 | 2020-01-01 | 2020-01-02 | 1 | 1 | a |
```
[10]:
```
```
res = wr.s3.to_csv(
df=df,
path=path,
index=False,
dataset=True,
database="awswrangler_test",
table="csv_crawler",
partition_cols=["par0", "par1"],
dtype=columns_types
)
```
#### You can also extract the metadata directly from the Catalog if you want[¶](#You-can-also-extract-the-metadata-directly-from-the-Catalog-if-you-want)
```
[11]:
```
```
dtype = wr.catalog.get_table_types(database="awswrangler_test", table="csv_crawler")
```
```
[12]:
```
```
res = wr.s3.to_csv(
df=df,
path=path,
index=False,
dataset=True,
database="awswrangler_test",
table="csv_crawler",
partition_cols=["par0", "par1"],
dtype=dtype
)
```
#### Checking out[¶](#Checking-out)
```
[13]:
```
```
df = wr.athena.read_sql_table(database="awswrangler_test", table="csv_crawler")
df
```
```
[13]:
```
| | id | string | float | date | timestamp | bool | par0 | par1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 1 | 1.0 | None | 2020-01-02 | True | 1 | a |
| 1 | 1 | 1 | 1.0 | None | 2020-01-02 | True | 1 | a |
```
[14]:
```
```
df.dtypes
```
```
[14]:
```
```
id                    Int64
string               string
float               float64
date                 object
timestamp    datetime64[ns]
bool                boolean
par0                  Int64
par1                 string
dtype: object
```
#### Cleaning Up S3[¶](#Cleaning-Up-S3)
```
[15]:
```
```
wr.s3.delete_objects(path)
```
#### Cleaning Up the Database[¶](#Cleaning-Up-the-Database)
```
[16]:
```
```
wr.catalog.delete_table_if_exists(database="awswrangler_test", table="csv_crawler")
```
```
[16]:
```
```
True
```
### 13 - Merging Datasets on S3[¶](#13---Merging-Datasets-on-S3)
awswrangler has 3 different copy modes to store Parquet Datasets on Amazon S3.
* **append** (Default)
Only adds new files without any delete.
* **overwrite**
Deletes everything in the target directory and then adds new files.
* **overwrite_partitions** (Partition Upsert)
Only deletes the paths of partitions that should be updated and then writes the new partition files. It’s like a “partition Upsert”.
```
[1]:
```
```
from datetime import date

import awswrangler as wr
import pandas as pd
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass

bucket = getpass.getpass()
path1 = f"s3://{bucket}/dataset1/"
path2 = f"s3://{bucket}/dataset2/"
```
```
············
```
#### Creating Dataset 1[¶](#Creating-Dataset-1)
```
[3]:
```
```
df = pd.DataFrame({
"id": [1, 2],
"value": ["foo", "boo"],
"date": [date(2020, 1, 1), date(2020, 1, 2)]
})
wr.s3.to_parquet(
df=df,
path=path1,
dataset=True,
mode="overwrite",
partition_cols=["date"]
)
wr.s3.read_parquet(path1, dataset=True)
```
```
[3]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 1 | foo | 2020-01-01 |
| 1 | 2 | boo | 2020-01-02 |
#### Creating Dataset 2[¶](#Creating-Dataset-2)
```
[4]:
```
```
df = pd.DataFrame({
"id": [2, 3],
"value": ["xoo", "bar"],
"date": [date(2020, 1, 2), date(2020, 1, 3)]
})
dataset2_files = wr.s3.to_parquet(
df=df,
path=path2,
dataset=True,
mode="overwrite",
partition_cols=["date"]
)["paths"]
wr.s3.read_parquet(path2, dataset=True)
```
```
[4]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 2 | xoo | 2020-01-02 |
| 1 | 3 | bar | 2020-01-03 |
#### Merging (Dataset 2 -> Dataset 1) (APPEND)[¶](#Merging-(Dataset-2-->-Dataset-1)-(APPEND))
```
[5]:
```
```
wr.s3.merge_datasets(
source_path=path2,
target_path=path1,
mode="append"
)
wr.s3.read_parquet(path1, dataset=True)
```
```
[5]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 1 | foo | 2020-01-01 |
| 1 | 2 | xoo | 2020-01-02 |
| 2 | 2 | boo | 2020-01-02 |
| 3 | 3 | bar | 2020-01-03 |
#### Merging (Dataset 2 -> Dataset 1) (OVERWRITE_PARTITIONS)[¶](#Merging-(Dataset-2-->-Dataset-1)-(OVERWRITE_PARTITIONS))
```
[6]:
```
```
wr.s3.merge_datasets(
source_path=path2,
target_path=path1,
mode="overwrite_partitions"
)
wr.s3.read_parquet(path1, dataset=True)
```
```
[6]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 1 | foo | 2020-01-01 |
| 1 | 2 | xoo | 2020-01-02 |
| 2 | 3 | bar | 2020-01-03 |
#### Merging (Dataset 2 -> Dataset 1) (OVERWRITE)[¶](#Merging-(Dataset-2-->-Dataset-1)-(OVERWRITE))
```
[7]:
```
```
wr.s3.merge_datasets(
source_path=path2,
target_path=path1,
mode="overwrite"
)
wr.s3.read_parquet(path1, dataset=True)
```
```
[7]:
```
| | id | value | date |
| --- | --- | --- | --- |
| 0 | 2 | xoo | 2020-01-02 |
| 1 | 3 | bar | 2020-01-03 |
#### Cleaning Up[¶](#Cleaning-Up)
```
[8]:
```
```
wr.s3.delete_objects(path1)
wr.s3.delete_objects(path2)
```
### 14 - Schema Evolution[¶](#14---Schema-Evolution)
awswrangler supports new **columns** on Parquet and CSV datasets through:
* [wr.s3.to_parquet()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet)
* [wr.s3.store_parquet_metadata()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.s3.store_parquet_metadata.html#awswrangler.s3.store_parquet_metadata) i.e. “Crawler”
* [wr.s3.to_csv()](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/stubs/awswrangler.s3.to_csv.html#awswrangler.s3.to_csv)
```
[1]:
```
```
from datetime import date

import awswrangler as wr
import pandas as pd
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass

bucket = getpass.getpass()
path = f"s3://{bucket}/dataset/"
```
```
···········································
```
#### Creating the Dataset[¶](#Creating-the-Dataset)
##### Parquet Create[¶](#Parquet-Create)
```
[3]:
```
```
df = pd.DataFrame({
"id": [1, 2],
"value": ["foo", "boo"],
})
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="overwrite",
database="aws_sdk_pandas",
table="my_table"
)
wr.s3.read_parquet(path, dataset=True)
```
```
[3]:
```
| | id | value |
| --- | --- | --- |
| 0 | 1 | foo |
| 1 | 2 | boo |
##### CSV Create[¶](#CSV-Create)
```
[ ]:
```
```
df = pd.DataFrame({
"id": [1, 2],
"value": ["foo", "boo"],
})
wr.s3.to_csv(
df=df,
path=path,
dataset=True,
mode="overwrite",
database="aws_sdk_pandas",
table="my_table"
)
wr.s3.read_csv(path, dataset=True)
```
##### Schema Version 0 on Glue Catalog (AWS Console)[¶](#Schema-Version-0-on-Glue-Catalog-(AWS-Console))
#### Appending with NEW COLUMNS[¶](#Appending-with-NEW-COLUMNS)
##### Parquet Append[¶](#Parquet-Append)
```
[4]:
```
```
df = pd.DataFrame({
"id": [3, 4],
"value": ["bar", None],
"date": [date(2020, 1, 3), date(2020, 1, 4)],
"flag": [True, False]
})
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="append",
database="aws_sdk_pandas",
table="my_table",
catalog_versioning=True # Optional
)
wr.s3.read_parquet(path, dataset=True, validate_schema=False)
```
```
[4]:
```
| | id | value | date | flag |
| --- | --- | --- | --- | --- |
| 0 | 3 | bar | 2020-01-03 | True |
| 1 | 4 | None | 2020-01-04 | False |
| 2 | 1 | foo | NaN | NaN |
| 3 | 2 | boo | NaN | NaN |
##### CSV Append[¶](#CSV-Append)
Note: for CSV datasets due to [column ordering](https://docs.aws.amazon.com/athena/latest/ug/types-of-updates.html#updates-add-columns-beginning-middle-of-table), by default, schema evolution is disabled. Enable it by passing `schema_evolution=True` flag
```
[ ]:
```
```
df = pd.DataFrame({
"id": [3, 4],
"value": ["bar", None],
"date": [date(2020, 1, 3), date(2020, 1, 4)],
"flag": [True, False]
})
wr.s3.to_csv(
df=df,
path=path,
dataset=True,
mode="append",
database="aws_sdk_pandas",
table="my_table",
schema_evolution=True,
catalog_versioning=True # Optional
)
wr.s3.read_csv(path, dataset=True, validate_schema=False)
```
##### Schema Version 1 on Glue Catalog (AWS Console)[¶](#Schema-Version-1-on-Glue-Catalog-(AWS-Console))
#### Reading from Athena[¶](#Reading-from-Athena)
```
[5]:
```
```
wr.athena.read_sql_table(table="my_table", database="aws_sdk_pandas")
```
```
[5]:
```
| | id | value | date | flag |
| --- | --- | --- | --- | --- |
| 0 | 3 | bar | 2020-01-03 | True |
| 1 | 4 | None | 2020-01-04 | False |
| 2 | 1 | foo | None | <NA> |
| 3 | 2 | boo | None | <NA> |
#### Cleaning Up[¶](#Cleaning-Up)
```
[6]:
```
```
wr.s3.delete_objects(path)
wr.catalog.delete_table_if_exists(table="my_table", database="aws_sdk_pandas")
```
```
[6]:
```
```
True
```
### 15 - EMR[¶](#15---EMR)
```
[1]:
```
```
import awswrangler as wr
import boto3
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass

bucket = getpass.getpass()
```
```
··········································
```
#### Enter your Subnet ID:[¶](#Enter-your-Subnet-ID:)
```
[8]:
```
```
subnet = getpass.getpass()
```
```
························
```
#### Creating EMR Cluster[¶](#Creating-EMR-Cluster)
```
[9]:
```
```
cluster_id = wr.emr.create_cluster(subnet)
```
#### Uploading our PySpark script to Amazon S3[¶](#Uploading-our-PySpark-script-to-Amazon-S3)
```
[10]:
```
```
script = """
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("docker-awswrangler").getOrCreate()
sc = spark.sparkContext
print("Spark Initialized")
"""
_ = boto3.client("s3").put_object(
Body=script,
Bucket=bucket,
Key="test.py"
)
```
#### Submit PySpark step[¶](#Submit-PySpark-step)
```
[11]:
```
```
step_id = wr.emr.submit_step(cluster_id, command=f"spark-submit s3://{bucket}/test.py")
```
#### Wait Step[¶](#Wait-Step)
```
[12]:
```
```
while wr.emr.get_step_state(cluster_id, step_id) != "COMPLETED":
pass
```
#### Terminate Cluster[¶](#Terminate-Cluster)
```
[13]:
```
```
wr.emr.terminate_cluster(cluster_id)
```
### 16 - EMR & Docker[¶](#16---EMR-&-Docker)
```
[ ]:
```
```
import awswrangler as wr
import boto3
import getpass
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
bucket = getpass.getpass()
```
```
··········································
```
#### Enter your Subnet ID:[¶](#Enter-your-Subnet-ID:)
```
[3]:
```
```
subnet = getpass.getpass()
```
```
························
```
#### Build and Upload Docker Image to ECR repository[¶](#Build-and-Upload-Docker-Image-to-ECR-repository)
Replace the `{ACCOUNT_ID}` placeholder.
```
[ ]:
```
```
%%writefile Dockerfile
FROM amazoncorretto:8
RUN yum -y update
RUN yum -y install yum-utils
RUN yum -y groupinstall development
RUN yum list python3*
RUN yum -y install python3 python3-dev python3-pip python3-virtualenv
RUN python -V
RUN python3 -V
ENV PYSPARK_DRIVER_PYTHON python3
ENV PYSPARK_PYTHON python3
RUN pip3 install --upgrade pip
RUN pip3 install awswrangler
RUN python3 -c "import awswrangler as wr"
```
```
[ ]:
```
```
%%bash
docker build -t 'local/emr-wrangler' .
aws ecr create-repository --repository-name emr-wrangler
docker tag local/emr-wrangler {ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/emr-wrangler:emr-wrangler
eval $(aws ecr get-login --region us-east-1 --no-include-email)
docker push {ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/emr-wrangler:emr-wrangler
```
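If you prefer not to hand-edit the placeholder, the account id can also be filled in programmatically (a sketch; `wr.get_account_id()` is the same helper used to build `DOCKER_IMAGE` later in this tutorial):
```
# Build the ECR image URI without manually replacing {ACCOUNT_ID}
account_id = wr.get_account_id()
ecr_image = f"{account_id}.dkr.ecr.us-east-1.amazonaws.com/emr-wrangler:emr-wrangler"
print(ecr_image)
```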
#### Creating EMR Cluster[¶](#Creating-EMR-Cluster)
```
[4]:
```
```
cluster_id = wr.emr.create_cluster(subnet, docker=True)
```
#### Refresh ECR credentials in the cluster (expiration time: 12h )[¶](#Refresh-ECR-credentials-in-the-cluster-(expiration-time:-12h-))
```
[5]:
```
```
wr.emr.submit_ecr_credentials_refresh(cluster_id, path=f"s3://{bucket}/")
```
```
[5]:
```
```
's-1B0O45RWJL8CL'
```
#### Uploading application script to Amazon S3 (PySpark)[¶](#Uploading-application-script-to-Amazon-S3-(PySpark))
```
[7]:
```
```
script = """
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("docker-awswrangler").getOrCreate()
sc = spark.sparkContext
print("Spark Initialized")
import awswrangler as wr
print(f"awswrangler version: {wr.__version__}")
"""
boto3.client("s3").put_object(Body=script, Bucket=bucket, Key="test_docker.py")
```
#### Submit PySpark step[¶](#Submit-PySpark-step)
```
[8]:
```
```
DOCKER_IMAGE = f"{wr.get_account_id()}.dkr.ecr.us-east-1.amazonaws.com/emr-wrangler:emr-wrangler"
step_id = wr.emr.submit_spark_step(
cluster_id,
f"s3://{bucket}/test_docker.py",
docker_image=DOCKER_IMAGE
)
```
#### Wait Step[¶](#Wait-Step)
```
[ ]:
```
```
while wr.emr.get_step_state(cluster_id, step_id) != "COMPLETED":
pass
```
#### Terminate Cluster[¶](#Terminate-Cluster)
```
[ ]:
```
```
wr.emr.terminate_cluster(cluster_id)
```
#### Another example with custom configurations[¶](#Another-example-with-custom-configurations)
```
[9]:
```
```
cluster_id = wr.emr.create_cluster(
cluster_name="my-demo-cluster-v2",
logging_s3_path=f"s3://{bucket}/emr-logs/",
emr_release="emr-6.7.0",
subnet_id=subnet,
emr_ec2_role="EMR_EC2_DefaultRole",
emr_role="EMR_DefaultRole",
instance_type_master="m5.2xlarge",
instance_type_core="m5.2xlarge",
instance_ebs_size_master=50,
instance_ebs_size_core=50,
instance_num_on_demand_master=0,
instance_num_on_demand_core=0,
instance_num_spot_master=1,
instance_num_spot_core=2,
spot_bid_percentage_of_on_demand_master=100,
spot_bid_percentage_of_on_demand_core=100,
spot_provisioning_timeout_master=5,
spot_provisioning_timeout_core=5,
spot_timeout_to_on_demand_master=False,
spot_timeout_to_on_demand_core=False,
python3=True,
docker=True,
spark_glue_catalog=True,
hive_glue_catalog=True,
presto_glue_catalog=True,
debugging=True,
applications=["Hadoop", "Spark", "Hive", "Zeppelin", "Livy"],
visible_to_all_users=True,
maximize_resource_allocation=True,
keep_cluster_alive_when_no_steps=True,
termination_protected=False,
spark_pyarrow=True
)
wr.emr.submit_ecr_credentials_refresh(cluster_id, path=f"s3://{bucket}/emr/")
DOCKER_IMAGE = f"{wr.get_account_id()}.dkr.ecr.us-east-1.amazonaws.com/emr-wrangler:emr-wrangler"
step_id = wr.emr.submit_spark_step(
cluster_id,
f"s3://{bucket}/test_docker.py",
docker_image=DOCKER_IMAGE
)
```
```
[ ]:
```
```
while wr.emr.get_step_state(cluster_id, step_id) != "COMPLETED":
pass
wr.emr.terminate_cluster(cluster_id)
```
### 17 - Partition Projection[¶](#17---Partition-Projection)
Partition Projection lets Athena compute partition values from table properties at query time instead of looking them up in the Glue Catalog, so new partitions do not have to be registered first.
<https://docs.aws.amazon.com/athena/latest/ug/partition-projection.html>
```
[1]:
```
```
import awswrangler as wr
import pandas as pd
from datetime import datetime
import getpass
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
bucket = getpass.getpass()
```
```
···········································
```
#### Integer projection[¶](#Integer-projection)
```
[3]:
```
```
df = pd.DataFrame({
"value": [1, 2, 3],
"year": [2019, 2020, 2021],
"month": [10, 11, 12],
"day": [25, 26, 27]
})
df
```
```
[3]:
```
| | value | year | month | day |
| --- | --- | --- | --- | --- |
| 0 | 1 | 2019 | 10 | 25 |
| 1 | 2 | 2020 | 11 | 26 |
| 2 | 3 | 2021 | 12 | 27 |
```
[4]:
```
```
wr.s3.to_parquet(
df=df,
path=f"s3://{bucket}/table_integer/",
dataset=True,
partition_cols=["year", "month", "day"],
database="default",
table="table_integer",
athena_partition_projection_settings={
"projection_types": {
"year": "integer",
"month": "integer",
"day": "integer"
},
"projection_ranges": {
"year": "2000,2025",
"month": "1,12",
"day": "1,31"
},
},
)
```
```
[5]:
```
```
wr.athena.read_sql_query(f"SELECT * FROM table_integer", database="default")
```
```
[5]:
```
| | value | year | month | day |
| --- | --- | --- | --- | --- |
| 0 | 3 | 2021 | 12 | 27 |
| 1 | 2 | 2020 | 11 | 26 |
| 2 | 1 | 2019 | 10 | 25 |
#### Enum projection[¶](#Enum-projection)
```
[6]:
```
```
df = pd.DataFrame({
"value": [1, 2, 3],
"city": ["São Paulo", "Tokio", "Seattle"],
})
df
```
```
[6]:
```
| | value | city |
| --- | --- | --- |
| 0 | 1 | São Paulo |
| 1 | 2 | Tokio |
| 2 | 3 | Seattle |
```
[7]:
```
```
wr.s3.to_parquet(
df=df,
path=f"s3://{bucket}/table_enum/",
dataset=True,
partition_cols=["city"],
database="default",
table="table_enum",
athena_partition_projection_settings={
"projection_types": {
"city": "enum",
},
"projection_values": {
"city": "São Paulo,Tokio,Seattle"
},
},
)
```
```
[8]:
```
```
wr.athena.read_sql_query(f"SELECT * FROM table_enum", database="default")
```
```
[8]:
```
| | value | city |
| --- | --- | --- |
| 0 | 1 | São Paulo |
| 1 | 3 | Seattle |
| 2 | 2 | Tokio |
#### Date projection[¶](#Date-projection)
```
[9]:
```
```
ts = lambda x: datetime.strptime(x, "%Y-%m-%d %H:%M:%S")
dt = lambda x: datetime.strptime(x, "%Y-%m-%d").date()
df = pd.DataFrame({
"value": [1, 2, 3],
"dt": [dt("2020-01-01"), dt("2020-01-02"), dt("2020-01-03")],
"ts": [ts("2020-01-01 00:00:00"), ts("2020-01-01 00:00:01"), ts("2020-01-01 00:00:02")],
})
df
```
```
[9]:
```
| | value | dt | ts |
| --- | --- | --- | --- |
| 0 | 1 | 2020-01-01 | 2020-01-01 00:00:00 |
| 1 | 2 | 2020-01-02 | 2020-01-01 00:00:01 |
| 2 | 3 | 2020-01-03 | 2020-01-01 00:00:02 |
```
[10]:
```
```
wr.s3.to_parquet(
df=df,
path=f"s3://{bucket}/table_date/",
dataset=True,
partition_cols=["dt", "ts"],
database="default",
table="table_date",
athena_partition_projection_settings={
"projection_types": {
"dt": "date",
"ts": "date",
},
"projection_ranges": {
"dt": "2020-01-01,2020-01-03",
"ts": "2020-01-01 00:00:00,2020-01-01 00:00:02"
},
},
)
```
```
[11]:
```
```
wr.athena.read_sql_query(f"SELECT * FROM table_date", database="default")
```
```
[11]:
```
| | value | dt | ts |
| --- | --- | --- | --- |
| 0 | 1 | 2020-01-01 | 2020-01-01 00:00:00 |
| 1 | 2 | 2020-01-02 | 2020-01-01 00:00:01 |
| 2 | 3 | 2020-01-03 | 2020-01-01 00:00:02 |
#### Injected projection[¶](#Injected-projection)
```
[12]:
```
```
df = pd.DataFrame({
"value": [1, 2, 3],
"uuid": ["761e2488-a078-11ea-bb37-0242ac130002", "b89ed095-8179-4635-9537-88592c0f6bc3", "87adc586-ce88-4f0a-b1c8-bf8e00d32249"],
})
df
```
```
[12]:
```
| | value | uuid |
| --- | --- | --- |
| 0 | 1 | 761e2488-a078-11ea-bb37-0242ac130002 |
| 1 | 2 | b89ed095-8179-4635-9537-88592c0f6bc3 |
| 2 | 3 | 87adc586-ce88-4f0a-b1c8-bf8e00d32249 |
```
[13]:
```
```
wr.s3.to_parquet(
df=df,
path=f"s3://{bucket}/table_injected/",
dataset=True,
partition_cols=["uuid"],
database="default",
table="table_injected",
athena_partition_projection_settings={
"projection_types": {
"uuid": "injected",
}
},
)
```
```
[14]:
```
```
wr.athena.read_sql_query(
sql=f"SELECT * FROM table_injected WHERE uuid='b89ed095-8179-4635-9537-88592c0f6bc3'",
database="default"
)
```
```
[14]:
```
| | value | uuid |
| --- | --- | --- |
| 0 | 2 | b89ed095-8179-4635-9537-88592c0f6bc3 |
#### Cleaning Up[¶](#Cleaning-Up)
```
[15]:
```
```
wr.s3.delete_objects(f"s3://{bucket}/table_integer/")
wr.s3.delete_objects(f"s3://{bucket}/table_enum/")
wr.s3.delete_objects(f"s3://{bucket}/table_date/")
wr.s3.delete_objects(f"s3://{bucket}/table_injected/")
```
```
[16]:
```
```
wr.catalog.delete_table_if_exists(table="table_integer", database="default")
wr.catalog.delete_table_if_exists(table="table_enum", database="default")
wr.catalog.delete_table_if_exists(table="table_date", database="default")
wr.catalog.delete_table_if_exists(table="table_injected", database="default")
```
```
[ ]:
```
```
```
### 18 - QuickSight[¶](#18---QuickSight)
For this tutorial we will use the public AWS COVID-19 data lake.
References:
* [A public data lake for analysis of COVID-19 data](https://aws.amazon.com/blogs/big-data/a-public-data-lake-for-analysis-of-covid-19-data/)
* [Exploring the public AWS COVID-19 data lake](https://aws.amazon.com/blogs/big-data/exploring-the-public-aws-covid-19-data-lake/)
* [CloudFormation template](https://covid19-lake.s3.us-east-2.amazonaws.com/cfn/CovidLakeStack.template.json)
*Please install the CloudFormation template above to gain access to the public data lake.*
*P.S. To be able to access the public data lake, you must explicitly allow QuickSight to access the related external bucket.*
```
[1]:
```
```
import awswrangler as wr
from time import sleep
```
List users of QuickSight account
```
[2]:
```
```
[{"username": user["UserName"], "role": user["Role"]} for user in wr.quicksight.list_users('default')]
```
```
[2]:
```
```
[{'username': 'dev', 'role': 'ADMIN'}]
```
```
[3]:
```
```
wr.catalog.databases()
```
```
[3]:
```
| | Database | Description |
| --- | --- | --- |
| 0 | aws_sdk_pandas | AWS SDK for pandas Test Arena - Glue Database |
| 1 | awswrangler_test | |
| 2 | covid-19 | |
| 3 | default | Default Hive database |
```
[4]:
```
```
wr.catalog.tables(database="covid-19")
```
```
[4]:
```
| | Database | Table | Description | Columns | Partitions |
| --- | --- | --- | --- | --- | --- |
| 0 | covid-19 | alleninstitute_comprehend_medical | Comprehend Medical results run against Allen I... | paper_id, date, dx_name, test_name, procedure_... | |
| 1 | covid-19 | alleninstitute_metadata | Metadata on papers pulled from the Allen Insti... | cord_uid, sha, source_x, title, doi, pmcid, pu... | |
| 2 | covid-19 | country_codes | Lookup table for country codes | country, alpha-2 code, alpha-3 code, numeric c... | |
| 3 | covid-19 | county_populations | Lookup table for population for each county ba... | id, id2, county, state, population estimate 2018 | |
| 4 | covid-19 | covid_knowledge_graph_edges | AWS Knowledge Graph for COVID-19 data | id, label, from, to, score | |
| 5 | covid-19 | covid_knowledge_graph_nodes_author | AWS Knowledge Graph for COVID-19 data | id, label, first, last, full_name | |
| 6 | covid-19 | covid_knowledge_graph_nodes_concept | AWS Knowledge Graph for COVID-19 data | id, label, entity, concept | |
| 7 | covid-19 | covid_knowledge_graph_nodes_institution | AWS Knowledge Graph for COVID-19 data | id, label, institution, country, settlement | |
| 8 | covid-19 | covid_knowledge_graph_nodes_paper | AWS Knowledge Graph for COVID-19 data | id, label, doi, sha_code, publish_time, source... | |
| 9 | covid-19 | covid_knowledge_graph_nodes_topic | AWS Knowledge Graph for COVID-19 data | id, label, topic, topic_num | |
| 10 | covid-19 | covid_testing_states_daily | USA total test daily trend by state. Sourced ... | date, state, positive, negative, pending, hosp... | |
| 11 | covid-19 | covid_testing_us_daily | USA total test daily trend. Sourced from covi... | date, states, positive, negative, posneg, pend... | |
| 12 | covid-19 | covid_testing_us_total | USA total tests. Sourced from covidtracking.c... | positive, negative, posneg, hospitalized, deat... | |
| 13 | covid-19 | covidcast_data | CMU Delphi's COVID-19 Surveillance Data | data_source, signal, geo_type, time_value, geo... | |
| 14 | covid-19 | covidcast_metadata | CMU Delphi's COVID-19 Surveillance Metadata | data_source, signal, time_type, geo_type, min_... | |
| 15 | covid-19 | enigma_jhu | Johns Hopkins University Consolidated data on ... | fips, admin2, province_state, country_region, ... | |
| 16 | covid-19 | enigma_jhu_timeseries | Johns Hopkins University data on COVID-19 case... | uid, fips, iso2, iso3, code3, admin2, latitude... | |
| 17 | covid-19 | hospital_beds | Data on hospital beds and their utilization in... | objectid, hospital_name, hospital_type, hq_add... | |
| 18 | covid-19 | nytimes_counties | Data on COVID-19 cases from NY Times at US cou... | date, county, state, fips, cases, deaths | |
| 19 | covid-19 | nytimes_states | Data on COVID-19 cases from NY Times at US sta... | date, state, fips, cases, deaths | |
| 20 | covid-19 | prediction_models_county_predictions | County-level Predictions Data. Sourced from Yu... | countyfips, countyname, statename, severity_co... | |
| 21 | covid-19 | prediction_models_severity_index | Severity Index models. Sourced from Yu Group a... | severity_1-day, severity_2-day, severity_3-day... | |
| 22 | covid-19 | tableau_covid_datahub | COVID-19 data that has been gathered and unifi... | country_short_name, country_alpha_3_code, coun... | |
| 23 | covid-19 | tableau_jhu | Johns Hopkins University data on COVID-19 case... | case_type, cases, difference, date, country_re... | |
| 24 | covid-19 | us_state_abbreviations | Lookup table for US state abbreviations | state, abbreviation | |
| 25 | covid-19 | world_cases_deaths_testing | Data on confirmed cases, deaths, and testing. ... | iso_code, location, date, total_cases, new_cas... | |
Create a QuickSight data source. Note: the data source stores the connection information.
```
[5]:
```
```
wr.quicksight.create_athena_data_source(
name="covid-19",
workgroup="primary",
allowed_to_manage={"users": ["dev"]},
)
```
```
[6]:
```
```
wr.catalog.tables(database="covid-19", name_contains="nyt")
```
```
[6]:
```
| | Database | Table | Description | Columns | Partitions |
| --- | --- | --- | --- | --- | --- |
| 0 | covid-19 | nytimes_counties | Data on COVID-19 cases from NY Times at US cou... | date, county, state, fips, cases, deaths | |
| 1 | covid-19 | nytimes_states | Data on COVID-19 cases from NY Times at US sta... | date, state, fips, cases, deaths | |
```
[7]:
```
```
wr.athena.read_sql_query("SELECT * FROM nytimes_counties limit 10", database="covid-19", ctas_approach=False)
```
```
[7]:
```
| | date | county | state | fips | cases | deaths |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 2020-01-21 | Snohomish | Washington | 53061 | 1 | 0 |
| 1 | 2020-01-22 | Snohomish | Washington | 53061 | 1 | 0 |
| 2 | 2020-01-23 | Snohomish | Washington | 53061 | 1 | 0 |
| 3 | 2020-01-24 | Cook | Illinois | 17031 | 1 | 0 |
| 4 | 2020-01-24 | Snohomish | Washington | 53061 | 1 | 0 |
| 5 | 2020-01-25 | Orange | California | 06059 | 1 | 0 |
| 6 | 2020-01-25 | Cook | Illinois | 17031 | 1 | 0 |
| 7 | 2020-01-25 | Snohomish | Washington | 53061 | 1 | 0 |
| 8 | 2020-01-26 | Maricopa | Arizona | 04013 | 1 | 0 |
| 9 | 2020-01-26 | Los Angeles | California | 06037 | 1 | 0 |
```
[8]:
```
```
sql = """
SELECT
j.*,
co.Population,
co.county AS county2,
hb.*
FROM
(
SELECT
date,
county,
state,
fips,
cases as confirmed,
deaths
FROM "covid-19".nytimes_counties
) j
LEFT OUTER JOIN (
SELECT
DISTINCT county,
state,
"population estimate 2018" AS Population
FROM
"covid-19".county_populations
WHERE
state IN (
SELECT
DISTINCT state
FROM
"covid-19".nytimes_counties
)
AND county IN (
SELECT
DISTINCT county as county
FROM "covid-19".nytimes_counties
)
) co ON co.county = j.county
AND co.state = j.state
LEFT OUTER JOIN (
SELECT
count(objectid) as Hospital,
fips as hospital_fips,
sum(num_licensed_beds) as licensed_beds,
sum(num_staffed_beds) as staffed_beds,
sum(num_icu_beds) as icu_beds,
avg(bed_utilization) as bed_utilization,
sum(
potential_increase_in_bed_capac
) as potential_increase_bed_capacity
FROM "covid-19".hospital_beds
WHERE
fips in (
SELECT
DISTINCT fips
FROM
"covid-19".nytimes_counties
)
GROUP BY
2
) hb ON hb.hospital_fips = j.fips
"""
wr.athena.read_sql_query(sql, database="covid-19", ctas_approach=False)
```
```
[8]:
```
| | date | county | state | fips | confirmed | deaths | population | county2 | Hospital | hospital_fips | licensed_beds | staffed_beds | icu_beds | bed_utilization | potential_increase_bed_capacity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 2020-04-12 | Park | Montana | 30067 | 7 | 0 | 16736 | Park | 0 | 30067 | 25 | 25 | 4 | 0.432548 | 0 |
| 1 | 2020-04-12 | Ravalli | Montana | 30081 | 3 | 0 | 43172 | Ravalli | 0 | 30081 | 25 | 25 | 5 | 0.567781 | 0 |
| 2 | 2020-04-12 | Silver Bow | Montana | 30093 | 11 | 0 | 34993 | Silver Bow | 0 | 30093 | 98 | 71 | 11 | 0.551457 | 27 |
| 3 | 2020-04-12 | Clay | Nebraska | 31035 | 2 | 0 | 6214 | Clay | <NA> | <NA> | <NA> | <NA> | <NA> | NaN | <NA> |
| 4 | 2020-04-12 | Cuming | Nebraska | 31039 | 2 | 0 | 8940 | Cuming | 0 | 31039 | 25 | 25 | 4 | 0.204493 | 0 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 227684 | 2020-06-11 | Hockley | Texas | 48219 | 28 | 1 | 22980 | Hockley | 0 | 48219 | 48 | 48 | 8 | 0.120605 | 0 |
| 227685 | 2020-06-11 | Hudspeth | Texas | 48229 | 11 | 0 | 4795 | Hudspeth | <NA> | <NA> | <NA> | <NA> | <NA> | NaN | <NA> |
| 227686 | 2020-06-11 | Jones | Texas | 48253 | 633 | 0 | 19817 | Jones | 0 | 48253 | 45 | 7 | 1 | 0.718591 | 38 |
| 227687 | 2020-06-11 | La Salle | Texas | 48283 | 4 | 0 | 7531 | La Salle | <NA> | <NA> | <NA> | <NA> | <NA> | NaN | <NA> |
| 227688 | 2020-06-11 | Limestone | Texas | 48293 | 36 | 1 | 23519 | Limestone | 0 | 48293 | 78 | 69 | 9 | 0.163940 | 9 |
227689 rows × 15 columns
Create Dataset with custom SQL option
```
[9]:
```
```
wr.quicksight.create_athena_dataset(
name="covid19-nytimes-usa",
sql=sql,
sql_name='CustomSQL',
data_source_name="covid-19",
import_mode='SPICE',
allowed_to_manage={"users": ["dev"]},
)
```
```
[10]:
```
```
ingestion_id = wr.quicksight.create_ingestion("covid19-nytimes-usa")
```
Wait for the ingestion to complete
```
[11]:
```
```
while wr.quicksight.describe_ingestion(ingestion_id=ingestion_id, dataset_name="covid19-nytimes-usa")["IngestionStatus"] not in ["COMPLETED", "FAILED"]:
sleep(1)
```
Describe last ingestion
```
[12]:
```
```
wr.quicksight.describe_ingestion(ingestion_id=ingestion_id, dataset_name="covid19-nytimes-usa")["RowInfo"]
```
```
[12]:
```
```
{'RowsIngested': 227689, 'RowsDropped': 0}
```
List all ingestions
```
[13]:
```
```
[{"time": user["CreatedTime"], "source": user["RequestSource"]} for user in wr.quicksight.list_ingestions("covid19-nytimes-usa")]
```
```
[13]:
```
```
[{'time': datetime.datetime(2020, 6, 12, 15, 13, 46, 996000, tzinfo=tzlocal()),
'source': 'MANUAL'},
{'time': datetime.datetime(2020, 6, 12, 15, 13, 42, 344000, tzinfo=tzlocal()),
'source': 'MANUAL'}]
```
Create new dataset from a table directly
```
[14]:
```
```
wr.quicksight.create_athena_dataset(
name="covid-19-tableau_jhu",
table="tableau_jhu",
data_source_name="covid-19",
database="covid-19",
import_mode='DIRECT_QUERY',
rename_columns={
"cases": "Count_of_Cases",
"combined_key": "County"
},
cast_columns_types={
"Count_of_Cases": "INTEGER"
},
tag_columns={
"combined_key": [{"ColumnGeographicRole": "COUNTY"}]
},
allowed_to_manage={"users": ["dev"]},
)
```
Cleaning up
```
[15]:
```
```
wr.quicksight.delete_data_source("covid-19")
wr.quicksight.delete_dataset("covid19-nytimes-usa")
wr.quicksight.delete_dataset("covid-19-tableau_jhu")
```
### 19 - Amazon Athena Cache[¶](#19---Amazon-Athena-Cache)
[awswrangler](https://github.com/aws/aws-sdk-pandas) has a cache strategy that is disabled by default and can be enabled by passing `max_cache_seconds` bigger than 0 as part of the `athena_cache_settings` parameter. This cache strategy for Amazon Athena can help you to **decrease query times and costs**.
When calling `read_sql_query`, instead of just running the query, we can now verify whether the query has been run before. If so, and this last run was within `max_cache_seconds` (a new parameter to `read_sql_query`), we return the same results as last time if they are still available in S3. We have seen this increase performance by more than 100x, but the potential is pretty much infinite.
The detailed approach is:

* When `read_sql_query` is called with `max_cache_seconds > 0` (it defaults to 0), we check for the last queries run by the same workgroup (the most we can get without pagination).
* By default it will check the last 50 queries, but you can customize it through the `max_cache_query_inspections` argument.
* We then sort those queries based on CompletionDateTime, descending.
* For each of those queries, we check if their CompletionDateTime is still within the `max_cache_seconds` window. If so, we check if the query string is the same as now (with some smart heuristics to guarantee coverage over both `ctas_approach`es). If they are the same, we check if the last one’s results are still on S3, and then return them instead of re-running the query.
* During the whole cache resolution phase, if there is anything wrong, the logic falls back to the usual `read_sql_query` path.
*P.S. The ``cache scope is bounded to the current workgroup``, so you will be able to reuse query results from other colleagues running in the same environment.*
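As a quick orientation before the step-by-step demo below, here is a minimal sketch of how the cache is enabled. It uses the same `athena_cache_settings` keys exercised later in this tutorial; the 900-second window and the 500-query inspection limit are just the illustrative values reused from those cells.
```
import awswrangler as wr
# First call executes the query on Athena; an identical call issued within the
# cache window reuses the previous results from S3 instead of re-running it.
df = wr.athena.read_sql_query(
"SELECT 1 AS foo",
database="default",
athena_cache_settings={
"max_cache_seconds": 900, # accept results up to 15 minutes old
"max_cache_query_inspections": 500, # how many past executions to inspect
},
)
```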
```
[18]:
```
```
import awswrangler as wr
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[19]:
```
```
import getpass
bucket = getpass.getpass()
path = f"s3://{bucket}/data/"
```
#### Checking/Creating Glue Catalog Databases[¶](#Checking/Creating-Glue-Catalog-Databases)
```
[20]:
```
```
if "awswrangler_test" not in wr.catalog.databases().values:
wr.catalog.create_database("awswrangler_test")
```
##### Creating a Parquet Table from the NOAA’s CSV files[¶](#Creating-a-Parquet-Table-from-the-NOAA's-CSV-files)
[Reference](https://registry.opendata.aws/noaa-ghcn/)
```
[21]:
```
```
cols = ["id", "dt", "element", "value", "m_flag", "q_flag", "s_flag", "obs_time"]
df = wr.s3.read_csv(
path="s3://noaa-ghcn-pds/csv/by_year/1865.csv",
names=cols,
parse_dates=["dt", "obs_time"])
df
```
```
[21]:
```
| | id | dt | element | value | m_flag | q_flag | s_flag | obs_time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | ID | DATE | ELEMENT | DATA_VALUE | M_FLAG | Q_FLAG | S_FLAG | OBS_TIME |
| 1 | AGE00135039 | 18650101 | PRCP | 0 | NaN | NaN | E | NaN |
| 2 | ASN00019036 | 18650101 | PRCP | 0 | NaN | NaN | a | NaN |
| 3 | ASN00021001 | 18650101 | PRCP | 0 | NaN | NaN | a | NaN |
| 4 | ASN00021010 | 18650101 | PRCP | 0 | NaN | NaN | a | NaN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 37918 | USC00288878 | 18651231 | TMIN | -44 | NaN | NaN | 6 | NaN |
| 37919 | USC00288878 | 18651231 | PRCP | 0 | P | NaN | 6 | NaN |
| 37920 | USC00288878 | 18651231 | SNOW | 0 | P | NaN | 6 | NaN |
| 37921 | USC00361920 | 18651231 | PRCP | 0 | NaN | NaN | F | NaN |
| 37922 | USP00CA0001 | 18651231 | PRCP | 0 | NaN | NaN | F | NaN |
37923 rows × 8 columns
```
[ ]:
```
```
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="overwrite",
database="awswrangler_test",
table="noaa"
)
```
```
[23]:
```
```
wr.catalog.table(database="awswrangler_test", table="noaa")
```
```
[23]:
```
| | Column Name | Type | Partition | Comment |
| --- | --- | --- | --- | --- |
| 0 | id | string | False | |
| 1 | dt | string | False | |
| 2 | element | string | False | |
| 3 | value | string | False | |
| 4 | m_flag | string | False | |
| 5 | q_flag | string | False | |
| 6 | s_flag | string | False | |
| 7 | obs_time | string | False | |
#### The test query[¶](#The-test-query)
The more computational resources the query needs, the more the cache will help you. That’s why we’re using this long-running query.
```
[24]:
```
```
query = """
SELECT
n1.element,
    count(1) as cnt
FROM noaa n1
JOIN noaa n2 ON n1.id = n2.id
GROUP BY n1.element
"""
```
#### First execution…[¶](#First-execution...)
```
[25]:
```
```
%%time
wr.athena.read_sql_query(query, database="awswrangler_test")
```
```
CPU times: user 1.59 s, sys: 166 ms, total: 1.75 s
Wall time: 5.62 s
```
```
[25]:
```
| | element | cnt |
| --- | --- | --- |
| 0 | PRCP | 12044499 |
| 1 | MDTX | 1460 |
| 2 | DATX | 1460 |
| 3 | ELEMENT | 1 |
| 4 | WT01 | 22260 |
| 5 | WT03 | 840 |
| 6 | DATN | 1460 |
| 7 | DWPR | 490 |
| 8 | TMIN | 7012479 |
| 9 | MDTN | 1460 |
| 10 | MDPR | 2683 |
| 11 | SNOW | 1086762 |
| 12 | DAPR | 1330 |
| 13 | SNWD | 783532 |
| 14 | TMAX | 6533103 |
#### Second execution with **CACHE** (400x faster)[¶](#Second-execution-with-CACHE-(400x-faster))
```
[26]:
```
```
%%time
wr.athena.read_sql_query(query, database="awswrangler_test", athena_cache_settings={"max_cache_seconds":900})
```
```
CPU times: user 689 ms, sys: 68.1 ms, total: 757 ms
Wall time: 1.11 s
```
```
[26]:
```
| | element | cnt |
| --- | --- | --- |
| 0 | PRCP | 12044499 |
| 1 | MDTX | 1460 |
| 2 | DATX | 1460 |
| 3 | ELEMENT | 1 |
| 4 | WT01 | 22260 |
| 5 | WT03 | 840 |
| 6 | DATN | 1460 |
| 7 | DWPR | 490 |
| 8 | TMIN | 7012479 |
| 9 | MDTN | 1460 |
| 10 | MDPR | 2683 |
| 11 | SNOW | 1086762 |
| 12 | DAPR | 1330 |
| 13 | SNWD | 783532 |
| 14 | TMAX | 6533103 |
#### Allowing awswrangler to inspect up to 500 historical queries to find same result to reuse.[¶](#Allowing-awswrangler-to-inspect-up-to-500-historical-queries-to-find-same-result-to-reuse.)
```
[27]:
```
```
%%time
wr.athena.read_sql_query(query, database="awswrangler_test", athena_cache_settings={"max_cache_seconds": 900, "max_cache_query_inspections": 500})
```
```
CPU times: user 715 ms, sys: 44.9 ms, total: 760 ms
Wall time: 1.03 s
```
```
[27]:
```
| | element | cnt |
| --- | --- | --- |
| 0 | PRCP | 12044499 |
| 1 | MDTX | 1460 |
| 2 | DATX | 1460 |
| 3 | ELEMENT | 1 |
| 4 | WT01 | 22260 |
| 5 | WT03 | 840 |
| 6 | DATN | 1460 |
| 7 | DWPR | 490 |
| 8 | TMIN | 7012479 |
| 9 | MDTN | 1460 |
| 10 | MDPR | 2683 |
| 11 | SNOW | 1086762 |
| 12 | DAPR | 1330 |
| 13 | SNWD | 783532 |
| 14 | TMAX | 6533103 |
#### Cleaning Up S3[¶](#Cleaning-Up-S3)
```
[28]:
```
```
wr.s3.delete_objects(path)
```
#### Delete table[¶](#Delete-table)
```
[29]:
```
```
wr.catalog.delete_table_if_exists(database="awswrangler_test", table="noaa")
```
```
[29]:
```
```
True
```
#### Delete Database[¶](#Delete-Database)
```
[30]:
```
```
wr.catalog.delete_database('awswrangler_test')
```
### 20 - Spark Table Interoperability[¶](#20---Spark-Table-Interoperability)
[awswrangler](https://github.com/aws/aws-sdk-pandas) has no difficulty inserting into, overwriting, or otherwise interacting with a table created by Apache Spark.
But if you want to do the opposite (Spark interacting with a table created by awswrangler), be aware that awswrangler follows the Hive format, so you must be explicit when using Spark’s `saveAsTable` method:
```
[ ]:
```
```
spark_df.write.format("hive").saveAsTable("database.table")
```
Or just move forward using the `insertInto` alternative:
```
[ ]:
```
```
spark_df.write.insertInto("database.table")
```
### 21 - Global Configurations[¶](#21---Global-Configurations)
[awswrangler](https://github.com/aws/aws-sdk-pandas) has two ways to set global configurations that will override the regular default arguments configured in function signatures.
* **Environment variables**
* **wr.config**
*P.S. Check the* [function API doc](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/api.html) *to see if your function has some argument that can be configured through Global configurations.*
*P.P.S. One exception to the above mentioned rules is the ``botocore_config`` property. It cannot be set through environment variables but only via ``wr.config``. It will be used as the ``botocore.config.Config`` for all underlying ``boto3`` calls. The default config is ``botocore.config.Config(retries={“max_attempts”: 5}, connect_timeout=10, max_pool_connections=10)``. If you only want to change the retry behavior, you can use the environment variables ``AWS_MAX_ATTEMPTS`` and
``AWS_RETRY_MODE``. (see* [Boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#using-environment-variables)*)*
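For completeness, here is a minimal sketch of that retry-only alternative; the environment variable names come from the note above, while the specific values (5 attempts, standard retry mode) are only illustrative assumptions.
```
import os
# Note: these must be set before the first boto3 client is created,
# i.e. before awswrangler issues any AWS call.
os.environ["AWS_MAX_ATTEMPTS"] = "5" # maximum number of retry attempts
os.environ["AWS_RETRY_MODE"] = "standard" # retry mode, e.g. "standard" or "adaptive"
import awswrangler as wr
```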
#### Environment Variables[¶](#Environment-Variables)
```
[1]:
```
```
%env WR_DATABASE=default
%env WR_CTAS_APPROACH=False
%env WR_MAX_CACHE_SECONDS=900
%env WR_MAX_CACHE_QUERY_INSPECTIONS=500
%env WR_MAX_REMOTE_CACHE_ENTRIES=50
%env WR_MAX_LOCAL_CACHE_ENTRIES=100
```
```
env: WR_DATABASE=default
env: WR_CTAS_APPROACH=False
env: WR_MAX_CACHE_SECONDS=900
env: WR_MAX_CACHE_QUERY_INSPECTIONS=500
env: WR_MAX_REMOTE_CACHE_ENTRIES=50
env: WR_MAX_LOCAL_CACHE_ENTRIES=100
```
```
[2]:
```
```
import awswrangler as wr
import botocore
```
```
[3]:
```
```
wr.athena.read_sql_query("SELECT 1 AS FOO")
```
```
[3]:
```
| | foo |
| --- | --- |
| 0 | 1 |
#### Resetting[¶](#Resetting)
```
[4]:
```
```
# Specific
wr.config.reset("database")

# All
wr.config.reset()
```
#### wr.config[¶](#wr.config)
```
[5]:
```
```
wr.config.database = "default"
wr.config.ctas_approach = False
wr.config.max_cache_seconds = 900
wr.config.max_cache_query_inspections = 500
wr.config.max_remote_cache_entries = 50
wr.config.max_local_cache_entries = 100

# Set botocore.config.Config that will be used for all boto3 calls
wr.config.botocore_config = botocore.config.Config(
retries={"max_attempts": 10},
connect_timeout=20,
max_pool_connections=20
)
```
```
[6]:
```
```
wr.athena.read_sql_query("SELECT 1 AS FOO")
```
```
[6]:
```
| | foo |
| --- | --- |
| 0 | 1 |
#### Visualizing[¶](#Visualizing)
```
[7]:
```
```
wr.config
```
```
[7]:
```
| | name | Env. Variable | type | nullable | enforced | configured | value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | catalog_id | WR_CATALOG_ID | <class 'str'> | True | False | False | None |
| 1 | concurrent_partitioning | WR_CONCURRENT_PARTITIONING | <class 'bool'> | False | False | False | None |
| 2 | ctas_approach | WR_CTAS_APPROACH | <class 'bool'> | False | False | True | False |
| 3 | database | WR_DATABASE | <class 'str'> | True | False | True | default |
| 4 | max_cache_query_inspections | WR_MAX_CACHE_QUERY_INSPECTIONS | <class 'int'> | False | False | True | 500 |
| 5 | max_cache_seconds | WR_MAX_CACHE_SECONDS | <class 'int'> | False | False | True | 900 |
| 6 | max_remote_cache_entries | WR_MAX_REMOTE_CACHE_ENTRIES | <class 'int'> | False | False | True | 50 |
| 7 | max_local_cache_entries | WR_MAX_LOCAL_CACHE_ENTRIES | <class 'int'> | False | False | True | 100 |
| 8 | s3_block_size | WR_S3_BLOCK_SIZE | <class 'int'> | False | True | False | None |
| 9 | workgroup | WR_WORKGROUP | <class 'str'> | False | True | False | None |
| 10 | chunksize | WR_CHUNKSIZE | <class 'int'> | False | True | False | None |
| 11 | s3_endpoint_url | WR_S3_ENDPOINT_URL | <class 'str'> | True | True | True | None |
| 12 | athena_endpoint_url | WR_ATHENA_ENDPOINT_URL | <class 'str'> | True | True | True | None |
| 13 | sts_endpoint_url | WR_STS_ENDPOINT_URL | <class 'str'> | True | True | True | None |
| 14 | glue_endpoint_url | WR_GLUE_ENDPOINT_URL | <class 'str'> | True | True | True | None |
| 15 | redshift_endpoint_url | WR_REDSHIFT_ENDPOINT_URL | <class 'str'> | True | True | True | None |
| 16 | kms_endpoint_url | WR_KMS_ENDPOINT_URL | <class 'str'> | True | True | True | None |
| 17 | emr_endpoint_url | WR_EMR_ENDPOINT_URL | <class 'str'> | True | True | True | None |
| 18 | lakeformation_endpoint_url | WR_LAKEFORMATION_ENDPOINT_URL | <class 'str'> | True | True | True | None |
| 19 | dynamodb_endpoint_url | WR_DYNAMODB_ENDPOINT_URL | <class 'str'> | True | True | True | None |
| 20 | secretsmanager_endpoint_url | WR_SECRETSMANAGER_ENDPOINT_URL | <class 'str'> | True | True | True | None |
| 21 | timestream_endpoint_url | WR_TIMESTREAM_ENDPOINT_URL | <class 'str'> | True | True | True | None |
| 22 | botocore_config | WR_BOTOCORE_CONFIG | <class 'botocore.config.Config'> | True | False | True | <botocore.config.Config object at 0x14f313e50> |
| 23 | verify | WR_VERIFY | <class 'str'> | True | False | True | None |
| 24 | address | WR_ADDRESS | <class 'str'> | True | False | False | None |
| 25 | redis_password | WR_REDIS_PASSWORD | <class 'str'> | True | False | False | None |
| 26 | ignore_reinit_error | WR_IGNORE_REINIT_ERROR | <class 'bool'> | True | False | False | None |
| 27 | include_dashboard | WR_INCLUDE_DASHBOARD | <class 'bool'> | True | False | False | None |
| 28 | log_to_driver | WR_LOG_TO_DRIVER | <class 'bool'> | True | False | False | None |
| 29 | object_store_memory | WR_OBJECT_STORE_MEMORY | <class 'int'> | True | False | False | None |
| 30 | cpu_count | WR_CPU_COUNT | <class 'int'> | True | False | False | None |
| 31 | gpu_count | WR_GPU_COUNT | <class 'int'> | True | False | False | None |
```
[ ]:
```
```
```
### 22 - Writing Partitions Concurrently[¶](#22---Writing-Partitions-Concurrently)
* `concurrent_partitioning` argument:
```
If True, it will increase the parallelism level during the partition writing. It will decrease the writing time and increase memory usage.
```
*P.S. Check the* [function API doc](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/api.html) *to see if it has arguments that can be configured through Global configurations.*
```
[1]:
```
```
%reload_ext memory_profiler
import awswrangler as wr
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass
bucket = getpass.getpass()
path = f"s3://{bucket}/data/"
```
```
············
```
#### Reading 4 GB of CSV from NOAA’s historical data and creating a year column[¶](#Reading-4-GB-of-CSV-from-NOAA's-historical-data-and-creating-a-year-column)
```
[3]:
```
```
noaa_path = "s3://noaa-ghcn-pds/csv/by_year/193"
cols = ["id", "dt", "element", "value", "m_flag", "q_flag", "s_flag", "obs_time"]
dates = ["dt", "obs_time"]
dtype = {x: "category" for x in ["element", "m_flag", "q_flag", "s_flag"]}
df = wr.s3.read_csv(noaa_path, names=cols, parse_dates=dates, dtype=dtype)
df["year"] = df["dt"].dt.year
print(f"Number of rows: {len(df.index)}")
print(f"Number of columns: {len(df.columns)}")
```
```
Number of rows: 125407761
Number of columns: 9
```
#### Default Writing[¶](#Default-Writing)
```
[4]:
```
```
%%time
%%memit
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="overwrite",
partition_cols=["year"],
)
```
```
peak memory: 22169.04 MiB, increment: 11119.68 MiB
CPU times: user 49 s, sys: 12.5 s, total: 1min 1s
Wall time: 1min 11s
```
#### Concurrent Partitioning (Decreasing writing time, but increasing memory usage)[¶](#Concurrent-Partitioning-(Decreasing-writing-time,-but-increasing-memory-usage))
```
[5]:
```
```
%%time
%%memit
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="overwrite",
partition_cols=["year"],
concurrent_partitioning=True # <---
)
```
```
peak memory: 27819.48 MiB, increment: 15743.30 MiB
CPU times: user 52.3 s, sys: 13.6 s, total: 1min 5s
Wall time: 41.6 s
```
### 23 - Flexible Partitions Filter (PUSH-DOWN)[¶](#23---Flexible-Partitions-Filter-(PUSH-DOWN))
* `partition_filter` argument:
```
- Callback Function filters to apply on PARTITION columns (PUSH-DOWN filter).
- This function MUST receive a single argument (Dict[str, str]) where keys are partition names and values are partition values.
- This function MUST return a bool, True to read the partition or False to ignore it.
- Ignored if `dataset=False`.
```
*P.S. Check the* [function API doc](https://aws-sdk-pandas.readthedocs.io/en/3.4.0/api.html) *to see if it has arguments that can be configured through Global configurations.*
```
[1]:
```
```
import awswrangler as wr
import pandas as pd
```
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[2]:
```
```
import getpass
bucket = getpass.getpass()
path = f"s3://{bucket}/dataset/"
```
```
············
```
#### Creating the Dataset (Parquet)[¶](#Creating-the-Dataset-(Parquet))
```
[3]:
```
```
df = pd.DataFrame({
"id": [1, 2, 3],
"value": ["foo", "boo", "bar"],
})
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
mode="overwrite",
partition_cols=["value"]
)
wr.s3.read_parquet(path, dataset=True)
```
```
[3]:
```
| | id | value |
| --- | --- | --- |
| 0 | 3 | bar |
| 1 | 2 | boo |
| 2 | 1 | foo |
#### Parquet Example 1[¶](#Parquet-Example-1)
```
[4]:
```
```
my_filter = lambda x: x["value"].endswith("oo")
wr.s3.read_parquet(path, dataset=True, partition_filter=my_filter)
```
```
[4]:
```
| | id | value |
| --- | --- | --- |
| 0 | 2 | boo |
| 1 | 1 | foo |
#### Parquet Example 2[¶](#Parquet-Example-2)
```
[5]:
```
```
from Levenshtein import distance
def my_filter(partitions):
return distance("boo", partitions["value"]) <= 1
wr.s3.read_parquet(path, dataset=True, partition_filter=my_filter)
```
```
[5]:
```
| | id | value |
| --- | --- | --- |
| 0 | 2 | boo |
| 1 | 1 | foo |
#### Creating the Dataset (CSV)[¶](#Creating-the-Dataset-(CSV))
```
[6]:
```
```
df = pd.DataFrame({
"id": [1, 2, 3],
"value": ["foo", "boo", "bar"],
})
wr.s3.to_csv(
df=df,
path=path,
dataset=True,
mode="overwrite",
partition_cols=["value"],
compression="gzip",
index=False
)
wr.s3.read_csv(path, dataset=True)
```
```
[6]:
```
| | id | value |
| --- | --- | --- |
| 0 | 3 | bar |
| 1 | 2 | boo |
| 2 | 1 | foo |
#### CSV Example 1[¶](#CSV-Example-1)
```
[7]:
```
```
my_filter = lambda x: x["value"].endswith("oo")
wr.s3.read_csv(path, dataset=True, partition_filter=my_filter)
```
```
[7]:
```
| | id | value |
| --- | --- | --- |
| 0 | 2 | boo |
| 1 | 1 | foo |
#### CSV Example 2[¶](#CSV-Example-2)
```
[8]:
```
```
from Levenshtein import distance
def my_filter(partitions):
return distance("boo", partitions["value"]) <= 1
wr.s3.read_csv(path, dataset=True, partition_filter=my_filter)
```
```
[8]:
```
| | id | value |
| --- | --- | --- |
| 0 | 2 | boo |
| 1 | 1 | foo |
### 24 - Athena Query Metadata[¶](#24---Athena-Query-Metadata)
For `wr.athena.read_sql_query()` and `wr.athena.read_sql_table()`, the resulting DataFrame (or every DataFrame in the returned Iterator for chunked queries) has a `query_metadata` attribute, which carries the query result metadata returned by Boto3/Athena.
The expected `query_metadata` format is the same as that returned by:
<https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/athena.html#Athena.Client.get_query_execution>
#### Environment Variables[¶](#Environment-Variables)
```
[1]:
```
```
%env WR_DATABASE=default
```
```
env: WR_DATABASE=default
```
```
[2]:
```
```
import awswrangler as wr
```
```
[5]:
```
```
df = wr.athena.read_sql_query("SELECT 1 AS foo")
df
```
```
[5]:
```
| | foo |
| --- | --- |
| 0 | 1 |
#### Getting statistics from query metadata[¶](#Getting-statistics-from-query-metadata)
```
[6]:
```
```
print(f'DataScannedInBytes: {df.query_metadata["Statistics"]["DataScannedInBytes"]}')
print(f'TotalExecutionTimeInMillis: {df.query_metadata["Statistics"]["TotalExecutionTimeInMillis"]}')
print(f'QueryQueueTimeInMillis: {df.query_metadata["Statistics"]["QueryQueueTimeInMillis"]}')
print(f'QueryPlanningTimeInMillis: {df.query_metadata["Statistics"]["QueryPlanningTimeInMillis"]}')
print(f'ServiceProcessingTimeInMillis: {df.query_metadata["Statistics"]["ServiceProcessingTimeInMillis"]}')
```
```
DataScannedInBytes: 0
TotalExecutionTimeInMillis: 2311
QueryQueueTimeInMillis: 121
QueryPlanningTimeInMillis: 250
ServiceProcessingTimeInMillis: 37
```
### 25 - Redshift - Loading Parquet files with Spectrum[¶](#25---Redshift---Loading-Parquet-files-with-Spectrum)
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[ ]:
```
```
# Install the optional modules first
!pip install 'awswrangler[redshift]'
```
```
[1]:
```
```
import getpass
bucket = getpass.getpass()
PATH = f"s3://{bucket}/files/"
```
```
···········································
```
#### Mocking some Parquet Files on S3[¶](#Mocking-some-Parquet-Files-on-S3)
```
[2]:
```
```
import awswrangler as wr
import pandas as pd
df = pd.DataFrame({
"col0": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
"col1": ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"],
})
df
```
```
[2]:
```
| | col0 | col1 |
| --- | --- | --- |
| 0 | 0 | a |
| 1 | 1 | b |
| 2 | 2 | c |
| 3 | 3 | d |
| 4 | 4 | e |
| 5 | 5 | f |
| 6 | 6 | g |
| 7 | 7 | h |
| 8 | 8 | i |
| 9 | 9 | j |
```
[3]:
```
```
wr.s3.to_parquet(df, PATH, max_rows_by_file=2, dataset=True, mode="overwrite")
```
#### Crawling the metadata and adding into Glue Catalog[¶](#Crawling-the-metadata-and-adding-into-Glue-Catalog)
```
[4]:
```
```
wr.s3.store_parquet_metadata(
path=PATH,
database="aws_sdk_pandas",
table="test",
dataset=True,
mode="overwrite"
)
```
```
[4]:
```
```
({'col0': 'bigint', 'col1': 'string'}, None, None)
```
#### Running the CTAS query to load the data into Redshift storage[¶](#Running-the-CTAS-query-to-load-the-data-into-Redshift-storage)
```
[5]:
```
```
con = wr.redshift.connect(connection="aws-sdk-pandas-redshift")
```
```
[6]:
```
```
query = "CREATE TABLE public.test AS (SELECT * FROM aws_sdk_pandas_external.test)"
```
```
[7]:
```
```
with con.cursor() as cursor:
cursor.execute(query)
```
#### Running an INSERT INTO query to load MORE data into Redshift storage[¶](#Running-an-INSERT-INTO-query-to-load-MORE-data-into-Redshift-storage)
```
[8]:
```
```
df = pd.DataFrame({
"col0": [10, 11],
"col1": ["k", "l"],
})
wr.s3.to_parquet(df, PATH, dataset=True, mode="overwrite")
```
```
[9]:
```
```
query = "INSERT INTO public.test (SELECT * FROM aws_sdk_pandas_external.test)"
```
```
[10]:
```
```
with con.cursor() as cursor:
cursor.execute(query)
```
#### Checking the result[¶](#Checking-the-result)
```
[11]:
```
```
query = "SELECT * FROM public.test"
```
```
[13]:
```
```
wr.redshift.read_sql_table(con=con, schema="public", table="test")
```
```
[13]:
```
| | col0 | col1 |
| --- | --- | --- |
| 0 | 5 | f |
| 1 | 1 | b |
| 2 | 3 | d |
| 3 | 6 | g |
| 4 | 8 | i |
| 5 | 10 | k |
| 6 | 4 | e |
| 7 | 0 | a |
| 8 | 2 | c |
| 9 | 7 | h |
| 10 | 9 | j |
| 11 | 11 | l |
```
[14]:
```
```
con.close()
```
### 26 - Amazon Timestream[¶](#26---Amazon-Timestream)
#### Creating resources[¶](#Creating-resources)
```
[10]:
```
```
import awswrangler as wr
import pandas as pd
from datetime import datetime
database = "sampleDB"
table_1 = "sampleTable1"
table_2 = "sampleTable2"
wr.timestream.create_database(database)
wr.timestream.create_table(database, table_1, memory_retention_hours=1, magnetic_retention_days=1)
wr.timestream.create_table(database, table_2, memory_retention_hours=1, magnetic_retention_days=1)
```
#### Write[¶](#Write)
##### Single measure WriteRecord[¶](#Single-measure-WriteRecord)
```
[11]:
```
```
df = pd.DataFrame(
{
"time": [datetime.now()] * 3,
"dim0": ["foo", "boo", "bar"],
"dim1": [1, 2, 3],
"measure": [1.0, 1.1, 1.2],
}
)
rejected_records = wr.timestream.write(
df=df,
database=database,
table=table_1,
time_col="time",
measure_col="measure",
dimensions_cols=["dim0", "dim1"],
)
print(f"Number of rejected records: {len(rejected_records)}")
```
```
Number of rejected records: 0
```
##### Multi measure WriteRecord[¶](#Multi-measure-WriteRecord)
```
[ ]:
```
```
df = pd.DataFrame(
{
"time": [datetime.now()] * 3,
"measure_1": ["10", "20", "30"],
"measure_2": ["100", "200", "300"],
"measure_3": ["1000", "2000", "3000"],
"tag": ["tag123", "tag456", "tag789"],
}
)
rejected_records = wr.timestream.write(
df=df,
database=database,
table=table_2,
time_col="time",
measure_col=["measure_1", "measure_2", "measure_3"],
dimensions_cols=["tag"]
)
print(f"Number of rejected records: {len(rejected_records)}")
```
#### Query[¶](#Query)
```
[12]:
```
```
wr.timestream.query(
f'SELECT time, measure_value::double, dim0, dim1 FROM "{database}"."{table_1}" ORDER BY time DESC LIMIT 3'
)
```
```
[12]:
```
| | time | measure_value::double | dim0 | dim1 |
| --- | --- | --- | --- | --- |
| 0 | 2020-12-08 19:15:32.468 | 1.0 | foo | 1 |
| 1 | 2020-12-08 19:15:32.468 | 1.2 | bar | 3 |
| 2 | 2020-12-08 19:15:32.468 | 1.1 | boo | 2 |
#### Unload[¶](#Unload)
```
[ ]:
```
```
df = wr.timestream.unload(
sql=f'SELECT time, measure_value, dim0, dim1 FROM "{database}"."{table_1}"',
path="s3://bucket/extracted_parquet_files/",
partition_cols=["dim1"],
)
```
#### Deleting resources[¶](#Deleting-resources)
```
[13]:
```
```
wr.timestream.delete_table(database, table_1)
wr.timestream.delete_table(database, table_2)
wr.timestream.delete_database(database)
```
### 27 - Amazon Timestream - Example 2[¶](#27---Amazon-Timestream---Example-2)
#### Reading test data[¶](#Reading-test-data)
```
[1]:
```
```
import awswrangler as wr
import pandas as pd
from datetime import datetime
df = pd.read_csv(
"https://raw.githubusercontent.com/aws/amazon-timestream-tools/master/sample_apps/data/sample.csv",
names=[
"ignore0",
"region",
"ignore1",
"az",
"ignore2",
"hostname",
"measure_kind",
"measure",
"ignore3",
"ignore4",
"ignore5",
],
usecols=["region", "az", "hostname", "measure_kind", "measure"],
)
df["time"] = datetime.now()
df.reset_index(inplace=True, drop=False)
df
```
```
[1]:
```
| | index | region | az | hostname | measure_kind | measure | time |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | us-east-1 | us-east-1a | host-fj2hx | cpu_utilization | 21.394363 | 2020-12-08 16:18:47.599597 |
| 1 | 1 | us-east-1 | us-east-1a | host-fj2hx | memory_utilization | 68.563420 | 2020-12-08 16:18:47.599597 |
| 2 | 2 | us-east-1 | us-east-1a | host-6kMPE | cpu_utilization | 17.144579 | 2020-12-08 16:18:47.599597 |
| 3 | 3 | us-east-1 | us-east-1a | host-6kMPE | memory_utilization | 73.507870 | 2020-12-08 16:18:47.599597 |
| 4 | 4 | us-east-1 | us-east-1a | host-sxj7X | cpu_utilization | 26.584865 | 2020-12-08 16:18:47.599597 |
| ... | ... | ... | ... | ... | ... | ... | ... |
| 125995 | 125995 | eu-north-1 | eu-north-1c | host-De8RB | memory_utilization | 68.063468 | 2020-12-08 16:18:47.599597 |
| 125996 | 125996 | eu-north-1 | eu-north-1c | host-2z8tn | memory_utilization | 72.203680 | 2020-12-08 16:18:47.599597 |
| 125997 | 125997 | eu-north-1 | eu-north-1c | host-2z8tn | cpu_utilization | 29.212219 | 2020-12-08 16:18:47.599597 |
| 125998 | 125998 | eu-north-1 | eu-north-1c | host-9FczW | memory_utilization | 71.746134 | 2020-12-08 16:18:47.599597 |
| 125999 | 125999 | eu-north-1 | eu-north-1c | host-9FczW | cpu_utilization | 1.677793 | 2020-12-08 16:18:47.599597 |
126000 rows × 7 columns
#### Creating resources[¶](#Creating-resources)
```
[2]:
```
```
wr.timestream.create_database("sampleDB")
wr.timestream.create_table("sampleDB", "sampleTable", memory_retention_hours=1, magnetic_retention_days=1)
```
#### Write CPU_UTILIZATION records[¶](#Write-CPU_UTILIZATION-records)
```
[3]:
```
```
df_cpu = df[df.measure_kind == "cpu_utilization"].copy()
df_cpu.rename(columns={"measure": "cpu_utilization"}, inplace=True)
df_cpu
```
```
[3]:
```
| | index | region | az | hostname | measure_kind | cpu_utilization | time |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | us-east-1 | us-east-1a | host-fj2hx | cpu_utilization | 21.394363 | 2020-12-08 16:18:47.599597 |
| 2 | 2 | us-east-1 | us-east-1a | host-6kMPE | cpu_utilization | 17.144579 | 2020-12-08 16:18:47.599597 |
| 4 | 4 | us-east-1 | us-east-1a | host-sxj7X | cpu_utilization | 26.584865 | 2020-12-08 16:18:47.599597 |
| 6 | 6 | us-east-1 | us-east-1a | host-ExOui | cpu_utilization | 52.930970 | 2020-12-08 16:18:47.599597 |
| 8 | 8 | us-east-1 | us-east-1a | host-Bwb3j | cpu_utilization | 99.134110 | 2020-12-08 16:18:47.599597 |
| ... | ... | ... | ... | ... | ... | ... | ... |
| 125990 | 125990 | eu-north-1 | eu-north-1c | host-aPtc6 | cpu_utilization | 89.566125 | 2020-12-08 16:18:47.599597 |
| 125992 | 125992 | eu-north-1 | eu-north-1c | host-7ZF9L | cpu_utilization | 75.510598 | 2020-12-08 16:18:47.599597 |
| 125994 | 125994 | eu-north-1 | eu-north-1c | host-De8RB | cpu_utilization | 2.771261 | 2020-12-08 16:18:47.599597 |
| 125997 | 125997 | eu-north-1 | eu-north-1c | host-2z8tn | cpu_utilization | 29.212219 | 2020-12-08 16:18:47.599597 |
| 125999 | 125999 | eu-north-1 | eu-north-1c | host-9FczW | cpu_utilization | 1.677793 | 2020-12-08 16:18:47.599597 |
63000 rows × 7 columns
```
[4]:
```
```
rejected_records = wr.timestream.write(
df=df_cpu,
database="sampleDB",
table="sampleTable",
time_col="time",
measure_col="cpu_utilization",
dimensions_cols=["index", "region", "az", "hostname"],
)
assert len(rejected_records) == 0
```
#### Batch Load MEMORY_UTILIZATION records[¶](#Batch-Load-MEMORY_UTILIZATION-records)
```
[5]:
```
```
df_memory = df[df.measure_kind == "memory_utilization"].copy()
df_memory.rename(columns={"measure": "memory_utilization"}, inplace=True)
df_memory
```
```
[5]:
```
| | index | region | az | hostname | measure_kind | memory_utilization | time |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | us-east-1 | us-east-1a | host-fj2hx | memory_utilization | 68.563420 | 2020-12-08 16:18:47.599597 |
| 3 | 3 | us-east-1 | us-east-1a | host-6kMPE | memory_utilization | 73.507870 | 2020-12-08 16:18:47.599597 |
| 5 | 5 | us-east-1 | us-east-1a | host-sxj7X | memory_utilization | 22.401424 | 2020-12-08 16:18:47.599597 |
| 7 | 7 | us-east-1 | us-east-1a | host-ExOui | memory_utilization | 45.440135 | 2020-12-08 16:18:47.599597 |
| 9 | 9 | us-east-1 | us-east-1a | host-Bwb3j | memory_utilization | 15.042701 | 2020-12-08 16:18:47.599597 |
| ... | ... | ... | ... | ... | ... | ... | ... |
| 125991 | 125991 | eu-north-1 | eu-north-1c | host-aPtc6 | memory_utilization | 75.686739 | 2020-12-08 16:18:47.599597 |
| 125993 | 125993 | eu-north-1 | eu-north-1c | host-7ZF9L | memory_utilization | 18.386152 | 2020-12-08 16:18:47.599597 |
| 125995 | 125995 | eu-north-1 | eu-north-1c | host-De8RB | memory_utilization | 68.063468 | 2020-12-08 16:18:47.599597 |
| 125996 | 125996 | eu-north-1 | eu-north-1c | host-2z8tn | memory_utilization | 72.203680 | 2020-12-08 16:18:47.599597 |
| 125998 | 125998 | eu-north-1 | eu-north-1c | host-9FczW | memory_utilization | 71.746134 | 2020-12-08 16:18:47.599597 |
63000 rows × 7 columns
```
[6]:
```
```
response = wr.timestream.batch_load(
df=df_memory,
path="s3://bucket/prefix/",
database="sampleDB",
table="sampleTable",
time_col="time",
measure_cols=["memory_utilization"],
dimensions_cols=["index", "region", "az", "hostname"],
measure_cols=["memory_utilization"],
measure_name_col="measure_kind",
report_s3_configuration={"BucketName": "error_bucket", "ObjectKeyPrefix": "error_prefix"},
)
assert response["BatchLoadTaskDescription"]["ProgressReport"]["RecordIngestionFailures"] == 0
```
#### Querying CPU_UTILIZATION[¶](#Querying-CPU_UTILIZATION)
```
[7]:
```
```
wr.timestream.query("""
SELECT
hostname, region, az, measure_name, measure_value::double, time
FROM "sampleDB"."sampleTable"
WHERE measure_name = 'cpu_utilization'
ORDER BY time DESC
LIMIT 10
""")
```
```
[7]:
```
| | hostname | region | az | measure_name | measure_value::double | time |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | host-OgvFx | us-west-1 | us-west-1a | cpu_utilization | 39.617911 | 2020-12-08 19:18:47.600 |
| 1 | host-rZUNx | eu-north-1 | eu-north-1a | cpu_utilization | 30.793332 | 2020-12-08 19:18:47.600 |
| 2 | host-t1kAB | us-east-2 | us-east-2b | cpu_utilization | 74.453239 | 2020-12-08 19:18:47.600 |
| 3 | host-RdQRf | us-east-1 | us-east-1c | cpu_utilization | 76.984448 | 2020-12-08 19:18:47.600 |
| 4 | host-4Llhu | us-east-1 | us-east-1c | cpu_utilization | 41.862733 | 2020-12-08 19:18:47.600 |
| 5 | host-2plqa | us-west-1 | us-west-1a | cpu_utilization | 34.864762 | 2020-12-08 19:18:47.600 |
| 6 | host-J3Q4z | us-east-1 | us-east-1b | cpu_utilization | 71.574266 | 2020-12-08 19:18:47.600 |
| 7 | host-VIR5T | ap-east-1 | ap-east-1a | cpu_utilization | 14.017491 | 2020-12-08 19:18:47.600 |
| 8 | host-G042D | us-east-1 | us-east-1c | cpu_utilization | 60.199068 | 2020-12-08 19:18:47.600 |
| 9 | host-8EBHm | us-west-2 | us-west-2c | cpu_utilization | 96.631624 | 2020-12-08 19:18:47.600 |
#### Querying MEMORY_UTILIZATION[¶](#Querying-MEMORY_UTILIZATION)
```
[8]:
```
```
wr.timestream.query("""
SELECT
hostname, region, az, measure_name, measure_value::double, time
FROM "sampleDB"."sampleTable"
WHERE measure_name = 'memory_utilization'
ORDER BY time DESC
LIMIT 10
""")
```
```
[8]:
```
| | hostname | region | az | measure_name | measure_value::double | time |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | host-7c897 | us-west-2 | us-west-2b | memory_utilization | 63.427726 | 2020-12-08 19:18:47.600 |
| 1 | host-2z8tn | eu-north-1 | eu-north-1c | memory_utilization | 41.071368 | 2020-12-08 19:18:47.600 |
| 2 | host-J3Q4z | us-east-1 | us-east-1b | memory_utilization | 23.944388 | 2020-12-08 19:18:47.600 |
| 3 | host-mjrQb | us-east-1 | us-east-1b | memory_utilization | 69.173431 | 2020-12-08 19:18:47.600 |
| 4 | host-AyWSI | us-east-1 | us-east-1c | memory_utilization | 75.591467 | 2020-12-08 19:18:47.600 |
| 5 | host-Axf0g | us-west-2 | us-west-2a | memory_utilization | 29.720739 | 2020-12-08 19:18:47.600 |
| 6 | host-ilMBa | us-east-2 | us-east-2b | memory_utilization | 71.544134 | 2020-12-08 19:18:47.600 |
| 7 | host-CWdXX | us-west-2 | us-west-2c | memory_utilization | 79.792799 | 2020-12-08 19:18:47.600 |
| 8 | host-8EBHm | us-west-2 | us-west-2c | memory_utilization | 66.082554 | 2020-12-08 19:18:47.600 |
| 9 | host-dRIJj | us-east-1 | us-east-1c | memory_utilization | 86.748960 | 2020-12-08 19:18:47.600 |
#### Deleting resources[¶](#Deleting-resources)
```
[9]:
```
```
wr.timestream.delete_table("sampleDB", "sampleTable")
wr.timestream.delete_database("sampleDB")
```
### 28 - Amazon DynamoDB[¶](#28---Amazon-DynamoDB)
#### Writing Data[¶](#Writing-Data)
```
[23]:
```
```
from datetime import datetime
from decimal import Decimal
from pathlib import Path

import awswrangler as wr
import pandas as pd
from boto3.dynamodb.conditions import Attr, Key
```
##### Writing DataFrame[¶](#Writing-DataFrame)
```
[27]:
```
```
table_name = "movies"
df = pd.DataFrame({
"title": ["Titanic", "Snatch", "The Godfather"],
"year": [1997, 2000, 1972],
"genre": ["drama", "caper story", "crime"],
})
wr.dynamodb.put_df(df=df, table_name=table_name)
```
##### Writing CSV file[¶](#Writing-CSV-file)
```
[3]:
```
```
filepath = Path("items.csv")
df.to_csv(filepath, index=False)
wr.dynamodb.put_csv(path=filepath, table_name=table_name)
filepath.unlink()
```
##### Writing JSON files[¶](#Writing-JSON-files)
```
[4]:
```
```
filepath = Path("items.json")
df.to_json(filepath, orient="records")
wr.dynamodb.put_json(path="items.json", table_name=table_name)
filepath.unlink()
```
##### Writing list of items[¶](#Writing-list-of-items)
```
[5]:
```
```
items = df.to_dict(orient="records")
wr.dynamodb.put_items(items=items, table_name=table_name)
```
#### Reading Data[¶](#Reading-Data)
##### Read Items[¶](#Read-Items)
```
[ ]:
```
```
# Limit Read to 5 items
wr.dynamodb.read_items(table_name=table_name, max_items_evaluated=5)

# Limit Read to Key expression
wr.dynamodb.read_items(
table_name=table_name,
key_condition_expression=(Key("title").eq("Snatch") & Key("year").eq(2000))
)
```
##### Read PartiQL[¶](#Read-PartiQL)
```
[29]:
```
```
wr.dynamodb.read_partiql_query(
query=f"SELECT * FROM {table_name} WHERE title=? AND year=?",
parameters=["Snatch", 2000],
)
```
```
[29]:
```
| | year | genre | title |
| --- | --- | --- | --- |
| 0 | 2000 | caper story | Snatch |
#### Executing statements[¶](#Executing-statements)
```
[29]:
```
```
title = "The Lord of the Rings: The Fellowship of the Ring"
year = datetime.now().year
genre = "epic"
rating = Decimal('9.9')
plot = "The fate of Middle-earth hangs in the balance as Frodo and eight companions begin their journey to Mount Doom in the land of Mordor."
# Insert items
wr.dynamodb.execute_statement(
statement=f"INSERT INTO {table_name} VALUE {{'title': ?, 'year': ?, 'genre': ?, 'info': ?}}",
parameters=[title, year, genre, {"plot": plot, "rating": rating}],
)
# Select items
wr.dynamodb.execute_statement(
statement=f"SELECT * FROM \"{table_name}\" WHERE title=? AND year=?",
parameters=[title, year],
)
# Update items
wr.dynamodb.execute_statement(
statement=f"UPDATE \"{table_name}\" SET info.rating=? WHERE title=? AND year=?",
parameters=[Decimal(10), title, year],
)
# Delete items
wr.dynamodb.execute_statement(
statement=f"DELETE FROM \"{table_name}\" WHERE title=? AND year=?",
parameters=[title, year],
)
```
```
[29]:
```
```
[]
```
#### Deleting items[¶](#Deleting-items)
```
[6]:
```
```
wr.dynamodb.delete_items(items=items, table_name=table_name)
```
### 29 - S3 Select[¶](#29---S3-Select)
AWS SDK for pandas supports [Amazon S3 Select](https://aws.amazon.com/blogs/aws/s3-glacier-select/), enabling applications to use SQL statements in order to query and filter the contents of a single S3 object. It works on objects stored in CSV, JSON or Apache Parquet, including compressed and large files of several TBs.
With S3 Select, the query workload is delegated to Amazon S3, leading to lower latency and cost, and to higher performance (up to 400% improvement). This is in comparison with other awswrangler operations such as `read_parquet` where the S3 object is downloaded and filtered on the client-side.
This feature has a number of limitations however:
* The maximum length of a record in the input or result is 1 MB
* The maximum uncompressed row group size is 256 MB (Parquet only)
* It can only emit nested data in JSON format
* Certain SQL operations are not supported (e.g. ORDER BY)
#### Read multiple Parquet files from an S3 prefix[¶](#Read-multiple-Parquet-files-from-an-S3-prefix)
```
[1]:
```
```
import awswrangler as wr
df = wr.s3.select_query(
sql="SELECT * FROM s3object s where s.\"trip_distance\" > 30",
path="s3://ursa-labs-taxi-data/2019/01/",
input_serialization="Parquet",
input_serialization_params={},
)
df.head()
```
```
[1]:
```
| | vendor_id | pickup_at | dropoff_at | passenger_count | trip_distance | rate_code_id | store_and_fwd_flag | pickup_location_id | dropoff_location_id | payment_type | fare_amount | extra | mta_tax | tip_amount | tolls_amount | improvement_surcharge | total_amount | congestion_surcharge |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 2 | 2019-01-01T00:48:10.000Z | 2019-01-01T01:36:58.000Z | 1 | 31.570000 | 1 | N | 138 | 138 | 2 | 82.5 | 0.5 | 0.5 | 0.00 | 0.00 | 0.3 | 83.800003 | NaN |
| 1 | 2 | 2019-01-01T00:38:36.000Z | 2019-01-01T01:21:33.000Z | 2 | 33.189999 | 5 | N | 107 | 265 | 1 | 121.0 | 0.0 | 0.0 | 0.08 | 10.50 | 0.3 | 131.880005 | NaN |
| 2 | 2 | 2019-01-01T00:10:43.000Z | 2019-01-01T01:23:59.000Z | 1 | 33.060001 | 1 | N | 243 | 42 | 2 | 92.0 | 0.5 | 0.5 | 0.00 | 5.76 | 0.3 | 99.059998 | NaN |
| 3 | 1 | 2019-01-01T00:13:17.000Z | 2019-01-01T01:06:13.000Z | 1 | 44.099998 | 5 | N | 132 | 265 | 2 | 150.0 | 0.0 | 0.0 | 0.00 | 0.00 | 0.3 | 150.300003 | NaN |
| 4 | 2 | 2019-01-01T00:29:11.000Z | 2019-01-01T01:29:05.000Z | 2 | 31.100000 | 1 | N | 169 | 201 | 1 | 85.5 | 0.5 | 0.5 | 0.00 | 7.92 | 0.3 | 94.720001 | NaN |
#### Read full CSV file[¶](#Read-full-CSV-file)
```
[5]:
```
```
df = wr.s3.select_query(
sql="SELECT * FROM s3object",
path="s3://humor-detection-pds/Humorous.csv",
input_serialization="CSV",
input_serialization_params={
"FileHeaderInfo": "Use",
"RecordDelimiter": "\r\n",
},
scan_range_chunk_size=1024*1024*32, # override range of bytes to query, by default 1Mb
use_threads=True,
)
df.head()
```
```
[5]:
```
| | question | product_description | image_url | label |
| --- | --- | --- | --- | --- |
| 0 | Will the volca sample get me a girlfriend? | Korg Amplifier Part VOLCASAMPLE | http://ecx.images-amazon.com/images/I/81I1XZea... | 1 |
| 1 | Can u communicate with spirits even on Saturday? | Winning Moves Games Classic Ouija | http://ecx.images-amazon.com/images/I/81kcYEG5... | 1 |
| 2 | I won't get hunted right? | Winning Moves Games Classic Ouija | http://ecx.images-amazon.com/images/I/81kcYEG5... | 1 |
| 3 | I have a few questions.. Can you get possessed... | Winning Moves Games Classic Ouija | http://ecx.images-amazon.com/images/I/81kcYEG5... | 1 |
| 4 | Has anyone asked where the treasure is? What w... | Winning Moves Games Classic Ouija | http://ecx.images-amazon.com/images/I/81kcYEG5... | 1 |
#### Filter JSON file[¶](#Filter-JSON-file)
```
[3]:
```
```
wr.s3.select_query(
sql="SELECT * FROM s3object[*] s where s.\"family_name\" = \'Biden\'",
path="s3://awsglue-datasets/examples/us-legislators/all/persons.json",
input_serialization="JSON",
input_serialization_params={
"Type": "Document",
},
)
```
```
[3]:
```
| | family_name | contact_details | name | links | gender | image | identifiers | other_names | sort_name | images | given_name | birth_date | id |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | Biden | [{'type': 'twitter', 'value': 'joebiden'}] | <NAME>, Jr. | [{'note': 'Wikipedia (ace)', 'url': 'https://a... | male | https://theunitedstates.io/images/congress/ori... | [{'identifier': 'B000444', 'scheme': 'bioguide... | [{'lang': None, 'name': '<NAME>', 'note': '... | Biden, Joseph | [{'url': 'https://theunitedstates.io/images/co... | Joseph | 1942-11-20 | 64239edf-8e06-4d2d-acc0-33d96bc79774 |
### 30 - Data Api[¶](#30---Data-Api)
The Data API simplifies access to Amazon Redshift and RDS by removing the need to manage database connections and credentials. Instead, you can execute SQL commands against an Amazon Redshift cluster or Amazon Aurora cluster by simply invoking an HTTPS API endpoint provided by the Data API. It takes care of managing database connections and returning data. Since the Data API leverages IAM user credentials or database credentials stored in AWS Secrets Manager, you don’t need to pass credentials in API calls.
#### Connect to the cluster[¶](#Connect-to-the-cluster)
* [wr.data_api.redshift.connect()](https://aws-sdk-pandas.readthedocs.io/en/2.11.0/stubs/awswrangler.data_api.redshift.connect.html)
* [wr.data_api.rds.connect()](https://aws-sdk-pandas.readthedocs.io/en/2.11.0/stubs/awswrangler.data_api.rds.connect.html)
```
[ ]:
```
```
con_redshift = wr.data_api.redshift.connect(
cluster_id="aws-sdk-pandas-1xn5lqxrdxrv3",
database="test_redshift",
secret_arn="arn:aws:secretsmanager:us-east-1:111111111111:secret:aws-sdk-pandas/redshift-ewn43d"
)
con_redshift_serverless = wr.data_api.redshift.connect(
workgroup_name="aws-sdk-pandas",
database="test_redshift",
secret_arn="arn:aws:secretsmanager:us-east-1:111111111111:secret:aws-sdk-pandas/redshift-f3en4w"
)
con_mysql = wr.data_api.rds.connect(
resource_arn="arn:aws:rds:us-east-1:111111111111:cluster:mysql-serverless-cluster-wrangler",
database="test_rds",
secret_arn="arn:aws:secretsmanager:us-east-1:111111111111:secret:aws-sdk-pandas/mysql-23df3"
)
```
#### Read from database[¶](#Read-from-database)
* [wr.data_api.redshift.read_sql_query()](https://aws-sdk-pandas.readthedocs.io/en/2.11.0/stubs/awswrangler.data_api.redshift.read_sql_query.html)
* [wr.data_api.rds.read_sql_query()](https://aws-sdk-pandas.readthedocs.io/en/2.11.0/stubs/awswrangler.data_api.rds.read_sql_query.html)
```
[ ]:
```
```
df = wr.data_api.redshift.read_sql_query(
sql="SELECT * FROM public.test_table",
con=con_redshift,
)
df = wr.data_api.rds.read_sql_query(
sql="SELECT * FROM test.test_table",
    con=con_mysql,
)
```
### 31 - OpenSearch[¶](#31---OpenSearch)
#### Table of Contents[¶](#Table-of-Contents)
* 1. Initialize
+ Connect to your Amazon OpenSearch domain
+ Enter your bucket name
+ Initialize sample data
* 2. Indexing (load)
+ Index documents (no Pandas)
+ Index json file
+ [Index CSV](#Index-CSV)
* 3. Search
+ 3.1 Search by DSL
+ 3.2 Search by SQL
* 4. Delete Indices
* 5. Bonus - Prepare data and index from DataFrame
+ Prepare the data for indexing
+ Create index with mapping
+ Index dataframe
+ Execute geo query
#### 1. Initialize[¶](#1.-Initialize)
```
[ ]:
```
```
# Install the optional modules first
!pip install 'awswrangler[opensearch]'
```
```
[1]:
```
```
import awswrangler as wr
```
##### Connect to your Amazon OpenSearch domain[¶](#Connect-to-your-Amazon-OpenSearch-domain)
```
[2]:
```
```
client = wr.opensearch.connect(
host='OPENSEARCH-ENDPOINT',
# username='FGAC-USERNAME(OPTIONAL)',
# password='FGAC-PASSWORD(OPTIONAL)'
)
client.info()
```
##### Enter your bucket name[¶](#Enter-your-bucket-name)
```
[3]:
```
```
bucket = 'BUCKET'
```
##### Initialize sample data[¶](#Initialize-sample-data)
```
[4]:
```
```
sf_restaurants_inspections = [
{
"inspection_id": "24936_20160609",
"business_address": "315 California St",
"business_city": "San Francisco",
"business_id": "24936",
"business_location": {"lon": -122.400152, "lat": 37.793199},
"business_name": "San Francisco Soup Company",
"business_postal_code": "94104",
"business_state": "CA",
"inspection_date": "2016-06-09T00:00:00.000",
"inspection_score": 77,
"inspection_type": "Routine - Unscheduled",
"risk_category": "Low Risk",
"violation_description": "Improper food labeling or menu misrepresentation",
"violation_id": "24936_20160609_103141",
},
{
"inspection_id": "60354_20161123",
"business_address": "10 Mason St",
"business_city": "San Francisco",
"business_id": "60354",
"business_location": {"lon": -122.409061, "lat": 37.783527},
"business_name": "Soup Unlimited",
"business_postal_code": "94102",
"business_state": "CA",
"inspection_date": "2016-11-23T00:00:00.000",
"inspection_type": "Routine",
"inspection_score": 95,
},
{
"inspection_id": "1797_20160705",
"business_address": "2872 24th St",
"business_city": "San Francisco",
"business_id": "1797",
"business_location": {"lon": -122.409752, "lat": 37.752807},
"business_name": "<NAME>",
"business_postal_code": "94110",
"business_state": "CA",
"inspection_date": "2016-07-05T00:00:00.000",
"inspection_score": 90,
"inspection_type": "Routine - Unscheduled",
"risk_category": "Low Risk",
"violation_description": "Unclean nonfood contact surfaces",
"violation_id": "1797_20160705_103142",
},
{
"inspection_id": "66198_20160527",
"business_address": "1661 Tennessee St Suite 3B",
"business_city": "San Francisco Whard Restaurant",
"business_id": "66198",
"business_location": {"lon": -122.388478, "lat": 37.75072},
"business_name": "San Francisco Restaurant",
"business_postal_code": "94107",
"business_state": "CA",
"inspection_date": "2016-05-27T00:00:00.000",
"inspection_type": "Routine",
"inspection_score": 56,
},
{
"inspection_id": "5794_20160907",
"business_address": "2162 24th Ave",
"business_city": "San Francisco",
"business_id": "5794",
"business_location": {"lon": -122.481299, "lat": 37.747228},
"business_name": "<NAME>",
"business_phone_number": "+14155752700",
"business_postal_code": "94116",
"business_state": "CA",
"inspection_date": "2016-09-07T00:00:00.000",
"inspection_score": 96,
"inspection_type": "Routine - Unscheduled",
"risk_category": "Low Risk",
"violation_description": "Unapproved or unmaintained equipment or utensils",
"violation_id": "5794_20160907_103144",
},
# duplicate record
{
"inspection_id": "5794_20160907",
"business_address": "2162 24th Ave",
"business_city": "San Francisco",
"business_id": "5794",
"business_location": {"lon": -122.481299, "lat": 37.747228},
"business_name": "Soup-or-Salad",
"business_phone_number": "+14155752700",
"business_postal_code": "94116",
"business_state": "CA",
"inspection_date": "2016-09-07T00:00:00.000",
"inspection_score": 96,
"inspection_type": "Routine - Unscheduled",
"risk_category": "Low Risk",
"violation_description": "Unapproved or unmaintained equipment or utensils",
"violation_id": "5794_20160907_103144",
},
]
```
#### 2. Indexing (load)[¶](#2.-Indexing-(load))
##### Index documents (no Pandas)[¶](#Index-documents-(no-Pandas))
```
[5]:
```
```
# index documents w/o providing keys (_id is auto-generated)
wr.opensearch.index_documents(
client,
documents=sf_restaurants_inspections,
index="sf_restaurants_inspections"
)
```
```
Indexing: 100% (6/6)|####################################|Elapsed Time: 0:00:01
```
```
[5]:
```
```
{'success': 6, 'errors': []}
```
```
[6]:
```
```
# read all documents. There are 6 documents in total.
wr.opensearch.search(
client,
index="sf_restaurants_inspections",
_source=["inspection_id", "business_name", "business_location"]
)
```
```
[6]:
```
| | _id | business_name | inspection_id | business_location.lon | business_location.lat |
| --- | --- | --- | --- | --- | --- |
| 0 | 663dd72d-0da4-495b-b0ae-ed000105ae73 | TIO CHILOS GRILL | 1797_20160705 | -122.409752 | 37.752807 |
| 1 | ff2f50f6-5415-4706-9bcb-af7c5eb0afa3 | Soup House | 5794_20160907 | -122.481299 | 37.747228 |
| 2 | b9e8f6a2-8fd1-4660-b041-2997a1a80984 | San Francisco Soup Company | 24936_20160609 | -122.400152 | 37.793199 |
| 3 | 56b352e6-102b-4eff-8296-7e1fb2459bab | Soup Unlimited | 60354_20161123 | -122.409061 | 37.783527 |
| 4 | 6fec5411-f79a-48e4-be7b-e0e44d5ebbab | San Francisco Restaurant | 66198_20160527 | -122.388478 | 37.750720 |
| 5 | 7ba4fb17-f9a9-49da-b90e-8b3553d6d97c | Soup-or-Salad | 5794_20160907 | -122.481299 | 37.747228 |
##### Index json file[¶](#Index-json-file)
```
[ ]:
```
```
import pandas as pd

df = pd.DataFrame(sf_restaurants_inspections)
path = f"s3://{bucket}/json/sf_restaurants_inspections.json"
wr.s3.to_json(df, path,orient='records',lines=True)
```
```
[8]:
```
```
# index json w/ providing keys
wr.opensearch.index_json(
client,
path=path, # path can be s3 or local
index="sf_restaurants_inspections_dedup",
id_keys=["inspection_id"] # can be multiple fields. arg applicable to all index_* functions
)
```
```
Indexing: 100% (6/6)|####################################|Elapsed Time: 0:00:00
```
```
[8]:
```
```
{'success': 6, 'errors': []}
```
```
[9]:
```
```
# now there are no duplicates. There are 5 documents in total.
wr.opensearch.search(
client,
index="sf_restaurants_inspections_dedup",
_source=["inspection_id", "business_name", "business_location"]
)
```
```
[9]:
```
| | _id | business_name | inspection_id | business_location.lon | business_location.lat |
| --- | --- | --- | --- | --- | --- |
| 0 | 24936_20160609 | San Francisco Soup Company | 24936_20160609 | -122.400152 | 37.793199 |
| 1 | 66198_20160527 | San Francisco Restaurant | 66198_20160527 | -122.388478 | 37.750720 |
| 2 | 5794_20160907 | Soup-or-Salad | 5794_20160907 | -122.481299 | 37.747228 |
| 3 | 60354_20161123 | Soup Unlimited | 60354_20161123 | -122.409061 | 37.783527 |
| 4 | 1797_20160705 | TIO CHILOS GRILL | 1797_20160705 | -122.409752 | 37.752807 |
##### Index CSV[¶](#Index-CSV)
```
[11]:
```
```
wr.opensearch.index_csv(
client,
index="nyc_restaurants_inspections_sample",
path='https://data.cityofnewyork.us/api/views/43nn-pn8j/rows.csv?accessType=DOWNLOAD', # index_csv supports local, s3 and url path
id_keys=["CAMIS"],
pandas_kwargs={'na_filter': True, 'nrows': 1000}, # pandas.read_csv() args - https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
bulk_size=500 # modify based on your cluster size
)
```
```
Indexing: 100% (1000/1000)|##############################|Elapsed Time: 0:00:00
```
```
[11]:
```
```
{'success': 1000, 'errors': []}
```
```
[12]:
```
```
wr.opensearch.search(
client,
index="nyc_restaurants_inspections_sample",
size=5
)
```
```
[12]:
```
| | _id | CAMIS | DBA | BORO | BUILDING | STREET | ZIPCODE | PHONE | CUISINE DESCRIPTION | INSPECTION DATE | ... | RECORD DATE | INSPECTION TYPE | Latitude | Longitude | Community Board | Council District | Census Tract | BIN | BBL | NTA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 41610426 | 41610426 | GLOW THAI RESTAURANT | Brooklyn | 7107 | 3 AVENUE | 11209.0 | 7187481920 | Thai | 02/26/2020 | ... | 10/04/2021 | Cycle Inspection / Re-inspection | 40.633865 | -74.026798 | 310.0 | 43.0 | 6800.0 | 3146519.0 | 3.058910e+09 | BK31 |
| 1 | 40811162 | 40811162 | CARMINE'S | Manhattan | 2450 | BROADWAY | 10024.0 | 2123622200 | Italian | 05/28/2019 | ... | 10/04/2021 | Cycle Inspection / Initial Inspection | 40.791168 | -73.974308 | 107.0 | 6.0 | 17900.0 | 1033560.0 | 1.012380e+09 | MN12 |
| 2 | 50012113 | 50012113 | TANG | Queens | 196-50 | NORTHERN BOULEVARD | 11358.0 | 7182797080 | Korean | 08/16/2018 | ... | 10/04/2021 | Cycle Inspection / Initial Inspection | 40.757850 | -73.784593 | 411.0 | 19.0 | 145101.0 | 4124565.0 | 4.055200e+09 | QN48 |
| 3 | 50014618 | 50014618 | TOTTO RAMEN | Manhattan | 248 | EAST 52 STREET | 10022.0 | 2124210052 | Japanese | 08/20/2018 | ... | 10/04/2021 | Cycle Inspection / Re-inspection | 40.756596 | -73.968749 | 106.0 | 4.0 | 9800.0 | 1038490.0 | 1.013250e+09 | MN19 |
| 4 | 50045782 | 50045782 | OLLIE'S CHINESE RESTAURANT | Manhattan | 2705 | BROADWAY | 10025.0 | 2129323300 | Chinese | 10/21/2019 | ... | 10/04/2021 | Cycle Inspection / Re-inspection | 40.799318 | -73.968440 | 107.0 | 6.0 | 19100.0 | 1056562.0 | 1.018750e+09 | MN12 |
5 rows × 27 columns
#### 3. Search[¶](#3.-Search)
Search results are returned as Pandas DataFrame
##### 3.1 Search by DSL[¶](#3.1-Search-by-DSL)
```
[13]:
```
```
# add a search query. search all soup businesses
wr.opensearch.search(
client,
index="sf_restaurants_inspections",
_source=["inspection_id", "business_name", "business_location"],
filter_path=["hits.hits._id","hits.hits._source"],
search_body={
"query": {
"match": {
"business_name": "soup"
}
}
}
)
```
```
[13]:
```
| | _id | business_name | inspection_id | business_location.lon | business_location.lat |
| --- | --- | --- | --- | --- | --- |
| 0 | ff2f50f6-5415-4706-9bcb-af7c5eb0afa3 | Soup House | 5794_20160907 | -122.481299 | 37.747228 |
| 1 | 7ba4fb17-f9a9-49da-b90e-8b3553d6d97c | Soup-or-Salad | 5794_20160907 | -122.481299 | 37.747228 |
| 2 | b9e8f6a2-8fd1-4660-b041-2997a1a80984 | San Francisco Soup Company | 24936_20160609 | -122.400152 | 37.793199 |
| 3 | 56b352e6-102b-4eff-8296-7e1fb2459bab | Soup Unlimited | 60354_20161123 | -122.409061 | 37.783527 |
##### 3.2 Search by SQL[¶](#3.2-Search-by-SQL)
```
[14]:
```
```
wr.opensearch.search_by_sql(
client,
sql_query="""SELECT business_name, inspection_score
FROM sf_restaurants_inspections_dedup
WHERE business_name LIKE '%soup%'
ORDER BY inspection_score DESC LIMIT 5"""
)
```
```
[14]:
```
| | _index | _type | _id | _score | business_name | inspection_score |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | sf_restaurants_inspections_dedup | _doc | 5794_20160907 | None | Soup-or-Salad | 96 |
| 1 | sf_restaurants_inspections_dedup | _doc | 60354_20161123 | None | Soup Unlimited | 95 |
| 2 | sf_restaurants_inspections_dedup | _doc | 24936_20160609 | None | San Francisco Soup Company | 77 |
#### 4. Delete Indices[¶](#4.-Delete-Indices)
```
[15]:
```
```
wr.opensearch.delete_index(
client=client,
index="sf_restaurants_inspections"
)
```
```
[15]:
```
```
{'acknowledged': True}
```
#### 5. Bonus - Prepare data and index from DataFrame[¶](#5.-Bonus---Prepare-data-and-index-from-DataFrame)
For this exercise we’ll use [DOHMH New York City Restaurant Inspection Results dataset](https://data.cityofnewyork.us/Health/DOHMH-New-York-City-Restaurant-Inspection-Results/43nn-pn8j)
```
[16]:
```
```
import pandas as pd
```
```
[17]:
```
```
df = pd.read_csv('https://data.cityofnewyork.us/api/views/43nn-pn8j/rows.csv?accessType=DOWNLOAD')
```
##### Prepare the data for indexing[¶](#Prepare-the-data-for-indexing)
```
[18]:
```
```
# field names to underscore casing
df.columns = [col.lower().replace(' ', '_') for col in df.columns]
# convert lon/lat to OpenSearch geo_point
df['business_location'] = "POINT (" + df.longitude.fillna('0').astype(str) + " " + df.latitude.fillna('0').astype(str) + ")"
```
##### Create index with mapping[¶](#Create-index-with-mapping)
```
[19]:
```
```
# delete index if exists
wr.opensearch.delete_index(
client=client,
index="nyc_restaurants"
)
# use dynamic_template to map date fields
# define business_location as geo_point
wr.opensearch.create_index(
client=client,
index="nyc_restaurants_inspections",
mappings={
"dynamic_templates" : [
{
"dates" : {
"match" : "*date",
"mapping" : {
"type" : "date",
"format" : 'MM/dd/yyyy'
}
}
}
],
"properties": {
"business_location": {
"type": "geo_point"
}
}
}
)
```
```
[19]:
```
```
{'acknowledged': True,
'shards_acknowledged': True,
'index': 'nyc_restaurants_inspections'}
```
##### Index dataframe[¶](#Index-dataframe)
```
[20]:
```
```
wr.opensearch.index_df(
client,
df=df,
index="nyc_restaurants_inspections",
id_keys=["camis"],
bulk_size=1000
)
```
```
Indexing: 100% (382655/382655)|##########################|Elapsed Time: 0:04:15
```
```
[20]:
```
```
{'success': 382655, 'errors': []}
```
##### Execute geo query[¶](#Execute-geo-query)
###### Sort restaurants by distance from Times-Square[¶](#Sort-restaurants-by-distance-from-Times-Square)
```
[21]:
```
```
wr.opensearch.search(
client,
index="nyc_restaurants_inspections",
filter_path=["hits.hits._source"],
size=100,
search_body={
"query": {
"match_all": {}
},
"sort": [
{
"_geo_distance": {
"business_location": { # Times-Square - https://geojson.io/#map=16/40.7563/-73.9862
"lat": 40.75613228383523,
"lon": -73.9865791797638
},
"order": "asc"
}
}
]
}
)
```
```
[21]:
```
| | camis | dba | boro | building | street | zipcode | phone | cuisine_description | inspection_date | action | ... | inspection_type | latitude | longitude | community_board | council_district | census_tract | bin | bbl | nta | business_location |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 41551304 | THE COUNTER | Manhattan | 7 | TIMES SQUARE | 10036.0 | 2129976801 | American | 12/22/2016 | Violations were cited in the following area(s). | ... | Cycle Inspection / Initial Inspection | 40.755908 | -73.986681 | 105.0 | 3.0 | 11300.0 | 1086069.0 | 1.009940e+09 | MN17 | POINT (-73.986680953809 40.755907817312) |
| 1 | 50055665 | ANN INC CAFE | Manhattan | 7 | TIMES SQUARE | 10036.0 | 2125413287 | American | 12/11/2019 | Violations were cited in the following area(s). | ... | Cycle Inspection / Initial Inspection | 40.755908 | -73.986681 | 105.0 | 3.0 | 11300.0 | 1086069.0 | 1.009940e+09 | MN17 | POINT (-73.986680953809 40.755907817312) |
| 2 | 50049552 | ERNST AND YOUNG | Manhattan | 5 | TIMES SQ | 10036.0 | 2127739994 | Coffee/Tea | 11/30/2018 | Violations were cited in the following area(s). | ... | Cycle Inspection / Initial Inspection | 40.755702 | -73.987208 | 105.0 | 3.0 | 11300.0 | 1024656.0 | 1.010130e+09 | MN17 | POINT (-73.987207980138 40.755702020307) |
| 3 | 50014078 | RED LOBSTER | Manhattan | 5 | TIMES SQ | 10036.0 | 2127306706 | Seafood | 10/03/2017 | Violations were cited in the following area(s). | ... | Cycle Inspection / Initial Inspection | 40.755702 | -73.987208 | 105.0 | 3.0 | 11300.0 | 1024656.0 | 1.010130e+09 | MN17 | POINT (-73.987207980138 40.755702020307) |
| 4 | 50015171 | NEW AMSTERDAM THEATER | Manhattan | 214 | WEST 42 STREET | 10036.0 | 2125825472 | American | 06/26/2018 | Violations were cited in the following area(s). | ... | Cycle Inspection / Re-inspection | 40.756317 | -73.987652 | 105.0 | 3.0 | 11300.0 | 1024660.0 | 1.010130e+09 | MN17 | POINT (-73.987651832547 40.756316895053) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 95 | 41552060 | PROSKAUER ROSE | Manhattan | 11 | TIMES SQUARE | 10036.0 | 2129695493 | American | 08/11/2017 | Violations were cited in the following area(s). | ... | Administrative Miscellaneous / Initial Inspection | 40.756891 | -73.990023 | 105.0 | 3.0 | 11300.0 | 1087978.0 | 1.010138e+09 | MN17 | POINT (-73.990023200823 40.756890780426) |
| 96 | 41242148 | <NAME> | Manhattan | 123 | WEST 39 STREET | 10018.0 | 2122788984 | Irish | 07/30/2019 | Violations were cited in the following area(s). | ... | Cycle Inspection / Re-inspection | 40.753405 | -73.986602 | 105.0 | 4.0 | 11300.0 | 1080611.0 | 1.008150e+09 | MN17 | POINT (-73.986602050292 40.753404587174) |
| 97 | 50095860 | THE TIMES EATERY | Manhattan | 680 | 8 AVENUE | 10036.0 | 6463867787 | American | 02/28/2020 | Violations were cited in the following area(s). | ... | Pre-permit (Operational) / Initial Inspection | 40.757991 | -73.989218 | 105.0 | 3.0 | 11900.0 | 1024703.0 | 1.010150e+09 | MN17 | POINT (-73.989218092096 40.757991356019) |
| 98 | 50072861 | ITSU | Manhattan | 530 | 7 AVENUE | 10018.0 | 9176393645 | Asian/Asian Fusion | 09/10/2018 | Violations were cited in the following area(s). | ... | Pre-permit (Operational) / Initial Inspection | 40.753844 | -73.988551 | 105.0 | 3.0 | 11300.0 | 1014485.0 | 1.007880e+09 | MN17 | POINT (-73.988551029682 40.753843959794) |
| 99 | 50068109 | <NAME> | Manhattan | 1407 | BROADWAY | 10018.0 | 9174759192 | Seafood | 09/06/2017 | Violations were cited in the following area(s). | ... | Pre-permit (Operational) / Initial Inspection | 40.753432 | -73.987151 | 105.0 | 3.0 | 11300.0 | 1015265.0 | 1.008140e+09 | MN17 | POINT (-73.98715066791 40.753432097521) |
100 rows × 27 columns
### 32 - AWS Lake Formation - Glue Governed tables[¶](#32---AWS-Lake-Formation---Glue-Governed-tables)
#### This tutorial assumes that your IAM user/role has the required Lake Formation permissions to create and read AWS Glue Governed tables[¶](#This-tutorial-assumes-that-your-IAM-user/role-has-the-required-Lake-Formation-permissions-to-create-and-read-AWS-Glue-Governed-tables)
##### Table of Contents[¶](#Table-of-Contents)
* [1. Read Governed table](#1.-Read-Governed-table)
+ [1.1 Read PartiQL query](#1.1-Read-PartiQL-query)
- [1.1.1 Read within transaction](#1.1.1-Read-within-transaction)
- [1.1.2 Read within query as of time](#1.1.2-Read-within-query-as-of-time)
+ [1.2 Read full table](#1.2-Read-full-table)
* [2. Write Governed table](#2.-Write-Governed-table)
+ 2.1 Create new Governed table
- [2.1.1 CSV table](#2.1.1-CSV-table)
- [2.1.2 Parquet table](#2.1.2-Parquet-table)
+ [2.2 Overwrite operations](#2.2-Overwrite-operations)
- [2.2.1 Overwrite](#2.2.1-Overwrite)
- [2.2.2 Append](#2.2.2-Append)
- [2.2.3 Create partitioned Governed table](#2.2.3-Create-partitioned-Governed-table)
- [2.2.4 Overwrite partitions](#2.2.4-Overwrite-partitions)
* 3. Multiple read/write operations within a transaction
##### 1. Read Governed table[¶](#1.-Read-Governed-table)
##### 1.1 Read PartiQL query[¶](#1.1-Read-PartiQL-query)
```
[ ]:
```
```
import awswrangler as wr
database = "gov_db" # Assumes a Glue database registered with Lake Formation exists in the account table = "gov_table" # Assumes a Governed table exists in the account catalog_id = "111111111111" # AWS Account Id
# Note 1: If a transaction_id is not specified, a new transaction is started df = wr.lakeformation.read_sql_query(
sql=f"SELECT * FROM {table};",
database=database,
catalog_id=catalog_id
)
```
#### 1.1.1 Read within transaction[¶](#1.1.1-Read-within-transaction)
```
[ ]:
```
```
transaction_id = wr.lakeformation.start_transaction(read_only=True)
df = wr.lakeformation.read_sql_query(
sql=f"SELECT * FROM {table};",
database=database,
transaction_id=transaction_id
)
```
#### 1.1.2 Read within query as of time[¶](#1.1.2-Read-within-query-as-of-time)
```
[ ]:
```
```
import calendar
import time

query_as_of_time = calendar.timegm(time.gmtime())
df = wr.lakeformation.read_sql_query(
sql=f"SELECT * FROM {table} WHERE id=:id; AND name=:name;",
database=database,
query_as_of_time=query_as_of_time,
params={"id": 1, "name": "Ayoub"}
)
```
##### 1.2 Read full table[¶](#1.2-Read-full-table)
```
[ ]:
```
```
df = wr.lakeformation.read_sql_table(
table=table,
database=database
)
```
##### 2. Write Governed table[¶](#2.-Write-Governed-table)
##### 2.1 Create a new Governed table[¶](#2.1-Create-a-new-Governed-table)
#### Enter your bucket name:[¶](#Enter-your-bucket-name:)
```
[ ]:
```
```
import getpass
bucket = getpass.getpass()
```
If a governed table does not exist, it can be created by passing an S3 `path` argument. Make sure your IAM user/role has enough permissions in the Lake Formation database
#### 2.1.1 CSV table[¶](#2.1.1-CSV-table)
```
[ ]:
```
```
import pandas as pd
table = "gov_table_csv"
df=pd.DataFrame({
"col": [1, 2, 3],
"col2": ["A", "A", "B"],
"col3": [None, "test", None]
})
# Note 1: If a transaction_id is not specified, a new transaction is started
# Note 2: When creating a new Governed table, `table_type="GOVERNED"` must be specified. Otherwise the default is to create an EXTERNAL_TABLE
wr.s3.to_csv(
df=df,
path=f"s3://{bucket}/{database}/{table}/", # S3 path
dataset=True,
database=database,
table=table,
glue_table_settings={
"table_type": "GOVERNED",
},
)
```
#### 2.1.2 Parquet table[¶](#2.1.2-Parquet-table)
```
[ ]:
```
```
table = "gov_table_parquet"
df = pd.DataFrame({"c0": [0, None]}, dtype="Int64")
wr.s3.to_parquet(
df=df,
path=f"s3://{bucket}/{database}/{table}/",
dataset=True,
database=database,
table=table,
glue_table_settings=wr.typing.GlueTableSettings(
table_type="GOVERNED",
description="c0",
parameters={"num_cols": str(len(df.columns)), "num_rows": str(len(df.index))},
columns_comments={"c0": "0"},
)
)
```
##### 2.2 Overwrite operations[¶](#2.2-Overwrite-operations)
#### 2.2.1 Overwrite[¶](#2.2.1-Overwrite)
```
[ ]:
```
```
df = pd.DataFrame({"c1": [None, 1, None]}, dtype="Int16")
wr.s3.to_parquet(
df=df,
dataset=True,
mode="overwrite",
database=database,
table=table,
glue_table_settings=wr.typing.GlueTableSettings(
description="c1",
parameters={"num_cols": str(len(df.columns)), "num_rows": str(len(df.index))},
columns_comments={"c1": "1"}
),
)
```
#### 2.2.2 Append[¶](#2.2.2-Append)
```
[ ]:
```
```
df = pd.DataFrame({"c1": [None, 2, None]}, dtype="Int8")
wr.s3.to_parquet(
df=df,
dataset=True,
mode="append",
database=database,
table=table,
description="c1",
parameters={"num_cols": str(len(df.columns)), "num_rows": str(len(df.index) * 2)},
columns_comments={"c1": "1"}
)
```
#### 2.2.3 Create partitioned Governed table[¶](#2.2.3-Create-partitioned-Governed-table)
```
[ ]:
```
```
table = "gov_table_parquet_partitioned"
df = pd.DataFrame({"c0": ["foo", None], "c1": [0, 1]})
wr.s3.to_parquet(
df=df,
path=f"s3://{bucket}/{database}/{table}/",
dataset=True,
database=database,
table=table,
glue_table_settings=wr.typing.GlueTableSettings(
table_type="GOVERNED",
partition_cols=["c1"],
description="c0+c1",
parameters={"num_cols": "2", "num_rows": "2"},
columns_comments={"c0": "zero", "c1": "one"},
),
)
```
#### 2.2.4 Overwrite partitions[¶](#2.2.4-Overwrite-partitions)
```
[ ]:
```
```
df = pd.DataFrame({"c0": [None, None], "c1": [0, 2]})
wr.s3.to_parquet(
df=df,
dataset=True,
mode="overwrite_partitions",
database=database,
table=table,
partition_cols=["c1"],
description="c0+c1",
parameters={"num_cols": "2", "num_rows": "3"},
columns_comments={"c0": "zero", "c1": "one"}
)
```
##### 3. Multiple read/write operations within a transaction[¶](#3.-Multiple-read/write-operations-within-a-transaction)
```
[ ]:
```
```
read_table = "gov_table_parquet"
write_table = "gov_table_multi_parquet"
transaction_id = wr.lakeformation.start_transaction(read_only=False)
df = pd.DataFrame({"c0": [0, None]}, dtype="Int64")
wr.s3.to_parquet(
df=df,
path=f"s3://{bucket}/{database}/{write_table}_1",
dataset=True,
database=database,
table=f"{write_table}_1",
glue_table_settings={
"table_type": "GOVERNED",
"transaction_id": transaction_id,
},
)
df2 = wr.lakeformation.read_sql_table(
table=read_table,
database=database,
transaction_id=transaction_id,
use_threads=True
)
df3 = pd.DataFrame({"c1": [None, 1, None]}, dtype="Int16")
wr.s3.to_parquet(
df=df2,
path=f"s3://{bucket}/{database}/{write_table}_2",
dataset=True,
mode="append",
database=database,
table=f"{write_table}_2",
glue_table_settings={
"table_type": "GOVERNED",
"transaction_id": transaction_id,
},
)
wr.lakeformation.commit_transaction(transaction_id=transaction_id)
```
### 33 - Amazon Neptune[¶](#33---Amazon-Neptune)
Note: to be able to use SPARQL you must either install `SPARQLWrapper` or install AWS SDK for pandas with `sparql` extra:
```
[ ]:
```
```
!pip install 'awswrangler[gremlin, opencypher, sparql]'
```
#### Initialize[¶](#Initialize)
The first step to using AWS SDK for pandas with Amazon Neptune is to import the library and create a client connection.
Note: Connecting to Amazon Neptune requires that the application you are running has access to the Private VPC where Neptune is located. Without this access you will not be able to connect using AWS SDK for pandas.
```
[ ]:
```
```
import awswrangler as wr
import pandas as pd

url = '<INSERT CLUSTER ENDPOINT>'  # The Neptune Cluster endpoint
iam_enabled = False  # Set to True/False based on the configuration of your cluster
neptune_port = 8182  # Set to the Neptune Cluster Port, default is 8182
client = wr.neptune.connect(url, neptune_port, iam_enabled=iam_enabled)
```
#### Return the status of the cluster[¶](#Return-the-status-of-the-cluster)
```
[ ]:
```
```
print(client.status())
```
#### Retrieve Data from Neptune using AWS SDK for pandas[¶](#Retrieve-Data-from-Neptune-using-AWS-SDK-for-pandas)
AWS SDK for pandas supports querying Amazon Neptune using TinkerPop Gremlin and openCypher for property graph data or SPARQL for RDF data.
##### Gremlin[¶](#Gremlin)
```
[ ]:
```
```
query = "g.E().project('source', 'target').by(outV().id()).by(inV().id()).limit(5)"
df = wr.neptune.execute_gremlin(client, query)
display(df.head(5))
```
##### SPARQL[¶](#SPARQL)
```
[ ]:
```
```
query = """
PREFIX foaf: <https://xmlns.com/foaf/0.1/>
PREFIX ex: <https://www.example.com/>
SELECT ?firstName WHERE { ex:JaneDoe foaf:knows ?person . ?person foaf:firstName ?firstName }"""
df = wr.neptune.execute_sparql(client, query)
display(df.head(5))
```
##### openCypher[¶](#openCypher)
```
[ ]:
```
```
query = "MATCH (n)-[r]->(d) RETURN id(n) as source, id(d) as target LIMIT 5"
df = wr.neptune.execute_opencypher(client, query)
display(df.head(5))
```
#### Saving Data using AWS SDK for pandas[¶](#Saving-Data-using-AWS-SDK-for-pandas)
AWS SDK for pandas supports saving Pandas DataFrames into Amazon Neptune using either a property graph or RDF data model.
##### Property Graph[¶](#Property-Graph)
If writing to a property graph then DataFrames for vertices and edges must be written separately. DataFrames for vertices must have a `~label` column with the label and a `~id` column for the vertex id.
If the `~id` column does not exist, the specified id does not exist, or the id value is empty, then a new vertex will be added.
If no `~label` column exists, then writing to the graph will be treated as an update of the element with the specified `~id` value.
DataFrames for edges must have `~id`, `~label`, `~to`, and `~from` columns. If the `~id` column does not exist, the specified id does not exist, or the id value is empty, then a new edge will be added. If no `~label`, `~to`, or `~from` column exists, an exception will be thrown. A minimal illustration of these column requirements is shown below.
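As a minimal sketch of the requirements above, using made-up placeholder ids and properties, vertices and edges are written in separate calls:
```
# Minimal illustration of the reserved columns (made-up placeholder values).
# Vertices need ~id and ~label; edges also need ~from and ~to.
vertices = pd.DataFrame([
    {"~id": "v-1", "~label": "airport", "code": "SEA"},
    {"~id": "v-2", "~label": "airport", "code": "JFK"},
])
edges = pd.DataFrame([
    {"~id": "e-1", "~label": "route", "~from": "v-1", "~to": "v-2", "dist": 2421},
])
wr.neptune.to_property_graph(client, vertices)
wr.neptune.to_property_graph(client, edges)
```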
###### Add Vertices/Nodes[¶](#Add-Vertices/Nodes)
```
[ ]:
```
```
import uuid
import random
import string

def _create_dummy_vertex():
data = dict()
data["~id"] = uuid.uuid4()
data["~label"] = "foo"
data["int"] = random.randint(0, 1000)
data["str"] = "".join(random.choice(string.ascii_lowercase) for i in range(10))
data["list"] = [random.randint(0, 1000), random.randint(0, 1000)]
return data
data = [_create_dummy_vertex(), _create_dummy_vertex(), _create_dummy_vertex()]
df = pd.DataFrame(data)
res = wr.neptune.to_property_graph(client, df)
query = f"MATCH (s) WHERE id(s)='{data[0]['~id']}' RETURN s"
df = wr.neptune.execute_opencypher(client, query)
display(df)
```
###### Add Edges[¶](#Add-Edges)
```
[ ]:
```
```
import uuid
import random
import string

def _create_dummy_edge():
data = dict()
data["~id"] = uuid.uuid4()
data["~label"] = "bar"
data["~to"] = uuid.uuid4()
data["~from"] = uuid.uuid4()
data["int"] = random.randint(0, 1000)
data["str"] = "".join(random.choice(string.ascii_lowercase) for i in range(10))
return data
data = [_create_dummy_edge(), _create_dummy_edge(), _create_dummy_edge()]
df = pd.DataFrame(data)
res = wr.neptune.to_property_graph(client, df)
query = f"MATCH (s)-[r]->(d) WHERE id(r)='{data[0]['~id']}' RETURN r"
df = wr.neptune.execute_opencypher(client, query)
display(df)
```
###### Update Existing Nodes[¶](#Update-Existing-Nodes)
```
[ ]:
```
```
idval=uuid.uuid4()
wr.neptune.execute_gremlin(client, f"g.addV().property(T.id, '{str(idval)}')")
query = f"MATCH (s) WHERE id(s)='{idval}' RETURN s"
df = wr.neptune.execute_opencypher(client, query)
print("Before")
display(df)
data = [{"~id": idval, "age": 50}]
df = pd.DataFrame(data)
res = wr.neptune.to_property_graph(client, df)
df = wr.neptune.execute_opencypher(client, query)
print("After")
display(df)
```
###### Setting cardinality based on the header[¶](#Setting-cardinality-based-on-the-header)
If you would like to save data using `single` cardinality, you can append `(single)` to the column header and set `use_header_cardinality=True` (the default). For example, a column named `name(single)` will save the `name` property with single cardinality. You can disable this by setting `use_header_cardinality=False`.
```
[ ]:
```
```
data = [_create_dummy_vertex()]
df = pd.DataFrame(data)
# Adding (single) to the column name in the DataFrame will cause it to write that property as `single` cardinality
df.rename(columns={"int": "int(single)"}, inplace=True)
res = wr.neptune.to_property_graph(client, df, use_header_cardinality=True)
# This can be disabled by setting `use_header_cardinality = False`
df.rename(columns={"int": "int(single)"}, inplace=True)
res = wr.neptune.to_property_graph(client, df, use_header_cardinality=False)
```
##### RDF[¶](#RDF)
The DataFrame must consist of triples with column names for the subject, predicate, and object specified. If none are provided then `s`, `p`, and `o` are the default.
If you want to add data into a named graph, you will also need a graph column, which defaults to `g`. A sketch of mapping custom column names is shown below.
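If your DataFrame already uses other column names, they can be mapped when writing. This is a minimal sketch; the keyword names `subject_column`, `predicate_column`, `object_column`, and `graph_column` are assumptions based on the documented defaults of `wr.neptune.to_rdf_graph`, so verify them against your installed version:
```
# Hedged sketch: map custom column names onto the subject/predicate/object roles.
# The keyword argument names below are assumptions; check your awswrangler version.
df_triples = pd.DataFrame([
    {"subj": "http://example.com/resources/foo",
     "pred": "http://example.com/resources/knows",
     "obj": "http://example.com/resources/bar"},
])
wr.neptune.to_rdf_graph(
    client,
    df_triples,
    subject_column="subj",
    predicate_column="pred",
    object_column="obj",
)
```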
###### Write Triples[¶](#Write-Triples)
```
[ ]:
```
```
def _create_dummy_triple():
data = dict()
data["s"] = "http://example.com/resources/foo"
data["p"] = uuid.uuid4()
data["o"] = random.randint(0, 1000)
return data
data = [_create_dummy_triple(), _create_dummy_triple(), _create_dummy_triple()]
df = pd.DataFrame(data)
res = wr.neptune.to_rdf_graph(client, df)
query = """
PREFIX foo: <http://example.com/resources/>
SELECT ?o WHERE { <foo:foo> <" + str(data[0]['p']) + "> ?o .}"""
df = wr.neptune.execute_sparql(client, query)
display(df)
```
###### Write Quads[¶](#Write-Quads)
```
[ ]:
```
```
def _create_dummy_quad():
data = _create_dummy_triple()
data["g"] = "bar"
return data
data = [_create_dummy_quad(), _create_dummy_quad(), _create_dummy_quad()]
df = pd.DataFrame(data)
res = wr.neptune.to_rdf_graph(client, df)
query = """
PREFIX foo: <http://example.com/resources/>
SELECT ?o WHERE { <foo:foo> <" + str(data[0]['p']) + "> ?o .}"""
df = wr.neptune.execute_sparql(client, query)
display(df)
```
#### Flatten DataFrames[¶](#Flatten-DataFrames)
One of the complexities of working with a row/column paradigm, such as Pandas, on graph result sets is that graph queries very commonly return complex and nested objects. To help simplify using the results returned from a graph in a more tabular format, we have added a method to flatten the returned Pandas DataFrame.
##### Flattening the DataFrame[¶](#Flattening-the-DataFrame)
```
[ ]:
```
```
client = wr.neptune.connect(url, 8182, iam_enabled=False)
query = "MATCH (n) RETURN n LIMIT 1"
df = wr.neptune.execute_opencypher(client, query)
print("Original")
display(df)
df_new=wr.neptune.flatten_nested_df(df)
print("Flattened")
display(df_new)
```
##### Removing the prefixing of the parent column name[¶](#Removing-the-prefixing-of-the-parent-column-name)
```
[ ]:
```
```
df_new=wr.neptune.flatten_nested_df(df, include_prefix=False)
display(df_new)
```
##### Specifying the column header separator[¶](#Specifying-the-column-header-separator)
```
[ ]:
```
```
df_new=wr.neptune.flatten_nested_df(df, separator='|')
display(df_new)
```
#### Putting it into a workflow[¶](#Putting-it-into-a-workflow)
```
[ ]:
```
```
!pip install igraph networkx
```
##### Running PageRank using NetworkX[¶](#Running-PageRank-using-NetworkX)
```
[ ]:
```
```
import networkx as nx
# Retrieve data from Neptune
client = wr.neptune.connect(url, 8182, iam_enabled=False)
query = "MATCH (n)-[r]->(d) RETURN id(n) as source, id(d) as target LIMIT 100"
df = wr.neptune.execute_opencypher(client, query)
# Run PageRank
G = nx.from_pandas_edgelist(df, edge_attr=True)
pg = nx.pagerank(G)
# Save values back into Neptune
rows = []
for k in pg.keys():
    rows.append({'~id': k, 'pageRank_nx(single)': pg[k]})
pg_df = pd.DataFrame(rows, columns=['~id', 'pageRank_nx(single)'])
res = wr.neptune.to_property_graph(client, pg_df, use_header_cardinality=True)
# Retrieve newly saved data
query = "MATCH (n:airport) WHERE n.pageRank_nx IS NOT NULL RETURN n.code, n.pageRank_nx ORDER BY n.pageRank_nx DESC LIMIT 5"
df = wr.neptune.execute_opencypher(client, query)
display(df)
```
##### Running PageRank using iGraph[¶](#Running-PageRank-using-iGraph)
```
[ ]:
```
```
import igraph as ig
# Retrieve data from Neptune
client = wr.neptune.connect(url, 8182, iam_enabled=False)
query = "MATCH (n)-[r]->(d) RETURN id(n) as source, id(d) as target LIMIT 100"
df = wr.neptune.execute_opencypher(client, query)
# Run PageRank
g = ig.Graph.TupleList(df.itertuples(index=False), directed=True, weights=False)
pg = g.pagerank()
# Save values back into Neptune
rows = []
for idx, v in enumerate(g.vs):
    rows.append({'~id': v['name'], 'pageRank_ig(single)': pg[idx]})
pg_df = pd.DataFrame(rows, columns=['~id', 'pageRank_ig(single)'])
res = wr.neptune.to_property_graph(client, pg_df, use_header_cardinality=True)
# Retrieve newly saved data
query = "MATCH (n:airport) WHERE n.pageRank_ig IS NOT NULL RETURN n.code, n.pageRank_ig ORDER BY n.pageRank_ig DESC LIMIT 5"
df = wr.neptune.execute_opencypher(client, query)
display(df)
```
#### Bulk Load[¶](#Bulk-Load)
Data can be written using the Neptune Bulk Loader by way of S3. The Bulk Loader is fast and optimized for large datasets.
For details on the IAM permissions needed to set this up, see [here](https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load.html).
```
[ ]:
```
```
df = pd.DataFrame([_create_dummy_edge() for _ in range(1000)])
wr.neptune.bulk_load(
client=client,
df=df,
path="s3://my-bucket/stage-files/",
iam_role="arn:aws:iam::XXX:role/XXX",
)
```
Alternatively, if the data is already on S3 in CSV format, you can use the `neptune.bulk_load_from_files` function. This is also useful if the data is written to S3 as a byproduct of an AWS Athena command, as the example below will show.
```
[ ]:
```
```
sql = """
SELECT
<col_id> AS "~id"
, <label_id> AS "~label"
, *
FROM <database>.<table>
"""
wr.athena.start_query_execution(
sql=sql,
s3_output="s3://my-bucket/stage-files-athena/",
wait=True,
)
wr.neptune.bulk_load_from_files(
client=client,
path="s3://my-bucket/stage-files-athena/",
iam_role="arn:aws:iam::XXX:role/XXX",
)
```
Both the `bulk_load` and `bulk_load_from_files` functions are suitable at scale. The latter simply invokes the Neptune Bulk Loader on existing data in S3. The former, however, involves writing CSV data to S3. With `ray` and `modin` installed, this operation can also be distributed across multiple workers in a Ray cluster.
### 34 - Distributing Calls Using Ray[¶](#34---Distributing-Calls-Using-Ray)
AWS SDK for pandas supports distribution of specific calls using [ray](https://docs.ray.io/) and [modin](https://modin.readthedocs.io/en/stable/).
When enabled, data loading methods return modin dataframes instead of pandas dataframes. Modin provides seamless integration and compatibility with existing pandas code, with the benefit of distributing operations across your Ray instance and operating at a much larger scale.
```
[1]:
```
```
!pip install "awswrangler[modin,ray,redshift]"
```
Importing `awswrangler` when `ray` and `modin` are installed will automatically initialize a local Ray instance.
```
[3]:
```
```
import awswrangler as wr
print(f"Execution Engine: {wr.engine.get()}")
print(f"Memory Format: {wr.memory_format.get()}")
```
```
Execution Engine: EngineEnum.RAY
Memory Format: MemoryFormatEnum.MODIN
```
#### Read data at scale[¶](#Read-data-at-scale)
Data is read using all cores on a single machine or multiple nodes on a cluster
```
[2]:
```
```
df = wr.s3.read_parquet(path="s3://ursa-labs-taxi-data/2019/")
df.head(5)
```
```
2023-09-15 12:24:44,457 INFO worker.py:1621 -- Started a local Ray instance.
2023-09-15 12:25:10,728 INFO read_api.py:374 -- To satisfy the requested parallelism of 200, each read task output will be split into 34 smaller blocks.
```
```
[dataset]: Run `pip install tqdm` to enable progress reporting.
```
```
UserWarning: When using a pre-initialized Ray cluster, please ensure that the runtime env sets environment variable __MODIN_AUTOIMPORT_PANDAS__ to 1
```
```
[2]:
```
| | vendor_id | pickup_at | dropoff_at | passenger_count | trip_distance | rate_code_id | store_and_fwd_flag | pickup_location_id | dropoff_location_id | payment_type | fare_amount | extra | mta_tax | tip_amount | tolls_amount | improvement_surcharge | total_amount | congestion_surcharge |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 2019-01-01 00:46:40 | 2019-01-01 00:53:20 | 1 | 1.5 | 1 | N | 151 | 239 | 1 | 7.0 | 0.5 | 0.5 | 1.65 | 0.0 | 0.3 | 9.950000 | NaN |
| 1 | 1 | 2019-01-01 00:59:47 | 2019-01-01 01:18:59 | 1 | 2.6 | 1 | N | 239 | 246 | 1 | 14.0 | 0.5 | 0.5 | 1.00 | 0.0 | 0.3 | 16.299999 | NaN |
| 2 | 2 | 2018-12-21 13:48:30 | 2018-12-21 13:52:40 | 3 | 0.0 | 1 | N | 236 | 236 | 1 | 4.5 | 0.5 | 0.5 | 0.00 | 0.0 | 0.3 | 5.800000 | NaN |
| 3 | 2 | 2018-11-28 15:52:25 | 2018-11-28 15:55:45 | 5 | 0.0 | 1 | N | 193 | 193 | 2 | 3.5 | 0.5 | 0.5 | 0.00 | 0.0 | 0.3 | 7.550000 | NaN |
| 4 | 2 | 2018-11-28 15:56:57 | 2018-11-28 15:58:33 | 5 | 0.0 | 2 | N | 193 | 193 | 2 | 52.0 | 0.0 | 0.5 | 0.00 | 0.0 | 0.3 | 55.549999 | NaN |
The data type is a modin DataFrame
```
[4]:
```
```
type(df)
```
```
[4]:
```
```
modin.pandas.dataframe.DataFrame
```
However, this type is interoperable with standard pandas calls:
```
[4]:
```
```
filtered_df = df[df.trip_distance > 30]
excluded_columns = ["vendor_id", "passenger_count", "store_and_fwd_flag"]
filtered_df = filtered_df.loc[:, ~filtered_df.columns.isin(excluded_columns)]
```
Enter your bucket name:
```
[7]:
```
```
bucket = "BUCKET"
```
#### Write data at scale[¶](#Write-data-at-scale)
The write operation is parallelized, leading to significant speed-ups
```
[9]:
```
```
result = wr.s3.to_parquet(
filtered_df,
path=f"s3://{bucket}/taxi/",
dataset=True,
)
print(f"Data has been written to {len(result['paths'])} files")
```
```
Data has been written to 408 files
```
```
2023-09-15 12:32:28,917 WARNING plan.py:567 -- Warning: The Ray cluster currently does not have any available CPUs. The Dataset job will hang unless more CPUs are freed up. A common reason is that cluster resources are used by Actors or Tune trials; see the following link for more details: https://docs.ray.io/en/master/data/dataset-internals.html#datasets-and-tune
2023-09-15 12:32:31,094 INFO streaming_executor.py:92 -- Executing DAG InputDataBuffer[Input] -> TaskPoolMapOperator[Write]
2023-09-15 12:32:31,095 INFO streaming_executor.py:93 -- Execution config: ExecutionOptions(resource_limits=ExecutionResources(cpu=None, gpu=None, object_store_memory=None), locality_with_output=False, preserve_order=False, actor_locality_enabled=True, verbose_progress=False)
2023-09-15 12:32:31,096 INFO streaming_executor.py:95 -- Tip: For detailed progress reporting, run `ray.data.DataContext.get_current().execution_options.verbose_progress = True`
```
#### Copy to Redshift at scale…[¶](#Copy-to-Redshift-at-scale...)
Data is first staged in S3 then a [COPY](https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html) command is executed against the Redshift cluster to load it. Both operations are distributed: S3 write with Ray and COPY in the Redshift cluster
```
[12]:
```
```
# Connect to the Redshift instance
con = wr.redshift.connect("aws-sdk-pandas-redshift")
path = f"s3://{bucket}/stage/"
iam_role = "ROLE"
schema = "public"
table = "taxi"
wr.redshift.copy(
df=filtered_df,
path=path,
con=con,
schema=schema,
table=table,
mode="overwrite",
iam_role=iam_role,
max_rows_by_file=None,
)
```
```
2023-09-15 12:52:24,155 INFO streaming_executor.py:92 -- Executing DAG InputDataBuffer[Input] -> TaskPoolMapOperator[Write]
2023-09-15 12:52:24,157 INFO streaming_executor.py:93 -- Execution config: ExecutionOptions(resource_limits=ExecutionResources(cpu=None, gpu=None, object_store_memory=None), locality_with_output=False, preserve_order=False, actor_locality_enabled=True, verbose_progress=False)
2023-09-15 12:52:24,157 INFO streaming_executor.py:95 -- Tip: For detailed progress reporting, run `ray.data.DataContext.get_current().execution_options.verbose_progress = True`
```
#### … and UNLOAD it back[¶](#...-and-UNLOAD-it-back)
Parallel calls can also be leveraged when reading from the cluster. The [UNLOAD](https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html) command distributes query processing in Redshift to dump files in S3 which are then read in parallel into a dataframe
```
[13]:
```
```
df = wr.redshift.unload(
sql=f"SELECT * FROM {schema}.{table} where trip_distance > 30",
con=con,
iam_role=iam_role,
path=path,
keep_files=True,
)
df.head()
```
```
2023-09-15 12:56:53,838 INFO read_api.py:374 -- To satisfy the requested parallelism of 16, each read task output will be split into 8 smaller blocks.
```
```
[13]:
```
| | pickup_at | dropoff_at | trip_distance | rate_code_id | pickup_location_id | dropoff_location_id | payment_type | fare_amount | extra | mta_tax | tip_amount | tolls_amount | improvement_surcharge | total_amount | congestion_surcharge |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 2019-01-22 17:40:04 | 2019-01-22 18:33:48 | 30.469999 | 4 | 132 | 265 | 1 | 142.000000 | 1.0 | 0.5 | 28.760000 | 0.00 | 0.3 | 172.559998 | 0.0 |
| 1 | 2019-01-22 18:36:34 | 2019-01-22 19:52:50 | 33.330002 | 5 | 51 | 221 | 1 | 96.019997 | 0.0 | 0.5 | 0.000000 | 11.52 | 0.3 | 108.339996 | 0.0 |
| 2 | 2019-01-22 19:11:08 | 2019-01-22 20:16:10 | 32.599998 | 1 | 231 | 205 | 1 | 88.000000 | 1.0 | 0.5 | 0.000000 | 0.00 | 0.3 | 89.800003 | 0.0 |
| 3 | 2019-01-22 19:14:15 | 2019-01-22 20:09:57 | 36.220001 | 4 | 132 | 265 | 1 | 130.500000 | 1.0 | 0.5 | 27.610001 | 5.76 | 0.3 | 165.669998 | 0.0 |
| 4 | 2019-01-22 19:51:56 | 2019-01-22 20:48:39 | 33.040001 | 5 | 132 | 265 | 1 | 130.000000 | 0.0 | 0.5 | 29.410000 | 16.26 | 0.3 | 176.470001 | 0.0 |
#### Find a needle in a hay stack with S3 Select[¶](#Find-a-needle-in-a-hay-stack-with-S3-Select)
```
[3]:
```
```
import awswrangler as wr
# Run S3 Select query against all objects for 2019 year to find trips starting from a particular location
wr.s3.select_query(
sql="SELECT * FROM s3object s where s.\"pickup_location_id\" = 138",
path="s3://ursa-labs-taxi-data/2019/",
input_serialization="Parquet",
input_serialization_params={},
scan_range_chunk_size=32*1024*1024,
)
```
```
[3]:
```
| | vendor_id | pickup_at | dropoff_at | passenger_count | trip_distance | rate_code_id | store_and_fwd_flag | pickup_location_id | dropoff_location_id | payment_type | fare_amount | extra | mta_tax | tip_amount | tolls_amount | improvement_surcharge | total_amount | congestion_surcharge |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 2019-01-01T00:19:55.000Z | 2019-01-01T00:57:56.000Z | 1 | 12.30 | 1 | N | 138 | 50 | 1 | 38.0 | 0.5 | 0.5 | 4.00 | 5.76 | 0.3 | 49.060001 | NaN |
| 1 | 2 | 2019-01-01T00:48:10.000Z | 2019-01-01T01:36:58.000Z | 1 | 31.57 | 1 | N | 138 | 138 | 2 | 82.5 | 0.5 | 0.5 | 0.00 | 0.00 | 0.3 | 83.800003 | NaN |
| 2 | 1 | 2019-01-01T00:39:58.000Z | 2019-01-01T00:58:58.000Z | 2 | 8.90 | 1 | N | 138 | 224 | 1 | 26.0 | 0.5 | 0.5 | 8.25 | 5.76 | 0.3 | 41.310001 | NaN |
| 3 | 1 | 2019-01-01T00:07:45.000Z | 2019-01-01T00:34:12.000Z | 4 | 9.60 | 1 | N | 138 | 239 | 1 | 29.0 | 0.5 | 0.5 | 7.20 | 5.76 | 0.3 | 43.259998 | NaN |
| 4 | 2 | 2019-01-01T00:27:40.000Z | 2019-01-01T00:52:15.000Z | 1 | 12.89 | 1 | N | 138 | 87 | 2 | 36.0 | 0.5 | 0.5 | 0.00 | 0.00 | 0.3 | 37.299999 | NaN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 1167508 | 2 | 2019-06-30T23:42:24.000Z | 2019-07-01T00:10:28.000Z | 1 | 15.66 | 1 | N | 138 | 265 | 2 | 44.0 | 0.5 | 0.5 | 0.00 | 0.00 | 0.3 | 45.299999 | 0.0 |
| 1167509 | 2 | 2019-06-30T23:07:34.000Z | 2019-06-30T23:25:09.000Z | 1 | 7.38 | 1 | N | 138 | 262 | 1 | 22.0 | 0.5 | 0.5 | 7.98 | 6.12 | 0.3 | 39.900002 | 2.5 |
| 1167510 | 2 | 2019-06-30T23:00:36.000Z | 2019-06-30T23:20:18.000Z | 1 | 11.24 | 1 | N | 138 | 107 | 1 | 31.0 | 0.5 | 0.5 | 8.18 | 6.12 | 0.3 | 49.099998 | 2.5 |
| 1167511 | 1 | 2019-06-30T23:08:06.000Z | 2019-06-30T23:30:20.000Z | 1 | 7.50 | 1 | N | 138 | 229 | 1 | 24.0 | 3.0 | 0.5 | 4.00 | 0.00 | 0.3 | 31.799999 | 2.5 |
| 1167512 | 2 | 2019-06-30T23:15:13.000Z | 2019-06-30T23:35:18.000Z | 2 | 8.73 | 1 | N | 138 | 262 | 1 | 25.5 | 0.5 | 0.5 | 1.77 | 6.12 | 0.3 | 37.189999 | 2.5 |
1167513 rows × 18 columns
### 35 - Distributing Calls on Ray Remote Cluster[¶](#35---Distributing-Calls-on-Ray-Remote-Cluster)
AWS SDK for pandas supports distribution of specific calls on a cluster of EC2s using [ray](https://docs.ray.io/).
Note that this tutorial creates a cluster of EC2 nodes which will incur a charge in your account. Please make sure to delete the cluster at the end.
#### Install the library[¶](#Install-the-library)
```
[ ]:
```
```
!pip install "awswrangler[modin,ray]"
```
##### Configure and Build Ray Cluster on AWS[¶](#Configure-and-Build-Ray-Cluster-on-AWS)
#### Build Prerequisite Infrastructure[¶](#Build-Prerequisite-Infrastructure)
Click on the link below to provision an AWS CloudFormation stack. It builds a security group and IAM instance profile for the Ray Cluster to use. A valid CIDR range (encompassing your local machine IP) and a VPC ID are required.
[Launch CloudFormation Stack](https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=RayPrerequisiteInfra&templateURL=https://aws-data-wrangler-public-artifacts.s3.amazonaws.com/cloudformation/ray-prerequisite-infra.json)
#### Configure Ray Cluster Configuration[¶](#Configure-Ray-Cluster-Configuration)
Start with a cluster configuration file (YAML).
```
[ ]:
```
```
!touch config.yml
```
Replace all values to match your desired region, account number and name of resources deployed by the above CloudFormation Stack.
A limited set of AWS regions is currently supported (Python 3.8 and above). The example configuration below uses the AMI for `us-east-1`.
Then edit `config.yml` file with your custom configuration.
```
[ ]:
```
```
cluster_name: pandas-sdk-cluster
min_workers: 2
max_workers: 2
provider:
type: aws
region: us-east-1 # Change AWS region as necessary
availability_zone: us-east-1a,us-east-1b,us-east-1c # Change as necessary
security_group:
GroupName: ray-cluster
cache_stopped_nodes: False
available_node_types:
ray.head.default:
node_config:
InstanceType: m4.xlarge
IamInstanceProfile:
# Replace with your account id and profile name if you did not use the default value
Arn: arn:aws:iam::{ACCOUNT ID}:instance-profile/ray-cluster
# Replace ImageId if using a different region / python version
ImageId: ami-02ea7c238b7ba36af
TagSpecifications: # Optional tags
- ResourceType: "instance"
Tags:
- Key: Platform
Value: "ray"
ray.worker.default:
min_workers: 2
max_workers: 2
node_config:
InstanceType: m4.xlarge
IamInstanceProfile:
# Replace with your account id and profile name if you did not use the default value
Arn: arn:aws:iam::{ACCOUNT ID}:instance-profile/ray-cluster
# Replace ImageId if using a different region / python version
ImageId: ami-02ea7c238b7ba36af
TagSpecifications: # Optional tags
- ResourceType: "instance"
Tags:
- Key: Platform
Value: "ray"
setup_commands:
- pip install "awswrangler[modin,ray]"
```
#### Provision Ray Cluster[¶](#Provision-Ray-Cluster)
The command below creates a Ray cluster in your account based on the aforementioned config file. It consists of one head node and 2 workers (m4.xlarge EC2 instances). The command takes a few minutes to complete.
```
[ ]:
```
```
!ray up -y config.yml
```
Once the cluster is up and running, we set the `RAY_ADDRESS` environment variable to the head node Ray Cluster Address
```
[ ]:
```
```
import os, subprocess
head_node_ip = subprocess.check_output(['ray', 'get-head-ip', 'config.yml']).decode("utf-8").split("\n")[-2]
os.environ['RAY_ADDRESS'] = f"ray://{head_node_ip}:10001"
```
As a result, `awswrangler` API calls now run on the cluster, not on your local machine. The SDK detects the required dependencies for its distributed mode and parallelizes supported methods on the cluster.
```
[ ]:
```
```
import awswrangler as wr
import modin.pandas as pd
print(f"Execution engine: {wr.engine.get()}")
print(f"Memory format: {wr.memory_format.get()}")
```
Enter your bucket name:
```
[ ]:
```
```
bucket = "BUCKET_NAME"
```
#### Read & write some data at scale on the cluster[¶](#Read-&-write-some-data-at-scale-on-the-cluster)
```
[ ]:
```
```
# Read last 3 months of Taxi parquet compressed data (400 Mb)
df = wr.s3.read_parquet(path="s3://ursa-labs-taxi-data/2018/1*.parquet")
df["month"] = df["pickup_at"].dt.month
# Write it back to S3 partitioned by month
path = f"s3://{bucket}/taxi-data/"
database = "ray_test"
wr.catalog.create_database(name=database, exist_ok=True)
table = "nyc_taxi"
wr.s3.to_parquet(
df=df,
path=path,
dataset=True,
database=database,
table=table,
partition_cols=["month"],
)
```
#### Read it back via Athena UNLOAD[¶](#Read-it-back-via-Athena-UNLOAD)
The [UNLOAD](https://docs.aws.amazon.com/athena/latest/ug/unload.html) command distributes query processing in Athena to dump results in S3 which are then read in parallel into a dataframe
```
[ ]:
```
```
unload_path = f"s3://{bucket}/unload/nyc_taxi/"
# Athena UNLOAD requires that the S3 path is empty
# Note that s3.delete_objects is also a distributed call
wr.s3.delete_objects(unload_path)
wr.athena.read_sql_query(
f"SELECT * FROM {table}",
database=database,
ctas_approach=False,
unload_approach=True,
s3_output=unload_path,
)
```
The EC2 cluster must be terminated or it will incur a charge.
```
[ ]:
```
```
!ray down -y ./config.yml
```
[More Info on Ray Clusters on AWS](https://docs.ray.io/en/latest/cluster/vms/getting-started.html#launch-a-cluster-on-a-cloud-provider)
### 36 - Distributing Calls on Glue Interactive sessions[¶](#36---Distributing-Calls-on-Glue-Interactive-sessions)
AWS SDK for pandas is pre-loaded into [AWS Glue interactive sessions](https://docs.aws.amazon.com/glue/latest/dg/interactive-sessions-overview.html) with Ray kernel, making it by far the easiest way to experiment with the library at scale.
In AWS Glue Studio, choose `Jupyter Notebook` to create an AWS Glue interactive session:
Then select `Ray` as the kernel. The IAM role must trust the AWS Glue service principal.
Once the notebook is up and running you can import the library. Since we are running on AWS Glue with Ray, AWS SDK for pandas will automatically use the existing Ray cluster with no extra configuration needed.
#### Install the library[¶](#Install-the-library)
```
[ ]:
```
```
!pip install "awswrangler[modin]"
```
```
[1]:
```
```
import awswrangler as wr
```
```
Welcome to the Glue Interactive Sessions Kernel
For more information on available magic commands, please type %help in any new cell.
Please view our Getting Started page to access the most up-to-date information on the Interactive Sessions kernel: https://docs.aws.amazon.com/glue/latest/dg/interactive-sessions.html
Installed kernel version: 0.37.0
Authenticating with environment variables and user-defined glue_role_arn: arn:aws:iam::977422593089:role/AWSGlueMantaTests
Trying to create a Glue session for the kernel.
Worker Type: Z.2X
Number of Workers: 5
Session ID: 309824f0-bad7-49d0-a2b4-e1b8c7368c5f
Job Type: glueray
Applying the following default arguments:
--glue_kernel_version 0.37.0
--enable-glue-datacatalog true
Waiting for session 309824f0-bad7-49d0-a2b4-e1b8c7368c5f to get into ready status...
Session 309824f0-bad7-49d0-a2b4-e1b8c7368c5f has been created.
```
```
2022-11-21 16:24:03,136 INFO worker.py:1329 -- Connecting to existing Ray cluster at address: 2600:1f10:4674:6822:5b63:3324:984:3152:6379...
2022-11-21 16:24:03,144 INFO worker.py:1511 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265
```
```
[3]:
```
```
df = wr.s3.read_csv(path="s3://nyc-tlc/csv_backup/yellow_tripdata_2021-0*.csv")
```
```
Read progress: 100%|##########| 9/9 [00:10<00:00, 1.15s/it]
UserWarning: When using a pre-initialized Ray cluster, please ensure that the runtime env sets environment variable __MODIN_AUTOIMPORT_PANDAS__ to 1
```
```
[4]:
```
```
df.head()
```
```
   VendorID tpep_pickup_datetime  ...  total_amount  congestion_surcharge
0       1.0  2021-01-01 00:30:10  ...         11.80                   2.5
1       1.0  2021-01-01 00:51:20  ...          4.30                   0.0
2       1.0  2021-01-01 00:43:30  ...         51.95                   0.0
3       1.0  2021-01-01 00:15:48  ...         36.35                   0.0
4       2.0  2021-01-01 00:31:49  ...         24.36                   2.5
[5 rows x 18 columns]
```
To avoid incurring a charge, make sure to delete the Jupyter Notebook when you are done experimenting.
### 37 - Glue Data Quality[¶](#37---Glue-Data-Quality)
AWS Glue Data Quality helps you evaluate and monitor the quality of your data.
#### Create test data[¶](#Create-test-data)
First, let’s start by creating test data, writing it to S3, and registering it in the Glue Data Catalog.
```
[ ]:
```
```
import awswrangler as wr
import pandas as pd
glue_database = "aws_sdk_pandas"
glue_table = "my_glue_table"
path = "s3://BUCKET_NAME/my_glue_table/"
df = pd.DataFrame({"c0": [0, 1, 2], "c1": [0, 1, 2], "c2": [0, 0, 0]})
wr.s3.to_parquet(df, path, dataset=True, database=glue_database, table=glue_table, partition_cols=["c2"])
```
#### Start with recommended data quality rules[¶](#Start-with-recommended-data-quality-rules)
AWS Glue Data Quality can recommend a set of data quality rules so you can get started quickly.
Note: Running Glue Data Quality recommendation and evaluation tasks requires an IAM role. This role must trust the Glue principal and allow permissions to various resources including the Glue table and the S3 bucket where your data is stored. Moreover, data quality IAM actions must be granted. To find out more, check [Authorization](https://docs.aws.amazon.com/glue/latest/dg/data-quality-authorization.html).
```
[7]:
```
```
first_ruleset = "ruleset_1"
iam_role_arn = "arn:aws:iam::..." # IAM role assumed by the Glue Data Quality job to access resources
df_recommended_ruleset = wr.data_quality.create_recommendation_ruleset( # Creates a recommended ruleset
name=first_ruleset,
database=glue_database,
table=glue_table,
iam_role_arn=iam_role_arn,
number_of_workers=2,
)
df_recommended_ruleset
```
```
[7]:
```
| | rule_type | parameter | expression |
| --- | --- | --- | --- |
| 0 | RowCount | None | between 1 and 6 |
| 1 | IsComplete | "c0" | None |
| 2 | Uniqueness | "c0" | > 0.95 |
| 3 | ColumnValues | "c0" | <= 2 |
| 4 | IsComplete | "c1" | None |
| 5 | Uniqueness | "c1" | > 0.95 |
| 6 | ColumnValues | "c1" | <= 2 |
| 7 | IsComplete | "c2" | None |
| 8 | ColumnValues | "c2" | in ["0"] |
#### Update the recommended rules[¶](#Update-the-recommended-rules)
Recommended rulesets are not perfect and you are likely to modify them or create your own.
```
[17]:
```
```
# Append and update rules
df_updated_ruleset = df_recommended_ruleset.append(
{"rule_type": "Uniqueness", "parameter": '"c2"', "expression": "> 0.95"}, ignore_index=True
)
df_updated_ruleset.at[8, "expression"] = "in [0, 1, 2]"
# Update the existing ruleset (upsert)
wr.data_quality.update_ruleset(
name=first_ruleset,
df_rules=df_updated_ruleset,
mode="upsert", # update existing or insert new rules to the ruleset
)
wr.data_quality.get_ruleset(name=first_ruleset)
```
```
[17]:
```
| | rule_type | parameter | expression |
| --- | --- | --- | --- |
| 0 | RowCount | None | between 1 and 6 |
| 1 | IsComplete | "c0" | None |
| 2 | Uniqueness | "c0" | > 0.95 |
| 3 | ColumnValues | "c0" | <= 2 |
| 4 | IsComplete | "c1" | None |
| 5 | Uniqueness | "c1" | > 0.95 |
| 6 | ColumnValues | "c1" | <= 2 |
| 7 | IsComplete | "c2" | None |
| 8 | ColumnValues | "c2" | in [0, 1, 2] |
| 9 | Uniqueness | "c2" | > 0.95 |
#### Run a data quality task[¶](#Run-a-data-quality-task)
The ruleset can now be evaluated against the data. A cluster with 2 workers is used for the run. It returns a report with `PASS`/`FAIL` results for each rule.
```
[20]:
```
```
wr.data_quality.evaluate_ruleset(
name=first_ruleset,
iam_role_arn=iam_role_arn,
number_of_workers=2,
)
```
```
[20]:
```
| | Name | Description | Result | ResultId | EvaluationMessage |
| --- | --- | --- | --- | --- | --- |
| 0 | Rule_1 | RowCount between 1 and 6 | PASS | dqresult-be413b527c0e5520ad843323fecd9cf2e2edbdd5 | NaN |
| 1 | Rule_2 | IsComplete "c0" | PASS | dqresult-be413b527c0e5520ad843323fecd9cf2e2edbdd5 | NaN |
| 2 | Rule_3 | Uniqueness "c0" > 0.95 | PASS | dqresult-be413b527c0e5520ad843323fecd9cf2e2edbdd5 | NaN |
| 3 | Rule_4 | ColumnValues "c0" <= 2 | PASS | dqresult-be413b527c0e5520ad843323fecd9cf2e2edbdd5 | NaN |
| 4 | Rule_5 | IsComplete "c1" | PASS | dqresult-be413b527c0e5520ad843323fecd9cf2e2edbdd5 | NaN |
| 5 | Rule_6 | Uniqueness "c1" > 0.95 | PASS | dqresult-be413b527c0e5520ad843323fecd9cf2e2edbdd5 | NaN |
| 6 | Rule_7 | ColumnValues "c1" <= 2 | PASS | dqresult-be413b527c0e5520ad843323fecd9cf2e2edbdd5 | NaN |
| 7 | Rule_8 | IsComplete "c2" | PASS | dqresult-be413b527c0e5520ad843323fecd9cf2e2edbdd5 | NaN |
| 8 | Rule_9 | ColumnValues "c2" in [0,1,2] | PASS | dqresult-be413b527c0e5520ad843323fecd9cf2e2edbdd5 | NaN |
| 9 | Rule_10 | Uniqueness "c2" > 0.95 | FAIL | dqresult-be413b527c0e5520ad843323fecd9cf2e2edbdd5 | Value: 0.0 does not meet the constraint requir... |
#### Create ruleset from Data Quality Definition Language definition[¶](#Create-ruleset-from-Data-Quality-Definition-Language-definition)
The Data Quality Definition Language (DQDL) is a domain specific language that you can use to define Data Quality rules. For the full syntax reference, see [DQDL](https://docs.aws.amazon.com/glue/latest/dg/dqdl.html).
```
[21]:
```
```
second_ruleset = "ruleset_2"
dqdl_rules = (
"Rules = ["
"RowCount between 1 and 6,"
'IsComplete "c0",'
'Uniqueness "c0" > 0.95,'
'ColumnValues "c0" <= 2,'
'IsComplete "c1",'
'Uniqueness "c1" > 0.95,'
'ColumnValues "c1" <= 2,'
'IsComplete "c2",'
'ColumnValues "c2" <= 1'
"]"
)
wr.data_quality.create_ruleset(
name=second_ruleset,
database=glue_database,
table=glue_table,
dqdl_rules=dqdl_rules,
)
```
#### Create or update a ruleset from a data frame[¶](#Create-or-update-a-ruleset-from-a-data-frame)
AWS SDK for pandas also enables you to create or update a ruleset from a pandas data frame.
```
[24]:
```
```
third_ruleset = "ruleset_3"
df_rules = pd.DataFrame({
"rule_type": ["RowCount", "ColumnCorrelation", "Uniqueness"],
"parameter": [None, '"c0" "c1"', '"c0"'],
"expression": ["between 2 and 8", "> 0.8", "> 0.95"],
})
wr.data_quality.create_ruleset(
name=third_ruleset,
df_rules=df_rules,
database=glue_database,
table=glue_table,
)
wr.data_quality.get_ruleset(name=third_ruleset)
```
```
[24]:
```
| | rule_type | parameter | expression |
| --- | --- | --- | --- |
| 0 | RowCount | None | between 2 and 8 |
| 1 | ColumnCorrelation | "c0" "c1" | > 0.8 |
| 2 | Uniqueness | "c0" | > 0.95 |
#### Get multiple rulesets[¶](#Get-multiple-rulesets)
```
[25]:
```
```
wr.data_quality.get_ruleset(name=[first_ruleset, second_ruleset, third_ruleset])
```
```
[25]:
```
| | rule_type | parameter | expression | ruleset |
| --- | --- | --- | --- | --- |
| 0 | RowCount | None | between 1 and 6 | ruleset_1 |
| 1 | IsComplete | "c0" | None | ruleset_1 |
| 2 | Uniqueness | "c0" | > 0.95 | ruleset_1 |
| 3 | ColumnValues | "c0" | <= 2 | ruleset_1 |
| 4 | IsComplete | "c1" | None | ruleset_1 |
| 5 | Uniqueness | "c1" | > 0.95 | ruleset_1 |
| 6 | ColumnValues | "c1" | <= 2 | ruleset_1 |
| 7 | IsComplete | "c2" | None | ruleset_1 |
| 8 | ColumnValues | "c2" | in [0, 1, 2] | ruleset_1 |
| 9 | Uniqueness | "c2" | > 0.95 | ruleset_1 |
| 0 | RowCount | None | between 1 and 6 | ruleset_2 |
| 1 | IsComplete | "c0" | None | ruleset_2 |
| 2 | Uniqueness | "c0" | > 0.95 | ruleset_2 |
| 3 | ColumnValues | "c0" | <= 2 | ruleset_2 |
| 4 | IsComplete | "c1" | None | ruleset_2 |
| 5 | Uniqueness | "c1" | > 0.95 | ruleset_2 |
| 6 | ColumnValues | "c1" | <= 2 | ruleset_2 |
| 7 | IsComplete | "c2" | None | ruleset_2 |
| 8 | ColumnValues | "c2" | <= 1 | ruleset_2 |
| 0 | RowCount | None | between 2 and 8 | ruleset_3 |
| 1 | ColumnCorrelation | "c0" "c1" | > 0.8 | ruleset_3 |
| 2 | Uniqueness | "c0" | > 0.95 | ruleset_3 |
#### Evaluate Data Quality for a given partition[¶](#Evaluate-Data-Quality-for-a-given-partition)
A data quality evaluation run can be limited to specific partition(s) by leveraging the `pushDownPredicate` expression in the `additional_options` argument.
```
[26]:
```
```
df = pd.DataFrame({"c0": [2, 0, 1], "c1": [1, 0, 2], "c2": [1, 1, 1]})
wr.s3.to_parquet(df, path, dataset=True, database=glue_database, table=glue_table, partition_cols=["c2"])
wr.data_quality.evaluate_ruleset(
name=third_ruleset,
iam_role_arn=iam_role_arn,
number_of_workers=2,
additional_options={
"pushDownPredicate": "(c2 == '1')",
},
)
```
```
[26]:
```
| | Name | Description | Result | ResultId | EvaluationMessage |
| --- | --- | --- | --- | --- | --- |
| 0 | Rule_1 | RowCount between 2 and 8 | PASS | dqresult-f676cfe0345aa93f492e3e3c3d6cf1ad99b84dc6 | NaN |
| 1 | Rule_2 | ColumnCorrelation "c0" "c1" > 0.8 | FAIL | dqresult-f676cfe0345aa93f492e3e3c3d6cf1ad99b84dc6 | Value: 0.5 does not meet the constraint requir... |
| 2 | Rule_3 | Uniqueness "c0" > 0.95 | PASS | dqresult-f676cfe0345aa93f492e3e3c3d6cf1ad99b84dc6 | NaN |
### 38 - OpenSearch Serverless[¶](#38---OpenSearch-Serverless)
Amazon OpenSearch Serverless is an on-demand serverless configuration for Amazon OpenSearch Service.
#### Create collection[¶](#Create-collection)
A collection in Amazon OpenSearch Serverless is a logical grouping of one or more indexes that represent an analytics workload.
Collections must have an assigned encryption policy, network policy, and a matching data access policy that grants permission to its resources.
```
[ ]:
```
```
# Install the optional modules first
!pip install 'awswrangler[opensearch]'
```
```
[1]:
```
```
import awswrangler as wr
```
```
[8]:
```
```
data_access_policy = [
{
"Rules": [
{
"ResourceType": "index",
"Resource": [
"index/my-collection/*",
],
"Permission": [
"aoss:*",
],
},
{
"ResourceType": "collection",
"Resource": [
"collection/my-collection",
],
"Permission": [
"aoss:*",
],
},
],
"Principal": [
wr.sts.get_current_identity_arn(),
],
}
]
```
AWS SDK for pandas can create default network and encryption policies based on user input.
By default, the network policy allows public access to the collection, and the encryption policy encrypts the collection using an AWS-managed KMS key.
Create a collection, and a corresponding data, network, and access policies:
```
[10]:
```
```
collection = wr.opensearch.create_collection(
name="my-collection",
data_policy=data_access_policy,
)
collection_endpoint = collection["collectionEndpoint"]
```
The call waits until the collection and the corresponding policies are created and active, then returns.
To create a collection encrypted with a customer-managed KMS key and attached to a VPC, provide the KMS key ARN and/or VPC endpoints:
```
[ ]:
```
```
kms_key_arn = "arn:aws:kms:..."
vpc_endpoint = "vpce-..."
collection = wr.opensearch.create_collection(
name="my-secure-collection",
data_policy=data_access_policy,
kms_key_arn=kms_key_arn,
vpc_endpoints=[vpc_endpoint],
)
```
##### Connect[¶](#Connect)
Connect to the collection endpoint:
```
[12]:
```
```
client = wr.opensearch.connect(host=collection_endpoint)
```
##### Create index[¶](#Create-index)
To create an index, run:
```
[13]:
```
```
index = "my-index-1"
wr.opensearch.create_index(
client=client,
index=index,
)
```
```
[13]:
```
```
{'acknowledged': True, 'shards_acknowledged': True, 'index': 'my-index-1'}
```
##### Index documents[¶](#Index-documents)
To index documents:
```
[25]:
```
```
wr.opensearch.index_documents(
client,
documents=[{"_id": "1", "name": "John"}, {"_id": "2", "name": "George"}, {"_id": "3", "name": "Julia"}],
index=index,
)
```
```
Indexing: 100% (3/3)|####################################|Elapsed Time: 0:00:12
```
```
[25]:
```
```
{'success': 3, 'errors': []}
```
It is also possible to index Pandas data frames:
```
[26]:
```
```
import pandas as pd
df = pd.DataFrame(
[{"_id": "1", "name": "John", "tags": ["foo", "bar"]}, {"_id": "2", "name": "George", "tags": ["foo"]}]
)
wr.opensearch.index_df(
client,
df=df,
index="index-df",
)
```
```
Indexing: 100% (2/2)|####################################|Elapsed Time: 0:00:12
```
```
[26]:
```
```
{'success': 2, 'errors': []}
```
AWS SDK for pandas also supports indexing JSON and CSV documents.
For more examples, refer to the [031 - OpenSearch tutorial](https://aws-sdk-pandas.readthedocs.io/en/latest/tutorials/031%20-%20OpenSearch.html).
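For illustration, here is a minimal sketch of how JSON and CSV files could be indexed with `wr.opensearch.index_json` and `wr.opensearch.index_csv`; the file paths below are placeholders, and the linked tutorial contains complete examples:
```
# Minimal sketch; "documents.json" and "documents.csv" are placeholder paths (local or S3)
wr.opensearch.index_json(
    client,
    path="documents.json",
    index="index-json",
)
wr.opensearch.index_csv(
    client,
    path="documents.csv",
    index="index-csv",
)
```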
##### Search[¶](#Search)
Search using search DSL:
```
[27]:
```
```
wr.opensearch.search(
client,
index=index,
search_body={
"query": {
"match": {
"name": "Julia"
}
}
}
)
```
```
[27]:
```
| | _id | name |
| --- | --- | --- |
| 0 | 3 | Julia |
##### Delete index[¶](#Delete-index)
To delete an index, run:
```
[ ]:
```
```
wr.opensearch.delete_index(
client=client,
index=index
)
```
### 39 - Athena Iceberg[¶](#39---Athena-Iceberg)
Athena supports read, time travel, write, and DDL queries for Apache Iceberg tables that use the Apache Parquet format for data and the AWS Glue catalog for their metastore. More in [User Guide](https://docs.aws.amazon.com/athena/latest/ug/querying-iceberg.html).
#### Create Iceberg table[¶](#Create-Iceberg-table)
```
[50]:
```
```
import getpass

bucket_name = getpass.getpass()
```
```
[2]:
```
```
import awswrangler as wr
glue_database = "aws_sdk_pandas"
glue_table = "iceberg_test"
path = f"s3://{bucket_name}/iceberg_test/"
temp_path = f"s3://{bucket_name}/iceberg_test_temp/"
# Cleanup table before create
wr.catalog.delete_table_if_exists(database=glue_database, table=glue_table)
```
```
[2]:
```
```
True
```
#### Create table & insert data[¶](#Create-table-&-insert-data)
It is possible to insert a Pandas data frame into an Iceberg table using `wr.athena.to_iceberg`. If the table does not exist, it will be created:
```
[ ]:
```
```
import pandas as pd
df = pd.DataFrame({"id": [1, 2, 3], "name": ["John", "Lily", "Richard"]})
wr.athena.to_iceberg(
df=df,
database=glue_database,
table=glue_table,
table_location=path,
temp_path=temp_path,
)
```
Alternatively, it is also possible to insert by directly running `INSERT INTO ... VALUES`:
```
[53]:
```
```
wr.athena.start_query_execution(
sql=f"INSERT INTO {glue_table} VALUES (1,'John'), (2, 'Lily'), (3, 'Richard')",
database=glue_database,
wait=True,
)
```
```
[53]:
```
```
{'QueryExecutionId': 'e339fcd2-9db1-43ac-bb9e-9730e6395b51',
'Query': "INSERT INTO iceberg_test VALUES (1,'John'), (2, 'Lily'), (3, 'Richard')",
'StatementType': 'DML',
'ResultConfiguration': {'OutputLocation': 's3://aws-athena-query-results-...-us-east-1/e339fcd2-9db1-43ac-bb9e-9730e6395b51'},
'ResultReuseConfiguration': {'ResultReuseByAgeConfiguration': {'Enabled': False}},
'QueryExecutionContext': {'Database': 'aws_sdk_pandas'},
'Status': {'State': 'SUCCEEDED',
'SubmissionDateTime': datetime.datetime(2023, 3, 16, 10, 40, 8, 612000, tzinfo=tzlocal()),
'CompletionDateTime': datetime.datetime(2023, 3, 16, 10, 40, 11, 143000, tzinfo=tzlocal())},
'Statistics': {'EngineExecutionTimeInMillis': 2242,
'DataScannedInBytes': 0,
'DataManifestLocation': 's3://aws-athena-query-results-...-us-east-1/e339fcd2-9db1-43ac-bb9e-9730e6395b51-manifest.csv',
'TotalExecutionTimeInMillis': 2531,
'QueryQueueTimeInMillis': 241,
'QueryPlanningTimeInMillis': 179,
'ServiceProcessingTimeInMillis': 48,
'ResultReuseInformation': {'ReusedPreviousResult': False}},
'WorkGroup': 'primary',
'EngineVersion': {'SelectedEngineVersion': 'Athena engine version 3',
'EffectiveEngineVersion': 'Athena engine version 3'}}
```
```
[54]:
```
```
wr.athena.start_query_execution(
sql=f"INSERT INTO {glue_table} VALUES (4,'Anne'), (5, 'Jacob'), (6, 'Leon')",
database=glue_database,
wait=True,
)
```
```
[54]:
```
```
{'QueryExecutionId': '922c8f02-4c00-4050-b4a7-7016809efa2b',
'Query': "INSERT INTO iceberg_test VALUES (4,'Anne'), (5, 'Jacob'), (6, 'Leon')",
'StatementType': 'DML',
'ResultConfiguration': {'OutputLocation': 's3://aws-athena-query-results-...-us-east-1/922c8f02-4c00-4050-b4a7-7016809efa2b'},
'ResultReuseConfiguration': {'ResultReuseByAgeConfiguration': {'Enabled': False}},
'QueryExecutionContext': {'Database': 'aws_sdk_pandas'},
'Status': {'State': 'SUCCEEDED',
'SubmissionDateTime': datetime.datetime(2023, 3, 16, 10, 40, 24, 582000, tzinfo=tzlocal()),
'CompletionDateTime': datetime.datetime(2023, 3, 16, 10, 40, 27, 352000, tzinfo=tzlocal())},
'Statistics': {'EngineExecutionTimeInMillis': 2414,
'DataScannedInBytes': 0,
'DataManifestLocation': 's3://aws-athena-query-results-...-us-east-1/922c8f02-4c00-4050-b4a7-7016809efa2b-manifest.csv',
'TotalExecutionTimeInMillis': 2770,
'QueryQueueTimeInMillis': 329,
'QueryPlanningTimeInMillis': 189,
'ServiceProcessingTimeInMillis': 27,
'ResultReuseInformation': {'ReusedPreviousResult': False}},
'WorkGroup': 'primary',
'EngineVersion': {'SelectedEngineVersion': 'Athena engine version 3',
'EffectiveEngineVersion': 'Athena engine version 3'}}
```
#### Query[¶](#Query)
```
[65]:
```
```
wr.athena.read_sql_query(
sql=f'SELECT * FROM "{glue_table}"',
database=glue_database,
ctas_approach=False,
unload_approach=False,
)
```
```
[65]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | John |
| 1 | 4 | Anne |
| 2 | 2 | Lily |
| 3 | 3 | Richard |
| 4 | 5 | Jacob |
| 5 | 6 | Leon |
#### Read query metadata[¶](#Read-query-metadata)
In a SELECT query, you can use the following properties after `table_name` to query Iceberg table metadata:
* `$files` Shows a table’s current data files
* `$manifests` Shows a table’s current file manifests
* `$history` Shows a table’s history
* `$partitions` Shows a table’s current partitions
```
[55]:
```
```
wr.athena.read_sql_query(
sql=f'SELECT * FROM "{glue_table}$files"',
database=glue_database,
ctas_approach=False,
unload_approach=False,
)
```
```
[55]:
```
| | content | file_path | file_format | record_count | file_size_in_bytes | column_sizes | value_counts | null_value_counts | nan_value_counts | lower_bounds | upper_bounds | key_metadata | split_offsets | equality_ids |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | s3://.../iceberg_test/data/089a... | PARQUET | 3 | 360 | {1=48, 2=63} | {1=3, 2=3} | {1=0, 2=0} | {} | {1=1, 2=John} | {1=3, 2=Richard} | <NA> | NaN | NaN |
| 1 | 0 | s3://.../iceberg_test/data/5736... | PARQUET | 3 | 355 | {1=48, 2=61} | {1=3, 2=3} | {1=0, 2=0} | {} | {1=4, 2=Anne} | {1=6, 2=Leon} | <NA> | NaN | NaN |
```
[56]:
```
```
wr.athena.read_sql_query(
sql=f'SELECT * FROM "{glue_table}$manifests"',
database=glue_database,
ctas_approach=False,
unload_approach=False,
)
```
```
[56]:
```
| | path | length | partition_spec_id | added_snapshot_id | added_data_files_count | added_rows_count | existing_data_files_count | existing_rows_count | deleted_data_files_count | deleted_rows_count | partitions |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | s3://.../iceberg_test/metadata/... | 6538 | 0 | 4379263637983206651 | 1 | 3 | 0 | 0 | 0 | 0 | [] |
| 1 | s3://.../iceberg_test/metadata/... | 6548 | 0 | 2934717851675145063 | 1 | 3 | 0 | 0 | 0 | 0 | [] |
```
[58]:
```
```
df = wr.athena.read_sql_query(
sql=f'SELECT * FROM "{glue_table}$history"',
database=glue_database,
ctas_approach=False,
unload_approach=False,
)
# Save snapshot id
snapshot_id = df.snapshot_id[0]
df
```
```
[58]:
```
| | made_current_at | snapshot_id | parent_id | is_current_ancestor |
| --- | --- | --- | --- | --- |
| 0 | 2023-03-16 09:40:10.438000+00:00 | 2934717851675145063 | <NA> | True |
| 1 | 2023-03-16 09:40:26.754000+00:00 | 4379263637983206651 | 2934717851675144704 | True |
```
[59]:
```
```
wr.athena.read_sql_query(
sql=f'SELECT * FROM "{glue_table}$partitions"',
database=glue_database,
ctas_approach=False,
unload_approach=False,
)
```
```
[59]:
```
| | record_count | file_count | total_size | data |
| --- | --- | --- | --- | --- |
| 0 | 6 | 2 | 715 | {id={min=1, max=6, null_count=0, nan_count=nul... |
#### Time travel[¶](#Time-travel)
```
[60]:
```
```
wr.athena.read_sql_query(
sql=f"SELECT * FROM {glue_table} FOR TIMESTAMP AS OF (current_timestamp - interval '5' second)",
database=glue_database,
)
```
```
[60]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | John |
| 1 | 4 | Anne |
| 2 | 2 | Lily |
| 3 | 3 | Richard |
| 4 | 5 | Jacob |
| 5 | 6 | Leon |
#### Version travel[¶](#Version-travel)
```
[61]:
```
```
wr.athena.read_sql_query(
sql=f"SELECT * FROM {glue_table} FOR VERSION AS OF {snapshot_id}",
database=glue_database,
)
```
```
[61]:
```
| | id | name |
| --- | --- | --- |
| 0 | 1 | John |
| 1 | 2 | Lily |
| 2 | 3 | Richard |
#### Optimize[¶](#Optimize)
The `OPTIMIZE table REWRITE DATA` compaction action rewrites data files into a more optimized layout based on their size and number of associated delete files. For syntax and table property details, see [OPTIMIZE](https://docs.aws.amazon.com/athena/latest/ug/optimize-statement.html).
```
[62]:
```
```
wr.athena.start_query_execution(
sql=f"OPTIMIZE {glue_table} REWRITE DATA USING BIN_PACK",
database=glue_database,
wait=True,
)
```
```
[62]:
```
```
{'QueryExecutionId': '94666790-03ae-42d7-850a-fae99fa79a68',
'Query': 'OPTIMIZE iceberg_test REWRITE DATA USING BIN_PACK',
'StatementType': 'DDL',
'ResultConfiguration': {'OutputLocation': 's3://aws-athena-query-results-...-us-east-1/tables/94666790-03ae-42d7-850a-fae99fa79a68'},
'ResultReuseConfiguration': {'ResultReuseByAgeConfiguration': {'Enabled': False}},
'QueryExecutionContext': {'Database': 'aws_sdk_pandas'},
'Status': {'State': 'SUCCEEDED',
'SubmissionDateTime': datetime.datetime(2023, 3, 16, 10, 49, 42, 857000, tzinfo=tzlocal()),
'CompletionDateTime': datetime.datetime(2023, 3, 16, 10, 49, 45, 655000, tzinfo=tzlocal())},
'Statistics': {'EngineExecutionTimeInMillis': 2622,
'DataScannedInBytes': 220,
'DataManifestLocation': 's3://aws-athena-query-results-...-us-east-1/tables/94666790-03ae-42d7-850a-fae99fa79a68-manifest.csv',
'TotalExecutionTimeInMillis': 2798,
'QueryQueueTimeInMillis': 124,
'QueryPlanningTimeInMillis': 252,
'ServiceProcessingTimeInMillis': 52,
'ResultReuseInformation': {'ReusedPreviousResult': False}},
'WorkGroup': 'primary',
'EngineVersion': {'SelectedEngineVersion': 'Athena engine version 3',
'EffectiveEngineVersion': 'Athena engine version 3'}}
```
#### Vacuum[¶](#Vacuum)
`VACUUM` performs [snapshot expiration](https://iceberg.apache.org/docs/latest/spark-procedures/#expire_snapshots) and [orphan file removal](https://iceberg.apache.org/docs/latest/spark-procedures/#remove_orphan_files). These actions reduce metadata size and remove files not in the current table state that are also older than the retention period specified for the table. For syntax details, see [VACUUM](https://docs.aws.amazon.com/athena/latest/ug/vacuum-statement.html).
```
[64]:
```
```
wr.athena.start_query_execution(
sql=f"VACUUM {glue_table}",
database=glue_database,
wait=True,
)
```
```
[64]:
```
```
{'QueryExecutionId': '717a7de6-b873-49c7-b744-1b0b402f24c9',
'Query': 'VACUUM iceberg_test',
'StatementType': 'DML',
'ResultConfiguration': {'OutputLocation': 's3://aws-athena-query-results-...-us-east-1/717a7de6-b873-49c7-b744-1b0b402f24c9.csv'},
'ResultReuseConfiguration': {'ResultReuseByAgeConfiguration': {'Enabled': False}},
'QueryExecutionContext': {'Database': 'aws_sdk_pandas'},
'Status': {'State': 'SUCCEEDED',
'SubmissionDateTime': datetime.datetime(2023, 3, 16, 10, 50, 41, 14000, tzinfo=tzlocal()),
'CompletionDateTime': datetime.datetime(2023, 3, 16, 10, 50, 43, 441000, tzinfo=tzlocal())},
'Statistics': {'EngineExecutionTimeInMillis': 2229,
'DataScannedInBytes': 0,
'TotalExecutionTimeInMillis': 2427,
'QueryQueueTimeInMillis': 153,
'QueryPlanningTimeInMillis': 30,
'ServiceProcessingTimeInMillis': 45,
'ResultReuseInformation': {'ReusedPreviousResult': False}},
'WorkGroup': 'primary',
'EngineVersion': {'SelectedEngineVersion': 'Athena engine version 3',
'EffectiveEngineVersion': 'Athena engine version 3'}}
```
### 40 - EMR Serverless[¶](#40---EMR-Serverless)
Amazon EMR Serverless is a new deployment option for Amazon EMR. EMR Serverless provides a serverless runtime environment that simplifies the operation of analytics applications that use the latest open source frameworks, such as Apache Spark and Apache Hive. With EMR Serverless, you don’t have to configure, optimize, secure, or operate clusters to run applications with these frameworks. More in [User Guide](https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/emr-serverless.html).
#### Spark[¶](#Spark)
##### Create a Spark application[¶](#Create-a-Spark-application)
```
[1]:
```
```
import awswrangler as wr
spark_application_id: str = wr.emr_serverless.create_application(
name="my-spark-application",
application_type="Spark",
release_label="emr-6.10.0",
)
```
```
/var/folders/_n/7dm3ff5d5fb01gjt6ms150km0000gs/T/ipykernel_11468/3968622978.py:3: SDKPandasExperimentalWarning: `create_application`: This API is experimental and may change in future AWS SDK for Pandas releases.
spark_application_id: str = wr.emr_serverless.create_application(
```
##### Run a Spark job[¶](#Run-a-Spark-job)
```
[ ]:
```
```
iam_role_arn = "arn:aws:iam::...:role/..."
wr.emr_serverless.run_job(
application_id=spark_application_id,
execution_role_arn=iam_role_arn,
job_driver_args={
"entryPoint": "/usr/lib/spark/examples/jars/spark-examples.jar",
"entryPointArguments": ["1"],
"sparkSubmitParameters": "--class org.apache.spark.examples.SparkPi --conf spark.executor.cores=4 --conf spark.executor.memory=20g --conf spark.driver.cores=4 --conf spark.driver.memory=8g --conf spark.executor.instances=1",
},
job_type="Spark",
)
```
#### Hive[¶](#Hive)
##### Create a Hive application[¶](#Create-a-Hive-application)
```
[2]:
```
```
hive_application_id: str = wr.emr_serverless.create_application(
name="my-hive-application",
application_type="Hive",
release_label="emr-6.10.0",
)
```
```
/var/folders/_n/7dm3ff5d5fb01gjt6ms150km0000gs/T/ipykernel_11468/3826130602.py:1: SDKPandasExperimentalWarning: `create_application`: This API is experimental and may change in future AWS SDK for Pandas releases.
hive_application_id: str = wr.emr_serverless.create_application(
```
##### Run a Hive job[¶](#Run-a-Hive-job)
```
[ ]:
```
```
path = "s3://my-bucket/path"
wr.emr_serverless.run_job(
application_id=hive_application_id,
execution_role_arn="arn:aws:iam::...:role/...",
job_driver_args={
"query": f"{path}/hive-query.ql",
"parameters": f"--hiveconf hive.exec.scratchdir={path}/scratch --hiveconf hive.metastore.warehouse.dir={path}/warehouse",
},
job_type="Hive",
)
```
### 41 - Apache Spark on Amazon Athena[¶](#41---Apache-Spark-on-Amazon-Athena)
Amazon Athena makes it easy to interactively run data analytics and exploration using Apache Spark without the need to plan for, configure, or manage resources. Running Apache Spark applications on Athena means submitting Spark code for processing and receiving the results directly without the need for additional configuration.
More in [User Guide](https://docs.aws.amazon.com/athena/latest/ug/notebooks-spark.html).
#### Run a Spark calculation[¶](#Run-a-Spark-calculation)
For this tutorial, you will need a Spark-enabled Athena workgroup. For the steps to create one, visit [Getting started with Apache Spark on Amazon Athena.](https://docs.aws.amazon.com/athena/latest/ug/notebooks-spark-getting-started.html#notebooks-spark-getting-started-creating-a-spark-enabled-workgroup)
```
[ ]:
```
```
import awswrangler as wr
workgroup: str = "my-spark-workgroup"
result = wr.athena.run_spark_calculation(
code="print(spark)",
workgroup=workgroup,
)
```
#### Create and re-use a session[¶](#Create-and-re-use-a-session)
It is possible to create a session and re-use it to launch multiple calculations with the same resources. To create a session, use:
```
[ ]:
```
```
session_id: str = wr.athena.create_spark_session(
workgroup=workgroup,
)
```
Now, to use the session, pass `session_id`:
```
[ ]:
```
```
result = wr.athena.run_spark_calculation(
code="print(spark)",
workgroup=workgroup,
session_id=session_id,
)
```
Architectural Decision Records[¶](#architectural-decision-records)
---
A collection of records for “architecturally significant” decisions:
those that affect the structure, non-functional characteristics, dependencies, interfaces, or construction techniques.
These decisions are made by the team which maintains *AWS SDK for pandas*.
However, suggestions can be submitted by any contributor via issues or pull requests.
Note
You can also find all ADRs on [GitHub](https://github.com/aws/aws-sdk-pandas/tree/main/docs/adr).
### 1. Record architecture decisions[¶](#record-architecture-decisions)
Date: 2023-03-08
#### Status[¶](#status)
Accepted
#### Context[¶](#context)
We need to record the architectural decisions made on this project.
#### Decision[¶](#decision)
We will use Architecture Decision Records, as [described by Michael Nygard](http://thinkrelevance.com/blog/2011/11/15/documenting-architecture-decisions).
#### Consequences[¶](#consequences)
See Michael Nygard’s article, linked above. For a lightweight ADR toolset, see Nat Pryce’s [adr-tools](https://github.com/npryce/adr-tools).
### 2. Handling unsupported arguments in distributed mode[¶](#handling-unsupported-arguments-in-distributed-mode)
Date: 2023-03-09
#### Status[¶](#status)
Accepted
#### Context[¶](#context)
Many of the API functions allow the user to pass their own `boto3` session, which will then be used by all the underlying `boto3` calls. With distributed computing, one of the limitations we have is that we cannot pass the `boto3` session to the worker nodes.
Boto3 sessions are not thread-safe, and therefore cannot be passed to Ray workers. The credentials behind a `boto3` session cannot be sent to Ray workers either, since sending credentials over the network is considered a security risk.
This raises the question of what to do when, in distributed mode, the customer passes arguments that are normally supported, but aren’t supported in distributed mode.
#### Decision[¶](#decision)
When a user passes arguments that are unsupported by distributed mode, the function should fail immediately.
The main alternative to this approach would be if a parameter such as a `boto3` session is passed, we should use it where possible. This could result in a situation where, when reading Parquet files from S3, the process of listing the files uses the `boto3` session whereas the reading of the Parquet files doesn’t. This could result in inconsistent behavior, as part of the function uses the extra parameters while the other part of it doesn’t.
Another alternative would simply be to ignore the unsupported parameters, while potentially outputting a warning. The main issue with this approach is that if a customer tells our API functions to use certain parameters, they expect those parameters to be used. By ignoring them, the AWS SDK for pandas API would be doing something different from what the customer asked, without properly notifying them, and would thus lose the customer's trust.
#### Consequences[¶](#consequences)
In [PR#2501](https://github.com/aws/aws-sdk-pandas/pull/2051), the `validate_distributed_kwargs` annotation was introduced which can check for the presence of arguments that are unsupported in the distributed mode.
The annotation has also been applied for arguments such as `s3_additional_kwargs` and `version_id` when reading/writing data on S3.
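The fail-fast behaviour described above can be sketched as a decorator along the following lines. This is an illustrative sketch only: the names, signatures and engine check below are hypothetical and do not reflect the actual awswrangler internals.
```
# Illustrative sketch only: names and signatures are hypothetical,
# not the actual awswrangler internals.
import functools
from typing import Any, Callable


def distributed_mode_enabled() -> bool:
    # Stand-in for the real engine check (e.g. "is the Ray engine active?").
    return True


def validate_distributed_kwargs(unsupported: tuple) -> Callable:
    """Fail fast when unsupported arguments are passed in distributed mode."""

    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            if distributed_mode_enabled():
                passed = [name for name in unsupported if kwargs.get(name) is not None]
                if passed:
                    raise ValueError(f"Arguments {passed} are not supported in distributed mode.")
            return func(*args, **kwargs)

        return wrapper

    return decorator


@validate_distributed_kwargs(("boto3_session", "version_id"))
def read_parquet(path: str, boto3_session: Any = None, version_id: Any = None) -> str:
    return f"reading {path}"
```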
### 3. Use TypedDict to group similar parameters[¶](#use-typeddict-to-group-similar-parameters)
Date: 2023-03-10
#### Status[¶](#status)
Accepted
#### Context[¶](#context)
*AWS SDK for pandas* API methods contain many parameters which are related to a specific behaviour or setting. For example, methods which have an option to update the AWS Glue catalog, such as `to_csv` and `to_parquet`, contain a list of parameters that define the settings for the table in AWS Glue. These settings include the table description, column comments, the table type, etc.
As a consequence, some of our functions have grown to include dozens of parameters. When reading the function signatures, it can be unclear which parameters are related to which functionality. For example, it’s not immediately obvious that the parameter `column_comments` in `s3.to_parquet` only writes the column comments into the AWS Glue catalog, and not to S3.
#### Decision[¶](#decision)
Parameters that are related to similar functionality will be replaced by a single parameter of type [TypedDict](https://peps.python.org/pep-0589/). This will allow us to reduce the amount of parameters for our API functions, and also make it clearer that certain parameters are only related to specific functionalities.
For example, parameters related to Athena cache settings will be extracted into a parameter of type `AthenaCacheSettings`, parameters related to Ray settings will be extracted into `RayReadParquetSettings`, etc.
The usage of `TypedDict` allows the user to define the parameters as regular dictionaries with string keys, while empowering type checkers such as `mypy`. Alternatively, implementations such as `AthenaCacheSettings` can be instantiated as classes.
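As an illustration, such a grouping could be declared roughly as follows. This is a simplified sketch: the actual definitions live in `awswrangler.typing` and may contain additional fields.
```
# Simplified sketch of how a parameter grouping could be declared;
# the real AthenaCacheSettings in awswrangler.typing may define more fields.
from typing import TypedDict


class AthenaCacheSettings(TypedDict, total=False):
    """Settings controlling the Athena query-result cache."""

    max_cache_seconds: int
```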
##### Alternatives[¶](#alternatives)
The main alternative that was considered was the idea of using `dataclass` instead of `TypedDict`. The advantage of this alternative would be that default values for parameters could be defined directly in the class signature, rather than needing to be defined in the function which uses the parameter.
On the other hand, the main issue with using `dataclass` is that it would require the customer to figure out which class needs to be imported. With `TypedDict`, this is just one of the options; the parameters can simply be passed as a typical Python dictionary.
This alternative was discussed in more detail as part of [PR#1855](https://github.com/aws/aws-sdk-pandas/pull/1855#issuecomment-1353618099).
#### Consequences[¶](#consequences)
Subclasses of `TypedDict` such as `GlueCatalogParameters`, `AthenaCacheSettings`, `AthenaUNLOADSettings`, `AthenaCTASSettings` and `RaySettings` have been created. They are defined in the `wrangler.typing` module.
These parameter groupings can be used in either of the following two ways:
```
wr.athena.read_sql_query(
"SELECT * FROM ...",,
ctas_approach=True,
athena_cache_settings={"max_cache_seconds": 900},
)
wr.athena.read_sql_query(
"SELECT * FROM ...",,
ctas_approach=True,
athena_cache_settings=wr.typing.AthenaCacheSettings(
max_cache_seconds=900,
),
)
```
Many of our function signatures have been changed to take advantage of this refactor. Many of these are breaking changes which will be released as part of the next major version: `3.0.0`.
### 4. AWS SDK for pandas does not alter IAM permissions[¶](#aws-sdk-for-pandas-does-not-alter-iam-permissions)
Date: 2023-03-15
#### Status[¶](#status)
Accepted
#### Context[¶](#context)
AWS SDK for pandas requires permissions to execute AWS API calls. Permissions are granted using AWS Identity and Access Management Policies that are attached to IAM entities - users or roles.
#### Decision[¶](#decision)
AWS SDK for pandas does not alter (create, update, delete) IAM permissions policies attached to the IAM entities.
#### Consequences[¶](#consequences)
It is the user's responsibility to ensure that the IAM entities they use to execute the calls have the required permissions.
### 5. Move dependencies to optional[¶](#move-dependencies-to-optional)
Date: 2023-03-15
#### Status[¶](#status)
Accepted
#### Context[¶](#context)
AWS SDK for pandas relies on external dependencies in some of its modules. These include `redshift-connector`, `gremlinpython` and `pymysql` to cite a few.
In versions 2.x and below, most of these packages were set as required, meaning they were installed regardless of whether the user actually needed them. This has introduced two major risks and issues as the number of dependencies increased:
1. **Security risk**: Unused dependencies increase the attack surface to manage. Users must scan them and ensure that they are kept up to date even though they don't need them
2. **Dependency hell**: Users must resolve dependencies for packages that they are not using. It can lead to dependency hell and prevent critical updates related to security patches and major bugs
#### Decision[¶](#decision)
A breaking change is introduced in version 3.x where the number of required dependencies is reduced to the most important ones, namely:
* boto3
* pandas
* numpy
* pyarrow
* typing-extensions
#### Consequences[¶](#consequences)
All other dependencies are moved to optional and must be installed by the user separately using `pip install awswrangler[dependency]`. For instance, the command to use the Redshift APIs is `pip install awswrangler[redshift]`. Failing to do so raises an exception informing the user that the package is missing and how to install it.
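The import-guard pattern behind this behaviour can be sketched as follows. This is an illustrative example, not the library's actual code; only the module name and extra name mirror the example above.
```
# Illustrative sketch of the optional-dependency pattern; not awswrangler's actual code.
_HAS_REDSHIFT_CONNECTOR = True
try:
    import redshift_connector
except ImportError:
    _HAS_REDSHIFT_CONNECTOR = False


def connect(*args, **kwargs):
    # Raise a helpful error only when the optional feature is actually used.
    if not _HAS_REDSHIFT_CONNECTOR:
        raise ModuleNotFoundError(
            "redshift_connector is not installed. "
            "Install it with: pip install 'awswrangler[redshift]'"
        )
    return redshift_connector.connect(*args, **kwargs)
```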
### 6. Deprecate wr.s3.merge_upsert_table[¶](#deprecate-wr-s3-merge-upsert-table)
Date: 2023-03-15
#### Status[¶](#status)
Accepted
#### Context[¶](#context)
AWS SDK for pandas `wr.s3.merge_upsert_table` is used to perform upsert (update else insert) onto an existing AWS Glue Data Catalog table. It is a much simplified version of upsert functionality that is supported natively by Apache Hudi and Athena Iceberg tables, and does not, for example, handle partitioned datasets.
#### Decision[¶](#decision)
To avoid a poor user experience, `wr.s3.merge_upsert_table` is deprecated and will be removed in the 3.0 release.
#### Consequences[¶](#consequences)
In [PR#2076](https://github.com/aws/aws-sdk-pandas/pull/2076), `wr.s3.merge_upsert_table` function was removed.
### 7. Design of engine and memory format[¶](#design-of-engine-and-memory-format)
Date: 2023-03-16
#### Status[¶](#status)
Accepted
#### Context[¶](#context)
Ray and Modin are the two frameworks used to support running `awswrangler` APIs at scale. Adding them to the codebase requires significant refactoring work. The original approach considered was to handle both distributed and non-distributed code within the same modules. This quickly turned out to be undesirable as it affected the readability, maintainability and scalability of the codebase.
#### Decision[¶](#decision)
Version 3.x of the library introduces two new constructs, `engine` and `memory_format`, which are designed to address the aforementioned shortcomings of the original approach, but also provide additional functionality.
Currently `engine` takes one of two values: `python` (default) or `ray`, but additional engines could be onboarded in the future. The value is determined at import based on installed dependencies. The user can override this value with `wr.engine.set("engine_name")`. Likewise, `memory_format` can be set to `pandas` (default) or `modin` and overridden with `wr.memory_format.set("memory_format_name")`.
A custom dispatcher is used to register functions based on the execution and memory format values. For instance, if the `ray` engine is detected at import, then methods distributed with Ray are used instead of the default AWS SDK for pandas code.
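A stripped-down sketch of this dispatching idea is shown below. It is illustrative only; the real engine and dispatcher classes in the library are more involved, and the function names here are hypothetical.
```
# Stripped-down illustration of engine-based dispatching; not the actual awswrangler implementation.
from typing import Callable, Dict


class EngineDispatcher:
    """Map function names to engine-specific implementations."""

    def __init__(self) -> None:
        self._registry: Dict[str, Dict[str, Callable]] = {}

    def register(self, engine: str, name: str, func: Callable) -> None:
        self._registry.setdefault(name, {})[engine] = func

    def dispatch(self, engine: str, name: str) -> Callable:
        # Fall back to the default (non-distributed) implementation.
        impls = self._registry.get(name, {})
        return impls.get(engine, impls["python"])


def read_parquet_python(path: str) -> str:
    return f"reading {path} with the default engine"


def read_parquet_ray(path: str) -> str:
    return f"reading {path} with Ray"


dispatcher = EngineDispatcher()
dispatcher.register("python", "read_parquet", read_parquet_python)
dispatcher.register("ray", "read_parquet", read_parquet_ray)

# The engine value would normally be detected at import or set via wr.engine.set().
print(dispatcher.dispatch("ray", "read_parquet")("s3://bucket/key.parquet"))
```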
#### Consequences[¶](#consequences)
**The good**:
*Clear separation of concerns*: Distributed methods live outside non-distributed code, eliminating ugly if conditionals, allowing both to scale independently and making them easier to maintain in the future
*Better dispatching*: Adding a new engine/memory format is as simple as creating a new directory with its methods and registering them with the custom dispatcher based on the value of the engine or memory format
*Custom engine/memory format classes*: Give more flexibility than config when it comes to interacting with the engine and managing its state (initialising, registering, get/setting…)
**The bad**:
*Managing state*: Adding a custom dispatcher means that we must maintain its state. For instance, unregistering methods when a user sets a different engine (e.g. moving from ray to dask at execution time) is currently unsupported
*Detecting the engine*: Conditionals are simpler/easier when it comes to detecting an engine. With a custom dispatcher, the registration and dispatching process is more opaque/convoluted. For example, there is a higher risk of not realising that we are using a given engine vs another
**The ugly**:
*Unused arguments*: Each method registered with the dispatcher must accept the union of both non-distributed and distributed arguments, even though some would be unused. As the list of supported engines grows, so does the number of unused arguments. It also means that we must maintain the same list of arguments across the different versions of the method
### 8. Switching between PyArrow and Pandas based datasources for CSV/JSON I/O[¶](#switching-between-pyarrow-and-pandas-based-datasources-for-csv-json-i-o)
Date: 2023-03-16
#### Status[¶](#status)
Accepted
#### Context[¶](#context)
The reading and writing operations for CSV/JSON data in *AWS SDK for pandas* make use of the underlying functions in Pandas. For example, `wr.s3.read_csv` will open a stream of data from S3 and then invoke `pandas.read_csv`. This allows the library to fully support all the arguments which are supported by the underlying Pandas functions. Functions such as `wr.s3.read_csv` or `wr.s3.to_json` accept a `**kwargs` parameter which forwards all parameters to `pandas.read_csv` and `pandas.to_json` automatically.
From version 3.0.0 onward, *AWS SDK for pandas* supports Ray and Modin. When those two libraries are installed, all aforementioned I/O functions will be distributed on a Ray cluster. In the background, this means that all the I/O functions for S3 are running as part of a [custom Ray data source](https://docs.ray.io/en/latest/_modules/ray/data/datasource/datasource.html). Data is then returned in blocks, which form the Modin DataFrame.
The issue is that the Pandas I/O functions work very slowly in the Ray datasource compared with the equivalent I/O functions in PyArrow. Therefore, calling `pyarrow.csv.read_csv` is significantly faster than calling `pandas.read_csv` in the background.
However, the PyArrow I/O functions do not support the same set of parameters as the ones in Pandas. As a consequence, whereas the PyArrow functions offer greater performance, they come at the cost of feature parity between the non-distributed mode and the distributed mode.
For reference, loading 5 GiB of CSV data with the PyArrow functions took around 30 seconds, compared to 120 seconds with the Pandas functions in the same scenario.
For writing back to S3, the speed-up is around 2x.
#### Decision[¶](#decision)
In order to maximize performance without losing feature parity, we implemented logic whereby, if the user passes a set of parameters which are supported by PyArrow, the library uses PyArrow for reading/writing. If not, the library defaults to the slower Pandas functions, which support the full set of parameters.
The following example will illustrate the difference:
```
# This will be loaded by PyArrow, as `doublequote` is supported
wr.s3.read_csv(
path="s3://my-bucket/my-path/",
dataset=True,
doublequote=False,
)
# This will be loaded using the Pandas I/O functions, as `comment` is not supported by PyArrow
wr.s3.read_csv(
path="s3://my-bucket/my-path/",
dataset=True,
comment="#",
)
```
This logic is applied to the following functions:
1. `wr.s3.read_csv`
2. `wr.s3.read_json`
3. `wr.s3.to_json`
4. `wr.s3.to_csv`
#### Consequences[¶](#consequences)
The logic of switching between using PyArrow or Pandas functions in background was implemented as part of [#1699](https://github.com/aws/aws-sdk-pandas/pull/1699). It was later expanded to support more parameters in [#2008](https://github.com/aws/aws-sdk-pandas/pull/2008) and [#2019](https://github.com/aws/aws-sdk-pandas/pull/2019).
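Conceptually, the parameter check behind this switch boils down to something like the following. This is a simplified sketch under assumed names, not the actual code path; the real implementation lives in the library's distributed datasources and covers many more parameters.
```
# Simplified sketch of the selection logic; argument names are illustrative.
PYARROW_SUPPORTED_READ_CSV_ARGS = {"doublequote", "delimiter"}


def use_pyarrow_for_read_csv(pandas_kwargs: dict) -> bool:
    """Return True when every user-supplied argument is understood by the PyArrow reader."""
    return all(arg in PYARROW_SUPPORTED_READ_CSV_ARGS for arg in pandas_kwargs)


print(use_pyarrow_for_read_csv({"doublequote": False}))  # True -> fast PyArrow path
print(use_pyarrow_for_read_csv({"comment": "#"}))        # False -> fall back to pandas.read_csv
```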
### 9. Engine selection and lazy initialization[¶](#engine-selection-and-lazy-initialization)
Date: 2023-05-17
#### Status[¶](#status)
Accepted
#### Context[¶](#context)
In distributed mode, three approaches are possible when it comes to selecting and initializing a Ray engine:
1. Initialize the Ray runtime at import (current default). This option causes the least friction to the user but assumes that installing Ray as an optional dependency is enough to enable distributed mode. Moreover, the user cannot prevent/delay Ray initialization (as it’s done at import)
2. Initialize the Ray runtime on the first distributed API call. The user can prevent Ray initialization by switching the engine/memory format with environment variables or between import and the first awswrangler distributed API call. However, by default this approach still assumes that installing Ray is equivalent to enabling distributed mode
3. Wait for the user to enable distributed mode, via environment variables and/or via `wr.engine.set`. This option makes no assumption on which mode to use (distributed vs non-distributed). Non-distributed would be the default and it's up to the user to switch the engine/memory format
#### Decision[¶](#decision)
Option #1 is inflexible and gives little control to the user, while option #3 introduces too much friction and puts the burden on the user. Option #2 on the other hand gives full flexibility to the user while providing a sane default.
#### Consequences[¶](#consequences)
The only difference between the current default and the suggested approach is to delay engine initialization, which is not a breaking change. However, it means that in certain situations more than one Ray instance is initialized. For instance, when running tests across multiple threads, each thread runs its own Ray runtime.
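The lazy-initialization idea can be sketched as follows. This is illustrative only; the function names are hypothetical and the real engine code handles more cases.
```
# Illustrative sketch of lazy engine initialization; not the actual awswrangler engine code.
_ray_initialized = False


def ensure_ray_initialized() -> None:
    """Initialize Ray only once, on the first distributed API call."""
    global _ray_initialized
    if not _ray_initialized:
        import ray  # optional dependency, imported lazily

        if not ray.is_initialized():
            ray.init()
        _ray_initialized = True


def read_parquet_distributed(path: str):
    ensure_ray_initialized()  # the runtime is started here, not at import time
    ...
```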
API Reference[¶](#api-reference)
---
* [Amazon S3](#amazon-s3)
* [AWS Glue Catalog](#aws-glue-catalog)
* [Amazon Athena](#amazon-athena)
* [AWS Lake Formation](#aws-lake-formation)
* [Amazon Redshift](#amazon-redshift)
* [PostgreSQL](#postgresql)
* [MySQL](#mysql)
* [Microsoft SQL Server](#microsoft-sql-server)
* [Oracle](#oracle)
* [Data API Redshift](#data-api-redshift)
* [Data API RDS](#data-api-rds)
* [AWS Glue Data Quality](#aws-glue-data-quality)
* [OpenSearch](#opensearch)
* [Amazon Neptune](#amazon-neptune)
* [DynamoDB](#dynamodb)
* [Amazon Timestream](#amazon-timestream)
* [AWS Clean Rooms](#aws-clean-rooms)
* [Amazon EMR](#amazon-emr)
* [Amazon EMR Serverless](#amazon-emr-serverless)
* [Amazon CloudWatch Logs](#amazon-cloudwatch-logs)
* [Amazon QuickSight](#amazon-quicksight)
* [AWS STS](#aws-sts)
* [AWS Secrets Manager](#aws-secrets-manager)
* [Amazon Chime](#amazon-chime)
* [Typing](#typing)
* [Global Configurations](#global-configurations)
* [Engine and Memory Format](#engine-and-memory-format)
* [Distributed - Ray](#distributed-ray)
### Amazon S3[¶](#amazon-s3)
| | |
| --- | --- |
| [`copy_objects`](index.html#awswrangler.s3.copy_objects)(paths, source_path, target_path) | Copy a list of S3 objects to another S3 directory. |
| [`delete_objects`](index.html#awswrangler.s3.delete_objects)(path[, use_threads, ...]) | Delete Amazon S3 objects from a received S3 prefix or list of S3 objects paths. |
| [`describe_objects`](index.html#awswrangler.s3.describe_objects)(path[, version_id, ...]) | Describe Amazon S3 objects from a received S3 prefix or list of S3 objects paths. |
| [`does_object_exist`](index.html#awswrangler.s3.does_object_exist)(path[, ...]) | Check if object exists on S3. |
| [`download`](index.html#awswrangler.s3.download)(path, local_file[, version_id, ...]) | Download file from a received S3 path to local file. |
| [`get_bucket_region`](index.html#awswrangler.s3.get_bucket_region)(bucket[, boto3_session]) | Get bucket region name. |
| [`list_buckets`](index.html#awswrangler.s3.list_buckets)([boto3_session]) | List Amazon S3 buckets. |
| [`list_directories`](index.html#awswrangler.s3.list_directories)(path[, chunked, ...]) | List Amazon S3 objects from a prefix. |
| [`list_objects`](index.html#awswrangler.s3.list_objects)(path[, suffix, ignore_suffix, ...]) | List Amazon S3 objects from a prefix. |
| [`merge_datasets`](index.html#awswrangler.s3.merge_datasets)(source_path, target_path[, ...]) | Merge a source dataset into a target dataset. |
| [`read_csv`](index.html#awswrangler.s3.read_csv)(path[, path_suffix, ...]) | Read CSV file(s) from a received S3 prefix or list of S3 objects paths. |
| [`read_excel`](index.html#awswrangler.s3.read_excel)(path[, version_id, use_threads, ...]) | Read EXCEL file(s) from a received S3 path. |
| [`read_fwf`](index.html#awswrangler.s3.read_fwf)(path[, path_suffix, ...]) | Read fixed-width formatted file(s) from a received S3 prefix or list of S3 objects paths. |
| [`read_json`](index.html#awswrangler.s3.read_json)(path[, path_suffix, ...]) | Read JSON file(s) from a received S3 prefix or list of S3 objects paths. |
| [`read_parquet`](index.html#awswrangler.s3.read_parquet)(path[, path_root, dataset, ...]) | Read Parquet file(s) from an S3 prefix or list of S3 objects paths. |
| [`read_parquet_metadata`](index.html#awswrangler.s3.read_parquet_metadata)(path[, dataset, ...]) | Read Apache Parquet file(s) metadata from an S3 prefix or list of S3 objects paths. |
| [`read_parquet_table`](index.html#awswrangler.s3.read_parquet_table)(table, database[, ...]) | Read Apache Parquet table registered in the AWS Glue Catalog. |
| [`read_orc`](index.html#awswrangler.s3.read_orc)(path[, path_root, dataset, ...]) | Read ORC file(s) from an S3 prefix or list of S3 objects paths. |
| [`read_orc_metadata`](index.html#awswrangler.s3.read_orc_metadata)(path[, dataset, ...]) | Read Apache ORC file(s) metadata from an S3 prefix or list of S3 objects paths. |
| [`read_orc_table`](index.html#awswrangler.s3.read_orc_table)(table, database[, ...]) | Read Apache ORC table registered in the AWS Glue Catalog. |
| [`read_deltalake`](index.html#awswrangler.s3.read_deltalake)(path[, version, partitions, ...]) | Load a Deltalake table data from an S3 path. |
| [`select_query`](index.html#awswrangler.s3.select_query)(sql, path, input_serialization, ...) | Filter contents of Amazon S3 objects based on SQL statement. |
| [`size_objects`](index.html#awswrangler.s3.size_objects)(path[, version_id, ...]) | Get the size (ContentLength) in bytes of Amazon S3 objects from a received S3 prefix or list of S3 objects paths. |
| [`store_parquet_metadata`](index.html#awswrangler.s3.store_parquet_metadata)(path, database, table) | Infer and store parquet metadata on AWS Glue Catalog. |
| [`to_csv`](index.html#awswrangler.s3.to_csv)(df[, path, sep, index, columns, ...]) | Write CSV file or dataset on Amazon S3. |
| [`to_excel`](index.html#awswrangler.s3.to_excel)(df, path[, boto3_session, ...]) | Write EXCEL file on Amazon S3. |
| [`to_json`](index.html#awswrangler.s3.to_json)(df[, path, index, columns, ...]) | Write JSON file on Amazon S3. |
| [`to_parquet`](index.html#awswrangler.s3.to_parquet)(df[, path, index, compression, ...]) | Write Parquet file or dataset on Amazon S3. |
| [`to_orc`](index.html#awswrangler.s3.to_orc)(df[, path, index, compression, ...]) | Write ORC file or dataset on Amazon S3. |
| [`to_deltalake`](index.html#awswrangler.s3.to_deltalake)(df, path[, index, mode, dtype, ...]) | Write a DataFrame to S3 as a DeltaLake table. |
| [`upload`](index.html#awswrangler.s3.upload)(local_file, path[, use_threads, ...]) | Upload file from a local file to received S3 path. |
| [`wait_objects_exist`](index.html#awswrangler.s3.wait_objects_exist)(paths[, delay, ...]) | Wait Amazon S3 objects exist. |
| [`wait_objects_not_exist`](index.html#awswrangler.s3.wait_objects_not_exist)(paths[, delay, ...]) | Wait Amazon S3 objects not exist. |
### AWS Glue Catalog[¶](#aws-glue-catalog)
| | |
| --- | --- |
| [`add_column`](index.html#awswrangler.catalog.add_column)(database, table, column_name[, ...]) | Add a column in a AWS Glue Catalog table. |
| [`add_csv_partitions`](index.html#awswrangler.catalog.add_csv_partitions)(database, table, ...[, ...]) | Add partitions (metadata) to a CSV Table in the AWS Glue Catalog. |
| [`add_parquet_partitions`](index.html#awswrangler.catalog.add_parquet_partitions)(database, table, ...) | Add partitions (metadata) to a Parquet Table in the AWS Glue Catalog. |
| [`create_csv_table`](index.html#awswrangler.catalog.create_csv_table)(database, table, path, ...) | Create a CSV Table (Metadata Only) in the AWS Glue Catalog. |
| [`create_database`](index.html#awswrangler.catalog.create_database)(name[, description, ...]) | Create a database in AWS Glue Catalog. |
| [`create_json_table`](index.html#awswrangler.catalog.create_json_table)(database, table, path, ...) | Create a JSON Table (Metadata Only) in the AWS Glue Catalog. |
| [`create_parquet_table`](index.html#awswrangler.catalog.create_parquet_table)(database, table, path, ...) | Create a Parquet Table (Metadata Only) in the AWS Glue Catalog. |
| [`databases`](index.html#awswrangler.catalog.databases)([limit, catalog_id, boto3_session]) | Get a Pandas DataFrame with all listed databases. |
| [`delete_column`](index.html#awswrangler.catalog.delete_column)(database, table, column_name) | Delete a column in a AWS Glue Catalog table. |
| [`delete_database`](index.html#awswrangler.catalog.delete_database)(name[, catalog_id, ...]) | Delete a database in AWS Glue Catalog. |
| [`delete_partitions`](index.html#awswrangler.catalog.delete_partitions)(table, database, ...[, ...]) | Delete specified partitions in a AWS Glue Catalog table. |
| [`delete_all_partitions`](index.html#awswrangler.catalog.delete_all_partitions)(table, database[, ...]) | Delete all partitions in a AWS Glue Catalog table. |
| [`delete_table_if_exists`](index.html#awswrangler.catalog.delete_table_if_exists)(database, table[, ...]) | Delete Glue table if exists. |
| [`does_table_exist`](index.html#awswrangler.catalog.does_table_exist)(database, table[, ...]) | Check if the table exists. |
| [`drop_duplicated_columns`](index.html#awswrangler.catalog.drop_duplicated_columns)(df) | Drop all repeated columns (duplicated names). |
| [`extract_athena_types`](index.html#awswrangler.catalog.extract_athena_types)(df[, index, ...]) | Extract columns and partitions types (Amazon Athena) from Pandas DataFrame. |
| [`get_columns_comments`](index.html#awswrangler.catalog.get_columns_comments)(database, table[, ...]) | Get all columns comments. |
| [`get_csv_partitions`](index.html#awswrangler.catalog.get_csv_partitions)(database, table[, ...]) | Get all partitions from a Table in the AWS Glue Catalog. |
| [`get_databases`](index.html#awswrangler.catalog.get_databases)([catalog_id, boto3_session]) | Get an iterator of databases. |
| [`get_parquet_partitions`](index.html#awswrangler.catalog.get_parquet_partitions)(database, table[, ...]) | Get all partitions from a Table in the AWS Glue Catalog. |
| [`get_partitions`](index.html#awswrangler.catalog.get_partitions)(database, table[, ...]) | Get all partitions from a Table in the AWS Glue Catalog. |
| [`get_table_description`](index.html#awswrangler.catalog.get_table_description)(database, table[, ...]) | Get table description. |
| [`get_table_location`](index.html#awswrangler.catalog.get_table_location)(database, table[, ...]) | Get table's location on Glue catalog. |
| [`get_table_number_of_versions`](index.html#awswrangler.catalog.get_table_number_of_versions)(database, table) | Get total number of versions. |
| [`get_table_parameters`](index.html#awswrangler.catalog.get_table_parameters)(database, table[, ...]) | Get all parameters. |
| [`get_table_types`](index.html#awswrangler.catalog.get_table_types)(database, table[, ...]) | Get all columns and types from a table. |
| [`get_table_versions`](index.html#awswrangler.catalog.get_table_versions)(database, table[, ...]) | Get all versions. |
| [`get_tables`](index.html#awswrangler.catalog.get_tables)([catalog_id, database, ...]) | Get an iterator of tables. |
| [`overwrite_table_parameters`](index.html#awswrangler.catalog.overwrite_table_parameters)(parameters, ...) | Overwrite all existing parameters. |
| [`sanitize_column_name`](index.html#awswrangler.catalog.sanitize_column_name)(column) | Convert the column name to be compatible with Amazon Athena and the AWS Glue Catalog. |
| [`sanitize_dataframe_columns_names`](index.html#awswrangler.catalog.sanitize_dataframe_columns_names)(df[, ...]) | Normalize all columns names to be compatible with Amazon Athena. |
| [`sanitize_table_name`](index.html#awswrangler.catalog.sanitize_table_name)(table) | Convert the table name to be compatible with Amazon Athena and the AWS Glue Catalog. |
| [`search_tables`](index.html#awswrangler.catalog.search_tables)(text[, catalog_id, boto3_session]) | Get Pandas DataFrame of tables filtered by a search string. |
| [`table`](index.html#awswrangler.catalog.table)(database, table[, transaction_id, ...]) | Get table details as Pandas DataFrame. |
| [`tables`](index.html#awswrangler.catalog.tables)([limit, catalog_id, database, ...]) | Get a DataFrame with tables filtered by a search term, prefix, suffix. |
| [`upsert_table_parameters`](index.html#awswrangler.catalog.upsert_table_parameters)(parameters, ...[, ...]) | Insert or Update the received parameters. |
### Amazon Athena[¶](#amazon-athena)
| | |
| --- | --- |
| [`create_athena_bucket`](index.html#awswrangler.athena.create_athena_bucket)([boto3_session]) | Create the default Athena bucket if it doesn't exist. |
| [`create_spark_session`](index.html#awswrangler.athena.create_spark_session)(workgroup[, ...]) | Create session and wait until ready to accept calculations. |
| [`create_ctas_table`](index.html#awswrangler.athena.create_ctas_table)(sql[, database, ...]) | Create a new table populated with the results of a SELECT query. |
| [`generate_create_query`](index.html#awswrangler.athena.generate_create_query)(table[, database, ...]) | Generate the query that created a table (EXTERNAL_TABLE) or a view (VIRTUAL_TABLE). |
| [`get_query_columns_types`](index.html#awswrangler.athena.get_query_columns_types)(query_execution_id) | Get the data type of all columns queried. |
| [`get_query_execution`](index.html#awswrangler.athena.get_query_execution)(query_execution_id[, ...]) | Fetch query execution details. |
| [`get_query_executions`](index.html#awswrangler.athena.get_query_executions)(query_execution_ids[, ...]) | From specified query execution IDs, return a DataFrame of query execution details. |
| [`get_query_results`](index.html#awswrangler.athena.get_query_results)(query_execution_id[, ...]) | Get AWS Athena SQL query results as a Pandas DataFrame. |
| [`get_named_query_statement`](index.html#awswrangler.athena.get_named_query_statement)(named_query_id[, ...]) | Get the named query statement string from a query ID. |
| [`get_work_group`](index.html#awswrangler.athena.get_work_group)(workgroup[, boto3_session]) | Return information about the workgroup with the specified name. |
| [`list_query_executions`](index.html#awswrangler.athena.list_query_executions)([workgroup, boto3_session]) | Fetch list query execution IDs ran in specified workgroup or primary work group if not specified. |
| [`read_sql_query`](index.html#awswrangler.athena.read_sql_query)(sql, database[, ...]) | Execute any SQL query on AWS Athena and return the results as a Pandas DataFrame. |
| [`read_sql_table`](index.html#awswrangler.athena.read_sql_table)(table, database[, ...]) | Extract the full table AWS Athena and return the results as a Pandas DataFrame. |
| [`repair_table`](index.html#awswrangler.athena.repair_table)(table[, database, data_source, ...]) | Run the Hive's metastore consistency check: 'MSCK REPAIR TABLE table;'. |
| [`run_spark_calculation`](index.html#awswrangler.athena.run_spark_calculation)(code, workgroup[, ...]) | Execute Spark Calculation and wait for completion. |
| [`start_query_execution`](index.html#awswrangler.athena.start_query_execution)(sql[, database, ...]) | Start a SQL Query against AWS Athena. |
| [`stop_query_execution`](index.html#awswrangler.athena.stop_query_execution)(query_execution_id[, ...]) | Stop a query execution. |
| [`to_iceberg`](index.html#awswrangler.athena.to_iceberg)(df, database, table[, temp_path, ...]) | Insert into Athena Iceberg table using INSERT INTO. |
| [`unload`](index.html#awswrangler.athena.unload)(sql, path, database[, file_format, ...]) | Write query results from a SELECT statement to the specified data format using UNLOAD. |
| [`wait_query`](index.html#awswrangler.athena.wait_query)(query_execution_id[, ...]) | Wait for the query end. |
| [`create_prepared_statement`](index.html#awswrangler.athena.create_prepared_statement)(sql, statement_name) | Create a SQL statement with the name statement_name to be run at a later time. |
| [`list_prepared_statements`](index.html#awswrangler.athena.list_prepared_statements)([workgroup, ...]) | List the prepared statements in the specified workgroup. |
| [`delete_prepared_statement`](index.html#awswrangler.athena.delete_prepared_statement)(statement_name[, ...]) | Delete the prepared statement with the specified name from the specified workgroup. |
### AWS Lake Formation[¶](#aws-lake-formation)
| | |
| --- | --- |
| [`read_sql_query`](index.html#awswrangler.lakeformation.read_sql_query)(sql, database[, ...]) | Execute PartiQL query on AWS Glue Table (Transaction ID or time travel timestamp). |
| [`read_sql_table`](index.html#awswrangler.lakeformation.read_sql_table)(table, database[, ...]) | Extract all rows from AWS Glue Table (Transaction ID or time travel timestamp). |
| [`cancel_transaction`](index.html#awswrangler.lakeformation.cancel_transaction)(transaction_id[, ...]) | Cancel the specified transaction. |
| [`commit_transaction`](index.html#awswrangler.lakeformation.commit_transaction)(transaction_id[, ...]) | Commit the specified transaction. |
| [`describe_transaction`](index.html#awswrangler.lakeformation.describe_transaction)(transaction_id[, ...]) | Return the status of a single transaction. |
| [`extend_transaction`](index.html#awswrangler.lakeformation.extend_transaction)(transaction_id[, ...]) | Indicate to the service that the specified transaction is still active and should not be canceled. |
| [`start_transaction`](index.html#awswrangler.lakeformation.start_transaction)([read_only, time_out, ...]) | Start a new transaction and returns its transaction ID. |
| [`wait_query`](index.html#awswrangler.lakeformation.wait_query)(query_id[, boto3_session, ...]) | Wait for the query to end. |
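A brief sketch of querying a governed table through Lake Formation; the database `my_db` and table `my_table` are placeholder names:
```
import awswrangler as wr
# Read a whole governed table as a DataFrame
df = wr.lakeformation.read_sql_table(table="my_table", database="my_db")
# Or run a PartiQL query against it
df = wr.lakeformation.read_sql_query(sql="SELECT * FROM my_table WHERE id > 10", database="my_db")
```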
### Amazon Redshift[¶](#amazon-redshift)
| | |
| --- | --- |
| [`connect`](index.html#awswrangler.redshift.connect)([connection, secret_id, catalog_id, ...]) | Return a redshift_connector connection from a Glue Catalog or Secret Manager. |
| [`connect_temp`](index.html#awswrangler.redshift.connect_temp)(cluster_identifier, user[, ...]) | Return a redshift_connector temporary connection (No password required). |
| [`copy`](index.html#awswrangler.redshift.copy)(df, path, con, table, schema[, ...]) | Load Pandas DataFrame as a Table on Amazon Redshift using parquet files on S3 as stage. |
| [`copy_from_files`](index.html#awswrangler.redshift.copy_from_files)(path, con, table, schema[, ...]) | Load Parquet files from S3 to a Table on Amazon Redshift (Through COPY command). |
| [`read_sql_query`](index.html#awswrangler.redshift.read_sql_query)(sql, con[, index_col, ...]) | Return a DataFrame corresponding to the result set of the query string. |
| [`read_sql_table`](index.html#awswrangler.redshift.read_sql_table)(table, con[, schema, ...]) | Return a DataFrame corresponding to the table. |
| [`to_sql`](index.html#awswrangler.redshift.to_sql)(df, con, table, schema[, mode, ...]) | Write records stored in a DataFrame into Redshift. |
| [`unload`](index.html#awswrangler.redshift.unload)(sql, path, con[, iam_role, ...]) | Load a Pandas DataFrame from an Amazon Redshift query result, using Parquet files on S3 as the stage. |
| [`unload_to_files`](index.html#awswrangler.redshift.unload_to_files)(sql, path, con[, iam_role, ...]) | Unload Parquet files on s3 from a Redshift query result (Through the UNLOAD command). |
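A sketch of the typical Redshift round trip, assuming a Glue Catalog connection named `my-redshift-connection` and an existing `public.my_table`:
```
import awswrangler as wr
con = wr.redshift.connect("my-redshift-connection")  # Glue Catalog / Secrets Manager connection name
try:
    df = wr.redshift.read_sql_query("SELECT * FROM public.my_table", con=con)
    wr.redshift.to_sql(df=df, con=con, table="my_table_copy", schema="public", mode="overwrite")
finally:
    con.close()
```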
### PostgreSQL[¶](#postgresql)
| | |
| --- | --- |
| [`connect`](index.html#awswrangler.postgresql.connect)([connection, secret_id, catalog_id, ...]) | Return a pg8000 connection from a Glue Catalog Connection. |
| [`read_sql_query`](index.html#awswrangler.postgresql.read_sql_query)() | Return a DataFrame corresponding to the result set of the query string. |
| [`read_sql_table`](index.html#awswrangler.postgresql.read_sql_table)() | Return a DataFrame corresponding to the table. |
| [`to_sql`](index.html#awswrangler.postgresql.to_sql)(df, con, table, schema[, mode, ...]) | Write records stored in a DataFrame into PostgreSQL. |
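The same pattern applies to the PostgreSQL module; a minimal sketch assuming a Glue Catalog connection named `my-postgresql-connection` and a table `public.my_table`:
```
import awswrangler as wr
import pandas as pd

con = wr.postgresql.connect("my-postgresql-connection")
# Append a small DataFrame, then read it back
wr.postgresql.to_sql(df=pd.DataFrame({"id": [1, 2]}), con=con, table="my_table", schema="public", mode="append")
df = wr.postgresql.read_sql_query("SELECT * FROM public.my_table", con=con)
con.close()
```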
### MySQL[¶](#mysql)
| | |
| --- | --- |
| [`connect`](index.html#awswrangler.mysql.connect)([connection, secret_id, catalog_id, ...]) | Return a pymysql connection from a Glue Catalog Connection or Secrets Manager. |
| [`read_sql_query`](index.html#awswrangler.mysql.read_sql_query)() | Return a DataFrame corresponding to the result set of the query string. |
| [`read_sql_table`](index.html#awswrangler.mysql.read_sql_table)() | Return a DataFrame corresponding to the table. |
| [`to_sql`](index.html#awswrangler.mysql.to_sql)(df, con, table, schema[, mode, ...]) | Write records stored in a DataFrame into MySQL. |
### Microsoft SQL Server[¶](#microsoft-sql-server)
| | |
| --- | --- |
| [`connect`](index.html#awswrangler.sqlserver.connect)([connection, secret_id, catalog_id, ...]) | Return a pyodbc connection from a Glue Catalog Connection. |
| [`read_sql_query`](index.html#awswrangler.sqlserver.read_sql_query)() | Return a DataFrame corresponding to the result set of the query string. |
| [`read_sql_table`](index.html#awswrangler.sqlserver.read_sql_table)() | Return a DataFrame corresponding to the table. |
| [`to_sql`](index.html#awswrangler.sqlserver.to_sql)(df, con, table, schema[, mode, ...]) | Write records stored in a DataFrame into Microsoft SQL Server. |
### Oracle[¶](#oracle)
| | |
| --- | --- |
| [`connect`](index.html#awswrangler.oracle.connect)([connection, secret_id, catalog_id, ...]) | Return an oracledb connection from a Glue Catalog Connection. |
| [`read_sql_query`](index.html#awswrangler.oracle.read_sql_query)() | Return a DataFrame corresponding to the result set of the query string. |
| [`read_sql_table`](index.html#awswrangler.oracle.read_sql_table)() | Return a DataFrame corresponding to the table. |
| [`to_sql`](index.html#awswrangler.oracle.to_sql)(df, con, table, schema[, mode, ...]) | Write records stored in a DataFrame into Oracle Database. |
### Data API Redshift[¶](#data-api-redshift)
| | |
| --- | --- |
| [`RedshiftDataApi`](index.html#awswrangler.data_api.redshift.RedshiftDataApi)([cluster_id, database, ...]) | Provides access to a Redshift cluster via the Data API. |
| [`connect`](index.html#awswrangler.data_api.redshift.connect)([cluster_id, database, ...]) | Create a Redshift Data API connection. |
| [`read_sql_query`](index.html#awswrangler.data_api.redshift.read_sql_query)(sql, con[, database]) | Run an SQL query on a RedshiftDataApi connection and return the result as a DataFrame. |
### Data API RDS[¶](#data-api-rds)
| | |
| --- | --- |
| [`RdsDataApi`](index.html#awswrangler.data_api.rds.RdsDataApi)(resource_arn, database[, ...]) | Provides access to the RDS Data API. |
| [`connect`](index.html#awswrangler.data_api.rds.connect)(resource_arn, database[, ...]) | Create an RDS Data API connection. |
| [`read_sql_query`](index.html#awswrangler.data_api.rds.read_sql_query)(sql, con[, database]) | Run an SQL query on an RdsDataApi connection and return the result as a DataFrame. |
| [`to_sql`](index.html#awswrangler.data_api.rds.to_sql)(df, con, table, database[, mode, ...]) | Insert data using an SQL query on a Data API connection. |
### AWS Glue Data Quality[¶](#aws-glue-data-quality)
| | |
| --- | --- |
| [`create_recommendation_ruleset`](index.html#awswrangler.data_quality.create_recommendation_ruleset)(database, ...) | Create recommendation Data Quality ruleset. |
| [`create_ruleset`](index.html#awswrangler.data_quality.create_ruleset)(name, database, table[, ...]) | Create Data Quality ruleset. |
| [`evaluate_ruleset`](index.html#awswrangler.data_quality.evaluate_ruleset)(name, iam_role_arn[, ...]) | Evaluate Data Quality ruleset. |
| [`get_ruleset`](index.html#awswrangler.data_quality.get_ruleset)(name[, boto3_session]) | Get a Data Quality ruleset. |
| [`update_ruleset`](index.html#awswrangler.data_quality.update_ruleset)(name[, mode, df_rules, ...]) | Update Data Quality ruleset. |
### OpenSearch[¶](#opensearch)
| | |
| --- | --- |
| [`connect`](index.html#awswrangler.opensearch.connect)(host[, port, boto3_session, region, ...]) | Create a secure connection to the specified Amazon OpenSearch domain. |
| [`create_collection`](index.html#awswrangler.opensearch.create_collection)(name[, collection_type, ...]) | Create Amazon OpenSearch Serverless collection. |
| [`create_index`](index.html#awswrangler.opensearch.create_index)(client, index[, doc_type, ...]) | Create an index. |
| [`delete_index`](index.html#awswrangler.opensearch.delete_index)(client, index) | Delete an index. |
| [`index_csv`](index.html#awswrangler.opensearch.index_csv)(client, path, index[, doc_type, ...]) | Index all documents from a CSV file to OpenSearch index. |
| [`index_documents`](index.html#awswrangler.opensearch.index_documents)(client, documents, index[, ...]) | Index all documents to OpenSearch index. |
| [`index_df`](index.html#awswrangler.opensearch.index_df)(client, df, index[, doc_type, ...]) | Index all documents from a DataFrame to OpenSearch index. |
| [`index_json`](index.html#awswrangler.opensearch.index_json)(client, path, index[, doc_type, ...]) | Index all documents from JSON file to OpenSearch index. |
| [`search`](index.html#awswrangler.opensearch.search)(client[, index, search_body, ...]) | Return results matching query DSL as pandas DataFrame. |
| [`search_by_sql`](index.html#awswrangler.opensearch.search_by_sql)(client, sql_query, **kwargs) | Return results matching [SQL query](https://opensearch.org/docs/search-plugins/sql/index/) as pandas DataFrame. |
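A minimal sketch of indexing a DataFrame and searching it back, assuming an existing OpenSearch domain endpoint and a placeholder index `my-index`:
```
import awswrangler as wr
import pandas as pd

client = wr.opensearch.connect(host="my-domain.us-east-1.es.amazonaws.com")
wr.opensearch.index_df(client, df=pd.DataFrame({"title": ["a", "b"]}), index="my-index")
hits = wr.opensearch.search(client, index="my-index", search_body={"query": {"match_all": {}}})
```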
### Amazon Neptune[¶](#amazon-neptune)
| | |
| --- | --- |
| [`connect`](index.html#awswrangler.neptune.connect)(host, port[, iam_enabled]) | Create a connection to a Neptune cluster. |
| [`execute_gremlin`](index.html#awswrangler.neptune.execute_gremlin)(client, query) | Return results of a Gremlin traversal as pandas DataFrame. |
| [`execute_opencypher`](index.html#awswrangler.neptune.execute_opencypher)(client, query) | Return results of a openCypher traversal as pandas DataFrame. |
| [`execute_sparql`](index.html#awswrangler.neptune.execute_sparql)(client, query) | Return results of a SPARQL query as pandas DataFrame. |
| [`flatten_nested_df`](index.html#awswrangler.neptune.flatten_nested_df)(df[, include_prefix, ...]) | Flatten the lists and dictionaries of the input data frame. |
| [`to_property_graph`](index.html#awswrangler.neptune.to_property_graph)(client, df[, batch_size, ...]) | Write records stored in a DataFrame into Amazon Neptune. |
| [`to_rdf_graph`](index.html#awswrangler.neptune.to_rdf_graph)(client, df[, batch_size, ...]) | Write records stored in a DataFrame into Amazon Neptune. |
| [`bulk_load`](index.html#awswrangler.neptune.bulk_load)(client, df, path, iam_role[, ...]) | Write records into Amazon Neptune using the Neptune Bulk Loader. |
| [`bulk_load_from_files`](index.html#awswrangler.neptune.bulk_load_from_files)(client, path, iam_role) | Load files from S3 into Amazon Neptune using the Neptune Bulk Loader. |
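A short sketch of running an openCypher query against a Neptune cluster; the endpoint below is a placeholder:
```
import awswrangler as wr

client = wr.neptune.connect(host="my-neptune-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com",
                            port=8182, iam_enabled=False)
df = wr.neptune.execute_opencypher(client, "MATCH (n) RETURN n LIMIT 5")
```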
### DynamoDB[¶](#dynamodb)
| | |
| --- | --- |
| [`delete_items`](index.html#awswrangler.dynamodb.delete_items)(items, table_name[, boto3_session]) | Delete all items in the specified DynamoDB table. |
| [`execute_statement`](index.html#awswrangler.dynamodb.execute_statement)(statement[, parameters, ...]) | Run a PartiQL statement against a DynamoDB table. |
| [`get_table`](index.html#awswrangler.dynamodb.get_table)(table_name[, boto3_session]) | Get DynamoDB table object for specified table name. |
| [`put_csv`](index.html#awswrangler.dynamodb.put_csv)(path, table_name[, boto3_session, ...]) | Write all items from a CSV file to a DynamoDB table. |
| [`put_df`](index.html#awswrangler.dynamodb.put_df)(df, table_name[, boto3_session, ...]) | Write all items from a DataFrame to a DynamoDB table. |
| [`put_items`](index.html#awswrangler.dynamodb.put_items)(items, table_name[, ...]) | Insert all items into the specified DynamoDB table. |
| [`put_json`](index.html#awswrangler.dynamodb.put_json)(path, table_name[, boto3_session, ...]) | Write all items from a JSON file to a DynamoDB table. |
| [`read_items`](index.html#awswrangler.dynamodb.read_items)(table_name[, index_name, ...]) | Read items from given DynamoDB table. |
| [`read_partiql_query`](index.html#awswrangler.dynamodb.read_partiql_query)(query[, parameters, ...]) | Read data from a DynamoDB table via a PartiQL query. |
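A minimal sketch of writing and reading items, assuming an existing DynamoDB table named `my-table` with partition key `pk`:
```
import awswrangler as wr
import pandas as pd

wr.dynamodb.put_df(df=pd.DataFrame({"pk": ["a", "b"], "value": [1, 2]}), table_name="my-table")
# Full-scan read (prefer key conditions on large tables)
items = wr.dynamodb.read_items(table_name="my-table", allow_full_scan=True)
# Or query with PartiQL
df = wr.dynamodb.read_partiql_query('SELECT * FROM "my-table" WHERE pk = ?', parameters=["a"])
```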
### Amazon Timestream[¶](#amazon-timestream)
| | |
| --- | --- |
| [`batch_load`](index.html#awswrangler.timestream.batch_load)(df, path, database, table, ...[, ...]) | Batch load a Pandas DataFrame into an Amazon Timestream table. |
| [`batch_load_from_files`](index.html#awswrangler.timestream.batch_load_from_files)(path, database, table, ...) | Batch load files from S3 into an Amazon Timestream table. |
| [`create_database`](index.html#awswrangler.timestream.create_database)(database[, kms_key_id, ...]) | Create a new Timestream database. |
| [`create_table`](index.html#awswrangler.timestream.create_table)(database, table, ...[, tags, ...]) | Create a new Timestream table. |
| [`delete_database`](index.html#awswrangler.timestream.delete_database)(database[, boto3_session]) | Delete a given Timestream database. |
| [`delete_table`](index.html#awswrangler.timestream.delete_table)(database, table[, boto3_session]) | Delete a given Timestream table. |
| [`list_databases`](index.html#awswrangler.timestream.list_databases)([boto3_session]) | List all databases in timestream. |
| [`list_tables`](index.html#awswrangler.timestream.list_tables)([database, boto3_session]) | List tables in timestream. |
| [`query`](index.html#awswrangler.timestream.query)(sql[, chunked, pagination_config, ...]) | Run a query and retrieve the result as a Pandas DataFrame. |
| [`wait_batch_load_task`](index.html#awswrangler.timestream.wait_batch_load_task)(task_id[, ...]) | Wait for the Timestream batch load task to complete. |
| [`write`](index.html#awswrangler.timestream.write)(df, database, table[, time_col, ...]) | Store a Pandas DataFrame into an Amazon Timestream table. |
| [`unload_to_files`](index.html#awswrangler.timestream.unload_to_files)(sql, path[, unload_format, ...]) | Unload query results to Amazon S3. |
| [`unload`](index.html#awswrangler.timestream.unload)(sql, path[, unload_format, ...]) | Unload query results to Amazon S3 and read the results as a Pandas DataFrame. |
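A sketch of writing time-series records and querying them back; the database and table names are placeholders:
```
import awswrangler as wr
import pandas as pd
from datetime import datetime

df = pd.DataFrame({"time": [datetime.now()], "cpu": [13.5], "host": ["host-1"]})
rejected = wr.timestream.write(df=df, database="my_db", table="my_table",
                               time_col="time", measure_col="cpu", dimensions_cols=["host"])
result = wr.timestream.query('SELECT * FROM "my_db"."my_table" ORDER BY time DESC LIMIT 10')
```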
### AWS Clean Rooms[¶](#aws-clean-rooms)
| | |
| --- | --- |
| [`read_sql_query`](index.html#awswrangler.cleanrooms.read_sql_query)(sql, membership_id, ...[, ...]) | Execute Clean Rooms Protected SQL query and return the results as a Pandas DataFrame. |
| [`wait_query`](index.html#awswrangler.cleanrooms.wait_query)(membership_id, query_id[, ...]) | Wait for the Clean Rooms protected query to end. |
### Amazon EMR[¶](#amazon-emr)
| | |
| --- | --- |
| [`build_spark_step`](index.html#awswrangler.emr.build_spark_step)(path[, args, deploy_mode, ...]) | Build the Step structure (dictionary). |
| [`build_step`](index.html#awswrangler.emr.build_step)(command[, name, ...]) | Build the Step structure (dictionary). |
| [`create_cluster`](index.html#awswrangler.emr.create_cluster)(subnet_id[, cluster_name, ...]) | Create an EMR cluster with instance fleets configuration. |
| [`get_cluster_state`](index.html#awswrangler.emr.get_cluster_state)(cluster_id[, boto3_session]) | Get the EMR cluster state. |
| [`get_step_state`](index.html#awswrangler.emr.get_step_state)(cluster_id, step_id[, ...]) | Get EMR step state. |
| [`submit_ecr_credentials_refresh`](index.html#awswrangler.emr.submit_ecr_credentials_refresh)(cluster_id, path) | Update internal ECR credentials. |
| [`submit_spark_step`](index.html#awswrangler.emr.submit_spark_step)(cluster_id, path[, args, ...]) | Submit Spark Step. |
| [`submit_step`](index.html#awswrangler.emr.submit_step)(cluster_id, command[, name, ...]) | Submit new job in the EMR Cluster. |
| [`submit_steps`](index.html#awswrangler.emr.submit_steps)(cluster_id, steps[, boto3_session]) | Submit a list of steps. |
| [`terminate_cluster`](index.html#awswrangler.emr.terminate_cluster)(cluster_id[, boto3_session]) | Terminate EMR cluster. |
### Amazon EMR Serverless[¶](#amazon-emr-serverless)
| | |
| --- | --- |
| [`create_application`](index.html#awswrangler.emr_serverless.create_application)(name, release_label[, ...]) | Create an EMR Serverless application. |
| [`run_job`](index.html#awswrangler.emr_serverless.run_job)(application_id, execution_role_arn, ...) | Run an EMR serverless job. |
| [`wait_job`](index.html#awswrangler.emr_serverless.wait_job)(application_id, job_run_id[, ...]) | Wait for the EMR Serverless job to finish. |
### Amazon CloudWatch Logs[¶](#amazon-cloudwatch-logs)
| | |
| --- | --- |
| [`read_logs`](index.html#awswrangler.cloudwatch.read_logs)(query, log_group_names[, ...]) | Run a query against AWS CloudWatchLogs Insights and convert the results to Pandas DataFrame. |
| [`run_query`](index.html#awswrangler.cloudwatch.run_query)(query, log_group_names[, ...]) | Run a query against AWS CloudWatchLogs Insights and wait for the results. |
| [`start_query`](index.html#awswrangler.cloudwatch.start_query)(query, log_group_names[, ...]) | Run a query against AWS CloudWatchLogs Insights. |
| [`wait_query`](index.html#awswrangler.cloudwatch.wait_query)(query_id[, boto3_session, ...]) | Wait for the query to end. |
| [`describe_log_streams`](index.html#awswrangler.cloudwatch.describe_log_streams)(log_group_name[, ...]) | List the log streams for the specified log group, return results as a Pandas DataFrame. |
| [`filter_log_events`](index.html#awswrangler.cloudwatch.filter_log_events)(log_group_name[, ...]) | List log events from the specified log group. |
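A short sketch of a CloudWatch Logs Insights query; the log group name and start date are placeholders:
```
import awswrangler as wr
import datetime

df = wr.cloudwatch.read_logs(
    query="fields @timestamp, @message | sort @timestamp desc | limit 20",
    log_group_names=["/aws/lambda/my-function"],
    start_time=datetime.datetime(2024, 1, 1),
)
```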
### Amazon QuickSight[¶](#amazon-quicksight)
| | |
| --- | --- |
| [`cancel_ingestion`](index.html#awswrangler.quicksight.cancel_ingestion)(ingestion_id[, ...]) | Cancel an ongoing ingestion of data into SPICE. |
| [`create_athena_data_source`](index.html#awswrangler.quicksight.create_athena_data_source)(name[, workgroup, ...]) | Create a QuickSight data source pointing to an Athena/Workgroup. |
| [`create_athena_dataset`](index.html#awswrangler.quicksight.create_athena_dataset)(name[, database, ...]) | Create a QuickSight dataset. |
| [`create_ingestion`](index.html#awswrangler.quicksight.create_ingestion)([dataset_name, dataset_id, ...]) | Create and starts a new SPICE ingestion on a dataset. |
| [`delete_all_dashboards`](index.html#awswrangler.quicksight.delete_all_dashboards)([account_id, ...]) | Delete all dashboards. |
| [`delete_all_data_sources`](index.html#awswrangler.quicksight.delete_all_data_sources)([account_id, ...]) | Delete all data sources. |
| [`delete_all_datasets`](index.html#awswrangler.quicksight.delete_all_datasets)([account_id, ...]) | Delete all datasets. |
| [`delete_all_templates`](index.html#awswrangler.quicksight.delete_all_templates)([account_id, ...]) | Delete all templates. |
| [`delete_dashboard`](index.html#awswrangler.quicksight.delete_dashboard)([name, dashboard_id, ...]) | Delete a dashboard. |
| [`delete_data_source`](index.html#awswrangler.quicksight.delete_data_source)([name, data_source_id, ...]) | Delete a data source. |
| [`delete_dataset`](index.html#awswrangler.quicksight.delete_dataset)([name, dataset_id, ...]) | Delete a dataset. |
| [`delete_template`](index.html#awswrangler.quicksight.delete_template)([name, template_id, ...]) | Delete a template. |
| [`describe_dashboard`](index.html#awswrangler.quicksight.describe_dashboard)([name, dashboard_id, ...]) | Describe a QuickSight dashboard by name or ID. |
| [`describe_data_source`](index.html#awswrangler.quicksight.describe_data_source)([name, data_source_id, ...]) | Describe a QuickSight data source by name or ID. |
| [`describe_data_source_permissions`](index.html#awswrangler.quicksight.describe_data_source_permissions)([name, ...]) | Describe a QuickSight data source permissions by name or ID. |
| [`describe_dataset`](index.html#awswrangler.quicksight.describe_dataset)([name, dataset_id, ...]) | Describe a QuickSight dataset by name or ID. |
| [`describe_ingestion`](index.html#awswrangler.quicksight.describe_ingestion)(ingestion_id[, ...]) | Describe a QuickSight ingestion by ID. |
| [`get_dashboard_id`](index.html#awswrangler.quicksight.get_dashboard_id)(name[, account_id, ...]) | Get QuickSight dashboard ID given a name and fails if there is more than 1 ID associated with this name. |
| [`get_dashboard_ids`](index.html#awswrangler.quicksight.get_dashboard_ids)(name[, account_id, ...]) | Get QuickSight dashboard IDs given a name. |
| [`get_data_source_arn`](index.html#awswrangler.quicksight.get_data_source_arn)(name[, account_id, ...]) | Get QuickSight data source ARN given a name and fails if there is more than 1 ARN associated with this name. |
| [`get_data_source_arns`](index.html#awswrangler.quicksight.get_data_source_arns)(name[, account_id, ...]) | Get QuickSight Data source ARNs given a name. |
| [`get_data_source_id`](index.html#awswrangler.quicksight.get_data_source_id)(name[, account_id, ...]) | Get QuickSight data source ID given a name and fails if there is more than 1 ID associated with this name. |
| [`get_data_source_ids`](index.html#awswrangler.quicksight.get_data_source_ids)(name[, account_id, ...]) | Get QuickSight data source IDs given a name. |
| [`get_dataset_id`](index.html#awswrangler.quicksight.get_dataset_id)(name[, account_id, boto3_session]) | Get QuickSight Dataset ID given a name and fails if there is more than 1 ID associated with this name. |
| [`get_dataset_ids`](index.html#awswrangler.quicksight.get_dataset_ids)(name[, account_id, ...]) | Get QuickSight dataset IDs given a name. |
| [`get_template_id`](index.html#awswrangler.quicksight.get_template_id)(name[, account_id, ...]) | Get QuickSight template ID given a name and fails if there is more than 1 ID associated with this name. |
| [`get_template_ids`](index.html#awswrangler.quicksight.get_template_ids)(name[, account_id, ...]) | Get QuickSight template IDs given a name. |
| [`list_dashboards`](index.html#awswrangler.quicksight.list_dashboards)([account_id, boto3_session]) | List dashboards in an AWS account. |
| [`list_data_sources`](index.html#awswrangler.quicksight.list_data_sources)([account_id, boto3_session]) | List all QuickSight Data sources summaries. |
| [`list_datasets`](index.html#awswrangler.quicksight.list_datasets)([account_id, boto3_session]) | List all QuickSight datasets summaries. |
| [`list_groups`](index.html#awswrangler.quicksight.list_groups)([namespace, account_id, ...]) | List all QuickSight Groups. |
| [`list_group_memberships`](index.html#awswrangler.quicksight.list_group_memberships)(group_name[, ...]) | List all QuickSight Group memberships. |
| [`list_iam_policy_assignments`](index.html#awswrangler.quicksight.list_iam_policy_assignments)([status, ...]) | List IAM policy assignments in the current Amazon QuickSight account. |
| [`list_iam_policy_assignments_for_user`](index.html#awswrangler.quicksight.list_iam_policy_assignments_for_user)(user_name) | List all the IAM policy assignments. |
| [`list_ingestions`](index.html#awswrangler.quicksight.list_ingestions)([dataset_name, dataset_id, ...]) | List the history of SPICE ingestions for a dataset. |
| [`list_templates`](index.html#awswrangler.quicksight.list_templates)([account_id, boto3_session]) | List all QuickSight templates. |
| [`list_users`](index.html#awswrangler.quicksight.list_users)([namespace, account_id, ...]) | Return a list of all of the Amazon QuickSight users belonging to this account. |
| [`list_user_groups`](index.html#awswrangler.quicksight.list_user_groups)(user_name[, namespace, ...]) | List the Amazon QuickSight groups that an Amazon QuickSight user is a member of. |
### AWS STS[¶](#aws-sts)
| | |
| --- | --- |
| [`get_account_id`](index.html#awswrangler.sts.get_account_id)([boto3_session]) | Get Account ID. |
| [`get_current_identity_arn`](index.html#awswrangler.sts.get_current_identity_arn)([boto3_session]) | Get current user/role ARN. |
| [`get_current_identity_name`](index.html#awswrangler.sts.get_current_identity_name)([boto3_session]) | Get current user/role name. |
### AWS Secrets Manager[¶](#aws-secrets-manager)
| | |
| --- | --- |
| [`get_secret`](index.html#awswrangler.secretsmanager.get_secret)(name[, boto3_session]) | Get secret value. |
| [`get_secret_json`](index.html#awswrangler.secretsmanager.get_secret_json)(name[, boto3_session]) | Get JSON secret value. |
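A minimal sketch; the secret names are placeholders:
```
import awswrangler as wr

plain = wr.secretsmanager.get_secret("my-plaintext-secret")   # raw secret value
creds = wr.secretsmanager.get_secret_json("my-json-secret")   # parsed JSON secret as a dict
```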
### Amazon Chime[¶](#amazon-chime)
| | |
| --- | --- |
| [`post_message`](index.html#awswrangler.chime.post_message)(webhook, message) | Send a message to an existing Chime chat room. |
### Typing[¶](#typing)
| | |
| --- | --- |
| [`GlueTableSettings`](index.html#awswrangler.typing.GlueTableSettings) | Typed dictionary defining the settings for the Glue table. |
| [`AthenaCTASSettings`](index.html#awswrangler.typing.AthenaCTASSettings) | Typed dictionary defining the settings for using CTAS (Create Table As Statement). |
| [`AthenaUNLOADSettings`](index.html#awswrangler.typing.AthenaUNLOADSettings) | Typed dictionary defining the settings for using UNLOAD. |
| [`AthenaCacheSettings`](index.html#awswrangler.typing.AthenaCacheSettings) | Typed dictionary defining the settings for using cached Athena results. |
| [`AthenaPartitionProjectionSettings`](index.html#awswrangler.typing.AthenaPartitionProjectionSettings) | Typed dictionary defining the settings for Athena Partition Projection. |
| [`RaySettings`](index.html#awswrangler.typing.RaySettings) | Typed dictionary defining the settings for distributing calls using Ray. |
| [`RayReadParquetSettings`](index.html#awswrangler.typing.RayReadParquetSettings) | Typed dictionary defining the settings for distributing reading calls using Ray. |
| [`_S3WriteDataReturnValue`](index.html#awswrangler.typing._S3WriteDataReturnValue) | Typed dictionary defining the dictionary returned by S3 write functions. |
| [`_ReadTableMetadataReturnValue`](index.html#awswrangler.typing._ReadTableMetadataReturnValue)(columns_types, ...) | Named tuple defining the return value of the `read_*_metadata` functions. |
### Global Configurations[¶](#global-configurations)
| | |
| --- | --- |
| [`reset`](index.html#awswrangler.config.reset)([item]) | Reset one or all (if None is received) configuration values. |
| [`to_pandas`](index.html#awswrangler.config.to_pandas)() | Load all configurations on a Pandas DataFrame. |
### Engine and Memory Format[¶](#engine-and-memory-format)
| | |
| --- | --- |
| [`Engine`](index.html#awswrangler._distributed.Engine)() | Execution engine configuration class. |
| [`MemoryFormat`](index.html#awswrangler._distributed.MemoryFormat)() | Memory format configuration class. |
### Distributed - Ray[¶](#distributed-ray)
| | |
| --- | --- |
| [`initialize_ray`](index.html#awswrangler.distributed.ray.initialize_ray)([address, redis_password, ...]) | Connect to an existing Ray cluster or start one and connect to it. |
Package ‘MVNBayesian’
October 12, 2022
Type Package
Title Bayesian Analysis Framework for MVN (Mixture) Distribution
Version 0.0.8-11
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Tools of Bayesian analysis framework using the method
suggested by Berger (1985) <doi:10.1007/978-1-4757-4286-2> for
multivariate normal (MVN) distribution and multivariate normal
mixture (MixMVN) distribution:
a) calculating Bayesian posteriori of (Mix)MVN distribution;
b) generating random vectors of (Mix)MVN distribution;
c) Markov chain Monte Carlo (MCMC) for (Mix)MVN distribution.
Imports mvtnorm, plyr, stats
Suggests rgl, Rfast
License GPL-2
URL https://github.com/CubicZebra/MVNBayesian
Encoding UTF-8
LazyData true
RoxygenNote 6.1.0
NeedsCompilation no
Repository CRAN
Date/Publication 2018-08-16 10:40:07 UTC
R topics documented:
MVNBayesian-packag... 2
Ascending_Nu... 3
dataset... 4
dataset... 4
MatrixAlternativ... 5
MixMVN_BayesianPosterior... 5
MixMVN_GibbsSample... 7
MixMVN_MCM... 8
MVN_BayesianIterato... 10
MVN_BayesianPosterior... 12
MVN_FConditiona... 13
MVN_GibbsSample... 14
MVN_MCM... 16
MVNBayesian-package Bayesian Analysis Framework for MVN (Mixture) Distribution
Description
Tools of Bayesian analysis framework using the method suggested by Berger (1985) <doi:10.1007/978-
1-4757-4286-2> for multivariate normal (MVN) distribution and multivariate normal mixture (MixMVN)
distribution: a) calculating Bayesian posteriori of (Mix)MVN distribution; b) generating random
vectors of (Mix)MVN distribution; c) Markov chain Monte Carlo (MCMC) for (Mix)MVN distri-
bution.
Details
This package aims to provide an easy approach to the MVN (mixture) distribution within a Bayesian
analysis framework. The Bayesian posteriori MVN (mixture) distribution can be calculated from
given priori MVN (mixture) information. The conjugate property of the MVN distribution makes
parameter estimation with the Bayesian iterator effective. Joint and marginal probability densities of
a certain MVN (mixture) can be obtained through the random vector generator, using Gibbs sampling.
Conditional probability densities of a certain MVN (mixture) can be simulated using the MCMC
method.
Author(s)
<NAME>
Maintainer: <NAME> <<EMAIL>>
References
"Statistical Inference" by <NAME>. <NAME>;
"Statistical Decision Theory and Bayesian Analysis" by <NAME>;
"Matrix Computation" by <NAME>. <NAME>;
"Bayesian Statistics" by WEI Laisheng;
"Machine Learning" by NAKAGAWA Hiroshi.
See Also
stats, mvtnorm
Examples
library(Rfast)
library(mvtnorm)
library(plyr)
head(dataset1)
BP <- MVN_BayesianPosteriori(dataset1)
BP
BP_Gibbs <- MVN_GibbsSampler(5000, BP)
colMeans(BP_Gibbs)
colrange(BP_Gibbs)
result <- MVN_MCMC(BP, 5000, c(1), c(77.03))
result$Accept
Ascending_Num Renumbering vector by elemental frequency
Description
Renumbering vector by elemental frequency in ascending order.
Usage
# Tidy vector by elemental frequency:
Ascending_Num(data)
Arguments
data An 1d-vector.
Value
return a renumbered vector by elemental frequency. Factors will be positive integers arrayed in
ascending order.
Examples
library(plyr)
x <- c(1,2,2,2,2,2,2,2,3,3,3,1,3,3,3)
x
Ascending_Num(x)
dataset1 Dataset for MVN test
Description
Dataset built for MVN test, which contains 3 variables and 25 observations.
Usage
data("dataset1")
Format
A data frame with 25 observations on 3 independent variables, named as fac1, fac2 and fac3.
fac1 The 1st factor.
fac2 The 2nd factor.
fac3 The 3rd factor.
Examples
dataset1
dataset2 Dataset for MVN mixture test
Description
Dataset built for MVN mixture test, which contains 4 variables (the first 4 columns), clustering (the
last column) and 96 observations.
Usage
data("dataset2")
Format
A data frame with 96 pseudo-observations generated by random number generator. All observa-
tions come from 3 different centers which have been marked in the last column "species". More
specifically, data of species=1 comes from the center (1,1,1,1); data of species=2 comes from the
center (2,2,2,0); data of species=3 comes from the center (1,0,2,2).
dimen1 the 1st variable
dimen2 the 2nd variable
dimen3 the 3rd variable
dimen4 the 4th variable
species clustering label
Examples
dataset2
MatrixAlternative Interchanging specified rows and columns
Description
Interchange all elements between two specified rows and columns in a matrix.
Usage
# A matrix-like data
MatrixAlternative(data, sub, rep)
Arguments
data A matrix to be processed.
sub A positive integer. The first selected dimension.
rep A positive integer. The second selected dimension. Default value is 1.
Value
return a matrix with interchanged rows and columns in two specified dimensions.
Examples
library(plyr)
M <- matrix(1:9,3,3,1)
M
MatrixAlternative(M, 2)
MixMVN_BayesianPosteriori
Calculate Bayesian posteriori MVN mixture distribution
Description
The function to export the mixture probabilities, the mean vectors and covariance matrices of
Bayesian posteriori MVN mixture distribution on the basis of given priori information (priori MVN
mixture) and observation data (a design matrix containing all variables).
Usage
# paramtric columns-only as input data:
# data <- dataset2[,1:4]
# Specify species to get parameters of MVN mixture model:
MixMVN_BayesianPosteriori(data, species, idx)
Arguments
data A data.frame or matrix-like data: observations should be arrayed in rows while
variables should be arrayed in columns.
species A positive integer. The number of clusters for the imported data. It will only be
called once, by the next argument idx, through the kmeans clustering algorithm in this
function. Default value is 1, which means no clustering algorithm is used.
idx A vector-like data to import for accepting the clustering result. Default value is
generated by kmeans clustering. Notice that the length of idx should be the same as
the number of observations (rows) in data.
Value
return a matrix-like result containing all parameters of Bayesian posteriori MVN mixture distri-
bution: Clusters are arrayed in rows, while the mixture probabilities, posteriori mean vectors and
posteriori covariance matrices are arrayed in columns.
See Also
kmeans, MVN_BayesianPosteriori
Examples
library(plyr)
# Design matrix should only contain columns of variables
# Export will be a matrix-like data
# Using kmeans (default) clustering algorithm
data_dim <- dataset2[,1:4]
result <- MixMVN_BayesianPosteriori(data=data_dim, species=3)
result
# Get the parameters of the cluster1:
result[1,]
# Get the mixture probability of cluster2:
# (Attention to the difference between
# result[2,1][[1]] and result[2,1])
result[2,1][[1]]
# Get the mean vector of cluster1:
result[1,2][[1]]
# Get the covariance matrix of cluster3:
result[3,3][[1]]
MixMVN_GibbsSampler Gibbs sampler for MVN mixture distribution
Description
Generating random vectors on the basis of a given MVN mixture distribution, through Gibbs sam-
pling algorithm or matrix factorization.
Usage
# Bayesian posteriori MVN mixture model as input data:
# data <- MixMVN_BayesianPosteriori(dataset2[,1:4], species=3)
# Generate random vectors based on Bayesian posteriori MVN mixture:
MixMVN_GibbsSampler(n, data, random_method = c("Gibbs", "Fast"), reject_rate=0, ...)
Arguments
n A positive integer. The number of random vectors to be generated.
data A matrix-like data which contains the mixture probability, mean vector and co-
variance matrix for each cluster in each row.
random_method The method to generate random vectors. Options are "Gibbs": Gibbs sampling
for MVN mixture model; and "Fast": call rmvnorm() to generate random vec-
tors based on matrix factorization.
reject_rate A numeric value which takes effect only if the random_method is "Gibbs": deter-
mines the items discarded in the burn-in period, as a ratio. Default value is 0. For
details see MVN_GibbsSampler.
... Other arguments to control the process in Gibbs sampling if the random_method
is "Gibbs".
Details
It is recommended to use the "Fast" random method due to its higher efficiency. The time complex-
ity of the "Gibbs" method is O(k*n), where k is the dimensionality of the MVN mixture model and n
is the number of random vectors to be generated, while that of the "Fast" method is only O(n), without
considering the effect of the burn-in period. This discrepancy becomes even more significant when
MCMC methods are used for further analysis, in which random vectors are generated every time
conditions are set.
Value
return a series of random vectors based on the given MVN mixture distribution.
See Also
Ascending_Num, MixMVN_BayesianPosteriori, MVN_BayesianPosteriori
Examples
library(plyr)
library(mvtnorm)
library(stats)
# Use dataset2 for demonstration. Get parameters of Bayesian
# posteriori multivariate normal mixture distribution
head(dataset2)
dataset2_par <- dataset2[,1:4] # only parameter columns are premitted
MixBPos <- MixMVN_BayesianPosteriori(dataset2_par, species=3)
MixBPos
# Generate random vectors using Gibbs sampling:
MixBPos_Gibbs <- MixMVN_GibbsSampler(5000, MixBPos, random_method = "Gibbs")
head(MixBPos_Gibbs)
# Compare the generation speed of "Gibbs" to that of "Fast"
MixBPos_Fast <- MixMVN_GibbsSampler(5000, MixBPos, random_method = "Fast")
head(MixBPos_Fast)
# Visualization by clusters:
library(rgl)
dimen1 <- MixBPos_Gibbs[,1]
dimen2 <- MixBPos_Gibbs[,2]
dimen3 <- MixBPos_Gibbs[,3]
dimen4 <- MixBPos_Gibbs[,4]
plot3d(x=dimen1, y=dimen2, z=dimen3, col=MixBPos_Gibbs[,5], size=2)
MixMVN_MCMC MCMC simulation for MVN mixture distribution
Description
Function to get MCMC simulation results based on the imported MVN mixture distribution. It
is commonly used for inquiring about a specified conditional probability of the MVN mixture
distribution calculated through the Bayesian posteriori.
Usage
# Bayesian posteriori mix MVN as input data:
# data <- MixMVN_BayesianPosteriori(dataset2[,1:4], 3)
# run MCMC simulation based on Bayesian posteriori mix MVN:
MixMVN_MCMC(data, steps, pars, values, tol, random_method, ...)
Arguments
data A matrix-like data containing the mixture probability, mean vector and covari-
ance matrix for each cluster in each row.
steps A positive integer. The number of random vectors to be generated for the MCMC
step.
pars An integer vector to declare fixed dimension(s). For example, if the desired di-
mensions are 1st=7 and 3rd=10, set this argument as c(1,3).
values A numeric vector to assign value(s) to declared dimension(s). For example if
the desired dimensions are 1st=7 and 3rd=10, set this argument as c(7,10).
tol Tolerance. A numeric value to control the generated vectors to be accepted or
rejected. Criterion uses Euclidean distance in declared dimension(s). Default
value is 0.3.
random_method The method to generate random vectors. Options are "Gibbs": Gibbs sampling
for MVN mixture model; and "Fast": call rmvnorm() to generate random vec-
tors based on matrix factorization. Default option is "Fast".
... Other arguments to control the process in Gibbs sampling if the random_method
is "Gibbs".
Value
return a list which contains:
AcceptRate Acceptance of declared conditions of MCMC
MCMCdata All generated random vectors in MCMC step based on MVN mixture distribu-
tion
Accept Subset of accepted sampling in MCMCdata
Reject Subset of rejected sampling in MCMCdata
See Also
MixMVN_BayesianPosteriori, MixMVN_GibbsSampler, MVN_GibbsSampler, MVN_FConditional
Examples
library(plyr)
library(mvtnorm)
library(stats)
# dataset2 has 4 parameters: dimen1, dimen2, dimen3 and dimen4:
head(dataset2)
dataset2_dim <- dataset2[,1:4] # extract parametric columns
# Get posteriori parameters of dataset2 using kmeans 3 clustering:
MixBPos <- MixMVN_BayesianPosteriori(dataset2_dim, 3)
# If we want to know when dimen1=1, which clusters are accepted, run:
MixBPos_MCMC <- MixMVN_MCMC(MixBPos, steps=5000, pars=c(1), values=c(1), tol=0.3)
MixBPos_MCMC$AcceptRate
result <- MixBPos_MCMC$MCMCdata
head(result)
# count accepted samples by clustering:
count(result[which(result[,7]==1),5])
library(rgl)
# Visualization using plot3d() if necessary:
# Clustering result in the rest 3 dimensions:
plot3d(result[,2], result[,3], z=result[,4], col=result[,5], size=2)
# Acceptance rejection visualization:
plot3d(result[,2], result[,3], z=result[,4], col=result[,7]+1, size=2)
MVN_BayesianIterator Parameter estimation using Bayesian iteration
Description
Function to execute parameter estimation for MVN distribution, under Bayesian analysis frame-
work.
Usage
# Get parameters of Bayesian posteriori MVN:
MVN_BayesianIterator(data, pri_mean=colMeans(data), Gibbs_nums=5000,
pseudo_nums=dim(data)[1], threshold=1e-04, iteration=100, ...)
Arguments
data A data.frame or matrix-like data: observations should be arrayed in rows while
variables should be arrayed in columns.
pri_mean A numeric vector to assign priori mean for MVN. Default value applies colMeans()
to data.
Gibbs_nums A positive integer. The number of random vectors to be generated for each
iteration step. Default value is 5000.
pseudo_nums A positive integer. The argument to determine the number of generated vectors
used for each iteration step. Default value keeps the same scale as the input data.
Notice that a value that is too small can result in a singular matrix.
threshold A numeric value to control stopping of the iteration loop. Default value is 0.0001.
When the Euclidean distance between the mean vectors of the pseudo-data (the last
pseudo_nums items) and the Bayesian posteriori is less than threshold, the iteration
stops.
iteration A positive integer. Argument to assign the maximum steps for iteration. Default
value is 100 after which the iteration loop will compulsively exit.
... Other arguments to control the process in Gibbs sampling.
Details
Because the MVN distribution possesses the conjugate property in the Bayesian analysis framework, the
convergence of the Bayesian iterator for the MVN distribution can be ensured, accompanied by the
shrinking of the 2-norm of the Bayesian posteriori covariance matrix. But pay attention to the fact that
the pseudo-data introduces randomness, so the argument pseudo_nums should be set carefully.
Value
return a double level list containing Bayesian posteriori after iteration process:
mean Bayesian posteriori mean vector
var Bayesian posteriori covariance matrix
Note
If the parameter values are the only thing we are concerned with, this iterator makes sense, since it
can significantly help decrease the scale of the covariance matrix and obtain a more reliable estimation
of the parameters. However, in many cases the correlations within a certain group of parameters
are more valuable, and these are usually indicated by the covariance matrix.
See Also
MVN_BayesianPosteriori, MVN_GibbsSampler, MVN_FConditional, MatrixAlternative
Examples
library(mvtnorm)
# Bayesian posteriori before iteration using dataset1 as example,
# c(80, 16, 3) as priori mean:
# View 2-norm of covariance matrix of Bayesian posteriori:
BPos_init <- MVN_BayesianPosteriori(dataset1, c(80,16,3))
BPos_init
norm(as.matrix(BPos_init$var), type = "2")
# Bayesian posteriori after iteration using c(80,16,3) as priori
# Using 30 last samples generated by GibbsSampler for each step:
BPos_fina1 <- MVN_BayesianIterator(dataset1, c(80,16,3), 5000, 30)
BPos_fina1
norm(as.matrix(BPos_fina1$var), type = "2")
# A too small pseudo_nums setting can result in a singular system, try:
MVN_BayesianIterator(dataset1, pseudo_nums=3)
MVN_BayesianPosteriori
Calculate Bayesian posteriori MVN distribution
Description
The function to export the mean vector and covariance matrix of Bayesian posteriori MVN distri-
bution on the basis of given priori information (priori MVN) and observation data (a design matrix
containing all variables).
Usage
# Given the data as design matrix, priori mean vector and priori covariance
# matrix, this function will export a list which contains mean ($mean) and
# covariance ($var) of Bayesian posteriori multivariate normal distribution.
MVN_BayesianPosteriori(data, pri_mean, pri_var)
Arguments
data A data.frame or matrix-like data: observations should be arrayed in rows while
variables should be arrayed in columns.
pri_mean A numeric vector to assign priori mean for MVN. Default value applies colMeans()
to data.
pri_var A matrix-like parameter to assign priori covariance matrix. Default value uses
unit matrix.
Value
return a double level list containing:
mean mean vector of Bayesian posteriori MVN distribution
var covariance of Bayesian posteriori MVN distribution
Note
It is strongly recommended that users have some prior knowledge of ill-conditioned systems
before using this function. Simply put, an ill-conditioned system, or singular matrix, is caused by a) in-
sufficient data or b) near-linear dependency between two parameters, either of which can result
in an excessively small eigenvalue and thus an ill-conditioned (singular) system. Therefore users
must first diagnose their data to confirm that it contains enough observations, and that the
degrees of freedom are strictly equal to the number of parameters as well. Additionally, for the
argument pri_var, a real symmetric matrix is required by definition.
Examples
# Demo using dataset1:
head(dataset1)
BPos <- MVN_BayesianPosteriori(dataset1, c(80,16,3))
BPos$mean
BPos$var
# Singular system caused by insufficient data
eigen(var(dataset1[1:3,]))$values
rcond(var(dataset1[1:3,]))
eigen(var(dataset1[1:6,]))$values
rcond(var(dataset1[1:6,]))
# Singular system caused by improper degree of freedom
K <- cbind(dataset1, dataset1[,3]*(-2)+3)
eigen(var(K[,2:4]))$values
rcond(var(K[,2:4]))
MVN_FConditional Calculate full conditional normal ditribution of MVN
Description
Function to export parameters of the full conditional normal distribution on the basis of a given MVN distri-
bution, the undecided dimension, as well as all values in the rest dimensions.
Usage
# Bayesian posteriori as input data:
# data <- MVN_BayesianPosteriori(dataset1, c(80,16,3))
# inquire parameters of full-conditional distribution based on Bayesian posteriori:
MVN_FConditional(data, variable, z)
Arguments
data A double level list containing all parameters of MVN distribution: mean vector
(data$mean) and covariance matrix (data$var).
variable An integer to specify the undecided dimension.
z An n-dimensional vector to assign conditions (n = dimension of the given MVN distribution).
It should be noted that the value in the dimension specified by variable doesn’t
participate in the calculation.
Details
It can be proved that any full conditional distribution from a given MVN will degenerate to a
1d-normal distribution.
Value
return a double level list containing the following parameters of the full conditional normal distribution
of the given MVN in the specified dimension:
mean a numeric mean of a normal distribution
var a numeric variance of a normal distribution
See Also
MVN_BayesianPosteriori, MatrixAlternative
Examples
head(dataset1)
BPos <- MVN_BayesianPosteriori(dataset1, c(80,16,3))
BPos # Bayesian Posteriori
result <- MVN_FConditional(BPos, variable = 1, z=c(75, 13, 4))
result$mean
class(result$mean)
result$var
class(result$var)
# compare the following results:
MVN_FConditional(BPos, variable = 2, z=c(75, 13, 4))
MVN_FConditional(BPos, variable = 2, z=c(75, 88, 4))
MVN_FConditional(BPos, variable = 1, z=c(75, 88, 4))
MVN_GibbsSampler Gibbs sampler for MVN distribution
Description
Generating random vectors on the basis of a given MVN distribution, through Gibbs sampling
algorithm.
Usage
# Bayesian posteriori as data
# data <- MVN_BayesianPosteriori(dataset1)
# Using Gibbs sampler to generate random vectors based on Bayesian posteriori:
MVN_GibbsSampler(n, data, initial, reject_rate, burn)
Arguments
n A positive integer. The number of random vectors to be generated.
data A double level list which contains the mean vector (data$mean) and the covari-
ance matrix (data$var) of a given MVN distribution.
initial Initial vector where the Markov chain starts. Default value uses a random vector
generated by rmvnorm().
reject_rate A numeric to control the burn-in period by ratio. Default value is 0.2, namely the
first 20% of generated vectors will be rejected. If this argument is customized, the
next argument burn should keep its default value.
burn A numeric to control the burn-in period by count. If this argument is customized,
the final result will be generated by dropping the first n
vectors (n=burn).
Details
Some literature also suggests using the mean or mode of the priori as the initial vector. Users can
customize this setting according to their own needs.
Value
return a series of random vectors based on the given MVN distribution.
See Also
MVN_FConditional, MatrixAlternative
Examples
library(mvtnorm)
# Get parameters of Bayesian posteriori multivariate normal distribution
BPos <- MVN_BayesianPosteriori(dataset1)
BPos
# Using previous result (BPos) to generate random vectors through Gibbs
# sampling: 7000 observations, start from c(1,1,2), use 0.3 burn-in rate
BPos_Gibbs <- MVN_GibbsSampler(7000, BPos, initial=c(1,1,2), 0.3)
tail(BPos_Gibbs)
# Check for convergence of Markov chain
BPos$mean
colMeans(BPos_Gibbs)
BPos$var
var(BPos_Gibbs)
# 3d Visualization:
library(rgl)
fac1 <- BPos_Gibbs[,1]
fac2 <- BPos_Gibbs[,2]
fac3 <- BPos_Gibbs[,3]
plot3d(x=fac1, y=fac2, z=fac3, col="red", size=2)
MVN_MCMC MCMC simulation for MVN distribution
Description
Function to get MCMC simulation results based on the imported MVN distribution. It is com-
monly used for inquiring about a specified conditional probability of the MVN distribution calculated
through the Bayesian posteriori.
Usage
# Bayesian posteriori as input data
# data <- MVN_BayesianPosteriori(dataset1, pri_mean=c(80,16,3))
# run MCMC simulation using Bayesian posteriori:
MVN_MCMC(data, steps, pars, values, tol, ...)
Arguments
data A double level list which contains the mean vector (data$mean) and the covari-
ance matrix (data$var) of a given MVN distribution.
steps A positive integer. The number of random vectors to be generated for the MCMC
step.
pars An integer vector to declare fixed dimension(s). For example, if the desired di-
mensions are 1st=7 and 3rd=10, set this argument as c(1,3).
values A numeric vector to assign value(s) to declared dimension(s). For example if
the desired dimensions are 1st=7 and 3rd=10, set this argument as c(7,10).
tol Tolerance. A numeric value to control the generated vectors to be accepted or
rejected. Criterion uses Euclidean distance in declared dimension(s). Default
value is 0.3.
... Other arguments to control the process in Gibbs sampling.
Value
return a list which contains:
AcceptRate Acceptance of declared conditions of MCMC
MCMCdata All generated random vectors in MCMC step based on MVN distribution
Accept Subset of accepted sampling in MCMCdata
Reject Subset of rejected sampling in MCMCdata
See Also
MVN_GibbsSampler, MVN_FConditional
Examples
library(mvtnorm)
library(plyr)
# dataset1 has three parameters: fac1, fac2 and fac3:
head(dataset1)
# Get posteriori parameters of dataset1 using prior of c(80,16,3):
BPos <- MVN_BayesianPosteriori(dataset1, pri_mean=c(80,16,3))
# If we want to know when fac1=78, how fac2 responses to fac3, run:
BPos_MCMC <- MVN_MCMC(BPos, steps=8000, pars=c(1), values=c(78), tol=0.3)
MCMC <- BPos_MCMC$MCMCdata
head(MCMC)
# Visualization using plot3d() if necessary:
library(rgl)
plot3d(MCMC[,1], MCMC[,2], z=MCMC[,3], col=MCMC[,5]+1, size=2)
# Visualization: 2d scatter plot
MCMC_2d <- BPos_MCMC$Accept
head(MCMC_2d)
plot(MCMC_2d[,3], MCMC_2d[,2], pch=20, col="red", xlab = "fac3", ylab = "fac2")
# Compared to the following scatter plot when fac1 is not fixed:
plot(BPos_MCMC$MCMCdata[,3], BPos_MCMC$MCMCdata[,2], pch=20, col="red", xlab = "fac3",
ylab = "fac2") |
gollum | hex | Erlang | Toggle Theme
gollum v0.3.3
API Reference
===
Modules
---
[Gollum](Gollum.html)
Robots.txt parser with caching. Modelled after Kryten
[Gollum.Cache](Gollum.Cache.html)
Caches the robots.txt files from different hosts in memory
[Gollum.Fetcher](Gollum.Fetcher.html)
In charge of fetching the actual robots.txt files
[Gollum.Host](Gollum.Host.html)
Represents one host's robots.txt files
[Gollum.Parser](Gollum.Parser.html)
Parses a robots.txt file
gollum v0.3.3 Gollum
===
Robots.txt parser with caching. Modelled after Kryten.
Usage of [`Gollum`](#content) would simply be to call [`Gollum.crawlable?/3`](Gollum.html#crawlable?/3) to obtain whether a certain URL is permitted for the specified user agent.
[`Gollum`](#content) is an OTP app (For the cache) so just remember to specify it in the
`extra_applications` key in your `mix.exs` to ensure it is started.
[`Gollum`](#content) allows for some configuration in your `config.exs` file. The following shows their default values. They are all optional.
```
config :gollum,
name: Gollum.Cache, # Name of the Cache GenServer
refresh_secs: 86_400, # Amount of time before the robots.txt will be refetched
lazy_refresh: false, # Whether to setup a timer that auto-refetches, or to only refetch when requested
user_agent: "Gollum" # User agent to use when sending the GET request for the robots.txt
```
You can also setup a [`Gollum.Cache`](Gollum.Cache.html) manually using [`Gollum.Cache.start_link/1`](Gollum.Cache.html#start_link/1)
and add it to your supervision tree.
Summary
===
[Functions](#functions)
---
[crawlable?(user_agent, url, opts \\ [])](#crawlable?/3)
Returns whether a url is permitted. `false` will be returned if an error occurs
Functions
===
```
crawlable?([binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [keyword](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) ::
:crawlable | :uncrawlable | :undefined
```
Returns whether a url is permitted. `false` will be returned if an error occurs.
Options
---
* `name` - The name of the GenServer. Default value is [`Gollum.Cache`](Gollum.Cache.html).
Any other options passed will be passed to the internal `Cache.start_link/1` call.
Examples
---
```
iex> Gollum.crawlable?("hello", "https://google.com/")
:crawlable iex> Gollum.crawlable?("hello", "https://google.com/m/")
:uncrawlable
```
gollum v0.3.3 Gollum.Cache
===
Caches the robots.txt files from different hosts in memory.
Add this module to your supervision tree. Use this module to perform fetches of the robots.txt and automatic caching of results. It also makes sure that two identical requests don't happen at the same time.
Summary
===
[Functions](#functions)
---
[child_spec(init_arg)](#child_spec/1)
Returns a specification to start this module under a supervisor
[fetch(host, opts \\ [])](#fetch/2)
Fetches the robots.txt from a host and stores it in the cache.
It will only perform the HTTP request if there isn't any current data in the cache, the data is too old (specified in the `refresh_secs` option in `start_link/2`) or when the
`force` flag is set. This function is useful if you know which hosts you need to request beforehand
[get(host, opts \\ [])](#get/2)
Gets the [`Gollum.Host`](Gollum.Host.html) struct for the specified host from the cache
[init(init_arg)](#init/1)
Invoked when the server is started. `start_link/3` or `start/3` will block until it returns
[start_link(opts \\ [])](#start_link/1)
Starts up the cache
Functions
===
Returns a specification to start this module under a supervisor.
See [`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html).
```
fetch([binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [keyword](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: :ok | {:error, [term](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()}
```
Fetches the robots.txt from a host and stores it in the cache.
It will only perform the HTTP request if there isn't any current data in the cache, the data is too old (specified in the `refresh_secs` option in `start_link/2`) or when the
`force` flag is set. This function is useful if you know which hosts you need to request beforehand.
Options
---
* `name` - The name of the GenServer. Default value is [`Gollum.Cache`](#content).
* `async` - Whether this call is async. If the call is async, `:ok` is always returned. The default value is `false`.
* `force` - If the cache has already fetched from the host, this flag determines whether it should force a refresh. Default is `false`.
```
get([binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [keyword](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: [Gollum.Host.t](Gollum.Host.html#t:t/0)() | nil
```
Gets the [`Gollum.Host`](Gollum.Host.html) struct for the specified host from the cache.
Options
---
* `name` - The name of the GenServer. Default value is [`Gollum.Cache`](#content).
Invoked when the server is started. `start_link/3` or `start/3` will block until it returns.
`init_arg` is the argument term (second argument) passed to `start_link/3`.
Returning `{:ok, state}` will cause `start_link/3` to return
`{:ok, pid}` and the process to enter its loop.
Returning `{:ok, state, timeout}` is similar to `{:ok, state}`,
except that it also sets a timeout. See the "Timeouts" section in the module documentation for more information.
Returning `{:ok, state, :hibernate}` is similar to `{:ok, state}`
except the process is hibernated before entering the loop. See
`c:handle_call/3` for more information on hibernation.
Returning `{:ok, state, {:continue, continue}}` is similar to
`{:ok, state}` except that immediately after entering the loop the `c:handle_continue/2` callback will be invoked with the value
`continue` as first argument.
Returning `:ignore` will cause `start_link/3` to return `:ignore` and the process will exit normally without entering the loop or calling
`c:terminate/2`. If used when part of a supervision tree the parent supervisor will not fail to start nor immediately try to restart the
[`GenServer`](https://hexdocs.pm/elixir/GenServer.html). The remainder of the supervision tree will be started and so the [`GenServer`](https://hexdocs.pm/elixir/GenServer.html) should not be required by other processes.
It can be started later with [`Supervisor.restart_child/2`](https://hexdocs.pm/elixir/Supervisor.html#restart_child/2) as the child specification is saved in the parent supervisor. The main use cases for this are:
* The [`GenServer`](https://hexdocs.pm/elixir/GenServer.html) is disabled by configuration but might be enabled later.
* An error occurred and it will be handled by a different mechanism than the
[`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html). Likely this approach involves calling [`Supervisor.restart_child/2`](https://hexdocs.pm/elixir/Supervisor.html#restart_child/2)
after a delay to attempt a restart.
Returning `{:stop, reason}` will cause `start_link/3` to return
`{:error, reason}` and the process to exit with reason `reason` without entering the loop or calling `c:terminate/2`.
Callback implementation for [`GenServer.init/1`](https://hexdocs.pm/elixir/GenServer.html#c:init/1).
```
start_link([keyword](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: {:ok, [pid](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} | {:error, [term](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()}
```
Starts up the cache.
Options
---
* `name` - The name of the GenServer. Default value is [`Gollum.Cache`](#content).
* `refresh_secs` - The number of seconds until the robots.txt will be refetched from the host. Defaults to `86_400`, which is 1 day.
* `lazy_refresh` - If this flag is set to `true`, the file will only be refetched from the host if needed. Otherwise, the file will be refreshed at the interval specified by `refresh_secs`. Defaults to
`false`.
* `user_agent` - The user agent to use when performing the GET request. Default is `"Gollum"`.
gollum v0.3.3 Gollum.Fetcher
===
In charge of fetching the actual robots.txt files.
Summary
===
[Functions](#functions)
---
[fetch(domain, opts)](#fetch/2)
Fetches the robots.txt file from the specified host. Simply performs a `GET` request to the domain via [`HTTPoison`](https://hexdocs.pm/httpoison/1.5.1/HTTPoison.html)
Functions
===
```
fetch([binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [keyword](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: {:ok, [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()} | {:error, [term](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()}
```
Fetches the robots.txt file from the specified host. Simply performs a `GET` request to the domain via [`HTTPoison`](https://hexdocs.pm/httpoison/1.5.1/HTTPoison.html).
gollum v0.3.3 Gollum.Host
===
Represents one host's robots.txt files.
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[crawlable?(host, user_agent, path)](#crawlable?/3)
Returns whether a specified path is crawlable by the specified user agent,
based on the rules defined in the specified host struct
[new(host, rules)](#new/2)
Creates a new [`Gollum.Host`](#content) struct, passing in the host and rules.
The rules usually are the output of the parser
Types
===
```
t() :: %Gollum.Host{host: [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), rules: [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}
```
Functions
===
```
crawlable?([Gollum.Host.t](Gollum.Host.html#t:t/0)(), [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) ::
:crawlable | :uncrawlable | :undefined
```
Returns whether a specified path is crawlable by the specified user agent,
based on the rules defined in the specified host struct.
Checks are done based on the specification defined by Google, which can be found [here](https://developers.google.com/search/reference/robots_txt).
Examples
---
```
iex> alias Gollum.Host
iex> rules = %{
...>   "hello" => %{
...>     allowed: ["/p"],
...>     disallowed: ["/"],
...>   },
...>   "otherhello" => %{
...>     allowed: ["/$"],
...>     disallowed: ["/"],
...>   },
...>   "*" => %{
...>     allowed: ["/page"],
...>     disallowed: ["/*.htm"],
...>   },
...> }
iex> host = Host.new("hello.net", rules)
iex> Host.crawlable?(host, "Hello", "/page")
:crawlable
iex> Host.crawlable?(host, "OtherHello", "/page.htm")
:uncrawlable
iex> Host.crawlable?(host, "NotHello", "/page.htm")
:undefined
```
```
new([binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [Gollum.Host.t](Gollum.Host.html#t:t/0)()
```
Creates a new [`Gollum.Host`](#content) struct, passing in the host and rules.
The rules usually are the output of the parser.
Examples
---
```
iex> alias Gollum.Host
iex> rules = %{"Hello" => %{allowed: [], disallowed: []}}
iex> Host.new("hello.net", rules)
%Gollum.Host{host: "hello.net", rules: %{"Hello" => %{allowed: [], disallowed: []}}}
```
gollum v0.3.3 Gollum.Parser
===
Parses a robots.txt file.
Summary
===
[Functions](#functions)
---
[parse(string)](#parse/1)
Parse the file, passed in as a simple binary
Functions
===
```
parse([binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)()
```
Parse the file, passed in as a simple binary.
It follows the [spec defined by Google](https://developers.google.com/search/reference/robots_txt)
as closely as possible.
Examples
---
```
iex> alias Gollum.Parser
iex> Parser.parse("User-agent: Hello\nAllow: /hello\nDisallow: /hey")
%{"hello" => %{allowed: ["/hello"], disallowed: ["/hey"]}}
``` |
KScorrect | cran | R | Package ‘KScorrect’
October 12, 2022
Type Package
Title Lilliefors-Corrected Kolmogorov-Smirnov Goodness-of-Fit Tests
Version 1.4.0
Depends R (>= 3.6.0)
Imports MASS (>= 7.3.0), doParallel (>= 1.0.14), foreach (>= 1.4.4),
iterators (>= 1.0.10), parallel (>= 3.6.0), mclust (>= 5.4)
Date 2019-06-30
Author <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Description Implements the Lilliefors-corrected Kolmogorov-Smirnov test for use
in goodness-of-fit tests, suitable when population parameters are unknown and
must be estimated by sample statistics. P-values are estimated by simulation.
Can be used with a variety of continuous distributions, including normal,
lognormal, univariate mixtures of normals, uniform, loguniform, exponential,
gamma, and Weibull distributions. Functions to generate random numbers and
calculate density, distribution, and quantile functions are provided for use
with the log uniform and mixture distributions.
License CC0
URL https://github.com/pnovack-gottshall/KScorrect
BugReports https://github.com/pnovack-gottshall/KScorrect/issues
LazyData TRUE
RoxygenNote 6.1.1
Encoding UTF-8
NeedsCompilation no
Repository CRAN
Date/Publication 2019-07-03 19:30:03 UTC
R topics documented:
KScorrect-package
dlunif
dmixnorm
ks_test_stat
LcKS
KScorrect-package KScorrect: Lilliefors-Corrected Kolmogorov-Smirnov Goodness-of-
Fit Tests
Description
Implements the Lilliefors-corrected Kolmogorov-Smirnov test for use in goodness-of-fit tests.
Details
KScorrect implements the Lilliefors-corrected Kolmogorov-Smirnov test for use in goodness-of-fit
tests, suitable when population parameters are unknown and must be estimated by sample statistics.
P-values are estimated by simulation. Coded to complement ks.test, it can be used with a variety
of continuous distributions, including normal, lognormal, univariate mixtures of normals, uniform,
loguniform, exponential, gamma, and Weibull distributions.
Functions to generate random numbers and calculate density, distribution, and quantile functions
are provided for use with the loguniform and mixture distributions.
Author(s)
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
Examples
# Get the package version and citation of KScorrect
packageVersion("KScorrect")
citation("KScorrect")
x <- runif(200)
Lc <- LcKS(x, cdf="pnorm", nreps=999)
hist(Lc$D.sim)
abline(v = Lc$D.obs, lty = 2)
print(Lc, max=50) # Print first 50 simulated statistics
# Approximate p-value (usually) << 0.05
# Confirmation uncorrected version has increased Type II error rate when
# using sample statistics to estimate parameters:
ks.test(x, "pnorm", mean(x), sd(x)) # p-value always larger, (usually) > 0.05
x <- rlunif(200, min=exp(1), max=exp(10)) # random loguniform sample
Lc <- LcKS(x, cdf="plnorm")
Lc$p.value # Approximate p-value: (usually) << 0.05
dlunif The Log Uniform Distribution
Description
Density, distribution function, quantile function and random generation for the log uniform distri-
bution in the interval from min to max. Parameters must be raw values (not log-transformed) and
will be log-transformed using specified base.
Usage
dlunif(x, min, max, base = exp(1))
plunif(q, min, max, base = exp(1))
qlunif(p, min, max, base = exp(1))
rlunif(n, min, max, base = exp(1))
Arguments
x Vector of quantiles.
min Lower limit of the distribution, in raw (not log-transformed) values. Negative
values will give warning.
max Upper limit of the distribution, in raw (not log-transformed) values. Negative
values will give warning.
base The base to which logarithms are computed. Defaults to e=exp(1). Must be a
positive number.
q Vector of quantiles.
p Vector of probabilities.
n Number of observations.
Details
A log uniform (or loguniform or log-uniform) random variable has a uniform distribution when
log-transformed.
Value
dlunif gives the density, plunif gives the distribution function, qlunif gives the quantile function,
and rlunif generates random numbers.
Note
Parameters min, max must be provided as raw (not log-transformed) values and will be log-transformed
using base. In other words, when log-transformed, a log uniform random variable with parameters
min=a and max=b is uniform over the interval from log(a) to log(b).
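As a quick sanity check of that definition (a sketch written for this note, not taken from the package source), the density implied above is 1/(x * (log(max) - log(min))) on [min, max], and the base drops out after the change of variables:
```
x <- seq(5, 90, length.out = 10)
closed.form <- 1 / (x * (log(100) - log(2)))
# Both calls should agree with the closed form; the base only rescales the
# log-transformed interval and cancels in the density.
all.equal(dlunif(x, min = 2, max = 100), closed.form)            # expected TRUE
all.equal(dlunif(x, min = 2, max = 100, base = 10), closed.form) # expected TRUE
```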
Author(s)
<NAME> <<EMAIL>>
See Also
Distributions for other standard distributions
Examples
plot(1:100, dlunif(1:100, exp(1), exp(10)), type="l", main="Loguniform density")
plot(log(1:100), dlunif(log(1:100), log(1), log(10)), type="l",
main="Loguniform density")
plot(1:100, plunif(1:100, exp(1), exp(10)), type="l", main="Loguniform cumulative")
plot(qlunif(ppoints(100), exp(1), exp(10)), type="l", main="Loguniform quantile")
hist(rlunif(1000, exp(1), exp(10)), main="random loguniform sample")
hist(log(rlunif(10000, exp(1), exp(10))), main="random loguniform sample")
hist(log(rlunif(10000, exp(1), exp(10), base=10), base=10), main="random loguniform sample")
dmixnorm The Normal Mixture Distribution
Description
Density, distribution function, quantile function, and random generation for a univariate (one-
dimensional) distribution composed of a mixture of normal distributions with means equal to mean,
standard deviations equal to sd, and mixing proportion of the components equal to pro.
Usage
dmixnorm(x, mean, sd, pro)
pmixnorm(q, mean, sd, pro)
qmixnorm(p, mean, sd, pro, expand = 1)
rmixnorm(n, mean, sd, pro)
Arguments
x Vector of quantiles.
mean Vector of means, one for each component.
sd Vector of standard deviations, one for each component. If a single value is pro-
vided, an equal-variance mixture model is implemented. Must be non-negative.
pro Vector of mixing proportions, one for each component. If missing, an equal-
proportion model is implemented, with a warning. If proportions do not sum to
unity, they are rescaled to do so. Must be non-negative.
q Vector of quantiles.
p Vector of probabilities.
expand Value to expand the range of probabilities for quantile approximation. Default
= 1.0. See details below.
n Number of observations.
Details
These functions use, modify, and wrap around those from the mclust package, especially dens,
and sim. Functions are slightly faster than the corresponding mclust functions when used with
univariate distributions.
Unlike mclust, which primarily focuses on parameter estimation based on mixture samples, the
functions here are modified to calculate PDFs, CDFs, approximate quantiles, and random numbers
for mixture distributions with user-specified parameters. The functions are written to emulate the
syntax of other R distribution functions (e.g., Normal).
The number of mixture components (argument G in mclust) is specified from the length of the
mean vector. If a single sd value is provided, an equal-variance mixture model (modelNames="E" in
mclust) is implemented; if multiple values are provided, a variable-variance model (modelNames="V"
in mclust) is implemented. If mixing proportion pro is missing, all components are assigned equal
mixing proportions, with a warning. Mixing proportions are rescaled to sum to unity. If the lengths
of supplied means, standard deviations, and mixing proportions conflict, an error is called.
Analytical solutions are not available to calculate a quantile function for all combinations of mixture
parameters. qmixnorm approximates the quantile function using a spline function calculated from
cumulative density functions for the specified mixture distribution. Quantile values for probabilities
near zero and one are approximated by taking a randomly generated sample (with sample size equal
to the product of 1000 and the number of mixture components), and expanding that range positively
and negatively by a multiple (specified by (default) expand = 1) of the observed range in the
random sample. In cases where the distribution range is large (such as when mixture components
are discrete or there are large distances between components), resulting extreme probability values
will be very close to zero or one and can result in non-calculable (NaN) quantiles (and a warning).
Use of other expand values (especially expand < 1.0 that expand the ranges by smaller multiples)
often will yield improved approximations. Note that expand values equal to or close to 0 may result
in inaccurate approximation of extreme quantiles. In situations requiring extreme quantile values,
it is recommended that the largest expand value that does not result in a non-calculable quantile
(i.e., no warning called) be used. See examples for confirmation that approximations are accurate,
comparing the approximate quantiles from a single ’mixture’ distribution to those calculated for the
same distribution using qnorm, and demonstrating cases in which using non-default expand values
will allow correct approximation of quantiles.
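For orientation, the mixture density and distribution function themselves are simply mixing-proportion-weighted sums of the component normal densities and CDFs; the following sketch (written for this description, not extracted from the package) makes that explicit for comparison with dmixnorm and pmixnorm:
```
mean <- c(3, 6); sd <- c(.5, 1); pro <- c(.25, .75)
x <- seq(0, 10, by = 0.5)
# Weighted sums of component densities/CDFs (one column per mixture component).
dmix.byhand <- rowSums(mapply(function(m, s, p) p * dnorm(x, m, s), mean, sd, pro))
pmix.byhand <- rowSums(mapply(function(m, s, p) p * pnorm(x, m, s), mean, sd, pro))
all.equal(dmix.byhand, dmixnorm(x, mean, sd, pro)) # expected TRUE
all.equal(pmix.byhand, pmixnorm(x, mean, sd, pro)) # expected TRUE
```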
Value
dmixnorm gives the density, pmixnorm gives the distribution function, qmixnorm approximates the
quantile function, and rmixnorm generates random numbers.
Author(s)
<NAME> <<EMAIL>> and <NAME> <<EMAIL>>,
based on functions written by <NAME>.
See Also
Distributions for other standard distributions, and mclust::dens, sim, and cdfMclust for alter-
native density, quantile, and random number functions for multivariate mixture distributions.
Examples
# Mixture of two normal distributions
mean <- c(3, 6)
pro <- c(.25, .75)
sd <- c(.5, 1)
x <- rmixnorm(n=5000, mean=mean, pro=pro, sd=sd)
hist(x, n=20, main="random bimodal sample")
## Not run:
# Requires functions from the 'mclust' package
require(mclust)
# Confirm 'rmixnorm' above produced specified model
mod <- mclust::Mclust(x)
mod # Best model (correctly) has two-components with unequal variances
mod$parameters # and approximately same parameters as specified above
sd^2 # Note reports var (sigma-squared) instead of sd used above
## End(Not run)
# Density, distribution, and quantile functions
plot(seq(0, 10, .1), dmixnorm(seq(0, 10, .1), mean=mean, sd=sd, pro=pro),
type="l", main="Normal mixture density")
plot(seq(0, 10, .1), pmixnorm(seq(0, 10, .1), mean=mean, sd=sd, pro=pro),
type="l", main="Normal mixture cumulative")
plot(stats::ppoints(100), qmixnorm(stats::ppoints(100), mean=mean, sd=sd, pro=pro),
type="l", main="Normal mixture quantile")
# Any number of mixture components are allowed
plot(seq(0, 50, .01), pmixnorm(seq(0, 50, .01), mean=1:50, sd=.05, pro=rep(1, 50)),
type="l", main="50-component normal mixture cumulative")
# 'expand' can be specified to prevent non-calculable quantiles:
q1 <- qmixnorm(stats::ppoints(30), mean=c(1, 20), sd=c(1, 1), pro=c(1, 1))
q1 # Calls a warning because of NaNs
# Reduce 'expand'. (Values < 0.8 allow correct approximation)
q2 <- qmixnorm(stats::ppoints(30), mean=c(1, 20), sd=c(1, 1), pro=c(1, 1), expand=.5)
plot(stats::ppoints(30), q2, type="l", main="Quantile with reduced range")
## Not run:
# Requires functions from the 'mclust' package
# Confirmation that qmixnorm approximates correct solution
# (single component 'mixture' should mimic qnorm):
x <- rmixnorm(n=5000, mean=0, pro=1, sd=1)
mpar <- mclust::Mclust(x)$param
approx <- qmixnorm(p=ppoints(100), mean=mpar$mean, pro=mpar$pro,
sd=sqrt(mpar$variance$sigmasq))
known <- qnorm(p=ppoints(100), mean=mpar$mean, sd=sqrt(mpar$variance$sigmasq))
cor(approx, known) # Approximately the same
plot(approx, main="Quantiles for (unimodal) normal")
lines(known)
legend("topleft", legend=c("known", "approximation"), pch=c(NA,1),
lty=c(1, NA), bty="n")
## End(Not run)
ks_test_stat Internal KScorrect Function.
Description
Internal function not intended to be called directly by users.
Usage
ks_test_stat(x, y, ...)
Arguments
x a numeric vector of data values.
y a character string naming a cumulative distribution function or an actual cumu-
lative distribution function such as pnorm. Only continuous CDFs are valid. See
LcKS for accepted functions.
... parameters of the distribution specified (as a character string) by y.
Details
Simplified and faster ks.test function that calculates just the two-sided test statistic D.
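For orientation, the statistic in question is the usual two-sided D = sup |F_n(x) - F(x)|; a bare-bones version for a sample tested against a fully specified CDF might look like the following (an illustrative sketch, not the internal implementation):
```
# Two-sided one-sample Kolmogorov-Smirnov statistic D, written out directly.
ks_D <- function(x, pfun, ...) {
  n <- length(x)
  Fx <- pfun(sort(x), ...)                 # hypothesized CDF at the order statistics
  max(pmax(seq_len(n) / n - Fx,            # deviation of empirical CDF from above
           Fx - (seq_len(n) - 1) / n))     # deviation from below
}
x <- rnorm(50)
ks_D(x, pnorm)                             # same value as ks.test(x, "pnorm")$statistic
```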
Note
Calculating the Kolmogorov-Smirnov test statistic D by itself is faster than computing the full
output that ks.test produces.
See Also
ks.test
LcKS Lilliefors-corrected Kolmogorov-Smirnov Goodness-Of-Fit Test
Description
Implements the Lilliefors-corrected Kolmogorov-Smirnov test for use in goodness-of-fit tests, suit-
able when population parameters are unknown and must be estimated by sample statistics. It uses
Monte Carlo simulation to estimate p-values. Using a modification of ks.test, it can be used with
a variety of continuous distributions, including normal, lognormal, univariate mixtures of normals,
uniform, loguniform, exponential, gamma, and Weibull distributions. The Monte Carlo algorithm
can run ’in parallel.’
Usage
LcKS(x, cdf, nreps = 4999, G = 1:9, varModel = c("E", "V"),
parallel = FALSE, cores = NULL)
Arguments
x A numeric vector of data values (observed sample).
cdf Character string naming a cumulative distribution function. Case insensitive.
Only continuous CDFs are valid. Allowed CDFs include:
• "pnorm" for normal,
• "pmixnorm" for (univariate) normal mixture,
• "plnorm" for lognormal (log-normal, log normal),
• "punif" for uniform,
• "plunif" for loguniform (log-uniform, log uniform),
• "pexp" for exponential,
• "pgamma" for gamma,
• "pweibull" for Weibull.
nreps Number of replicates to use in simulation algorithm. Default = 4999 replicates.
See details below. Should be a positive integer.
G Numeric vector of mixture components to consider, for mixture models only.
Default = 1:9 fits up to 9 components. Must contain positive integers. See
details below.
varModel For mixture models, character string determining whether to allow equal-variance
mixture components (E), variable-variance mixture components (V) or both (the
default).
parallel Logical value that switches between running Monte Carlo algorithm in parallel
(if TRUE) or not (if FALSE, the default).
cores Numeric value to control how many cores to use when running in parallel.
Default = detectCores - 1.
Details
The function builds a simulation distribution D.sim of length nreps by drawing random samples
from the specified continuous distribution function cdf with parameters calculated from the pro-
vided sample x. Observed statistic D and simulated test statistics are calculated using a simplified
version of ks.test.
The default nreps = 4999 provides accurate p-values. nreps = 1999 is sufficient for most cases,
and computationally faster when dealing with more complicated distributions (such as univariate
normal mixtures, gamma, and Weibull). See below for potentially faster parallel implementations.
The p-value is calculated as the number of Monte Carlo samples with test statistics D as extreme as
or more extreme than that in the observed sample D.obs, divided by the nreps number of Monte
Carlo samples. A value of 1 is added to both the numerator and denominator to allow the observed
sample to be represented within the null distribution (Manly 2004); this has the benefit of avoiding
nonsensical p.value = 0.000 and accounts for the fact that the p-value is an estimate.
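Putting the two preceding paragraphs together, the core of the procedure for, say, cdf = "pnorm" can be sketched as below. This is a simplified illustration of the algorithm only; the actual function also handles the other distributions, the mixture machinery, and parallel execution.
```
# Sketch of the Lilliefors-corrected simulation for the normal case.
lcks_pnorm_sketch <- function(x, nreps = 4999) {
  n <- length(x)
  D.obs <- unname(ks.test(x, "pnorm", mean(x), sd(x))$statistic)
  D.sim <- replicate(nreps, {
    xs <- rnorm(n, mean(x), sd(x))        # draw from the *fitted* null distribution
    unname(ks.test(xs, "pnorm", mean(xs), sd(xs))$statistic)  # re-estimate parameters
  })
  p.value <- (sum(D.sim >= D.obs) + 1) / (nreps + 1)
  list(D.obs = D.obs, D.sim = D.sim, p.value = p.value)
}
```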
Parameter estimates are calculated for the specified continuous distribution, using maximum-likelihood
estimates. When testing against the gamma and Weibull distributions, MASS::fitdistr is used to
calculate parameter estimates using maximum likelihood optimization, with sensible starting val-
ues. Because this incorporates an optimization routine, the simulation algorithm can be slow if us-
ing large nreps or problematic samples. Warnings often occur during these optimizations, caused
by difficulties estimating sample statistic standard errors. Because such SEs are not used in the
Lilliefors-corrected simulation algorithm, warnings are suppressed during these optimizations.
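As a concrete illustration of that estimation step (with method-of-moments starting values chosen for this sketch; the starting values used internally by LcKS are not shown in this documentation):
```
x <- rgamma(100, shape = 2, rate = 0.5)
start <- list(shape = mean(x)^2 / var(x), rate = mean(x) / var(x))  # moment-based start
fit <- suppressWarnings(MASS::fitdistr(x, "gamma", start = start))
fit$estimate   # maximum-likelihood shape and rate used to simulate the null samples
```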
Sample statistics for the (univariate) normal mixture distribution pmixnorm are calculated using
package mclust, which uses BIC to identify the optimal mixture model for the sample, and the EM
algorithm to calculate parameter estimates for this model. The number of mixture components G
(with default allowing up to 9 components), variance model (whether equal E or variable V variance),
and component statistics (means, sds, and mixing proportions pro) are estimated from the sample
when calculating D.obs and passed internally when creating random Monte Carlo samples. It is
possible that some of these samples may differ in their optimal G (for example a two-component
input sample might yield a three-component random sample within the simulation distribution).
This can be constrained by specifying that simulation BIC-optimizations only consider G mixture
components.
Be aware that constraining G changes the null hypothesis. The default (G = 1:9) null hypothesis is
that a sample was drawn from any G = 1:9-component mixture distribution. Specifying a particular
value, such as G = 2, restricts the null hypothesis to particular mixture distributions with just G
components, even if simulated samples might better be represented as different mixture models.
The LcKS(cdf = "pmixnorm") test implements two control loops to avoid errors caused by this con-
straint and when working with problematic samples. The first loop occurs during model-selection
for the observed sample x, and allows for estimation of parameters for the second-best model when
those for the optimal model are not able to be calculated by the EM algorithm. A second loop
occurs during the simulation algorithm, rejecting samples that cannot be fit by the mixture model
specified by the observed sample x. Such problematic cases are most common when the observed
or simulated samples have a component(s) with very small variance (i.e., duplicate observations) or
when a Monte Carlo sample cannot be fit by the specified G.
Parallel computing can be implemented using parallel = TRUE, via the cross-platform doParallel
package and foreach infrastructure, using a default of detectCores - 1 number
of cores. Parallel computing is generally advisable for the more complicated cumulative density
functions (i.e., univariate normal mixture, gamma, Weibull), where maximum likelihood estima-
tion is time-intensive, but is generally not advisable for density functions with quickly calculated
sample statistics (i.e., other distribution functions). Warnings within the function provide sensible
recommendations, but users are encouraged to experiment to discover their fastest implementation
for their individual cases.
Value
A list containing the following components:
D.obs The value of the test statistic D for the observed sample.
D.sim Simulation distribution of test statistics, with length = nreps. This can be used
to calculate critical values; see examples.
p.value P-value of the test, calculated as (sum(D.sim >= D.obs) + 1) / (nreps + 1).
Note
The Kolmogorov-Smirnov (such as ks.test) is only valid as a goodness-of-fit test when the pop-
ulation parameters are known. This is typically not the case in practice. This invalidation occurs
because estimating the parameters changes the null distribution of the test statistic; i.e., using the
sample to estimate population parameters brings the Kolmogorov-Smirnov test statistic D closer
to the null distribution than it would be under the hypothesis where the population parameters are
known. In other words, it is biased and results in increased Type II error rates. Lilliefors (1967,
1969) provided a solution, using Monte Carlo simulation to approximate the shape of the null dis-
tribution when the sample statistics are used to estimate population parameters, and to use this null
distribution as the basis for critical values. The function LcKS generalizes this solution for a range
of continuous distributions.
Author(s)
<NAME> <<EMAIL>>, based on code from <NAME> (Uni-
versity of Minnesota).
References
Lilliefors, <NAME>. 1967. On the Kolmogorov-Smirnov test for normality with mean and variance
unknown. Journal of the American Statistical Association 62(318):399-402.
Lilliefors, <NAME>. 1969. On the Kolmogorov-Smirnov test for the exponential distribution with mean
unknown. Journal of the American Statistical Association 64(325):387-389.
<NAME>. 2004. Randomization, Bootstrap and Monte Carlo Methods in Biology. Chapman &
Hall, Cornwall, Great Britain.
<NAME>., and <NAME>. 1982. A Kolmogorov-Smirnov goodness-of-fit test for the two-
parameter Weibull distribution when the parameters are estimated from the data. Microelectronics
Reliability 22(2):163-167.
See Also
Distributions for standard cumulative distribution functions, plunif for the loguniform cumula-
tive distribution function, and pmixnorm for the univariate normal mixture cumulative distribution
function.
Examples
x <- runif(200)
Lc <- LcKS(x, cdf = "pnorm", nreps = 999)
hist(Lc$D.sim)
abline(v = Lc$D.obs, lty = 2)
print(Lc, max = 50) # Print first 50 simulated statistics
# Approximate p-value (usually) << 0.05
# Confirmation uncorrected version has increased Type II error rate when
# using sample statistics to estimate parameters:
ks.test(x, "pnorm", mean(x), sd(x)) # p-value always larger, (usually) > 0.05
# Confirm critical values for normal distribution are correct
nreps <- 9999
x <- rnorm(25)
Lc <- LcKS(x, "pnorm", nreps = nreps)
sim.Ds <- sort(Lc$D.sim)
crit <- round(c(.8, .85, .9, .95, .99) * nreps, 0)
# Lilliefors' (1967) critical values, using improved values from
# Parsons & Wirsching (1982) (for n = 25):
# 0.141 0.148 0.157 0.172 0.201
round(sim.Ds[crit], 3) # Approximately the same critical values
# Confirm critical values for exponential are the same as reported by Lilliefors (1969)
nreps <- 9999
x <- rexp(25)
Lc <- LcKS(x, "pexp", nreps = nreps)
sim.Ds <- sort(Lc$D.sim)
crit <- round(c(.8, .85, .9, .95, .99) * nreps, 0)
# Lilliefors' (1969) critical values (for n = 25):
# 0.170 0.180 0.191 0.210 0.247
round(sim.Ds[crit], 3) # Approximately the same critical values
## Not run:
# Gamma and Weibull tests require functions from the 'MASS' package
# Takes time for maximum likelihood optimization of statistics
require(MASS)
x <- runif(100, min = 1, max = 100)
Lc <- LcKS(x, cdf = "pgamma", nreps = 499)
Lc$p.value
# Confirm critical values for Weibull the same as reported by <NAME> (1982)
nreps <- 9999
x <- rweibull(25, shape = 1, scale = 1)
Lc <- LcKS(x, "pweibull", nreps = nreps)
sim.Ds <- sort(Lc$D.sim)
crit <- round(c(.8, .85, .9, .95, .99) * nreps, 0)
# Parsons & Wirsching (1982) critical values (for n = 25):
# 0.141 0.148 0.157 0.172 0.201
round(sim.Ds[crit], 3) # Approximately the same critical values
# Mixture test requires functions from the 'mclust' package
# Takes time to identify model parameters
require(mclust)
x <- rmixnorm(200, mean = c(10, 20), sd = 2, pro = c(1,3))
Lc <- LcKS(x, cdf = "pmixnorm", nreps = 499, G = 1:9) # Default G (1:9) takes long time
Lc$p.value
G <- Mclust(x)$parameters$variance$G # Optimal model has only two components
Lc <- LcKS(x, cdf = "pmixnorm", nreps = 499, G = G) # Restricting to likely G saves time
# But note changes null hypothesis: now testing against just two-component mixture
Lc$p.value
# Running 'in parallel'
require(doParallel)
set.seed(3124)
x <- rmixnorm(300, mean = c(110, 190, 200), sd = c(3, 15, .1), pro = c(1, 3, 1))
system.time(LcKS(x, "pgamma"))
system.time(LcKS(x, "pgamma", parallel = TRUE)) # Should be faster
## End(Not run) |
OTRselect | cran | R | Package ‘OTRselect’
October 12, 2022
Type Package
Title Variable Selection for Optimal Treatment Decision
Version 1.1
Date 2022-06-05
Author <NAME>, <NAME>, <NAME>, <NAME>, and
<NAME>
Maintainer <NAME> <<EMAIL>>
Description A penalized regression framework that can simultaneously estimate
the optimal treatment strategy and identify important variables.
Appropriate for either censored or uncensored continuous response.
License GPL-2
Depends stats, lars, survival, methods
NeedsCompilation no
Repository CRAN
Date/Publication 2022-06-06 23:10:30 UTC
R topics documented:
OTRselect-package
censored
Qhat
uncensored
OTRselect-package Variable Selection for Optimal Treatment Decision
Description
A penalized regression framework that can simultaneously estimate the optimal treatment strat-
egy and identify important variables. Appropriate for either censored or uncensored continuous
response.
Details
The DESCRIPTION file:
Package: OTRselect
Type: Package
Title: Variable Selection for Optimal Treatment Decision
Version: 1.1
Date: 2022-06-05
Author: <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>
Maintainer: <NAME> <<EMAIL>>
Description: A penalized regression framework that can simultaneously estimate the optimal treatment strategy and identify important variables. Appropriate for either censored or uncensored continuous response.
License: GPL-2
Depends: stats, lars, survival, methods
NeedsCompilation: no
Index of help topics:
OTRselect-package Variable Selection for Optimal Treatment
Decision
Qhat Mean Response or Restricted Mean Response Given
a Treatment Regime
censored Variable Selection for Optimal Treatment
Decision with Censored Survival Times
uncensored Variable Selection for Optimal Treatment
Decision with Uncensored Continuous Response
Function censored performs variable selection for censored continuous response. Function uncensored
performs variable selection for uncensored continuous response. Function Qhat estimates the re-
stricted mean response given a treatment regime for censored data or the mean response given a
treatment regime for uncensored data.
Author(s)
<NAME>, <NAME>, <NAME>, <NAME>, and <NAME>
Maintainer: <NAME> <<EMAIL>>
References
<NAME>., <NAME>., and <NAME>. (2013). Variable selection for optimal treatment decision.
Statistical Methods in Medical Research, 22, 493–504. PMCID: PMC3303960.
<NAME>., <NAME>., and <NAME>. (2015). On optimal treatment regimes selection for mean
survival time. Statistics in Medicine, 34, 1169–1184. PMCID: PMC4355217.
censored Variable Selection for Optimal Treatment Decision with Censored Sur-
vival Times
Description
A penalized regression framework that can simultaneously estimate the optimal treatment strategy
and identify important variables when the response is continuous and censored. This method uses
an inverse probability weighted least squares estimation with adaptive LASSO penalty for variable
selection.
Usage
censored(x, y, a, delta, propen, phi, logY = TRUE,
intercept = TRUE)
Arguments
x Matrix or data.frame of model covariates.
y Vector of response. Note that this data is used to estimate the Kaplan-Meier
Curve and should not be log(T).
a Vector of treatment received. Treatments must be coded as integers or numerics
that can be recast as integers without loss of information.
delta Event indicator vector. The indicator must be coded as 0/1 where 0=no event
and 1=event.
propen Vector or matrix of propensity scores for each treatment. If a vector, the propen-
sity is assumed to be the same for all samples. Column or element order must
correspond to the sort order of the treatment variable, i.e., 0,1,2,3,... If the num-
ber of columns/elements in propen is one fewer than the total number of treat-
ment options, it is assumed that the base or lowest valued treatment has not been
provided.
phi A character ’c’ or ’l’ indicating if the constant (’c’) or linear (’l’) baseline mean
function is to be used.
logY TRUE/FALSE indicating if log(y) is to be used for regression.
intercept TRUE/FALSE indicating if an intercept is to be included in phi model.
Value
A list object containing
beta A vector of the estimated regression coefficients after variable selection.
optTx The estimated optimal treatment for each sample.
Author(s)
<NAME>, <NAME>, <NAME>, and <NAME>
References
<NAME>., <NAME>., and <NAME>. (2015). On optimal treatment regimes selection for mean
survival time. Statistics in Medicine, 34, 1169–1184. PMCID: PMC4355217.
Examples
sigma <- diag(10)
ct <- 0.5^{1L:9L}
rst <- unlist(sapply(1L:9L,function(x){ct[1L:{10L-x}]}))
sigma[lower.tri(sigma)] <- rst
sigma[upper.tri(sigma)] <- t(sigma)[upper.tri(sigma)]
M <- t(chol(sigma))
Z <- matrix(rnorm(1000),10,100)
X <- t(M%*%Z)
A <- rbinom(100,1,0.5)
Y <- rweibull(100,shape=0.5,scale=1)
C <- rweibull(100,shape=0.5,scale=1.5)
delta <- as.integer(C <= Y)
Y[delta > 0.5] <- C[delta>0.5]
dat <- data.frame(X,A,exp(Y),delta)
colnames(dat) <- c(paste("X",1:10,sep=""),"a","y","del")
censored(x = X,
y = Y,
a = A,
delta = delta,
propen = 0.5,
phi = "c",
logY = TRUE,
intercept = TRUE)
Qhat Mean Response or Restricted Mean Response Given a Treatment
Regime
Description
Estimates the mean response given a treatment regime if data is uncensored. If data is censored,
estimates the restricted mean response given a treatment regime.
Usage
Qhat(y, a, g, wgt = NULL)
Arguments
y vector of responses. Note if logY = TRUE in censored, this value should also be
the logarithm.
a vector of treatments received.
g vector of the given treatment regime.
wgt weights to be used if response is censored.
Value
Returns the estimated mean response or restricted mean response.
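To make the estimand concrete, one simple uncensored version consistent with this description is the (weighted) average response among subjects whose received treatment agrees with the regime. This is only an illustrative sketch; the weighting Qhat applies internally, particularly for censored data, may differ.
```
# Illustrative value estimate for an uncensored sample (not the internal code).
qhat_sketch <- function(y, a, g, wgt = rep(1, length(y))) {
  agree <- as.numeric(a == g)              # 1 if received treatment matches the regime
  sum(wgt * agree * y) / sum(wgt * agree)
}
y <- rnorm(100); a <- rbinom(100, 1, 0.5); g <- integer(100)
qhat_sketch(y, a, g)                       # compare with Qhat(y = y, a = a, g = g)
```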
Author(s)
<NAME>, <NAME>, <NAME>, <NAME>, and <NAME>
References
<NAME>., <NAME>., and <NAME>. (2013). Variable selection for optimal treatment decision.
Statistical Methods in Medical Research, 22, 493–504. PMCID: PMC3303960.
<NAME>., <NAME>., and <NAME>. (2015). On optimal treatment regimes selection for mean
survival time. Statistics in Medicine, 34, 1169–1184. PMCID: PMC4355217.
Examples
y <- rnorm(100)
a <- rbinom(100,1,0.5)
g <- integer(100)
Qhat(y = y, a = a, g = g)
uncensored Variable Selection for Optimal Treatment Decision with Uncensored
Continuous Response
Description
A penalized regression framework that can simultaneously estimate the optimal treatment strategy
and identify important variables when the response is continuous and not censored. This method
uses an inverse probability weighted least squares estimation with adaptive LASSO penalty for
variable selection.
Usage
uncensored(x, y, a, propen, phi, intercept = TRUE)
Arguments
x Matrix or data.frame of model covariates.
y Vector of response. Note that this data is used to estimate the Kaplan-Meier
Curve and should not be log(T).
a Vector of treatment received. Treatments must be coded as integers or numerics
that can be recast as integers without loss of information.
propen Vector or matrix of propensity scores for each treatment. If a vector, the propen-
sity is assumed to be the same for all samples. Column or element order must
correspond to the sort order of the treatment variable, i.e., 0,1,2,3,... If the num-
ber of columns/elements in propen is one fewer than the total number of treat-
ment options, it is assumed that the base or lowest valued treatment has not been
provided.
phi A character ’c’ or ’l’ indicating if the constant (’c’) or linear (’l’) baseline mean
function is to be used.
intercept TRUE/FALSE indicating if an intercept is to be included in phi model.
Value
A list object containing
beta A vector of the estimated regression coefficients after variable selection.
optTx The estimated optimal treatment for each sample.
Author(s)
<NAME>, <NAME>, <NAME>, and <NAME>
References
<NAME>., <NAME>., and <NAME>. (2013). Variable selection for optimal treatment decision.
Statistical Methods in Medical Research, 22, 493–504. PMCID: PMC3303960.
Examples
sigma <- diag(10)
ct <- 0.5^{1L:9L}
rst <- unlist(sapply(1L:9L,function(x){ct[1L:{10L-x}]}))
sigma[lower.tri(sigma)] <- rst
sigma[upper.tri(sigma)] <- t(sigma)[upper.tri(sigma)]
M <- t(chol(sigma))
Z <- matrix(rnorm(1000),10,100)
X <- t(M %*% Z)
gamma1 <- c(1, -1, rep(0,8))
beta <- c(1,1,rep(0,7), -0.9, 0.8)
A <- rbinom(100,1,0.5)
Y <- 1.0 + X %*% gamma1 +
A*{cbind(1.0,X)%*%beta} + rnorm(100,0,.25)
dat <- data.frame(X,A,Y)
uncensored(x=X,
y = Y,
a = A,
propen = 0.5,
phi = "c",
intercept = TRUE) |
libsyslog-sys | rust | Rust | Crate libsyslog_sys
===
The code in this crate contains the raw bindings for syslog, automatically generated by bindgen. Before continuing any further, please make sure libsyslog is not the crate you really are looking for.
See The Open Group Base Specifications Issue 7, 2018 edition for actual API documentation or Wikipedia for general context.
Implementation specific documentation: (verified working platforms)
* FreeBSD
* Haiku
* illumos
* Linux (with glibc)
* NetBSD
* OpenBSD
Apple Inc. advises against using syslog on macOS 10.12 and later, yet this crate compiles there and messages produced by it do appear in the output of `log stream` on such platforms.
Structs
---
* __va_list_tag
Constants
---
* LOG_ALERT
* LOG_AUTH
* LOG_AUTHPRIV
* LOG_CONS
* LOG_CRIT
* LOG_CRON
* LOG_DAEMON
* LOG_DEBUG
* LOG_EMERG
* LOG_ERR
* LOG_FACMASK
* LOG_FTP
* LOG_INFO
* LOG_KERN
* LOG_LOCAL0
* LOG_LOCAL1
* LOG_LOCAL2
* LOG_LOCAL3
* LOG_LOCAL4
* LOG_LOCAL5
* LOG_LOCAL6
* LOG_LOCAL7
* LOG_LPR
* LOG_MAIL
* LOG_NDELAY
* LOG_NEWS
* LOG_NFACILITIES
* LOG_NOTICE
* LOG_NOWAIT
* LOG_ODELAY
* LOG_PERROR
* LOG_PID
* LOG_PRIMASK
* LOG_SYSLOG
* LOG_USER
* LOG_UUCP
* LOG_WARNING
* _ATFILE_SOURCE
* _BITS_SYSLOG_PATH_H
* _DEFAULT_SOURCE
* _FEATURES_H
* _PATH_LOG
* _POSIX_C_SOURCE
* _POSIX_SOURCE
* _STDC_PREDEF_H
* _SYS_CDEFS_H
* _SYS_SYSLOG_H
* __GLIBC_MINOR__
* __GLIBC_USE_DEPRECATED_GETS
* __GLIBC_USE_DEPRECATED_SCANF
* __GLIBC_USE_ISOC2X
* __GLIBC__
* __GNUC_VA_LIST
* __GNU_LIBRARY__
* __HAVE_DISTINCT_FLOAT16
* __HAVE_DISTINCT_FLOAT32
* __HAVE_DISTINCT_FLOAT32X
* __HAVE_DISTINCT_FLOAT64
* __HAVE_DISTINCT_FLOAT64X
* __HAVE_DISTINCT_FLOAT128
* __HAVE_DISTINCT_FLOAT128X
* __HAVE_FLOAT16
* __HAVE_FLOAT32
* __HAVE_FLOAT32X
* __HAVE_FLOAT64
* __HAVE_FLOAT64X
* __HAVE_FLOAT64X_LONG_DOUBLE
* __HAVE_FLOAT128
* __HAVE_FLOAT128X
* __HAVE_FLOATN_NOT_TYPEDEF
* __HAVE_GENERIC_SELECTION
* __LDOUBLE_REDIRECTS_TO_FLOAT128_ABI
* __STDC_IEC_559_COMPLEX__
* __STDC_IEC_559__
* __STDC_IEC_60559_BFP__
* __STDC_IEC_60559_COMPLEX__
* __STDC_ISO_10646__
* __SYSCALL_WORDSIZE
* __TIMESIZE
* __USE_ATFILE
* __USE_FORTIFY_LEVEL
* __USE_ISOC11
* __USE_ISOC95
* __USE_ISOC99
* __USE_MISC
* __USE_POSIX
* __USE_POSIX2
* __USE_POSIX199309
* __USE_POSIX199506
* __USE_POSIX_IMPLICITLY
* __USE_XOPEN2K
* __USE_XOPEN2K8
* __WORDSIZE
* __WORDSIZE_TIME64_COMPAT32
* __glibc_c99_flexarr_available
Functions
---
* closelog⚠
* openlog⚠
* setlogmask⚠
* syslog⚠
* vsyslog⚠
Type Definitions
---
* _Float32
* _Float32x
* _Float64
* _Float64x
* __builtin_va_list
* __gnuc_va_list
* va_list
Struct libsyslog_sys::__va_list_tag
===
```
#[repr(C)]
pub struct __va_list_tag {
    pub gp_offset: c_uint,
    pub fp_offset: c_uint,
    pub overflow_arg_area: *mut c_void,
    pub reg_save_area: *mut c_void,
}
```
Fields
---
`gp_offset: c_uint`
`fp_offset: c_uint`
`overflow_arg_area: *mut c_void`
`reg_save_area: *mut c_void`
Trait Implementations
---
### impl Clone for __va_list_tag
#### fn clone(&self) -> __va_list_tag
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for __va_list_tag
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for __va_list_tag
### impl !Send for __va_list_tag
### impl !Sync for __va_list_tag
### impl Unpin for __va_list_tag
### impl UnwindSafe for __va_list_tag
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion. |
rmargint | cran | R | Package ‘rmargint’
October 14, 2022
Type Package
Title Robust Marginal Integration Procedures
Version 2.0.2
Date 2020-08-03
Description Three robust marginal integration procedures for additive models based on local
polynomial kernel smoothers. As a preliminary estimator of the multivariate
function for the marginal integration procedure, a first approach uses local
constant M-estimators, a second one uses local polynomials of order 1 over all the
components of covariates, and the third one uses M-estimators based on local
polynomials but only in the direction of interest. For this last approach,
estimators of the derivatives of the additive functions can be obtained. All three
procedures can compute predictions for points outside the training set if desired.
See Boente and Martinez (2017) <doi:10.1007/s11749-016-0508-0> for details.
License GPL (>= 3.0)
RoxygenNote 6.1.1
Encoding UTF-8
Imports stats, graphics
NeedsCompilation yes
Author <NAME> [aut, cre],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-08-04 22:50:02 UTC
R topics documented:
rmargint-package
deviance.margint
fitted.values.margint
formula.margint
k.epan
kernel10
kernel4
kernel6
kernel8
margint.cl
margint.rob
my.norm.2
plot.margint
predict.margint
print.margint
psi.huber
psi.tukey
residuals.margint
summary.margint
rmargint-package Robust marginal integration estimators for additive models.
Description
Robust marginal integration estimators for additive models.
Details
Package: rmargint
Type: Package
Version: 1.1
Date: 2019-10-15
License: GPL 3.0
Author(s)
<NAME>, <NAME>
Maintainer: <NAME> <<EMAIL>>
References
Boente G. and Martinez A. (2017). Marginal integration M-estimators for additive models. TEST,
26, 231-260.
deviance.margint Deviance for objects of class margint
Description
This function returns the deviance of the fitted additive model using one of the three classical or
robust marginal integration estimators, as computed with margint.cl or margint.rob.
Usage
## S3 method for class 'margint'
deviance(object, ...)
Arguments
object an object of class margint, a result of a call to margint.cl or margint.rob.
... additional other arguments. Currently ignored.
Value
A real number.
Author(s)
<NAME> <<EMAIL>>
fitted.values.margint Fitted values for objects of class margint
Description
This function returns the fitted values given the covariates of the original sample under an ad-
ditive model using a classical or robust marginal integration procedure estimator computed with
margint.cl or margint.rob.
Usage
fitted.values.margint(object, ...)
Arguments
object an object of class margint, a result of a call to margint.cl or margint.rob.
... additional other arguments. Currently ignored.
Value
A vector of fitted values.
Author(s)
<NAME> <<EMAIL>>
formula.margint Additive model formula
Description
Description of the additive model formula extracted from an object of class margint.
Usage
## S3 method for class 'margint'
formula(x, ...)
Arguments
x an object of class margint, a result of a call to margint.cl or margint.rob.
... additional other arguments. Currently ignored.
Value
A model formula.
Author(s)
<NAME> <<EMAIL>>
k.epan Epanechnikov kernel
Description
This function evaluates an Epanechnikov kernel
Usage
k.epan(x)
Arguments
x a vector of real numbers
Details
This function evaluates an Epanechnikov kernel.
Value
A vector of the same length as x where each entry is 0.75 * (1 - x^2) if abs(x) < 1 and 0 otherwise.
Author(s)
<NAME>, <<EMAIL>>, <NAME>
Examples
x <- seq(-2, 2, length=10)
k.epan(x)
kernel10 Order 10 kernel
Description
This function evaluates a kernel of order 10.
Usage
kernel10(x)
Arguments
x A vector of real numbers.
Details
This function evaluates a kernel of order 10. A kernel L is a kernel of order 10 if it integrates 1, the
integrals of u^j L(u) are 0 for 1 <= j < 10 (j integer) and the integral of u^10 L(u) is different from
0.
Value
A vector of the same length as x where each entry is 0.75 * ( 1 - x^2 ) * ( 315/128 - 105/32 *
x^2 + 63/64 * x^4 - 3/32 * x^6 - 1/384 * x^8 ) if abs(x) < 1 and 0 otherwise.
Author(s)
<NAME>, <<EMAIL>>, <NAME>
Examples
x <- seq(-2,2,length=10)
kernel10(x)
kernel4 Order 4 kernel
Description
This function evaluates a kernel of order 4.
Usage
kernel4(x)
Arguments
x A vector of real numbers.
Details
This function evaluates a kernel of order 4. A kernel L is a kernel of order 4 if it integrates 1, the
integrals of u^j L(u) are 0 for 1 <= j < 4 (j integer) and the integral of u^4 L(u) is different from 0.
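These moment conditions can be verified numerically (a quick check written for this note; the odd moments vanish by symmetry):
```
integrate(kernel4, -1, 1)$value                       # ~ 1: integrates to one
integrate(function(u) u^2 * kernel4(u), -1, 1)$value  # ~ 0: second moment vanishes
integrate(function(u) u^4 * kernel4(u), -1, 1)$value  # != 0: fourth moment is nonzero
```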
Value
A vector of the same length as x where each entry is ( 15/32 ) * ( 1 - x^2 ) * ( 3 - 7 * x^2 ) if
abs(x) < 1 and 0 otherwise.
Author(s)
<NAME>, <<EMAIL>>, <NAME>
Examples
x <- seq(-2,2,length=10)
kernel4(x)
kernel6 Order 6 kernel
Description
This function evaluates a kernel of order 6.
Usage
kernel6(x)
Arguments
x A vector of real numbers.
Details
This function evaluates a kernel of order 6. A kernel L is a kernel of order 6 if it integrates 1, the
integrals of u^j L(u) are 0 for 1 <= j < 6 (j integer) and the integral of u^6 L(u) is different from 0.
Value
A vector of the same length as x where each entry is ( 105/256 ) * ( 1 - x^2 ) * ( 5 - 30 * x^2 +
33 * x^4 ) if abs(x) < 1 and 0 otherwise.
Author(s)
<NAME>, <<EMAIL>>, <NAME>
Examples
x <- seq(-2,2,length=10)
kernel6(x)
kernel8 Order 8 kernel
Description
This function evaluates a kernel of order 8.
Usage
kernel8(x)
Arguments
x A vector of real numbers.
Details
This function evaluates a kernel of order 8. A kernel L is a kernel of order 8 if it integrates 1, the
integrals of u^j L(u) are 0 for 1 <= j < 8 (j integer) and the integral of u^8 L(u) is different from 0.
Value
A vector of the same length as x where each entry is ( 315/4096 ) * ( 1 - x^2 ) * ( 35 - 385 * x^2
+ 1001 * x^4 - 715 * x^6 ) if abs(x) < 1 and 0 otherwise.
Author(s)
<NAME>, <<EMAIL>>, <NAME>
Examples
x <- seq(-2,2,length=10)
kernel8(x)
margint.cl Classic marginal integration procedures for additive models
Description
This function computes the standard marginal integration procedures for additive models.
Usage
margint.cl(formula, data, subset, point = NULL, windows,
epsilon = 1e-06, prob = NULL, type = "0", degree = NULL,
qderivate = FALSE, orderkernel = 2, Qmeasure = NULL)
Arguments
formula an object of class formula (or one that can be coerced to that class): a symbolic
description of the model to be fitted.
data an optional data frame, list or environment (or object coercible by as.data.frame
to a data frame) containing the variables in the model. If not found in data,
the variables are taken from environment(formula), typically the environment
from which the function was called.
subset an optional vector specifying a subset of observations to be used in the fitting
process.
point a matrix of points where predictions will be computed and returned.
windows a vector or a square matrix of bandwidths for the smoothing estimation proce-
dure.
epsilon convergence criterion.
prob a vector of probabilities of observing each response (n). Defaults to NULL.
type three different type of estimators can be selected: type '0' (local constant on
all the covariates), type '1' (local linear smoother on all the covariates), type
'alpha' (local polynomial smoother only on the direction of interest).
degree degree of the local polynomial smoother in the direction of interest when us-
ing the estimator of type 'alpha'. Defaults to NULL for the case when using
estimators of type '0' or '1'.
qderivate if TRUE, it calculates g^(q+1)/(q+1)! for each component only for the type
'alpha' method. Defaults to FALSE.
orderkernel order of the kernel used in the nuisance directions when using the estimator of
type 'alpha'. Defaults to 2.
Qmeasure a matrix of points where the integration procedure occurs. Defaults to NULL for
calculating the integrals over the sample.
Details
This function computes three types of classical marginal integration procedures for additive models,
that is, considering a squared loss function.
Value
A list with the following components:
mu Estimate for the intercept.
g.matrix Matrix of estimated additive components (n by p).
prediction Matrix of estimated additive components for the points listed in the argument
point.
mul A vector of size p showing in each component the estimated intercept that con-
siders only that direction of interest when using the type 'alpha' method.
g.derivative Matrix of estimated derivatives of the additive components (only when qderivate
is TRUE) (n by p).
prediction.derivate
Matrix of estimated derivatives of the additive components for the points listed
in the argument point (only when qderivate is TRUE).
Xp Matrix of explanatory variables.
yp Vector of responses.
formula Model formula
Author(s)
<NAME>, <<EMAIL>>, <NAME>
References
<NAME>., <NAME>., <NAME>. and <NAME>. (1996). Nonparametric estimation of additive separable regression models. Physica-Verlag HD, Switzerland.
Linton O. and Nielsen J. (1995). A kernel method of estimating structured nonparametric regression based on marginal integration. Biometrika, 82(1), 93-101.
<NAME>. and <NAME>. (1999). Estimation of derivatives for additive separable models. Statistics, 33(3), 241-265.
<NAME>. and <NAME>. (1994). Nonparametric identification of nonlinear time series: Selecting significant lags. Journal of the American Statistical Association, 89(428), 1410-1430.
Examples
function.g1 <- function(x1) 24*(x1-1/2)^2-2
function.g2 <- function(x2) 2*pi*sin(pi*x2)-4
n <- 150
x1 <- runif(n)
x2 <- runif(n)
X <- cbind(x1, x2)
eps <- rnorm(n,0,sd=0.15)
regresion <- function.g1(x1) + function.g2(x2)
y <- regresion + eps
bandw <- matrix(0.25,2,2)
set.seed(8090)
nQ <- 80
Qmeasure <- matrix(runif(nQ*2), nQ, 2)
fit.cl <- margint.cl(y ~ X, windows=bandw, type='alpha', degree=1, Qmeasure=Qmeasure)
margint.rob Robust marginal integration procedures for additive models
Description
This function computes robust marginal integration procedures for additive models.
Usage
margint.rob(formula, data, subset, point = NULL, windows, prob = NULL,
sigma.hat = NULL, win.sigma = NULL, epsilon = 1e-06, type = "0",
degree = NULL, typePhi = "Huber", k.h = 1.345, k.t = 4.685,
max.it = 20, qderivate = FALSE, orderkernel = 2, Qmeasure = NULL)
Arguments
formula an object of class formula (or one that can be coerced to that class): a symbolic
description of the model to be fitted.
data an optional data frame, list or environment (or object coercible by as.data.frame
to a data frame) containing the variables in the model. If not found in data,
the variables are taken from environment(formula), typically the environment
from which the function was called.
subset an optional vector specifying a subset of observations to be used in the fitting
process.
point a matrix of points where predictions will be computed and returned.
windows a vector or a square matrix of bandwidths for the smoothing estimation proce-
dure.
prob a vector of probabilities of observing each response (n). Defaults to NULL.
sigma.hat estimate of the residual standard error. If NULL we use the mad of the residuals
obtained with local medians.
win.sigma a vector of bandwidths for estimating sigma.hat. If NULL it uses the argument
windows if it is a vector or its diagonal if it is a matrix.
epsilon convergence criterion.
type three different type of estimators can be selected: type '0' (local constant on
all the covariates), type '1' (local linear smoother on all the covariates), type
'alpha' (local polynomial smoother only on the direction of interest).
degree degree of the local polynomial smoother in the direction of interest when us-
ing the estimator of type 'alpha'. Defaults to NULL for the case when using
estimators of type '0' or '1'.
typePhi one of either 'Tukey' or 'Huber'.
k.h tuning constant for a Huber-type loss function. Defaults to 1.345.
k.t tuning constant for a Tukey-type loss function. Defaults to 4.685.
max.it maximum number of iterations for the algorithm.
qderivate if TRUE, it calculates g^(q+1)/(q+1)! for each component only for the type
'alpha' method. Defaults to FALSE.
orderkernel order of the kernel used in the nuisance directions when using the estimator of
type 'alpha'. Defaults to 2.
Qmeasure a matrix of points where the integration procedure occurs. Defaults to NULL for
calculating the integrals over the sample.
Details
This function computes three types of robust marginal integration procedures for additive models.
Value
A list with the following components:
mu Estimate for the intercept.
g.matrix Matrix of estimated additive components (n by p).
sigma.hat Estimate of the residual standard error.
prediction Matrix of estimated additive components for the points listed in the argument
point.
mul A vector of size p showing in each component the estimated intercept that con-
siders only that direction of interest when using the type 'alpha' method.
g.derivative Matrix of estimated derivatives of the additive components (only when qderivate
is TRUE) (n by p).
prediction.derivate
Matrix of estimated derivatives of the additive components for the points listed
in the argument point (only when qderivate is TRUE).
Xp Matrix of explanatory variables.
yp Vector of responses.
formula Model formula
Author(s)
<NAME>, <<EMAIL>>, <NAME>
References
<NAME>. and <NAME>. (2017). Marginal integration M-estimators for additive models. TEST,
26(2), 231-260. https://doi.org/10.1007/s11749-016-0508-0
Examples
function.g1 <- function(x1) 24*(x1-1/2)^2-2
function.g2 <- function(x2) 2*pi*sin(pi*x2)-4
set.seed(140)
n <- 150
x1 <- runif(n)
x2 <- runif(n)
X <- cbind(x1, x2)
eps <- rnorm(n,0,sd=0.15)
regresion <- function.g1(x1) + function.g2(x2)
y <- regresion + eps
bandw <- matrix(0.25,2,2)
set.seed(8090)
nQ <- 80
Qmeasure <- matrix(runif(nQ*2), nQ, 2)
fit.rob <- margint.rob(y ~ X, windows=bandw, type='alpha', degree=1, Qmeasure=Qmeasure)
my.norm.2 Euclidean norm of a vector
Description
This function calculates the Euclidean norm of a vector.
Usage
my.norm.2(x)
Arguments
x A real vector.
Value
The Euclidean norm of the input vector.
Author(s)
<NAME>, <<EMAIL>>, <NAME>
Examples
x <- seq(-2, 2, length=10)
my.norm.2(x)
plot.margint Diagnostic plots for objects of class margint
Description
Plot method for class margint.
Usage
## S3 method for class 'margint'
plot(x, derivative = FALSE, which = 1:np,
ask = FALSE, ...)
Arguments
x an object of class margint, a result of a call to margint.cl or margint.rob.
derivative if TRUE, it plots the q-th derivatives. Defaults to FALSE.
which vector of indices of explanatory variables for which partial residual plots will
be generated. Defaults to all available explanatory variables.
ask logical value. If TRUE, the graphical device will prompt before going to the next
page/screen of output.
... additional other arguments.
Author(s)
<NAME> <<EMAIL>>
Examples
function.g1 <- function(x1) 24*(x1-1/2)^2-2
function.g2 <- function(x2) 2*pi*sin(pi*x2)-4
set.seed(140)
n <- 150
x1 <- runif(n)
x2 <- runif(n)
X <- cbind(x1, x2)
eps <- rnorm(n,0,sd=0.15)
regresion <- function.g1(x1) + function.g2(x2)
y <- regresion + eps
bandw <- matrix(0.25,2,2)
set.seed(8090)
nQ <- 80
Qmeasure <- matrix(runif(nQ*2), nQ, 2)
fit.rob <- margint.rob(y ~ X, windows=bandw, type='alpha', degree=1, Qmeasure=Qmeasure)
plot(fit.rob, which=1)
predict.margint Fitted values for objects of class margint
Description
This function returns the fitted values given the covariates of the original sample under an ad-
ditive model using a classical or robust marginal integration procedure estimator computed with
margint.cl or margint.rob.
Usage
## S3 method for class 'margint'
predict(object, ...)
Arguments
object an object of class margint, a result of a call to margint.cl or margint.rob.
... additional other arguments. Currently ignored.
Value
A vector of fitted values.
Author(s)
<NAME> <<EMAIL>>
print.margint Print a Marginal Integration procedure
Description
The default print method for a margint object.
Usage
## S3 method for class 'margint'
print(x, ...)
Arguments
x an object of class margint, a result of a call to margint.cl or margint.rob.
... additional other arguments. Currently ignored.
Value
A real number.
Author(s)
<NAME> <<EMAIL>>
psi.huber Derivative of Huber’s loss function.
Description
This function evaluates the first derivative of Huber’s loss function.
Usage
psi.huber(r, k = 1.345)
Arguments
r A vector of real numbers.
k A positive tuning constant.
Details
This function evaluates the first derivative of Huber’s loss function.
Value
A vector of the same length as r.
Author(s)
<NAME>, <<EMAIL>>, <NAME>
Examples
x <- seq(-2, 2, length=10)
psi.huber(r=x, k = 1.5)
psi.tukey Derivative of Tukey’s bi-square loss function.
Description
This function evaluates the first derivative of Tukey’s bi-square loss function.
Usage
psi.tukey(r, k = 4.685)
Arguments
r A vector of real numbers
k A positive tuning constant.
Details
This function evaluates the first derivative of Tukey’s bi-square loss function.
Value
A vector of the same length as r.
Author(s)
<NAME>, <<EMAIL>>, <NAME>
Examples
x <- seq(-2, 2, length=10)
psi.tukey(r=x, k = 1.5)
residuals.margint Residuals for objects of class margint
Description
This function returns the residuals of the fitted additive model using one of the three classical or
robust marginal integration estimators, as computed with margint.cl or margint.rob.
Usage
## S3 method for class 'margint'
residuals(object, ...)
Arguments
object an object of class margint, a result of a call to margint.cl or margint.rob.
... additional other arguments. Currently ignored.
Value
A vector of residuals.
Author(s)
<NAME> <<EMAIL>>
summary.margint Summary for additive model fits using a marginal integration proce-
dure
Description
Summary method for class margint.
Usage
## S3 method for class 'margint'
summary(object, ...)
Arguments
object an object of class margint, a result of a call to margint.cl or margint.rob.
... additional other arguments.
Details
This function returns the estimate of the intercept and also the five-number summary and the mean
of the residuals for both classical and robust estimators. For the robust estimator it also returns the
estimate of the residual standard error.
Author(s)
<NAME> <<EMAIL>>
github.com/go-openapi/swag | go | Go | README
[¶](#section-readme)
---
### Swag [Build Status](https://travis-ci.org/go-openapi/swag) [codecov](https://codecov.io/gh/go-openapi/swag) [Slack Status](https://slackin.goswagger.io)
[![license](http://img.shields.io/badge/license-Apache%20v2-orange.svg)](https://raw.githubusercontent.com/go-openapi/swag/master/LICENSE)
[![GoDoc](https://godoc.org/github.com/go-openapi/swag?status.svg)](http://godoc.org/github.com/go-openapi/swag)
[![Go Report Card](https://goreportcard.com/badge/github.com/go-openapi/swag)](https://goreportcard.com/report/github.com/go-openapi/swag)
Contains a bunch of helper functions for go-openapi and go-swagger projects.
You may also use it standalone for your projects.
* convert between value and pointers for builtin types
* convert from string to builtin types (wraps strconv)
* fast json concatenation
* search in path
* load from file or http
* name mangling
This repo has only a few dependencies outside of the standard library:
* YAML utilities depend on gopkg.in/yaml.v2
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package swag contains a bunch of helper functions for go-openapi and go-swagger projects.
You may also use it standalone for your projects.
* convert between value and pointers for builtin types
* convert from string to builtin types (wraps strconv)
* fast json concatenation
* search in path
* load from file or http
* name mangling
This repo has only a few dependencies outside of the standard library:
* YAML utilities depend on gopkg.in/yaml.v2
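A minimal sketch (not part of the package documentation) of how a couple of these helpers are typically combined; the values shown in the comments are the expected results, not captured output:
```
package main
import (
"fmt"
"github.com/go-openapi/swag"
)
func main() {
// value <-> pointer helpers for builtin types
name := swag.String("swagger") // *string
fmt.Println(swag.StringValue(name)) // "swagger"
fmt.Println(swag.StringValue(nil)) // "" (nil-safe)
// string -> builtin conversion, wrapping strconv
n, err := swag.ConvertInt64("42")
fmt.Println(n, err) // 42 <nil>
}
```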
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [Variables](#pkg-variables)
* [func AddInitialisms(words ...string)](#AddInitialisms)
* [func Bool(v bool) *bool](#Bool)
* [func BoolMap(src map[string]bool) map[string]*bool](#BoolMap)
* [func BoolSlice(src []bool) []*bool](#BoolSlice)
* [func BoolValue(v *bool) bool](#BoolValue)
* [func BoolValueMap(src map[string]*bool) map[string]bool](#BoolValueMap)
* [func BoolValueSlice(src []*bool) []bool](#BoolValueSlice)
* [func BytesToYAMLDoc(data []byte) (interface{}, error)](#BytesToYAMLDoc)
* [func Camelize(word string) (camelized string)](#Camelize)
* [func ConcatJSON(blobs ...[]byte) []byte](#ConcatJSON)
* [func ContainsStrings(coll []string, item string) bool](#ContainsStrings)
* [func ContainsStringsCI(coll []string, item string) bool](#ContainsStringsCI)
* [func ConvertBool(str string) (bool, error)](#ConvertBool)
* [func ConvertFloat32(str string) (float32, error)](#ConvertFloat32)
* [func ConvertFloat64(str string) (float64, error)](#ConvertFloat64)
* [func ConvertInt16(str string) (int16, error)](#ConvertInt16)
* [func ConvertInt32(str string) (int32, error)](#ConvertInt32)
* [func ConvertInt64(str string) (int64, error)](#ConvertInt64)
* [func ConvertInt8(str string) (int8, error)](#ConvertInt8)
* [func ConvertUint16(str string) (uint16, error)](#ConvertUint16)
* [func ConvertUint32(str string) (uint32, error)](#ConvertUint32)
* [func ConvertUint64(str string) (uint64, error)](#ConvertUint64)
* [func ConvertUint8(str string) (uint8, error)](#ConvertUint8)
* [func DynamicJSONToStruct(data interface{}, target interface{}) error](#DynamicJSONToStruct)
* [func FindInGoSearchPath(pkg string) string](#FindInGoSearchPath)
* [func FindInSearchPath(searchPath, pkg string) string](#FindInSearchPath)
* [func Float32(v float32) *float32](#Float32)
* [func Float32Map(src map[string]float32) map[string]*float32](#Float32Map)
* [func Float32Slice(src []float32) []*float32](#Float32Slice)
* [func Float32Value(v *float32) float32](#Float32Value)
* [func Float32ValueMap(src map[string]*float32) map[string]float32](#Float32ValueMap)
* [func Float32ValueSlice(src []*float32) []float32](#Float32ValueSlice)
* [func Float64(v float64) *float64](#Float64)
* [func Float64Map(src map[string]float64) map[string]*float64](#Float64Map)
* [func Float64Slice(src []float64) []*float64](#Float64Slice)
* [func Float64Value(v *float64) float64](#Float64Value)
* [func Float64ValueMap(src map[string]*float64) map[string]float64](#Float64ValueMap)
* [func Float64ValueSlice(src []*float64) []float64](#Float64ValueSlice)
* [func FormatBool(value bool) string](#FormatBool)
* [func FormatFloat32(value float32) string](#FormatFloat32)
* [func FormatFloat64(value float64) string](#FormatFloat64)
* [func FormatInt16(value int16) string](#FormatInt16)
* [func FormatInt32(value int32) string](#FormatInt32)
* [func FormatInt64(value int64) string](#FormatInt64)
* [func FormatInt8(value int8) string](#FormatInt8)
* [func FormatUint16(value uint16) string](#FormatUint16)
* [func FormatUint32(value uint32) string](#FormatUint32)
* [func FormatUint64(value uint64) string](#FormatUint64)
* [func FormatUint8(value uint8) string](#FormatUint8)
* [func FromDynamicJSON(data, target interface{}) error](#FromDynamicJSON)
* [func FullGoSearchPath() string](#FullGoSearchPath)
* [func Int(v int) *int](#Int)
* [func Int32(v int32) *int32](#Int32)
* [func Int32Map(src map[string]int32) map[string]*int32](#Int32Map)
* [func Int32Slice(src []int32) []*int32](#Int32Slice)
* [func Int32Value(v *int32) int32](#Int32Value)
* [func Int32ValueMap(src map[string]*int32) map[string]int32](#Int32ValueMap)
* [func Int32ValueSlice(src []*int32) []int32](#Int32ValueSlice)
* [func Int64(v int64) *int64](#Int64)
* [func Int64Map(src map[string]int64) map[string]*int64](#Int64Map)
* [func Int64Slice(src []int64) []*int64](#Int64Slice)
* [func Int64Value(v *int64) int64](#Int64Value)
* [func Int64ValueMap(src map[string]*int64) map[string]int64](#Int64ValueMap)
* [func Int64ValueSlice(src []*int64) []int64](#Int64ValueSlice)
* [func IntMap(src map[string]int) map[string]*int](#IntMap)
* [func IntSlice(src []int) []*int](#IntSlice)
* [func IntValue(v *int) int](#IntValue)
* [func IntValueMap(src map[string]*int) map[string]int](#IntValueMap)
* [func IntValueSlice(src []*int) []int](#IntValueSlice)
* [func IsFloat64AJSONInteger(f float64) bool](#IsFloat64AJSONInteger)
* [func IsZero(data interface{}) bool](#IsZero)
* [func JoinByFormat(data []string, format string) []string](#JoinByFormat)
* [func LoadFromFileOrHTTP(path string) ([]byte, error)](#LoadFromFileOrHTTP)
* [func LoadFromFileOrHTTPWithTimeout(path string, timeout time.Duration) ([]byte, error)](#LoadFromFileOrHTTPWithTimeout)
* [func LoadStrategy(path string, local, remote func(string) ([]byte, error)) func(string) ([]byte, error)](#LoadStrategy)
* [func ReadJSON(data []byte, value interface{}) error](#ReadJSON)
* [func SplitByFormat(data, format string) []string](#SplitByFormat)
* [func SplitHostPort(addr string) (host string, port int, err error)](#SplitHostPort)
* [func String(v string) *string](#String)
* [func StringMap(src map[string]string) map[string]*string](#StringMap)
* [func StringSlice(src []string) []*string](#StringSlice)
* [func StringValue(v *string) string](#StringValue)
* [func StringValueMap(src map[string]*string) map[string]string](#StringValueMap)
* [func StringValueSlice(src []*string) []string](#StringValueSlice)
* [func Time(v time.Time) *time.Time](#Time)
* [func TimeMap(src map[string]time.Time) map[string]*time.Time](#TimeMap)
* [func TimeSlice(src []time.Time) []*time.Time](#TimeSlice)
* [func TimeValue(v *time.Time) time.Time](#TimeValue)
* [func TimeValueMap(src map[string]*time.Time) map[string]time.Time](#TimeValueMap)
* [func TimeValueSlice(src []*time.Time) []time.Time](#TimeValueSlice)
* [func ToCommandName(name string) string](#ToCommandName)
* [func ToDynamicJSON(data interface{}) interface{}](#ToDynamicJSON)
* [func ToFileName(name string) string](#ToFileName)
* [func ToGoName(name string) string](#ToGoName)
* [func ToHumanNameLower(name string) string](#ToHumanNameLower)
* [func ToHumanNameTitle(name string) string](#ToHumanNameTitle)
* [func ToJSONName(name string) string](#ToJSONName)
* [func ToVarName(name string) string](#ToVarName)
* [func Uint(v uint) *uint](#Uint)
* [func Uint16(v uint16) *uint16](#Uint16)
* [func Uint16Map(src map[string]uint16) map[string]*uint16](#Uint16Map)
* [func Uint16Slice(src []uint16) []*uint16](#Uint16Slice)
* [func Uint16Value(v *uint16) uint16](#Uint16Value)
* [func Uint16ValueMap(src map[string]*uint16) map[string]uint16](#Uint16ValueMap)
* [func Uint16ValueSlice(src []*uint16) []uint16](#Uint16ValueSlice)
* [func Uint32(v uint32) *uint32](#Uint32)
* [func Uint32Map(src map[string]uint32) map[string]*uint32](#Uint32Map)
* [func Uint32Slice(src []uint32) []*uint32](#Uint32Slice)
* [func Uint32Value(v *uint32) uint32](#Uint32Value)
* [func Uint32ValueMap(src map[string]*uint32) map[string]uint32](#Uint32ValueMap)
* [func Uint32ValueSlice(src []*uint32) []uint32](#Uint32ValueSlice)
* [func Uint64(v uint64) *uint64](#Uint64)
* [func Uint64Map(src map[string]uint64) map[string]*uint64](#Uint64Map)
* [func Uint64Slice(src []uint64) []*uint64](#Uint64Slice)
* [func Uint64Value(v *uint64) uint64](#Uint64Value)
* [func Uint64ValueMap(src map[string]*uint64) map[string]uint64](#Uint64ValueMap)
* [func Uint64ValueSlice(src []*uint64) []uint64](#Uint64ValueSlice)
* [func UintMap(src map[string]uint) map[string]*uint](#UintMap)
* [func UintSlice(src []uint) []*uint](#UintSlice)
* [func UintValue(v *uint) uint](#UintValue)
* [func UintValueMap(src map[string]*uint) map[string]uint](#UintValueMap)
* [func UintValueSlice(src []*uint) []uint](#UintValueSlice)
* [func WriteJSON(data interface{}) ([]byte, error)](#WriteJSON)
* [func YAMLData(path string) (interface{}, error)](#YAMLData)
* [func YAMLDoc(path string) (json.RawMessage, error)](#YAMLDoc)
* [func YAMLMatcher(path string) bool](#YAMLMatcher)
* [func YAMLToJSON(data interface{}) (json.RawMessage, error)](#YAMLToJSON)
* [type CommandLineOptionsGroup](#CommandLineOptionsGroup)
* [type File](#File)
* + [func (f *File) Close() error](#File.Close)
+ [func (f *File) Read(p []byte) (n int, err error)](#File.Read)
* [type JSONMapItem](#JSONMapItem)
* + [func (s JSONMapItem) MarshalEasyJSON(w *jwriter.Writer)](#JSONMapItem.MarshalEasyJSON)
+ [func (s JSONMapItem) MarshalJSON() ([]byte, error)](#JSONMapItem.MarshalJSON)
+ [func (s *JSONMapItem) UnmarshalEasyJSON(in *jlexer.Lexer)](#JSONMapItem.UnmarshalEasyJSON)
+ [func (s *JSONMapItem) UnmarshalJSON(data []byte) error](#JSONMapItem.UnmarshalJSON)
* [type JSONMapSlice](#JSONMapSlice)
* + [func (s JSONMapSlice) MarshalEasyJSON(w *jwriter.Writer)](#JSONMapSlice.MarshalEasyJSON)
+ [func (s JSONMapSlice) MarshalJSON() ([]byte, error)](#JSONMapSlice.MarshalJSON)
+ [func (s JSONMapSlice) MarshalYAML() (interface{}, error)](#JSONMapSlice.MarshalYAML)
+ [func (s *JSONMapSlice) UnmarshalEasyJSON(in *jlexer.Lexer)](#JSONMapSlice.UnmarshalEasyJSON)
+ [func (s *JSONMapSlice) UnmarshalJSON(data []byte) error](#JSONMapSlice.UnmarshalJSON)
* [type NameProvider](#NameProvider)
* + [func NewNameProvider() *NameProvider](#NewNameProvider)
* + [func (n *NameProvider) GetGoName(subject interface{}, name string) (string, bool)](#NameProvider.GetGoName)
+ [func (n *NameProvider) GetGoNameForType(tpe reflect.Type, name string) (string, bool)](#NameProvider.GetGoNameForType)
+ [func (n *NameProvider) GetJSONName(subject interface{}, name string) (string, bool)](#NameProvider.GetJSONName)
+ [func (n *NameProvider) GetJSONNameForType(tpe reflect.Type, name string) (string, bool)](#NameProvider.GetJSONNameForType)
+ [func (n *NameProvider) GetJSONNames(subject interface{}) []string](#NameProvider.GetJSONNames)
### Constants [¶](#pkg-constants)
```
const (
// GOPATHKey represents the env key for gopath
GOPATHKey = "GOPATH"
)
```
### Variables [¶](#pkg-variables)
```
var DefaultJSONNameProvider = [NewNameProvider](#NewNameProvider)()
```
DefaultJSONNameProvider the default cache for types
```
var GoNamePrefixFunc func([string](/builtin#string)) [string](/builtin#string)
```
GoNamePrefixFunc sets an optional rule to prefix go names which do not start with a letter.
e.g. to help convert "123" into "{prefix}123"
The default is to prefix with "X"
```
var LoadHTTPBasicAuthPassword = ""
```
LoadHTTPBasicAuthPassword the password to use when load requests require basic auth
```
var LoadHTTPBasicAuthUsername = ""
```
LoadHTTPBasicAuthUsername the username to use when load requests require basic auth
```
var LoadHTTPCustomHeaders = map[[string](/builtin#string)][string](/builtin#string){}
```
LoadHTTPCustomHeaders an optional collection of custom HTTP headers for load requests
```
var LoadHTTPTimeout = 30 * [time](/time).[Second](/time#Second)
```
LoadHTTPTimeout the default timeout for load requests
### Functions [¶](#pkg-functions)
####
func [AddInitialisms](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L380) [¶](#AddInitialisms)
```
func AddInitialisms(words ...[string](/builtin#string))
```
AddInitialisms adds additional initialisms
####
func [Bool](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L67) [¶](#Bool)
```
func Bool(v [bool](/builtin#bool)) *[bool](/builtin#bool)
```
Bool returns a pointer to the bool value passed in.
####
func [BoolMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L104) [¶](#BoolMap)
```
func BoolMap(src map[[string](/builtin#string)][bool](/builtin#bool)) map[[string](/builtin#string)]*[bool](/builtin#bool)
```
BoolMap converts a string map of bool values into a string map of bool pointers
####
func [BoolSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L82) [¶](#BoolSlice)
```
func BoolSlice(src [][bool](/builtin#bool)) []*[bool](/builtin#bool)
```
BoolSlice converts a slice of bool values into a slice of bool pointers
####
func [BoolValue](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L73) [¶](#BoolValue)
```
func BoolValue(v *[bool](/builtin#bool)) [bool](/builtin#bool)
```
BoolValue returns the value of the bool pointer passed in or false if the pointer is nil.
####
func [BoolValueMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L115) [¶](#BoolValueMap)
```
func BoolValueMap(src map[[string](/builtin#string)]*[bool](/builtin#bool)) map[[string](/builtin#string)][bool](/builtin#bool)
```
BoolValueMap converts a string map of bool pointers into a string map of bool values
####
func [BoolValueSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L92) [¶](#BoolValueSlice)
```
func BoolValueSlice(src []*[bool](/builtin#bool)) [][bool](/builtin#bool)
```
BoolValueSlice converts a slice of bool pointers into a slice of bool values
####
func [BytesToYAMLDoc](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L45) [¶](#BytesToYAMLDoc)
```
func BytesToYAMLDoc(data [][byte](/builtin#byte)) (interface{}, [error](/builtin#error))
```
BytesToYAMLDoc converts a byte slice into a YAML document
####
func [Camelize](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L191) [¶](#Camelize)
```
func Camelize(word [string](/builtin#string)) (camelized [string](/builtin#string))
```
Camelize an uppercased word
####
func [ConcatJSON](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L94) [¶](#ConcatJSON)
```
func ConcatJSON(blobs ...[][byte](/builtin#byte)) [][byte](/builtin#byte)
```
ConcatJSON concatenates multiple json objects efficiently
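A rough illustration, assuming the usual merge behaviour for two JSON objects (the expected output is shown in the comment):
```
package main
import (
"fmt"
"github.com/go-openapi/swag"
)
func main() {
a := []byte(`{"name":"swag"}`)
b := []byte(`{"loaded":true}`)
// Concatenating two objects merges their members into a single object.
fmt.Println(string(swag.ConcatJSON(a, b))) // {"name":"swag","loaded":true}
}
```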
####
func [ContainsStrings](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L318) [¶](#ContainsStrings)
```
func ContainsStrings(coll [][string](/builtin#string), item [string](/builtin#string)) [bool](/builtin#bool)
```
ContainsStrings searches a slice of strings for a case-sensitive match
####
func [ContainsStringsCI](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L328) [¶](#ContainsStringsCI)
```
func ContainsStringsCI(coll [][string](/builtin#string), item [string](/builtin#string)) [bool](/builtin#bool)
```
ContainsStringsCI searches a slice of strings for a case-insensitive match
####
func [ConvertBool](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L72) [¶](#ConvertBool)
```
func ConvertBool(str [string](/builtin#string)) ([bool](/builtin#bool), [error](/builtin#error))
```
ConvertBool turns a string into a boolean
####
func [ConvertFloat32](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L78) [¶](#ConvertFloat32)
```
func ConvertFloat32(str [string](/builtin#string)) ([float32](/builtin#float32), [error](/builtin#error))
```
ConvertFloat32 turns a string into a float32
####
func [ConvertFloat64](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L87) [¶](#ConvertFloat64)
```
func ConvertFloat64(str [string](/builtin#string)) ([float64](/builtin#float64), [error](/builtin#error))
```
ConvertFloat64 turns a string into a float64
####
func [ConvertInt16](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L101) [¶](#ConvertInt16)
```
func ConvertInt16(str [string](/builtin#string)) ([int16](/builtin#int16), [error](/builtin#error))
```
ConvertInt16 turns a string into an int16
####
func [ConvertInt32](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L110) [¶](#ConvertInt32)
```
func ConvertInt32(str [string](/builtin#string)) ([int32](/builtin#int32), [error](/builtin#error))
```
ConvertInt32 turns a string into an int32
####
func [ConvertInt64](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L119) [¶](#ConvertInt64)
```
func ConvertInt64(str [string](/builtin#string)) ([int64](/builtin#int64), [error](/builtin#error))
```
ConvertInt64 turns a string into an int64
####
func [ConvertInt8](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L92) [¶](#ConvertInt8)
```
func ConvertInt8(str [string](/builtin#string)) ([int8](/builtin#int8), [error](/builtin#error))
```
ConvertInt8 turns a string into an int8
####
func [ConvertUint16](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L133) [¶](#ConvertUint16)
```
func ConvertUint16(str [string](/builtin#string)) ([uint16](/builtin#uint16), [error](/builtin#error))
```
ConvertUint16 turns a string into a uint16
####
func [ConvertUint32](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L142) [¶](#ConvertUint32)
```
func ConvertUint32(str [string](/builtin#string)) ([uint32](/builtin#uint32), [error](/builtin#error))
```
ConvertUint32 turns a string into a uint32
####
func [ConvertUint64](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L151) [¶](#ConvertUint64)
```
func ConvertUint64(str [string](/builtin#string)) ([uint64](/builtin#uint64), [error](/builtin#error))
```
ConvertUint64 turns a string into a uint64
####
func [ConvertUint8](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L124) [¶](#ConvertUint8)
```
func ConvertUint8(str [string](/builtin#string)) ([uint8](/builtin#uint8), [error](/builtin#error))
```
ConvertUint8 turns a string into a uint8
####
func [DynamicJSONToStruct](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L84) [¶](#DynamicJSONToStruct)
```
func DynamicJSONToStruct(data interface{}, target interface{}) [error](/builtin#error)
```
DynamicJSONToStruct converts an untyped json structure into a struct
####
func [FindInGoSearchPath](https://github.com/go-openapi/swag/blob/v0.22.4/path.go#L43) [¶](#FindInGoSearchPath)
```
func FindInGoSearchPath(pkg [string](/builtin#string)) [string](/builtin#string)
```
FindInGoSearchPath finds a package in the $GOPATH:$GOROOT
####
func [FindInSearchPath](https://github.com/go-openapi/swag/blob/v0.22.4/path.go#L30) [¶](#FindInSearchPath)
```
func FindInSearchPath(searchPath, pkg [string](/builtin#string)) [string](/builtin#string)
```
FindInSearchPath finds a package in a provided lists of paths
####
func [Float32](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L547) [¶](#Float32)
added in v0.19.9
```
func Float32(v [float32](/builtin#float32)) *[float32](/builtin#float32)
```
Float32 returns a pointer to the float32 value passed in.
####
func [Float32Map](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L589) [¶](#Float32Map)
added in v0.19.9
```
func Float32Map(src map[[string](/builtin#string)][float32](/builtin#float32)) map[[string](/builtin#string)]*[float32](/builtin#float32)
```
Float32Map converts a string map of float32 values into a string map of float32 pointers
####
func [Float32Slice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L563) [¶](#Float32Slice)
added in v0.19.9
```
func Float32Slice(src [][float32](/builtin#float32)) []*[float32](/builtin#float32)
```
Float32Slice converts a slice of float32 values into a slice of float32 pointers
####
func [Float32Value](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L553) [¶](#Float32Value)
added in v0.19.9
```
func Float32Value(v *[float32](/builtin#float32)) [float32](/builtin#float32)
```
Float32Value returns the value of the float32 pointer passed in or 0 if the pointer is nil.
####
func [Float32ValueMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L602) [¶](#Float32ValueMap)
added in v0.19.9
```
func Float32ValueMap(src map[[string](/builtin#string)]*[float32](/builtin#float32)) map[[string](/builtin#string)][float32](/builtin#float32)
```
Float32ValueMap converts a string map of float32 pointers into a string map of float32 values
####
func [Float32ValueSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L575) [¶](#Float32ValueSlice)
added in v0.19.9
```
func Float32ValueSlice(src []*[float32](/builtin#float32)) [][float32](/builtin#float32)
```
Float32ValueSlice converts a slice of float32 pointers into a slice of float32 values
####
func [Float64](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L615) [¶](#Float64)
```
func Float64(v [float64](/builtin#float64)) *[float64](/builtin#float64)
```
Float64 returns a pointer to the float64 value passed in.
####
func [Float64Map](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L652) [¶](#Float64Map)
```
func Float64Map(src map[[string](/builtin#string)][float64](/builtin#float64)) map[[string](/builtin#string)]*[float64](/builtin#float64)
```
Float64Map converts a string map of float64 values into a string map of float64 pointers
####
func [Float64Slice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L630) [¶](#Float64Slice)
```
func Float64Slice(src [][float64](/builtin#float64)) []*[float64](/builtin#float64)
```
Float64Slice converts a slice of float64 values into a slice of float64 pointers
####
func [Float64Value](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L621) [¶](#Float64Value)
```
func Float64Value(v *[float64](/builtin#float64)) [float64](/builtin#float64)
```
Float64Value returns the value of the float64 pointer passed in or 0 if the pointer is nil.
####
func [Float64ValueMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L663) [¶](#Float64ValueMap)
```
func Float64ValueMap(src map[[string](/builtin#string)]*[float64](/builtin#float64)) map[[string](/builtin#string)][float64](/builtin#float64)
```
Float64ValueMap converts a string map of float64 pointers into a string map of float64 values
####
func [Float64ValueSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L640) [¶](#Float64ValueSlice)
```
func Float64ValueSlice(src []*[float64](/builtin#float64)) [][float64](/builtin#float64)
```
Float64ValueSlice converts a slice of float64 pointers into a slice of float64 values
####
func [FormatBool](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L156) [¶](#FormatBool)
```
func FormatBool(value [bool](/builtin#bool)) [string](/builtin#string)
```
FormatBool turns a boolean into a string
####
func [FormatFloat32](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L161) [¶](#FormatFloat32)
```
func FormatFloat32(value [float32](/builtin#float32)) [string](/builtin#string)
```
FormatFloat32 turns a float32 into a string
####
func [FormatFloat64](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L166) [¶](#FormatFloat64)
```
func FormatFloat64(value [float64](/builtin#float64)) [string](/builtin#string)
```
FormatFloat64 turns a float64 into a string
####
func [FormatInt16](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L176) [¶](#FormatInt16)
```
func FormatInt16(value [int16](/builtin#int16)) [string](/builtin#string)
```
FormatInt16 turns an int16 into a string
####
func [FormatInt32](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L181) [¶](#FormatInt32)
```
func FormatInt32(value [int32](/builtin#int32)) [string](/builtin#string)
```
FormatInt32 turns an int32 into a string
####
func [FormatInt64](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L186) [¶](#FormatInt64)
```
func FormatInt64(value [int64](/builtin#int64)) [string](/builtin#string)
```
FormatInt64 turns an int64 into a string
####
func [FormatInt8](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L171) [¶](#FormatInt8)
```
func FormatInt8(value [int8](/builtin#int8)) [string](/builtin#string)
```
FormatInt8 turns an int8 into a string
####
func [FormatUint16](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L196) [¶](#FormatUint16)
```
func FormatUint16(value [uint16](/builtin#uint16)) [string](/builtin#string)
```
FormatUint16 turns a uint16 into a string
####
func [FormatUint32](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L201) [¶](#FormatUint32)
```
func FormatUint32(value [uint32](/builtin#uint32)) [string](/builtin#string)
```
FormatUint32 turns a uint32 into a string
####
func [FormatUint64](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L206) [¶](#FormatUint64)
```
func FormatUint64(value [uint64](/builtin#uint64)) [string](/builtin#string)
```
FormatUint64 turns a uint64 into a string
####
func [FormatUint8](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L191) [¶](#FormatUint8)
```
func FormatUint8(value [uint8](/builtin#uint8)) [string](/builtin#string)
```
FormatUint8 turns a uint8 into a string
####
func [FromDynamicJSON](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L184) [¶](#FromDynamicJSON)
```
func FromDynamicJSON(data, target interface{}) [error](/builtin#error)
```
FromDynamicJSON turns an object into a properly JSON typed structure
####
func [FullGoSearchPath](https://github.com/go-openapi/swag/blob/v0.22.4/path.go#L48) [¶](#FullGoSearchPath)
```
func FullGoSearchPath() [string](/builtin#string)
```
FullGoSearchPath gets the search paths for finding packages
####
func [Int](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L126) [¶](#Int)
```
func Int(v [int](/builtin#int)) *[int](/builtin#int)
```
Int returns a pointer to the int value passed in.
####
func [Int32](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L185) [¶](#Int32)
```
func Int32(v [int32](/builtin#int32)) *[int32](/builtin#int32)
```
Int32 returns a pointer to the int32 value passed in.
####
func [Int32Map](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L222) [¶](#Int32Map)
```
func Int32Map(src map[[string](/builtin#string)][int32](/builtin#int32)) map[[string](/builtin#string)]*[int32](/builtin#int32)
```
Int32Map converts a string map of int32 values into a string map of int32 pointers
####
func [Int32Slice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L200) [¶](#Int32Slice)
```
func Int32Slice(src [][int32](/builtin#int32)) []*[int32](/builtin#int32)
```
Int32Slice converts a slice of int32 values into a slice of int32 pointers
####
func [Int32Value](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L191) [¶](#Int32Value)
```
func Int32Value(v *[int32](/builtin#int32)) [int32](/builtin#int32)
```
Int32Value returns the value of the int32 pointer passed in or 0 if the pointer is nil.
####
func [Int32ValueMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L233) [¶](#Int32ValueMap)
```
func Int32ValueMap(src map[[string](/builtin#string)]*[int32](/builtin#int32)) map[[string](/builtin#string)][int32](/builtin#int32)
```
Int32ValueMap converts a string map of int32 pointers into a string map of int32 values
####
func [Int32ValueSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L210) [¶](#Int32ValueSlice)
```
func Int32ValueSlice(src []*[int32](/builtin#int32)) [][int32](/builtin#int32)
```
Int32ValueSlice converts a slice of int32 pointers into a slice of int32 values
####
func [Int64](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L244) [¶](#Int64)
```
func Int64(v [int64](/builtin#int64)) *[int64](/builtin#int64)
```
Int64 returns a pointer to the int64 value passed in.
####
func [Int64Map](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L281) [¶](#Int64Map)
```
func Int64Map(src map[[string](/builtin#string)][int64](/builtin#int64)) map[[string](/builtin#string)]*[int64](/builtin#int64)
```
Int64Map converts a string map of int64 values into a string map of int64 pointers
####
func [Int64Slice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L259) [¶](#Int64Slice)
```
func Int64Slice(src [][int64](/builtin#int64)) []*[int64](/builtin#int64)
```
Int64Slice converts a slice of int64 values into a slice of int64 pointers
####
func [Int64Value](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L250) [¶](#Int64Value)
```
func Int64Value(v *[int64](/builtin#int64)) [int64](/builtin#int64)
```
Int64Value returns the value of the int64 pointer passed in or 0 if the pointer is nil.
####
func [Int64ValueMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L292) [¶](#Int64ValueMap)
```
func Int64ValueMap(src map[[string](/builtin#string)]*[int64](/builtin#int64)) map[[string](/builtin#string)][int64](/builtin#int64)
```
Int64ValueMap converts a string map of int64 pointers into a string map of int64 values
####
func [Int64ValueSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L269) [¶](#Int64ValueSlice)
```
func Int64ValueSlice(src []*[int64](/builtin#int64)) [][int64](/builtin#int64)
```
Int64ValueSlice converts a slice of int64 pointers into a slice of int64 values
####
func [IntMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L163) [¶](#IntMap)
```
func IntMap(src map[[string](/builtin#string)][int](/builtin#int)) map[[string](/builtin#string)]*[int](/builtin#int)
```
IntMap converts a string map of int values into a string map of int pointers
####
func [IntSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L141) [¶](#IntSlice)
```
func IntSlice(src [][int](/builtin#int)) []*[int](/builtin#int)
```
IntSlice converts a slice of int values into a slice of int pointers
####
func [IntValue](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L132) [¶](#IntValue)
```
func IntValue(v *[int](/builtin#int)) [int](/builtin#int)
```
IntValue returns the value of the int pointer passed in or 0 if the pointer is nil.
####
func [IntValueMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L174) [¶](#IntValueMap)
```
func IntValueMap(src map[[string](/builtin#string)]*[int](/builtin#int)) map[[string](/builtin#string)][int](/builtin#int)
```
IntValueMap converts a string map of int pointers into a string map of int values
####
func [IntValueSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L151) [¶](#IntValueSlice)
```
func IntValueSlice(src []*[int](/builtin#int)) [][int](/builtin#int)
```
IntValueSlice converts a slice of int pointers into a slice of int values
####
func [IsFloat64AJSONInteger](https://github.com/go-openapi/swag/blob/v0.22.4/convert.go#L31) [¶](#IsFloat64AJSONInteger)
```
func IsFloat64AJSONInteger(f [float64](/builtin#float64)) [bool](/builtin#bool)
```
IsFloat64AJSONInteger allows for integers in [-2^53, 2^53-1] inclusive
####
func [IsZero](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L343) [¶](#IsZero)
```
func IsZero(data interface{}) [bool](/builtin#bool)
```
IsZero returns true when the value passed into the function is a zero value.
This allows for safer checking of interface values.
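A small sketch of the kind of checks IsZero is meant for; the expected results are noted in the comments:
```
package main
import (
"fmt"
"time"
"github.com/go-openapi/swag"
)
func main() {
fmt.Println(swag.IsZero(0)) // true
fmt.Println(swag.IsZero("")) // true
fmt.Println(swag.IsZero(time.Time{})) // true
fmt.Println(swag.IsZero(42)) // false
var p *int
fmt.Println(swag.IsZero(p)) // true: nil pointer wrapped in an interface
}
```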
####
func [JoinByFormat](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L107) [¶](#JoinByFormat)
```
func JoinByFormat(data [][string](/builtin#string), format [string](/builtin#string)) [][string](/builtin#string)
```
JoinByFormat joins a string array by a known format (e.g. swagger's collectionFormat attribute):
```
ssv: space separated value
tsv: tab separated value
pipes: pipe (|) separated value
csv: comma separated value (default)
```
####
func [LoadFromFileOrHTTP](https://github.com/go-openapi/swag/blob/v0.22.4/loading.go#L43) [¶](#LoadFromFileOrHTTP)
```
func LoadFromFileOrHTTP(path [string](/builtin#string)) ([][byte](/builtin#byte), [error](/builtin#error))
```
LoadFromFileOrHTTP loads the bytes from a file or a remote http server based on the path passed in
####
func [LoadFromFileOrHTTPWithTimeout](https://github.com/go-openapi/swag/blob/v0.22.4/loading.go#L49) [¶](#LoadFromFileOrHTTPWithTimeout)
```
func LoadFromFileOrHTTPWithTimeout(path [string](/builtin#string), timeout [time](/time).[Duration](/time#Duration)) ([][byte](/builtin#byte), [error](/builtin#error))
```
LoadFromFileOrHTTPWithTimeout loads the bytes from a file or a remote http server based on the path passed in. The timeout arg allows for per-request overriding of the request timeout
####
func [LoadStrategy](https://github.com/go-openapi/swag/blob/v0.22.4/loading.go#L54) [¶](#LoadStrategy)
```
func LoadStrategy(path [string](/builtin#string), local, remote func([string](/builtin#string)) ([][byte](/builtin#byte), [error](/builtin#error))) func([string](/builtin#string)) ([][byte](/builtin#byte), [error](/builtin#error))
```
LoadStrategy returns a loader function for a given path or uri
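A sketch of how LoadStrategy might be wired up, assuming the returned loader dispatches to the remote function for http(s) URLs and to the local one otherwise (os.ReadFile stands in as the local loader here):
```
package main
import (
"fmt"
"os"
"github.com/go-openapi/swag"
)
func main() {
// Build a loader that reads local files with os.ReadFile and falls back
// to an HTTP fetch for remote URLs.
load := swag.LoadStrategy("./swagger.yaml", os.ReadFile, swag.LoadFromFileOrHTTP)
data, err := load("./swagger.yaml")
if err != nil {
fmt.Println("load failed:", err)
return
}
fmt.Println(len(data), "bytes loaded")
}
```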
####
func [ReadJSON](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L70) [¶](#ReadJSON)
```
func ReadJSON(data [][byte](/builtin#byte), value interface{}) [error](/builtin#error)
```
ReadJSON reads json data, prefers finding an appropriate interface to short-circuit the unmarshaler so it takes the fastest option available
####
func [SplitByFormat](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L133) [¶](#SplitByFormat)
```
func SplitByFormat(data, format [string](/builtin#string)) [][string](/builtin#string)
```
SplitByFormat splits a string by a known format:
```
ssv: space separated value
tsv: tab separated value
pipes: pipe (|) separated value
csv: comma separated value (default)
```
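A short sketch pairing SplitByFormat with JoinByFormat; note that JoinByFormat returns a slice holding the single joined string (expected values in the comments):
```
package main
import (
"fmt"
"github.com/go-openapi/swag"
)
func main() {
// Split a pipe-separated parameter value...
parts := swag.SplitByFormat("a|b|c", "pipes")
fmt.Println(parts) // [a b c]
// ...and join it back using the default csv format.
joined := swag.JoinByFormat(parts, "csv")
fmt.Println(joined) // [a,b,c]
}
```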
####
func [SplitHostPort](https://github.com/go-openapi/swag/blob/v0.22.4/net.go#L24) [¶](#SplitHostPort)
```
func SplitHostPort(addr [string](/builtin#string)) (host [string](/builtin#string), port [int](/builtin#int), err [error](/builtin#error))
```
SplitHostPort splits a network address into a host and a port.
The port is -1 when there is no port to be found
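For example (a minimal sketch; the expected result is shown in the comment):
```
package main
import (
"fmt"
"github.com/go-openapi/swag"
)
func main() {
host, port, err := swag.SplitHostPort("localhost:8080")
if err != nil {
fmt.Println("split failed:", err)
return
}
fmt.Println(host, port) // localhost 8080
}
```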
####
func [String](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L8) [¶](#String)
```
func String(v [string](/builtin#string)) *[string](/builtin#string)
```
String returns a pointer to the string value passed in.
####
func [StringMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L45) [¶](#StringMap)
```
func StringMap(src map[[string](/builtin#string)][string](/builtin#string)) map[[string](/builtin#string)]*[string](/builtin#string)
```
StringMap converts a string map of string values into a string map of string pointers
####
func [StringSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L23) [¶](#StringSlice)
```
func StringSlice(src [][string](/builtin#string)) []*[string](/builtin#string)
```
StringSlice converts a slice of string values into a slice of string pointers
####
func [StringValue](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L14) [¶](#StringValue)
```
func StringValue(v *[string](/builtin#string)) [string](/builtin#string)
```
StringValue returns the value of the string pointer passed in or
"" if the pointer is nil.
####
func [StringValueMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L56) [¶](#StringValueMap)
```
func StringValueMap(src map[[string](/builtin#string)]*[string](/builtin#string)) map[[string](/builtin#string)][string](/builtin#string)
```
StringValueMap converts a string map of string pointers into a string map of string values
####
func [StringValueSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L33) [¶](#StringValueSlice)
```
func StringValueSlice(src []*[string](/builtin#string)) [][string](/builtin#string)
```
StringValueSlice converts a slice of string pointers into a slice of string values
####
func [Time](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L674) [¶](#Time)
```
func Time(v [time](/time).[Time](/time#Time)) *[time](/time).[Time](/time#Time)
```
Time returns a pointer to the time.Time value passed in.
####
func [TimeMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L711) [¶](#TimeMap)
```
func TimeMap(src map[[string](/builtin#string)][time](/time).[Time](/time#Time)) map[[string](/builtin#string)]*[time](/time).[Time](/time#Time)
```
TimeMap converts a string map of time.Time values into a string map of time.Time pointers
####
func [TimeSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L689) [¶](#TimeSlice)
```
func TimeSlice(src [][time](/time).[Time](/time#Time)) []*[time](/time).[Time](/time#Time)
```
TimeSlice converts a slice of time.Time values into a slice of time.Time pointers
####
func [TimeValue](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L680) [¶](#TimeValue)
```
func TimeValue(v *[time](/time).[Time](/time#Time)) [time](/time).[Time](/time#Time)
```
TimeValue returns the value of the time.Time pointer passed in or time.Time{} if the pointer is nil.
####
func [TimeValueMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L722) [¶](#TimeValueMap)
```
func TimeValueMap(src map[[string](/builtin#string)]*[time](/time).[Time](/time#Time)) map[[string](/builtin#string)][time](/time).[Time](/time#Time)
```
TimeValueMap converts a string map of time.Time pointers into a string map of time.Time values
####
func [TimeValueSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L699) [¶](#TimeValueSlice)
```
func TimeValueSlice(src []*[time](/time).[Time](/time#Time)) [][time](/time).[Time](/time#Time)
```
TimeValueSlice converts a slice of time.Time pointers into a slice of time.Time values
####
func [ToCommandName](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L215) [¶](#ToCommandName)
```
func ToCommandName(name [string](/builtin#string)) [string](/builtin#string)
```
ToCommandName lowercases and underscores a go type name
####
func [ToDynamicJSON](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L170) [¶](#ToDynamicJSON)
```
func ToDynamicJSON(data interface{}) interface{}
```
ToDynamicJSON turns an object into a properly JSON typed structure
####
func [ToFileName](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L203) [¶](#ToFileName)
```
func ToFileName(name [string](/builtin#string)) [string](/builtin#string)
```
ToFileName lowercases and underscores a go type name
####
func [ToGoName](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L285) [¶](#ToGoName)
```
func ToGoName(name [string](/builtin#string)) [string](/builtin#string)
```
ToGoName translates a swagger name which can be underscored or camel cased to a name that golint likes
####
func [ToHumanNameLower](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L226) [¶](#ToHumanNameLower)
```
func ToHumanNameLower(name [string](/builtin#string)) [string](/builtin#string)
```
ToHumanNameLower represents a code name as a human series of words
####
func [ToHumanNameTitle](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L242) [¶](#ToHumanNameTitle)
```
func ToHumanNameTitle(name [string](/builtin#string)) [string](/builtin#string)
```
ToHumanNameTitle represents a code name as a human series of words with the first letters titleized
####
func [ToJSONName](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L258) [¶](#ToJSONName)
```
func ToJSONName(name [string](/builtin#string)) [string](/builtin#string)
```
ToJSONName camelcases a name which can be underscored or pascal cased
####
func [ToVarName](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L273) [¶](#ToVarName)
```
func ToVarName(name [string](/builtin#string)) [string](/builtin#string)
```
ToVarName camelcases a name which can be underscored or pascal cased
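A quick sketch of the name-mangling helpers side by side; the outputs in the comments are the expected results for these inputs, not verified output:
```
package main
import (
"fmt"
"github.com/go-openapi/swag"
)
func main() {
fmt.Println(swag.ToGoName("hello_world")) // HelloWorld
fmt.Println(swag.ToVarName("hello_world")) // helloWorld
fmt.Println(swag.ToJSONName("HelloWorld")) // helloWorld
fmt.Println(swag.ToHumanNameLower("HelloWorld")) // hello world
}
```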
####
func [Uint](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L370) [¶](#Uint)
```
func Uint(v [uint](/builtin#uint)) *[uint](/builtin#uint)
```
Uint returns a pointer to the uint value passed in.
####
func [Uint16](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L303) [¶](#Uint16)
added in v0.19.9
```
func Uint16(v [uint16](/builtin#uint16)) *[uint16](/builtin#uint16)
```
Uint16 returns a pointer to the uint16 value passed in.
####
func [Uint16Map](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L344) [¶](#Uint16Map)
added in v0.19.9
```
func Uint16Map(src map[[string](/builtin#string)][uint16](/builtin#uint16)) map[[string](/builtin#string)]*[uint16](/builtin#uint16)
```
Uint16Map converts a string map of uint16 values into a string map of uint16 pointers
####
func [Uint16Slice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L319) [¶](#Uint16Slice)
added in v0.19.9
```
func Uint16Slice(src [][uint16](/builtin#uint16)) []*[uint16](/builtin#uint16)
```
Uint16Slice converts a slice of uint16 values into a slice of uint16 pointers
####
func [Uint16Value](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L309) [¶](#Uint16Value)
added in v0.19.9
```
func Uint16Value(v *[uint16](/builtin#uint16)) [uint16](/builtin#uint16)
```
Uint16Value returns the value of the uint16 pointer passed in or 0 if the pointer is nil.
####
func [Uint16ValueMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L357) [¶](#Uint16ValueMap)
added in v0.19.9
```
func Uint16ValueMap(src map[[string](/builtin#string)]*[uint16](/builtin#uint16)) map[[string](/builtin#string)][uint16](/builtin#uint16)
```
Uint16ValueMap converts a string map of uint16 pointers into a string map of uint16 values
####
func [Uint16ValueSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L330) [¶](#Uint16ValueSlice)
added in v0.19.9
```
func Uint16ValueSlice(src []*[uint16](/builtin#uint16)) [][uint16](/builtin#uint16)
```
Uint16ValueSlice converts a slice of uint16 pointers into a slice of uint16 values
####
func [Uint32](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L429) [¶](#Uint32)
```
func Uint32(v [uint32](/builtin#uint32)) *[uint32](/builtin#uint32)
```
Uint32 returns a pointer to the uint32 value passed in.
####
func [Uint32Map](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L466) [¶](#Uint32Map)
```
func Uint32Map(src map[[string](/builtin#string)][uint32](/builtin#uint32)) map[[string](/builtin#string)]*[uint32](/builtin#uint32)
```
Uint32Map converts a string map of uint32 values into a string map of uint32 pointers
####
func [Uint32Slice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L444) [¶](#Uint32Slice)
```
func Uint32Slice(src [][uint32](/builtin#uint32)) []*[uint32](/builtin#uint32)
```
Uint32Slice converts a slice of uint32 values into a slice of uint32 pointers
####
func [Uint32Value](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L435) [¶](#Uint32Value)
```
func Uint32Value(v *[uint32](/builtin#uint32)) [uint32](/builtin#uint32)
```
Uint32Value returns the value of the uint32 pointer passed in or 0 if the pointer is nil.
####
func [Uint32ValueMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L477) [¶](#Uint32ValueMap)
```
func Uint32ValueMap(src map[[string](/builtin#string)]*[uint32](/builtin#uint32)) map[[string](/builtin#string)][uint32](/builtin#uint32)
```
Uint32ValueMap converts a string map of uint32 pointers into a string map of uint32 values
####
func [Uint32ValueSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L454) [¶](#Uint32ValueSlice)
```
func Uint32ValueSlice(src []*[uint32](/builtin#uint32)) [][uint32](/builtin#uint32)
```
Uint32ValueSlice converts a slice of uint32 pointers into a slice of uint32 values
####
func [Uint64](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L488) [¶](#Uint64)
```
func Uint64(v [uint64](/builtin#uint64)) *[uint64](/builtin#uint64)
```
Uint64 returns a pointer to the uint64 value passed in.
####
func [Uint64Map](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L525) [¶](#Uint64Map)
```
func Uint64Map(src map[[string](/builtin#string)][uint64](/builtin#uint64)) map[[string](/builtin#string)]*[uint64](/builtin#uint64)
```
Uint64Map converts a string map of uint64 values into a string map of uint64 pointers
####
func [Uint64Slice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L503) [¶](#Uint64Slice)
```
func Uint64Slice(src [][uint64](/builtin#uint64)) []*[uint64](/builtin#uint64)
```
Uint64Slice converts a slice of uint64 values into a slice of uint64 pointers
####
func [Uint64Value](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L494) [¶](#Uint64Value)
```
func Uint64Value(v *[uint64](/builtin#uint64)) [uint64](/builtin#uint64)
```
Uint64Value returns the value of the uint64 pointer passed in or 0 if the pointer is nil.
####
func [Uint64ValueMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L536) [¶](#Uint64ValueMap)
```
func Uint64ValueMap(src map[[string](/builtin#string)]*[uint64](/builtin#uint64)) map[[string](/builtin#string)][uint64](/builtin#uint64)
```
Uint64ValueMap converts a string map of uint64 pointers into a string map of uint64 values
####
func [Uint64ValueSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L513) [¶](#Uint64ValueSlice)
```
func Uint64ValueSlice(src []*[uint64](/builtin#uint64)) [][uint64](/builtin#uint64)
```
Uint64ValueSlice converts a slice of uint64 pointers into a slice of uint64 values
####
func [UintMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L407) [¶](#UintMap)
```
func UintMap(src map[[string](/builtin#string)][uint](/builtin#uint)) map[[string](/builtin#string)]*[uint](/builtin#uint)
```
UintMap converts a string map of uint values into a string map of uint pointers
####
func [UintSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L385) [¶](#UintSlice)
```
func UintSlice(src [][uint](/builtin#uint)) []*[uint](/builtin#uint)
```
UintSlice converts a slice of uint values into a slice of uint pointers
####
func [UintValue](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L376) [¶](#UintValue)
```
func UintValue(v *[uint](/builtin#uint)) [uint](/builtin#uint)
```
UintValue returns the value of the uint pointer passed in or 0 if the pointer is nil.
####
func [UintValueMap](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L418) [¶](#UintValueMap)
```
func UintValueMap(src map[[string](/builtin#string)]*[uint](/builtin#uint)) map[[string](/builtin#string)][uint](/builtin#uint)
```
UintValueMap converts a string map of uint pointers into a string map of uint values
####
func [UintValueSlice](https://github.com/go-openapi/swag/blob/v0.22.4/convert_types.go#L395) [¶](#UintValueSlice)
```
func UintValueSlice(src []*[uint](/builtin#uint)) [][uint](/builtin#uint)
```
UintValueSlice converts a slice of uint pointers into a slice of uint values
####
func [WriteJSON](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L56) [¶](#WriteJSON)
```
func WriteJSON(data interface{}) ([][byte](/builtin#byte), [error](/builtin#error))
```
WriteJSON writes json data, prefers finding an appropriate interface to short-circuit the marshaler so it takes the fastest option available.
####
func [YAMLData](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L443) [¶](#YAMLData)
```
func YAMLData(path [string](/builtin#string)) (interface{}, [error](/builtin#error))
```
YAMLData loads a yaml document from either http or a file
####
func [YAMLDoc](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L428) [¶](#YAMLDoc)
```
func YAMLDoc(path [string](/builtin#string)) ([json](/encoding/json).[RawMessage](/encoding/json#RawMessage), [error](/builtin#error))
```
YAMLDoc loads a yaml document from either http or a file and converts it to json
####
func [YAMLMatcher](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L29) [¶](#YAMLMatcher)
```
func YAMLMatcher(path [string](/builtin#string)) [bool](/builtin#bool)
```
YAMLMatcher matches yaml
####
func [YAMLToJSON](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L35) [¶](#YAMLToJSON)
```
func YAMLToJSON(data interface{}) ([json](/encoding/json).[RawMessage](/encoding/json#RawMessage), [error](/builtin#error))
```
YAMLToJSON converts YAML unmarshaled data into json compatible data
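A small sketch chaining BytesToYAMLDoc and YAMLToJSON; the expected JSON (with key order preserved) is shown in the comment:
```
package main
import (
"fmt"
"github.com/go-openapi/swag"
)
func main() {
doc, err := swag.BytesToYAMLDoc([]byte("name: swag\nok: true"))
if err != nil {
fmt.Println("yaml error:", err)
return
}
raw, err := swag.YAMLToJSON(doc)
if err != nil {
fmt.Println("conversion error:", err)
return
}
fmt.Println(string(raw)) // {"name":"swag","ok":true}
}
```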
### Types [¶](#pkg-types)
####
type [CommandLineOptionsGroup](https://github.com/go-openapi/swag/blob/v0.22.4/util.go#L390) [¶](#CommandLineOptionsGroup)
```
type CommandLineOptionsGroup struct {
ShortDescription [string](/builtin#string)
LongDescription [string](/builtin#string)
Options interface{}
}
```
CommandLineOptionsGroup represents a group of user-defined command line options
####
type [File](https://github.com/go-openapi/swag/blob/v0.22.4/file.go#L20) [¶](#File)
added in v0.21.0
```
type File struct {
Data [multipart](/mime/multipart).[File](/mime/multipart#File)
Header *[multipart](/mime/multipart).[FileHeader](/mime/multipart#FileHeader)
}
```
File represents an uploaded file.
####
func (*File) [Close](https://github.com/go-openapi/swag/blob/v0.22.4/file.go#L31) [¶](#File.Close)
added in v0.21.0
```
func (f *[File](#File)) Close() [error](/builtin#error)
```
Close the file
####
func (*File) [Read](https://github.com/go-openapi/swag/blob/v0.22.4/file.go#L26) [¶](#File.Read)
added in v0.21.0
```
func (f *[File](#File)) Read(p [][byte](/builtin#byte)) (n [int](/builtin#int), err [error](/builtin#error))
```
Read bytes from the file
####
type [JSONMapItem](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L326) [¶](#JSONMapItem)
```
type JSONMapItem struct {
Key [string](/builtin#string)
Value interface{}
}
```
JSONMapItem represents the value of a key in a JSON object held by JSONMapSlice
####
func (JSONMapItem) [MarshalEasyJSON](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L339) [¶](#JSONMapItem.MarshalEasyJSON)
```
func (s [JSONMapItem](#JSONMapItem)) MarshalEasyJSON(w *[jwriter](/github.com/mailru/easyjson/jwriter).[Writer](/github.com/mailru/easyjson/jwriter#Writer))
```
MarshalEasyJSON renders a JSONMapItem as JSON, using easyJSON
####
func (JSONMapItem) [MarshalJSON](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L332) [¶](#JSONMapItem.MarshalJSON)
```
func (s [JSONMapItem](#JSONMapItem)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON renders a JSONMapItem as JSON
####
func (*JSONMapItem) [UnmarshalEasyJSON](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L353) [¶](#JSONMapItem.UnmarshalEasyJSON)
```
func (s *[JSONMapItem](#JSONMapItem)) UnmarshalEasyJSON(in *[jlexer](/github.com/mailru/easyjson/jlexer).[Lexer](/github.com/mailru/easyjson/jlexer#Lexer))
```
UnmarshalEasyJSON makes a JSONMapItem from JSON, using easyJSON
####
func (*JSONMapItem) [UnmarshalJSON](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L346) [¶](#JSONMapItem.UnmarshalJSON)
```
func (s *[JSONMapItem](#JSONMapItem)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON makes a JSONMapItem from JSON
####
type [JSONMapSlice](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L169) [¶](#JSONMapSlice)
```
type JSONMapSlice [][JSONMapItem](#JSONMapItem)
```
JSONMapSlice represent a JSON object, with the order of keys maintained
####
func (JSONMapSlice) [MarshalEasyJSON](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L179) [¶](#JSONMapSlice.MarshalEasyJSON)
```
func (s [JSONMapSlice](#JSONMapSlice)) MarshalEasyJSON(w *[jwriter](/github.com/mailru/easyjson/jwriter).[Writer](/github.com/mailru/easyjson/jwriter#Writer))
```
MarshalEasyJSON renders a JSONMapSlice as JSON, using easyJSON
####
func (JSONMapSlice) [MarshalJSON](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L172) [¶](#JSONMapSlice.MarshalJSON)
```
func (s [JSONMapSlice](#JSONMapSlice)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON renders a JSONMapSlice as JSON
####
func (JSONMapSlice) [MarshalYAML](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L218) [¶](#JSONMapSlice.MarshalYAML)
added in v0.22.1
```
func (s [JSONMapSlice](#JSONMapSlice)) MarshalYAML() (interface{}, [error](/builtin#error))
```
####
func (*JSONMapSlice) [UnmarshalEasyJSON](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L202) [¶](#JSONMapSlice.UnmarshalEasyJSON)
```
func (s *[JSONMapSlice](#JSONMapSlice)) UnmarshalEasyJSON(in *[jlexer](/github.com/mailru/easyjson/jlexer).[Lexer](/github.com/mailru/easyjson/jlexer#Lexer))
```
UnmarshalEasyJSON makes a JSONMapSlice from JSON, using easyJSON
####
func (*JSONMapSlice) [UnmarshalJSON](https://github.com/go-openapi/swag/blob/v0.22.4/yaml.go#L195) [¶](#JSONMapSlice.UnmarshalJSON)
```
func (s *[JSONMapSlice](#JSONMapSlice)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON makes a JSONMapSlice from JSON
####
type [NameProvider](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L195) [¶](#NameProvider)
```
type NameProvider struct {
// contains filtered or unexported fields
}
```
NameProvider represents an object capable of translating from go property names to json property names. This type is thread-safe.
####
func [NewNameProvider](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L206) [¶](#NewNameProvider)
```
func NewNameProvider() *[NameProvider](#NameProvider)
```
NewNameProvider creates a new name provider
####
func (*NameProvider) [GetGoName](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L297) [¶](#NameProvider.GetGoName)
```
func (n *[NameProvider](#NameProvider)) GetGoName(subject interface{}, name [string](/builtin#string)) ([string](/builtin#string), [bool](/builtin#bool))
```
GetGoName gets the go name for a json property name
####
func (*NameProvider) [GetGoNameForType](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L303) [¶](#NameProvider.GetGoNameForType)
```
func (n *[NameProvider](#NameProvider)) GetGoNameForType(tpe [reflect](/reflect).[Type](/reflect#Type), name [string](/builtin#string)) ([string](/builtin#string), [bool](/builtin#bool))
```
GetGoNameForType gets the go name for a given type for a json property name
####
func (*NameProvider) [GetJSONName](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L273) [¶](#NameProvider.GetJSONName)
```
func (n *[NameProvider](#NameProvider)) GetJSONName(subject interface{}, name [string](/builtin#string)) ([string](/builtin#string), [bool](/builtin#bool))
```
GetJSONName gets the json name for a go property name
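A hypothetical sketch (the `Person` type and its json tag are assumptions, used only to illustrate the lookups in both directions):
```
package main
import (
	"fmt"
	"github.com/go-openapi/swag"
)
type Person struct {
	FirstName string `json:"first_name"`
}
func main() {
	np := swag.NewNameProvider()
	// JSON name for the Go field "FirstName".
	jsonName, ok := np.GetJSONName(Person{}, "FirstName")
	fmt.Println(jsonName, ok) // first_name true
	// Go field name for the JSON property "first_name".
	goName, ok := np.GetGoName(Person{}, "first_name")
	fmt.Println(goName, ok) // FirstName true
}
```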
####
func (*NameProvider) [GetJSONNameForType](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L279) [¶](#NameProvider.GetJSONNameForType)
```
func (n *[NameProvider](#NameProvider)) GetJSONNameForType(tpe [reflect](/reflect).[Type](/reflect#Type), name [string](/builtin#string)) ([string](/builtin#string), [bool](/builtin#bool))
```
GetJSONNameForType gets the json name for a go property name on a given type
####
func (*NameProvider) [GetJSONNames](https://github.com/go-openapi/swag/blob/v0.22.4/json.go#L256) [¶](#NameProvider.GetJSONNames)
```
func (n *[NameProvider](#NameProvider)) GetJSONNames(subject interface{}) [][string](/builtin#string)
```
GetJSONNames gets all the json property names for a type
node-sql-parser | npm | JavaScript | [Nodejs SQL Parser](#nodejs-sql-parser)
===
**Parse simple SQL statements into an abstract syntax tree (AST) with the visited tableList, columnList and convert it back to SQL.**
[⭐ Features](#star-features)
---
* support multiple SQL statements separated by semicolons
* support select, delete, update and insert statement types
* support drop, truncate and rename commands
* output the table and column lists that the SQL visits, with the corresponding authority
* support various database engines
[🎉 Install](#tada-install)
---
### [From](#from-npmjs) [npmjs](https://www.npmjs.org/)
```
npm install node-sql-parser --save
or
yarn add node-sql-parser
```
### [From](#from-github-package-registry) [GitHub Package Registry](https://npm.pkg.github.com/)
```
npm install @taozhi8833998/node-sql-parser --registry=https://npm.pkg.github.com/
```
### [From Browser](#from-browser)
Import the JS file in your page:
```
// support all database parsers, but the file size is about 750K
<script src="https://unpkg.com/node-sql-parser/umd/index.umd.js"></script>
// or you can import a specific database parser only, it's about 150K
<script src="https://unpkg.com/node-sql-parser/umd/mysql.umd.js"></script>
<script src="https://unpkg.com/node-sql-parser/umd/postgresql.umd.js"></script>
```
* `NodeSQLParser` object is on `window`
```
<!DOCTYPE html>
<html lang="en" >
<head>
<title>node-sql-parser</title>
<meta charset="utf-8" />
</head>
<body>
<p><em>Check console to see the output</em></p>
<script src="https://unpkg.com/node-sql-parser/umd/mysql.umd.js"></script>
<script>
window.onload = function () {
// Example parser
const parser = new NodeSQLParser.Parser()
const ast = parser.astify("select id, name from students where age < 18")
console.log(ast)
const sql = parser.sqlify(ast)
console.log(sql)
}
</script>
</body>
</html>
```
[🚀 Usage](#rocket-usage)
---
### [Supported Database SQL Syntax](#supported-database-sql-syntax)
* BigQuery
* DB2
* Hive
* MariaDB
* MySQL
* PostgresQL
* Sqlite
* TransactSQL
* [FlinkSQL](https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/sql/)
* Snowflake(alpha)
* A new issue can be opened to request support for another database.
### [Create AST for SQL statement](#create-ast-for-sql-statement)
```
// import Parser for all databases
const { Parser } = require('node-sql-parser');
const parser = new Parser();
const ast = parser.astify('SELECT * FROM t'); // mysql sql grammar parsed by default
console.log(ast);
```
* `ast` for `SELECT * FROM t`
```
{
"with": null,
"type": "select",
"options": null,
"distinct": null,
"columns": "*",
"from": [
{
"db": null,
"table": "t",
"as": null
}
],
"where": null,
"groupby": null,
"having": null,
"orderby": null,
"limit": null
}
```
### [Convert AST back to SQL](#convert-ast-back-to-sql)
```
const opt = {
database: 'MySQL' // MySQL is the default database
}
// import mysql parser only
const { Parser } = require('node-sql-parser');
const parser = new Parser()
// opt is optional
const ast = parser.astify('SELECT * FROM t', opt);
const sql = parser.sqlify(ast, opt);
console.log(sql); // SELECT * FROM `t`
```
### [Parse specified Database](#parse-specified-database)
There are two ways to parse SQL for a specific database.
Import Parser from the specified database path `node-sql-parser/build/{database}`
```
// import transactsql parser only
const { Parser } = require('node-sql-parser/build/transactsql')
const parser = new Parser()
const sql = `SELECT id FROM test AS result`
const ast = parser.astify(sql)
console.log(parser.sqlify(ast)) // SELECT [id] FROM [test] AS [result]
```
Or you can pass an options object to the parser and specify the database property.
```
const opt = {
database: 'Postgresql'
}
// import all databases parser
const { Parser } = require('node-sql-parser')
const parser = new Parser()
// pass the opt config to the corresponding methods
const ast = parser.astify('SELECT * FROM t', opt)
const sql = parser.sqlify(ast, opt)
console.log(sql); // SELECT * FROM "t"
```
### [Get TableList, ColumnList, Ast by `parse` function](#get-tablelist-columnlist-ast-by-parse-function)
```
const opt = {
database: 'MariaDB' // MySQL is the default database
}
const { Parser } = require('node-sql-parser/build/mariadb');
const parser = new Parser()
// opt is optional
const { tableList, columnList, ast } = parser.parse('SELECT * FROM t', opt);
```
### [Get the SQL visited tables](#get-the-sql-visited-tables)
* get the table list that the sql visited
* the format is **{type}::{dbName}::{tableName}** // type could be select, update, delete or insert
```
const opt = {
database: 'MySQL'
}
const { Parser } = require('node-sql-parser/build/mysql');
const parser = new Parser();
// opt is optional
const tableList = parser.tableList('SELECT * FROM t', opt);
console.log(tableList); // ["select::null::t"]
```
### [Get the SQL visited columns](#get-the-sql-visited-columns)
* get the column list that the sql visited
* the format is **{type}::{tableName}::{columnName}** // type could be select, update, delete or insert
* for `select *`, `delete` and `insert into tableName values()` without specified columns, the `.*` column authority regex is required
```
const opt = {
database: 'MySQL'
}
const { Parser } = require('node-sql-parser/build/mysql');
const parser = new Parser();
// opt is optional
const columnList = parser.columnList('SELECT t.id FROM t', opt);
console.log(columnList); // ["select::t::id"]
```
### [Check the SQL with Authority List](#check-the-sql-with-authority-list)
* check table authority
* `whiteListCheck` function check on `table` mode and `MySQL` database by default
```
const { Parser } = require('node-sql-parser');
const parser = new Parser();
const sql = 'UPDATE a SET id = 1 WHERE name IN (SELECT name FROM b)'
const whiteTableList = ['(select|update)::(.*)::(a|b)'] // array that contains multiple authorities
const opt = {
database: 'MySQL',
type: 'table',
}
// opt is optional
parser.whiteListCheck(sql, whiteTableList, opt) // if the check fails, an error is thrown with a relevant message; if it passes, undefined is returned
```
* check column authority
```
const { Parser } = require('node-sql-parser');
const parser = new Parser();
const sql = 'UPDATE a SET id = 1 WHERE name IN (SELECT name FROM b)'
const whiteColumnList = ['select::null::name', 'update::a::id'] // array that contains multiple authorities
const opt = {
database: 'MySQL',
type: 'column',
}
// opt is optional
parser.whiteListCheck(sql, whiteColumnList, opt) // if the check fails, an error is thrown with a relevant message; if it passes, undefined is returned
```
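Since `whiteListCheck` signals failure by throwing, wrapping it in try/catch turns the check into a pass/fail branch. A sketch (the query and authority list below are made up for illustration):
```
const { Parser } = require('node-sql-parser');
const parser = new Parser();
// hypothetical query and authority list
const sql = 'SELECT role FROM users';
const whiteColumnList = ['select::null::name'];
try {
  parser.whiteListCheck(sql, whiteColumnList, { database: 'MySQL', type: 'column' });
  console.log('authority check passed');
} catch (err) {
  console.error('authority check failed:', err.message);
}
```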
[😘 Acknowledgement](#kissing_heart-acknowledgement)
---
This project is inspired by the SQL parser [flora-sql-parser](https://github.com/godmodelabs/flora-sql-parser) module.
[License](#license)
---
[Apache-2.0](https://github.com/taozhi8833998/node-sql-parser/blob/HEAD/LICENSE)
[Buy me a Coffee](#buy-me-a-coffee)
---
If you like this project, please **star** it in the top-right corner of the repository. Your support is my biggest encouragement! ^_^
You can also scan the QR code below or use the PayPal link to donate to the author.
### [Paypal](#paypal)
Donate money by [paypal](https://www.paypal.me/taozhi8833998/5) to my account [<EMAIL>](https://github.com/taozhi8833998/node-sql-parser/blob/HEAD/<EMAIL>)
### [AliPay(支付宝)](#alipay支付宝)
### [Wechat(微信)](#wechat微信)
### [Explain](#explain)
If you have made a donation, you can leave your name and email in an issue, and your name will be added to the donation list.
[Donation list](https://github.com/taozhi8833998/node-sql-parser/blob/master/DONATIONLIST.md)
---
[Star History](#star-history)
---
### Keywords
* sql
* sql-parser
* parser
* node
* nodejs
* node-parser
* node-sql-parser
* ast
* sql-ast
fxTWAPLS | cran | R | Package ‘fxTWAPLS’
November 25, 2022
Title An Improved Version of WA-PLS
Version 0.1.2
Description The goal of this package is to provide an improved version of
WA-PLS (Weighted Averaging Partial Least Squares) by including the
tolerances of taxa and the frequency of the sampled climate variable.
This package also provides a way of leave-out cross-validation that
removes both the test site and sites that are both geographically
close and climatically close for each cycle, to avoid the risk of
pseudo-replication.
License GPL-3
Encoding UTF-8
URL https://github.com/special-uor/fxTWAPLS/,
https://special-uor.github.io/fxTWAPLS/,
https://research.reading.ac.uk/palaeoclimate/
BugReports https://github.com/special-uor/fxTWAPLS/issues/
Imports doFuture, foreach, future, geosphere, ggplot2, JOPS, MASS,
parallel, progressr
Suggests lintr (>= 3.0.0), magrittr, progress, scales, spelling,
styler, tictoc
Depends R (>= 3.6)
RoxygenNote 7.2.2
Language en-GB
NeedsCompilation no
Author <NAME> [aut] (<https://orcid.org/0000-0001-6250-0148>),
<NAME> [aut] (<https://orcid.org/0000-0002-1296-6764>),
<NAME> [aut] (<https://orcid.org/0000-0002-0414-8745>),
<NAME> [aut] (<https://orcid.org/0000-0001-5687-1903>),
<NAME> [aut, cre]
(<https://orcid.org/0000-0001-5036-8661>),
SPECIAL Research Group @ University of Reading [cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-11-25 16:40:01 UTC
R topics documented:
cv.pr.... 2
cv.... 4
f... 6
fx_psplin... 7
get_distanc... 8
get_pseud... 9
p... 11
plot_residual... 12
plot_trai... 13
rand.t.test.... 14
sse.sampl... 15
TWAPLS.predict.... 17
TWAPLS.... 19
TWAPLS.w... 21
WAPLS.predict.... 23
WAPLS.... 24
WAPLS.w... 26
cv.pr.w Pseudo-removed leave-out cross-validation
Description
Pseudo-removed leave-out cross-validation
Usage
cv.pr.w(
modern_taxa,
modern_climate,
nPLS = 5,
trainfun,
predictfun,
pseudo,
usefx = FALSE,
fx_method = "bin",
bin = NA,
cpus = 4,
test_mode = TRUE,
test_it = 5
)
Arguments
modern_taxa The modern taxa abundance data, each row represents a sampling site, each
column represents a taxon.
modern_climate The modern climate value at each sampling site.
nPLS The number of components to be extracted.
trainfun Training function you want to use, either WAPLS.w or TWAPLS.w.
predictfun Predict function you want to use: if trainfun is WAPLS.w, then this should be
WAPLS.predict.w; if trainfun is TWAPLS.w, then this should be TWAPLS.predict.w.
pseudo The geographically and climatically close sites to each test site, obtained from
get_pseudo function.
usefx Boolean flag on whether or not use fx correction.
fx_method Binned or p-spline smoothed fx correction: if usefx = FALSE, this should be NA;
otherwise, fx function will be used when choosing "bin"; fx_pspline function
will be used when choosing "pspline".
bin Binwidth to get fx, needed for both the binned and p-spline methods; if
usefx = FALSE, this should be NA.
cpus Number of CPUs for simultaneous iterations to execute, check parallel::detectCores()
for available CPUs on your machine.
test_mode Boolean flag to execute the function with a limited number of iterations, test_it,
for testing purposes only.
test_it Number of iterations to use in the test mode.
Value
Leave-one-out cross validation results.
See Also
fx, TWAPLS.w, TWAPLS.predict.w, WAPLS.w, and WAPLS.predict.w
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Extract taxa
taxaColMin <- which(colnames(modern_pollen) == "taxa0")
taxaColMax <- which(colnames(modern_pollen) == "taxaN")
taxa <- modern_pollen[, taxaColMin:taxaColMax]
point <- modern_pollen[, c("Long", "Lat")]
test_mode <- TRUE # It should be set to FALSE before running
dist <- fxTWAPLS::get_distance(
point,
cpus = 2, # Remove the following line
test_mode = test_mode
)
pseudo_Tmin <- fxTWAPLS::get_pseudo(
dist,
modern_pollen$Tmin,
cpus = 2, # Remove the following line
test_mode = test_mode
)
cv_pr_tf_Tmin2 <- fxTWAPLS::cv.pr.w(
taxa,
modern_pollen$Tmin,
nPLS = 5,
fxTWAPLS::TWAPLS.w2,
fxTWAPLS::TWAPLS.predict.w,
pseudo_Tmin,
usefx = TRUE,
fx_method = "bin",
bin = 0.02,
cpus = 2, # Remove the following line
test_mode = test_mode
)
# Run with progress bar
`%>%` <- magrittr::`%>%`
cv_pr_tf_Tmin2 <- fxTWAPLS::cv.pr.w(
taxa,
modern_pollen$Tmin,
nPLS = 5,
fxTWAPLS::TWAPLS.w2,
fxTWAPLS::TWAPLS.predict.w,
pseudo_Tmin,
usefx = TRUE,
fx_method = "bin",
bin = 0.02,
cpus = 2, # Remove the following line
test_mode = test_mode
) %>%
fxTWAPLS::pb()
## End(Not run)
cv.w Leave-one-out cross-validation
Description
Leave-one-out cross-validation, as in the rioja package (https://cran.r-project.org/package=rioja).
Usage
cv.w(
modern_taxa,
modern_climate,
nPLS = 5,
trainfun,
predictfun,
usefx = FALSE,
fx_method = "bin",
bin = NA,
cpus = 4,
test_mode = FALSE,
test_it = 5
)
Arguments
modern_taxa The modern taxa abundance data, each row represents a sampling site, each
column represents a taxon.
modern_climate The modern climate value at each sampling site.
nPLS The number of components to be extracted.
trainfun Training function you want to use, either WAPLS.w or TWAPLS.w.
predictfun Predict function you want to use: if trainfun is WAPLS.w, then this should be
WAPLS.predict.w; if trainfun is TWAPLS.w, then this should be TWAPLS.predict.w.
usefx Boolean flag on whether or not use fx correction.
fx_method Binned or p-spline smoothed fx correction: if usefx = FALSE, this should be NA;
otherwise, fx function will be used when choosing "bin"; fx_pspline function
will be used when choosing "pspline".
bin Binwidth to get fx, needed for both the binned and p-spline methods; if
usefx = FALSE, this should be NA.
cpus Number of CPUs for simultaneous iterations to execute, check parallel::detectCores()
for available CPUs on your machine.
test_mode Boolean flag to execute the function with a limited number of iterations, test_it,
for testing purposes only.
test_it Number of iterations to use in the test mode.
Value
Leave-one-out cross-validation results.
See Also
fx, TWAPLS.w, TWAPLS.predict.w, WAPLS.w, and WAPLS.predict.w
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Extract taxa
taxaColMin <- which(colnames(modern_pollen) == "taxa0")
taxaColMax <- which(colnames(modern_pollen) == "taxaN")
taxa <- modern_pollen[, taxaColMin:taxaColMax]
## LOOCV
test_mode <- TRUE # It should be set to FALSE before running
cv_tf_Tmin2 <- fxTWAPLS::cv.w(
taxa,
modern_pollen$Tmin,
nPLS = 5,
fxTWAPLS::TWAPLS.w2,
fxTWAPLS::TWAPLS.predict.w,
usefx = TRUE,
fx_method = "bin",
bin = 0.02,
cpus = 2, # Remove the following line
test_mode = test_mode
)
# Run with progress bar
`%>%` <- magrittr::`%>%`
cv_tf_Tmin2 <- fxTWAPLS::cv.w(
taxa,
modern_pollen$Tmin,
nPLS = 5,
fxTWAPLS::TWAPLS.w2,
fxTWAPLS::TWAPLS.predict.w,
usefx = TRUE,
fx_method = "bin",
bin = 0.02,
cpus = 2, # Remove the following line
test_mode = test_mode
) %>% fxTWAPLS::pb()
## End(Not run)
fx Get frequency of the climate value
Description
Function to get the frequency of the climate value, which will be used to provide fx correction for
WA-PLS and TWA-PLS.
Usage
fx(x, bin, show_plot = FALSE)
Arguments
x Numeric vector with the modern climate values.
bin Binwidth to get the frequency of the modern climate values.
show_plot Boolean flag to show a plot of fx ~ x.
Value
Numeric vector with the frequency of the modern climate values.
See Also
cv.w, cv.pr.w, and sse.sample
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Get the frequency of each climate variable fx
fx_Tmin <- fxTWAPLS::fx(modern_pollen$Tmin, bin = 0.02, show_plot = TRUE)
fx_gdd <- fxTWAPLS::fx(modern_pollen$gdd, bin = 20, show_plot = TRUE)
fx_alpha <- fxTWAPLS::fx(modern_pollen$alpha, bin = 0.002, show_plot = TRUE)
## End(Not run)
fx_pspline Get frequency of the climate value with p-spline smoothing
Description
Function to get the frequency of the climate value, which will be used to provide fx correction for
WA-PLS and TWA-PLS.
Usage
fx_pspline(x, bin, show_plot = FALSE)
Arguments
x Numeric vector with the modern climate values.
bin Binwidth to get the frequency of the modern climate values, the curve will be
p-spline smoothed later
show_plot Boolean flag to show a plot of fx ~ x.
Value
Numeric vector with the frequency of the modern climate values.
See Also
cv.w, cv.pr.w, and sse.sample
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Get the frequency of each climate variable fx
fx_pspline_Tmin <- fxTWAPLS::fx_pspline(
modern_pollen$Tmin,
bin = 0.02,
show_plot = TRUE
)
fx_pspline_gdd <- fxTWAPLS::fx_pspline(
modern_pollen$gdd,
bin = 20,
show_plot = TRUE
)
fx_pspline_alpha <- fxTWAPLS::fx_pspline(
modern_pollen$alpha,
bin = 0.002,
show_plot = TRUE
)
## End(Not run)
get_distance Get the distance between points
Description
Get the distance between points, the output will be used in get_pseudo.
Usage
get_distance(point, cpus = 4, test_mode = FALSE, test_it = 5)
Arguments
point Each row represents a sampling site, the first column is longitude and the second
column is latitude, both in decimal format.
cpus Number of CPUs for simultaneous iterations to execute, check parallel::detectCores()
for available CPUs on your machine.
test_mode Boolean flag to execute the function with a limited number of iterations, test_it,
for testing purposes only.
test_it Number of iterations to use in the test mode.
Value
Distance matrix; the i-th row contains the distances between the i-th sampling site and all the
sampling sites.
See Also
get_pseudo
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
point <- modern_pollen[, c("Long", "Lat")]
test_mode <- TRUE # It should be set to FALSE before running
dist <- fxTWAPLS::get_distance(
point,
cpus = 2, # Remove the following line
test_mode = test_mode
)
# Run with progress bar
`%>%` <- magrittr::`%>%`
dist <- fxTWAPLS::get_distance(
point,
cpus = 2, # Remove the following line
test_mode = test_mode
) %>%
fxTWAPLS::pb()
## End(Not run)
get_pseudo Get geographically and climatically close sites
Description
Get the sites which are both geographically and climatically close to the test site, which could result
in pseudo-replication and inflate the cross-validation statistics. The output will be used in cv.pr.w.
Usage
get_pseudo(dist, x, cpus = 4, test_mode = FALSE, test_it = 5)
Arguments
dist Distance matrix which contains the distance from other sites.
x The modern climate values.
cpus Number of CPUs for simultaneous iterations to execute, check parallel::detectCores()
for available CPUs on your machine.
test_mode Boolean flag to execute the function with a limited number of iterations, test_it,
for testing purposes only.
test_it Number of iterations to use in the test mode.
Value
The geographically and climatically close sites to each test site.
See Also
get_distance
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
point <- modern_pollen[, c("Long", "Lat")]
test_mode <- TRUE # It should be set to FALSE before running
dist <- fxTWAPLS::get_distance(
point,
cpus = 2, # Remove the following line
test_mode = test_mode
)
pseudo_Tmin <- fxTWAPLS::get_pseudo(
dist,
modern_pollen$Tmin,
cpus = 2, # Remove the following line
test_mode = test_mode
)
# Run with progress bar
`%>%` <- magrittr::`%>%`
pseudo_Tmin <- fxTWAPLS::get_pseudo(
dist,
modern_pollen$Tmin,
cpus = 2, # Remove the following line
test_mode = test_mode
) %>%
fxTWAPLS::pb()
## End(Not run)
pb Show progress bar
Description
Show progress bar
Usage
pb(expr, ...)
Arguments
expr R expression.
... Arguments passed on to progressr::with_progress
cleanup If TRUE, all progression handlers will be shut down at the end,
regardless of whether the progression is complete or not.
delay_terminal If TRUE, output and conditions that may end up in the
terminal will be delayed.
delay_stdout If TRUE, standard output is captured and relayed at the end just
before any captured conditions are relayed.
delay_conditions A character vector specifying base::condition classes to be
captured and relayed at the end after any captured standard output is
relayed.
interrupts Controls whether interrupts should be detected or not. If TRUE
and an interrupt is signaled, progress handlers are asked to report on the
current amount of progress when the evaluation was terminated by the
interrupt, e.g. when a user pressed Ctrl-C in an interactive session, or a
batch process was interrupted because it ran out of time.
interval (numeric) The minimum time (in seconds) between successive
progression updates from handlers.
enable (logical) If FALSE, then progress is not reported. The default is to
report progress in interactive mode but not batch mode. See below for more
details.
Value
Return data from the function called.
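A minimal usage sketch (an assumption, not taken from the package manual; `point` is assumed to be a two-column data frame of longitude and latitude): pb wraps a pipeline so that a progress bar is shown while the wrapped call runs.
## Not run:
`%>%` <- magrittr::`%>%`
dist <- fxTWAPLS::get_distance(point, cpus = 2) %>%
fxTWAPLS::pb()
## End(Not run)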
plot_residuals Plot the residuals
Description
Plot the residuals; the black line is the zero line, the red line is the locally estimated scatterplot
smoothing, which shows the degree of local compression.
Usage
plot_residuals(train_output, col)
Arguments
train_output Training output, can be the output of WA-PLS, WA-PLS with fx correction,
TWA-PLS, or TWA-PLS with fx correction
col Choose which column of the fitted values to plot; in other words, the number of
components you want to use.
Value
Plotting status.
See Also
TWAPLS.w and WAPLS.w
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Extract taxa
taxaColMin <- which(colnames(modern_pollen) == "taxa0")
taxaColMax <- which(colnames(modern_pollen) == "taxaN")
taxa <- modern_pollen[, taxaColMin:taxaColMax]
fit_tf_Tmin2 <- fxTWAPLS::TWAPLS.w2(
taxa,
modern_pollen$Tmin,
nPLS = 5,
usefx = TRUE,
fx_method = "bin",
bin = 0.02
)
nsig <- 3 # This should be obtained from the random t-test of the cross-validation
fxTWAPLS::plot_residuals(fit_tf_Tmin2, nsig)
## End(Not run)
plot_train Plot the training results
Description
Plot the training results; the black line is the 1:1 line, the red line is the linear regression of the
fitted values on x, which shows the degree of overall compression.
Usage
plot_train(train_output, col)
Arguments
train_output Training output, can be the output of WA-PLS, WA-PLS with fx correction,
TWA-PLS, or TWA-PLS with fx correction.
col Choose which column of the fitted values to plot; in other words, the number of
components you want to use.
Value
Plotting status.
See Also
TWAPLS.w and WAPLS.w
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Extract taxa
taxaColMin <- which(colnames(modern_pollen) == "taxa0")
taxaColMax <- which(colnames(modern_pollen) == "taxaN")
taxa <- modern_pollen[, taxaColMin:taxaColMax]
fit_tf_Tmin2 <- fxTWAPLS::TWAPLS.w2(
taxa,
modern_pollen$Tmin,
nPLS = 5,
usefx = TRUE,
fx_method = "bin",
bin = 0.02
)
nsig <- 3 # This should be obtained from the random t-test of the cross-validation
fxTWAPLS::plot_train(fit_tf_Tmin2, nsig)
## End(Not run)
rand.t.test.w Random t-test
Description
Do a random t-test to the cross-validation results.
Usage
rand.t.test.w(cvoutput, n.perm = 999)
Arguments
cvoutput Cross-validation output either from cv.w or cv.pr.w.
n.perm The number of permutations used to get the p-value, which assesses whether
using the current number of components is significantly different from using
one component fewer.
Value
A matrix of the statistics of the cross-validation results. Each component is described below:
R2 the coefficient of determination (the larger, the better the fit).
Avg.Bias average bias.
Max.Bias maximum bias.
Min.Bias minimum bias.
RMSEP root-mean-square error of prediction (the smaller, the better the fit).
delta.RMSEP the percentage change in RMSEP when using the current number of components rather
than one component fewer.
p assesses whether using the current number of components is significantly different from using
one component fewer, which is used to choose the last significant number of components to
avoid over-fitting.
- The degree of overall compression is assessed by linear regression of the cross-validation
results on the observed climate values.
• Compre.b0: the intercept.
• Compre.b1: the slope (the closer to 1, the less the overall compression).
• Compre.b0.se: the standard error of the intercept.
• Compre.b1.se: the standard error of the slope.
See Also
cv.w and cv.pr.w
Examples
## Not run:
## Random t-test
rand_pr_tf_Tmin2 <- fxTWAPLS::rand.t.test.w(cv_pr_tf_Tmin2, n.perm = 999)
# note: choose the last significant number of components based on the p-value,
# see details at <NAME>, Prentice <NAME>, ter <NAME>.,
# <NAME>.. 2020 An improved statistical approach for reconstructing
# past climates from biotic assemblages. Proc. R. Soc. A. 476: 20200346.
# <https://doi.org/10.1098/rspa.2020.0346>
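# A possible follow-up (an assumption, not from the manual): if the returned
# matrix exposes a "p" column, the last significant number of components can
# be picked programmatically
nsig <- max(which(rand_pr_tf_Tmin2[, "p"] < 0.05))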
## End(Not run)
sse.sample Calculate Sample Specific Errors
Description
Calculate Sample Specific Errors
Usage
sse.sample(
modern_taxa,
modern_climate,
fossil_taxa,
trainfun,
predictfun,
nboot,
nPLS,
nsig,
usefx = FALSE,
fx_method = "bin",
bin = NA,
cpus = 4,
seed = NULL,
test_mode = FALSE,
test_it = 5
)
Arguments
modern_taxa The modern taxa abundance data, each row represents a sampling site, each
column represents a taxon.
modern_climate The modern climate value at each sampling site
fossil_taxa Fossil taxa abundance data to reconstruct past climates, each row represents a
site to be reconstructed, each column represents a taxon.
trainfun Training function you want to use, either WAPLS.w or TWAPLS.w.
predictfun Predict function you want to use: if trainfun is WAPLS.w, then this should be
WAPLS.predict.w; if trainfun is TWAPLS.w, then this should be TWAPLS.predict.w.
nboot The number of bootstrap cycles you want to use.
nPLS The number of components to be extracted.
nsig The significant number of components to use to reconstruct past climates, this
can be obtained from the cross-validation results.
usefx Boolean flag on whether or not use fx correction.
fx_method Binned or p-spline smoothed fx correction: if usefx = FALSE, this should be NA;
otherwise, fx function will be used when choosing "bin"; fx_pspline function
will be used when choosing "pspline".
bin Binwidth to get fx, needed for both the binned and p-spline methods; if
usefx = FALSE, this should be NA.
cpus Number of CPUs for simultaneous iterations to execute, check parallel::detectCores()
for available CPUs on your machine.
seed Seed for reproducibility.
test_mode Boolean flag to execute the function with a limited number of iterations, test_it,
for testing purposes only.
test_it Number of iterations to use in the test mode.
Value
The bootstrapped standard error for each site.
See Also
fx, TWAPLS.w, TWAPLS.predict.w, WAPLS.w, and WAPLS.predict.w
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Extract taxa
taxaColMin <- which(colnames(modern_pollen) == "taxa0")
taxaColMax <- which(colnames(modern_pollen) == "taxaN")
taxa <- modern_pollen[, taxaColMin:taxaColMax]
# Load reconstruction data
Holocene <- read.csv("/path/to/Holocene.csv")
taxaColMin <- which(colnames(Holocene) == "taxa0")
taxaColMax <- which(colnames(Holocene) == "taxaN")
core <- Holocene[, taxaColMin:taxaColMax]
## SSE
nboot <- 5 # Recommended 1000
nsig <- 3 # This should be obtained from the random t-test of the cross-validation
sse_tf_Tmin2 <- fxTWAPLS::sse.sample(
modern_taxa = taxa,
modern_climate = modern_pollen$Tmin,
fossil_taxa = core,
trainfun = fxTWAPLS::TWAPLS.w2,
predictfun = fxTWAPLS::TWAPLS.predict.w,
nboot = nboot,
nPLS = 5,
nsig = nsig,
usefx = TRUE,
fx_method = "bin",
bin = 0.02,
cpus = 2,
seed = 1
)
# Run with progress bar
`%>%` <- magrittr::`%>%`
sse_tf_Tmin2 <- fxTWAPLS::sse.sample(
modern_taxa = taxa,
modern_climate = modern_pollen$Tmin,
fossil_taxa = core,
trainfun = fxTWAPLS::TWAPLS.w2,
predictfun = fxTWAPLS::TWAPLS.predict.w,
nboot = nboot,
nPLS = 5,
nsig = nsig,
usefx = TRUE,
fx_method = "bin",
bin = 0.02,
cpus = 2,
seed = 1
) %>% fxTWAPLS::pb()
## End(Not run)
TWAPLS.predict.w TWA-PLS predict function
Description
TWA-PLS predict function
Usage
TWAPLS.predict.w(TWAPLSoutput, fossil_taxa)
Arguments
TWAPLSoutput The output of the TWAPLS.w training function, either with or without fx correc-
tion.
fossil_taxa Fossil taxa abundance data to reconstruct past climates, each row represents a
site to be reconstructed, each column represents a taxon.
Value
A list of the reconstruction results. Each element in the list is described below:
fit the fitted values using each number of components.
nPLS the total number of components extracted.
See Also
TWAPLS.w
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Extract taxa
taxaColMin <- which(colnames(modern_pollen) == "taxa0")
taxaColMax <- which(colnames(modern_pollen) == "taxaN")
taxa <- modern_pollen[, taxaColMin:taxaColMax]
# Load reconstruction data
Holocene <- read.csv("/path/to/Holocene.csv")
taxaColMin <- which(colnames(Holocene) == "taxa0")
taxaColMax <- which(colnames(Holocene) == "taxaN")
core <- Holocene[, taxaColMin:taxaColMax]
## Train
fit_t_Tmin <- fxTWAPLS::TWAPLS.w(taxa, modern_pollen$Tmin, nPLS = 5)
fit_tf_Tmin <- fxTWAPLS::TWAPLS.w(
taxa,
modern_pollen$Tmin,
nPLS = 5,
usefx = TRUE,
fx_method = "bin",
bin = 0.02
)
fit_t_Tmin2 <- fxTWAPLS::TWAPLS.w2(taxa, modern_pollen$Tmin, nPLS = 5)
fit_tf_Tmin2 <- fxTWAPLS::TWAPLS.w2(
taxa,
modern_pollen$Tmin,
nPLS = 5,
usefx = TRUE,
fx_method = "bin",
bin = 0.02
)
## Predict
fossil_t_Tmin <- fxTWAPLS::TWAPLS.predict.w(fit_t_Tmin, core)
fossil_tf_Tmin <- fxTWAPLS::TWAPLS.predict.w(fit_tf_Tmin, core)
fossil_t_Tmin2 <- fxTWAPLS::TWAPLS.predict.w(fit_t_Tmin2, core)
fossil_tf_Tmin2 <- fxTWAPLS::TWAPLS.predict.w(fit_tf_Tmin2, core)
## End(Not run)
TWAPLS.w TWA-PLS training function
Description
TWA-PLS training function, which can perform fx correction. 1/fx^2 correction will be applied
at step 7.
Usage
TWAPLS.w(
modern_taxa,
modern_climate,
nPLS = 5,
usefx = FALSE,
fx_method = "bin",
bin = NA
)
Arguments
modern_taxa The modern taxa abundance data, each row represents a sampling site, each
column represents a taxon.
modern_climate The modern climate value at each sampling site.
nPLS The number of components to be extracted.
usefx Boolean flag on whether or not use fx correction.
fx_method Binned or p-spline smoothed fx correction: if usefx = FALSE, this should be NA;
otherwise, fx function will be used when choosing "bin"; fx_pspline function
will be used when choosing "pspline".
bin Binwidth to get fx, needed for both the binned and p-spline methods; if
usefx = FALSE, this should be NA.
Value
A list of the training results, which will be used by the predict function. Each element in the list is
described below:
fit the fitted values using each number of components.
x the observed modern climate values.
taxon_name the name of each taxon.
optimum the updated taxon optimum
comp each component extracted (will be used in step 7 regression).
u taxon optimum for each component (step 2).
t taxon tolerance for each component (step 2).
z a parameter used in standardization for each component (step 5).
s a parameter used in standardization for each component (step 5).
orth a list that stores orthogonalization parameters (step 4).
alpha a list that stores regression coefficients (step 7).
meanx mean value of the observed modern climate values.
nPLS the total number of components extracted.
See Also
fx, TWAPLS.predict.w, and WAPLS.w
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Extract taxa
taxaColMin <- which(colnames(modern_pollen) == "taxa0")
taxaColMax <- which(colnames(modern_pollen) == "taxaN")
taxa <- modern_pollen[, taxaColMin:taxaColMax]
# Training
fit_t_Tmin <- fxTWAPLS::TWAPLS.w(taxa, modern_pollen$Tmin, nPLS = 5)
fit_tf_Tmin <- fxTWAPLS::TWAPLS.w(
taxa,
modern_pollen$Tmin,
nPLS = 5,
usefx = TRUE,
fx_method = "bin",
bin = 0.02
)
## End(Not run)
TWAPLS.w2 TWA-PLS training function v2
Description
TWA-PLS training function, which can perform fx correction. 1/fx correction will be applied at
step 2 and step 7.
Usage
TWAPLS.w2(
modern_taxa,
modern_climate,
nPLS = 5,
usefx = FALSE,
fx_method = "bin",
bin = NA
)
Arguments
modern_taxa The modern taxa abundance data, each row represents a sampling site, each
column represents a taxon.
modern_climate The modern climate value at each sampling site.
nPLS The number of components to be extracted.
usefx Boolean flag on whether or not use fx correction.
fx_method Binned or p-spline smoothed fx correction: if usefx = FALSE, this should be NA;
otherwise, fx function will be used when choosing "bin"; fx_pspline function
will be used when choosing "pspline".
bin Binwidth to get fx, needed for both the binned and p-spline methods; if
usefx = FALSE, this should be NA.
Value
A list of the training results, which will be used by the predict function. Each element in the list is
described below:
fit the fitted values using each number of components.
x the observed modern climate values.
taxon_name the name of each taxon.
optimum the updated taxon optimum
comp each component extracted (will be used in step 7 regression).
u taxon optimum for each component (step 2).
t taxon tolerance for each component (step 2).
z a parameter used in standardization for each component (step 5).
s a parameter used in standardization for each component (step 5).
orth a list that stores orthogonalization parameters (step 4).
alpha a list that stores regression coefficients (step 7).
meanx mean value of the observed modern climate values.
nPLS the total number of components extracted.
See Also
fx, TWAPLS.predict.w, and WAPLS.w
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Extract taxa
taxaColMin <- which(colnames(modern_pollen) == "taxa0")
taxaColMax <- which(colnames(modern_pollen) == "taxaN")
taxa <- modern_pollen[, taxaColMin:taxaColMax]
# Training
fit_t_Tmin2 <- fxTWAPLS::TWAPLS.w2(taxa, modern_pollen$Tmin, nPLS = 5)
fit_tf_Tmin2 <- fxTWAPLS::TWAPLS.w2(
taxa,
modern_pollen$Tmin,
nPLS = 5,
usefx = TRUE,
fx_method = "bin",
bin = 0.02
)
## End(Not run)
WAPLS.predict.w WA-PLS predict function
Description
WA-PLS predict function
Usage
WAPLS.predict.w(WAPLSoutput, fossil_taxa)
Arguments
WAPLSoutput The output of the WAPLS.w training function, either with or without fx correc-
tion.
fossil_taxa Fossil taxa abundance data to reconstruct past climates, each row represents a
site to be reconstructed, each column represents a taxon.
Value
A list of the reconstruction results. Each element in the list is described below:
fit The fitted values using each number of components.
nPLS The total number of components extracted.
See Also
WAPLS.w
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Extract taxa
taxaColMin <- which(colnames(modern_pollen) == "taxa0")
taxaColMax <- which(colnames(modern_pollen) == "taxaN")
taxa <- modern_pollen[, taxaColMin:taxaColMax]
# Load reconstruction data
Holocene <- read.csv("/path/to/Holocene.csv")
taxaColMin <- which(colnames(Holocene) == "taxa0")
taxaColMax <- which(colnames(Holocene) == "taxaN")
core <- Holocene[, taxaColMin:taxaColMax]
## Train
fit_Tmin <- fxTWAPLS::WAPLS.w(taxa, modern_pollen$Tmin, nPLS = 5)
fit_f_Tmin <- fxTWAPLS::WAPLS.w(
taxa,
modern_pollen$Tmin,
nPLS = 5,
usefx = TRUE,
fx_method = "bin",
bin = 0.02
)
fit_Tmin2 <- fxTWAPLS::WAPLS.w2(taxa, modern_pollen$Tmin, nPLS = 5)
fit_f_Tmin2 <- fxTWAPLS::WAPLS.w2(
taxa,
modern_pollen$Tmin,
nPLS = 5,
usefx = TRUE,
fx_method = "bin",
bin = 0.02
)
## Predict
fossil_Tmin <- fxTWAPLS::WAPLS.predict.w(fit_Tmin, core)
fossil_f_Tmin <- fxTWAPLS::WAPLS.predict.w(fit_f_Tmin, core)
fossil_Tmin2 <- fxTWAPLS::WAPLS.predict.w(fit_Tmin2, core)
fossil_f_Tmin2 <- fxTWAPLS::WAPLS.predict.w(fit_f_Tmin2, core)
## End(Not run)
WAPLS.w WA-PLS training function
Description
WA-PLS training function, which can perform fx correction. 1/fx^2 correction will be applied at
step 7.
Usage
WAPLS.w(
modern_taxa,
modern_climate,
nPLS = 5,
usefx = FALSE,
fx_method = "bin",
bin = NA
)
Arguments
modern_taxa The modern taxa abundance data, each row represents a sampling site, each
column represents a taxon.
modern_climate The modern climate value at each sampling site.
nPLS The number of components to be extracted.
usefx Boolean flag on whether or not use fx correction.
fx_method Binned or p-spline smoothed fx correction: if usefx = FALSE, this should be NA;
otherwise, fx function will be used when choosing "bin"; fx_pspline function
will be used when choosing "pspline".
bin Binwidth to get fx, needed for both the binned and p-spline methods; if
usefx = FALSE, this should be NA.
Value
A list of the training results, which will be used by the predict function. Each element in the list is
described below:
fit the fitted values using each number of components.
x the observed modern climate values.
taxon_name the name of each taxon.
optimum the updated taxon optimum (u* in the WA-PLS paper).
comp each component extracted (will be used in step 7 regression).
u taxon optimum for each component (step 2).
z a parameter used in standardization for each component (step 5).
s a parameter used in standardization for each component (step 5).
orth a list that stores orthogonalization parameters (step 4).
alpha a list that stores regression coefficients (step 7).
meanx mean value of the observed modern climate values.
nPLS the total number of components extracted.
See Also
fx, TWAPLS.w, and WAPLS.predict.w
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Extract taxa
taxaColMin <- which(colnames(modern_pollen) == "taxa0")
taxaColMax <- which(colnames(modern_pollen) == "taxaN")
taxa <- modern_pollen[, taxaColMin:taxaColMax]
# Training
fit_Tmin <- fxTWAPLS::WAPLS.w(taxa, modern_pollen$Tmin, nPLS = 5)
fit_f_Tmin <- fxTWAPLS::WAPLS.w(
taxa,
modern_pollen$Tmin,
nPLS = 5,
usefx = TRUE,
fx_method = "bin",
bin = 0.02
)
## End(Not run)
WAPLS.w2 WA-PLS training function v2
Description
WA-PLS training function, which can perform fx correction. 1/fx correction will be applied at
step 2 and step 7.
Usage
WAPLS.w2(
modern_taxa,
modern_climate,
nPLS = 5,
usefx = FALSE,
fx_method = "bin",
bin = NA
)
Arguments
modern_taxa The modern taxa abundance data, each row represents a sampling site, each
column represents a taxon.
modern_climate The modern climate value at each sampling site.
nPLS The number of components to be extracted.
usefx Boolean flag on whether or not use fx correction.
fx_method Binned or p-spline smoothed fx correction: if usefx = FALSE, this should be NA;
otherwise, fx function will be used when choosing "bin"; fx_pspline function
will be used when choosing "pspline".
bin Binwidth to get fx, needed for both the binned and p-spline methods; if
usefx = FALSE, this should be NA.
Value
A list of the training results, which will be used by the predict function. Each element in the list is
described below:
fit the fitted values using each number of components.
x the observed modern climate values.
taxon_name the name of each taxon.
optimum the updated taxon optimum (u* in the WA-PLS paper).
comp each component extracted (will be used in step 7 regression).
u taxon optimum for each component (step 2).
z a parameter used in standardization for each component (step 5).
s a parameter used in standardization for each component (step 5).
orth a list that stores orthogonalization parameters (step 4).
alpha a list that stores regression coefficients (step 7).
meanx mean value of the observed modern climate values.
nPLS the total number of components extracted.
See Also
fx, TWAPLS.w, and WAPLS.predict.w
Examples
## Not run:
# Load modern pollen data
modern_pollen <- read.csv("/path/to/modern_pollen.csv")
# Extract taxa
taxaColMin <- which(colnames(modern_pollen) == "taxa0")
taxaColMax <- which(colnames(modern_pollen) == "taxaN")
taxa <- modern_pollen[, taxaColMin:taxaColMax]
# Training
fit_Tmin2 <- fxTWAPLS::WAPLS.w2(taxa, modern_pollen$Tmin, nPLS = 5)
fit_f_Tmin2 <- fxTWAPLS::WAPLS.w2(
taxa,
modern_pollen$Tmin,
nPLS = 5,
usefx = TRUE,
fx_method = "bin",
bin = 0.02
)
## End(Not run)
rpi-backlight | readthedoc | Markdown | rpi-backlight 2.6.0 documentation
[rpi-backlight](index.html#document-index)
---
rpi-backlight Documentation[¶](#rpi-backlight-documentation)
===
| Version: | 2.6.0 |
| Author: | <NAME> |
| Contact: | [<EMAIL>](mailto:mail%40linusgroh.de) |
| License (code): | [MIT license](index.html#license) |
| License (docs): | This document was placed in the public domain. |
Contents[¶](#contents)
===
Introduction[¶](#introduction)
---
When I bought the official Raspberry Pi 7” touch LCD, I was quite happy about it
- with one exception: *you can’t change the display brightness in a simple way out of the box*.
I did some research and hacked some Python code together. Time passed by,
and the whole project turned into a Python module: `rpi-backlight`.
Currently it has the following features:
* Change the display brightness **smoothly** or **abrupt**
* Set the display power on or off
* Get the current brightness
* Get the maximum brightness
* Get the display power state (on/off)
* Command line interface
* Graphical user interface
Now you are able to easily set the brightness of your display from the command line, a GUI and even Python code!
Installation[¶](#installation)
---
This section covers the installation of the library on the Raspberry Pi.
### Requirements[¶](#requirements)
* A **Raspberry Pi** including a correctly assembled **7” touch display v1.1 or higher**
(look on the display’s circuit board to see its version) running a Linux-based OS.
Alternatively you can use [rpi-backlight-emulator](https://github.com/linusg/rpi-backlight-emulator) on all operating systems and without the actual hardware.
* Python 3.6+
* Optional: `pygobject` for the GUI, already installed on a recent Raspbian
### Installation[¶](#id2)
Note
This library will **not** work with Windows IoT, you’ll need a Linux distribution running on your Raspberry Pi. This was tested with Raspbian 9 (Stretch) and 10 (Buster).
rpi-backlight is available on [PyPI](https://pypi.org/project/rpi-backlight/), so you can install it using `pip3`:
```
$ pip3 install rpi_backlight
```
**Note:** Create this udev rule to update permissions, otherwise you’ll have to run Python code, the GUI and CLI as root when *changing* the power or brightness:
```
$ echo 'SUBSYSTEM=="backlight",RUN+="/bin/chmod 666 /sys/class/backlight/%k/brightness /sys/class/backlight/%k/bl_power"' | sudo tee -a /etc/udev/rules.d/backlight-permissions.rules
```
rpi-backlight is now installed. See [Usage](index.html#usage) to get started!
Usage[¶](#usage)
---
### Python API[¶](#python-api)
Make sure you’ve [installed](index.html#installation) the library correctly.
Open a Python shell and import the [`Backlight`](index.html#rpi_backlight.Backlight) class:
```
>>> from rpi_backlight import Backlight
```
Create an instance:
```
>>> backlight = Backlight()
```
Now you can get and set the display power and brightness:
```
>>> backlight.brightness
100
>>> backlight.brightness = 50
>>> backlight.brightness
50
>>>
>>> with backlight.fade(duration=1):
...     backlight.brightness = 0
...
>>> backlight.fade_duration = 0.5
>>> # subsequent `backlight.brightness = x` will fade 500ms
>>>
>>> backlight.power
True
>>> backlight.power = False
>>> backlight.power
False
>>>
```
To use with ASUS Tinker Board:
```
>>> from rpi_backlight import Backlight, BoardType
>>>
>>> backlight = Backlight(board_type=BoardType.TINKER_BOARD)
>>> # continue like above
```
See the [API reference](index.html#api) for more details.
### Command line interface[¶](#command-line-interface)
Open a terminal and run `rpi-backlight`.
```
$ rpi-backlight -b 100
$ rpi-backlight --set-brightness 20 --duration 1.5
$ rpi-backlight --get-brightness
20
$ rpi-backlight --get-power
on
$ rpi-backlight -p off
$ rpi-backlight --get-power
off
$ rpi-backlight --set-power off :emulator:
$
```
To use with ASUS Tinker Board:
```
$ rpi-backlight --board-type tinker-board ...
```
You can set the backlight sysfs path using a positional argument, set it to :emulator:
to use with rpi-backlight-emulator.
Available options:
```
usage: rpi-backlight [-h] [--get-brightness] [-b VALUE] [--get-power]
[-p VALUE] [-d DURATION] [-B {raspberry-pi,tinker-board}]
[-V]
[SYSFS_PATH]
Get/set power and brightness of the official Raspberry Pi 7" touch display.
positional arguments:
SYSFS_PATH Optional path to the backlight sysfs, set to
:emulator: to use with rpi-backlight-emulator
optional arguments:
-h, --help show this help message and exit
--get-brightness get the display brightness (0-100)
-b VALUE, --set-brightness VALUE
set the display brightness (0-100)
--get-power get the display power (on/off)
-p VALUE, --set-power VALUE
set the display power (on/off/toggle)
-d DURATION, --duration DURATION
fading duration in seconds
-B {raspberry-pi,tinker-board}, --board-type {raspberry-pi,tinker-board}
board type
-V, --version show program's version number and exit
```
### Graphical user interface[¶](#graphical-user-interface)
Open a terminal and run `rpi-backlight-gui`.
#### Adding a shortcut to the LXDE panel[¶](#adding-a-shortcut-to-the-lxde-panel)
First, create a `.desktop` file for rpi-backlight (e.g.
`/home/pi/.local/share/applications/rpi-backlight.desktop`) with the following content:
```
[Desktop Entry]
Version=1.0
Type=Application
Terminal=false
Name=rpi-backlight GUI
Exec=/home/pi/.local/bin/rpi-backlight-gui
Icon=/usr/share/icons/HighContrast/256x256/status/display-brightness.png
Categories=Utility;
```
*The absolute path to* `rpi-backlight-gui` *might differ if you did not follow the installation instructions exactly, e.g. installed as root.*
Make it executable:
```
$ chmod +x /home/pi/.local/share/applications/rpi-backlight.desktop
```
You should now be able to start the rpi-backlight GUI from the menu:
`(Raspberry Pi Logo) → Accessoires → rpi-backlight GUI`.
Next, right-click on the panel and choose `Add / Remove panel items`. Select
`Application Launch Bar` and click `Preferences`:
Select `rpi-backlight GUI` on the right and click `Add`:
You’re done!
API reference[¶](#module-rpi_backlight)
---
*class* `rpi_backlight.``Backlight`(*backlight_sysfs_path: Union[str, PathLike[str], None] = None, board_type: rpi_backlight.BoardType = <BoardType.RASPBERRY_PI: 1>*)[¶](#rpi_backlight.Backlight)
Main class to access and control the display backlight power and brightness.
Set `backlight_sysfs_path` to `":emulator:"` to use with rpi-backlight-emulator.
`brightness`[¶](#rpi_backlight.Backlight.brightness)
The display brightness in range 0-100.
```
>>> backlight = Backlight()
>>> backlight.brightness  # Display is at 50% brightness
50
>>> backlight.brightness = 100 # Set to full brightness
```
| Getter: | Return the display brightness. |
| Setter: | Set the display brightness. |
| Type: | float |
`fade`(*duration: float*) → Generator[T_co, T_contra, V_co][¶](#rpi_backlight.Backlight.fade)
Context manager for temporarily changing the fade duration.
```
>>> backlight = Backlight()
>>> with backlight.fade(duration=0.5):
...     backlight.brightness = 1 # Fade to 100% brightness for 0.5s
...
>>> with backlight.fade(duration=0):
...     backlight.brightness = 0 # Set to 0% brightness without fading, use if you have set `backlight.fade_duration` > 0
```
`fade_duration`[¶](#rpi_backlight.Backlight.fade_duration)
The brightness fade duration in seconds, defaults to 0.
Also see [`fade()`](#rpi_backlight.Backlight.fade).
```
>>> backlight = Backlight()
>>> backlight.fade_duration  # Fading is disabled by default
0
>>> backlight.fade_duration = 0.5 # Set to 500ms
```
| Getter: | Return the fade duration. |
| Setter: | Set the fade duration. |
| Type: | float |
`power`[¶](#rpi_backlight.Backlight.power)
Turn the display on and off.
```
>>> backlight = Backlight()
>>> backlight.power  # Display is on
True
>>> backlight.power = False # Turn display off
```
| Getter: | Return whether the display is powered on or off. |
| Setter: | Set the display power on or off. |
| Type: | bool |
*class* `rpi_backlight.``BoardType`[¶](#rpi_backlight.BoardType)
Enum to specify a board type in the [`Backlight`](#rpi_backlight.Backlight) constructor.
`MICROSOFT_SURFACE_RT` *= 4*[¶](#rpi_backlight.BoardType.MICROSOFT_SURFACE_RT)
Microsoft Surface RT
`RASPBERRY_PI` *= 1*[¶](#rpi_backlight.BoardType.RASPBERRY_PI)
Raspberry Pi
`TINKER_BOARD` *= 2*[¶](#rpi_backlight.BoardType.TINKER_BOARD)
Tinker Board
`TINKER_BOARD_2` *= 3*[¶](#rpi_backlight.BoardType.TINKER_BOARD_2)
Tinker Board 2
`rpi_backlight.cli.``main`()[¶](#rpi_backlight.cli.main)
Start the command line interface.
`rpi_backlight.gui.``main`()[¶](#rpi_backlight.gui.main)
Start the graphical user interface.
`rpi_backlight.utils.``detect_board_type`() → Optional[BoardType][¶](#rpi_backlight.utils.detect_board_type)
Try to detect the board type based on the model string in
`/proc/device-tree/model`.
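A small sketch (not from the official docs) that combines this helper with the [`Backlight`](#rpi_backlight.Backlight) constructor, falling back to `RASPBERRY_PI` when detection returns `None`:
```
>>> from rpi_backlight import Backlight, BoardType
>>> from rpi_backlight.utils import detect_board_type
>>> board = detect_board_type() or BoardType.RASPBERRY_PI
>>> backlight = Backlight(board_type=board)
>>> backlight.brightness = 75
```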
*class* `rpi_backlight.utils.``FakeBacklightSysfs`[¶](#rpi_backlight.utils.FakeBacklightSysfs)
Context manager to create a temporary “fake sysfs” containing all relevant files.
Used for tests and emulation.
```
>>> with FakeBacklightSysfs() as backlight_sysfs:
...     backlight = Backlight(backlight_sysfs_path=backlight_sysfs.path)
...     # use `backlight` as usual
```
Changes[¶](#changes)
---
### 2.6.0[¶](#id1)
* Add Python 3.11 as supported version, drop 3.6
* Add support for Microsoft Surface RT ([#54](https://github.com/linusg/rpi-backlight/pull/54), [@apandada1](https://github.com/apandada1))
### 2.5.0[¶](#id3)
* Add Python 3.10 as supported version
* Support alternate backlight sysfs path ([#50](https://github.com/linusg/rpi-backlight/pull/50), [@j-coopz](https://github.com/j-coopz))
### 2.4.1[¶](#id5)
* Fix board type detection
### 2.4.0[¶](#id6)
* Drop support for Python 3.5, which reached end-of-life in late 2020. Supported versions as of this release are 3.6 - 3.9
* Implement automatic board type detection ([#32](https://github.com/linusg/rpi-backlight/pull/32), [@p1r473](https://github.com/p1r473)),
passing a `board_type` to the constructor is no longer necessary in most cases, even when using an ASUS Tinker Board
* Fix setting brightness to max value and brightness fading loop condition ([#35](https://github.com/linusg/rpi-backlight/pull/35), [@Martin-HiPi](https://github.com/Martin-HiPi))
### 2.3.0[¶](#id9)
* Add support for ASUS Tinker Board 2 ([#29](https://github.com/linusg/rpi-backlight/pull/29), [@p1r473](https://github.com/p1r473))
### 2.2.0[¶](#id12)
* Add toggle functionality to CLI ([#21](https://github.com/linusg/rpi-backlight/pull/21), [@p1r473](https://github.com/p1r473))
* Replace Travis CI with GitHub actions ([#20](https://github.com/linusg/rpi-backlight/pull/20), [@linusg](https://github.com/linusg))
* Improve tests
### 2.1.0[¶](#id16)
* Add support for ASUS Tinker Board ([#19](https://github.com/linusg/rpi-backlight/pull/19), [@p1r473](https://github.com/p1r473))
### 2.0.1[¶](#id19)
* Add mypy type checking
* Add Python 3.8 to Travis CI config
* Fix documentation readthedocs build
* Fix typo in docs
* Improve README.md
* Mark project as stable on PyPI
### 2.0.0[¶](#id20)
* New, more pythonic API
* Update CLI and GUI
* Support emulator
* Add tests
### 1.8.1[¶](#id21)
* Fix float division issue with Python 2
### 1.8.0[¶](#id22)
* Fix permission error inconsistency across Python versions
* Update link to PyPI
### 1.7.1[¶](#id23)
* Fixed typo in `CHANGES.rst`
* Fixed rendering of parameters and return types in the documentation
### 1.7.0[¶](#id24)
* Fixed bug in `get_power`, which would eventually always return False
* Added parameters and return types in docstrings
### 1.6.0[¶](#id25)
* Added `duration` parameter to `set_brightness`
* `smooth` now defaults to `False`
* Huge improvements on CLI
* Fixed renamed function in examples
* Minor code and readme improvements
### 1.5.0[¶](#id26)
* PR #3 by Scouttp: Fixed permission errors
* Added documentation
* Code improvements
* Fixed typos
### 1.4.0[¶](#id27)
* Check for `pygobject` being installed
* Code cleanup
* README improvements
+ Added external links
+ Added badges
+ Fixed typos
* Moved to Travis CI and Landscape.io for builds and code health testing
* Prepared docs hosting at readthedocs.org
### 1.3.1[¶](#id28)
* Fixed type conversion
### 1.3.0[¶](#id29)
* Added experimental GUI (start with `rpi-backlight-gui`)
### 1.2.1[¶](#id30)
* Fixed CLI and typo
### 1.2.0[¶](#id31)
* Added command line interface (`rpi-backlight` and `rpi-backlight-gui`)
* Code improvements - thanks to deets
### 1.1.0[¶](#id32)
* Fixed `set_power(on)` function
* Added function to get the current power state of the LCD
* Added docstrings
* Code cleanup and improvements
### 1.0.0[¶](#id33)
Initial release. Added necessary files and basic features:
* Change the display brightness smoothly or abrupt
* Set the display power on or off
* Get the current brightness
* Get the maximum brightness
License[¶](#license)
---
The rpi-backlight source code is distributed under the terms of the MIT license,
see below:
```
MIT License
Copyright (c) 2016-2022 <NAME>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```
strand | cran | R | Package ‘strand’
October 14, 2022
Type Package
Title A Framework for Investment Strategy Simulation
Version 0.2.0
Date 2020-11-18
Description Provides a framework for performing discrete (share-level) simulations of
investment strategies. Simulated portfolios optimize exposure to an input signal subject
to constraints such as position size and factor exposure. For background see L. Chincarini
and <NAME> (2010, ISBN:978-0-07-145939-6) ``Quantitative Equity Portfolio Management''.
License GPL-3
URL https://github.com/strand-tech/strand
BugReports https://github.com/strand-tech/strand/issues
Depends R (>= 3.5.0)
Imports R6, Matrix, Rglpk, dplyr, tidyr, arrow, lubridate, rlang,
yaml, ggplot2, tibble, methods
Suggests testthat, knitr, rmarkdown, shiny, shinyFiles, shinyjs, DT,
Rsymphony, officer, flextable, plotly
Encoding UTF-8
LazyData true
VignetteBuilder knitr
RoxygenNote 7.1.1
NeedsCompilation no
Author <NAME> [cre, aut, cph],
<NAME> [aut],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-11-19 21:40:06 UTC
R topics documented:
strand-package
example_shiny_app
example_strategy_config
make_ft
PortOpt
sample_inputs
sample_pricing
sample_secref
show_best_worst
show_config
show_constraints
show_monthly_returns
show_stats
Simulation
strand-package strand: a framework for investment strategy simulation
Description
The strand package provides a framework for performing discrete (share-level) simulations of in-
vestment strategies. Simulated portfolios optimize exposure to an input signal subject to constraints
such as position size and factor exposure.
For an introduction to running simulations using the package, see vignette("strand"). For de-
tails on available methods see the documentation for the Simulation class.
Author(s)
<NAME> <<EMAIL>> and <NAME> <<EMAIL>>
Examples
# Load up sample data
data(sample_secref)
data(sample_pricing)
data(sample_inputs)
# Load sample configuration
config <- example_strategy_config()
# Override config file end date to run a one-week sim
config$to <- as.Date("2020-06-05")
# Create the Simulation object and run
sim <- Simulation$new(config,
raw_input_data = sample_inputs,
raw_pricing_data = sample_pricing,
security_reference_data = sample_secref)
sim$run()
# Print overall statistics
sim$overallStatsDf()
# Access tabular result data
head(sim$getSimSummary())
head(sim$getSimDetail())
head(sim$getPositionSummary())
head(sim$getInputStats())
head(sim$getOptimizationSummary())
head(sim$getExposures())
# Plot results
## Not run:
sim$plotPerformance()
sim$plotMarketValue()
sim$plotCategoryExposure("sector")
sim$plotFactorExposure(c("value", "size"))
sim$plotNumPositions()
## End(Not run)
example_shiny_app Run an example shiny app
Description
Runs a shiny app that allows interactively configuring and running a simulation. Once the
simulation is finished, results such as performance statistics and plots of exposures are available
in a results panel.
Usage
example_shiny_app()
Examples
if (interactive()) {
example_shiny_app()
}
example_strategy_config
Load example strategy configuration
Description
Loads an example strategy configuration file for use in examples.
Usage
example_strategy_config()
Value
An object of class list that contains the example configuration. The list object is the result of
loading the package’s example yaml configuration file application/strategy_config.yaml.
Examples
config <- example_strategy_config()
names(config$strategies)
show(config$strategies$strategy_1)
make_ft Make Basic Flextable
Description
Make a flextable with preferred formatting
Usage
make_ft(x, title = NULL, col_names = NULL, hlines = "all")
Arguments
x The data.frame to use for flextable
title The string to use as the table title
col_names A character vector of preferred column names for flextable. Length of character
vector must be equal to the number of columns. Defaults to NULL, in which
case the column names of x are used in the flextable.
hlines The row numbers to draw horizontal lines beneath. Defaults to "all", can be
"all", "none", or a numeric vector.
Value
A flextable object with the argued formatting
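A minimal illustrative sketch of calling make_ft (the data frame, title, and values below are
invented for illustration and are not part of the package documentation):
# Build a small data frame and format it as a flextable
stats_df <- data.frame(Metric = c("Total P&L", "Sharpe"), Value = c(125000, 1.4))
ft <- make_ft(stats_df,
              title = "Overall statistics",
              col_names = c("Metric", "Value"),
              hlines = "all")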
PortOpt Portfolio optimization class
Description
The PortOpt object is used to set up and solve a portfolio optimization problem.
Details
A PortOpt object is configured in the same way as a Simulation object, by supplying configu-
ration in a yaml file or list to the object constructor. Methods are available for adding constraints
and retrieving information about the optimization setup and results. See the package vignette for
information on configuration file setup.
Methods
Public methods:
• PortOpt$new()
• PortOpt$setVerbose()
• PortOpt$addConstraints()
• PortOpt$getConstraintMatrix()
• PortOpt$getConstraintMeta()
• PortOpt$solve()
• PortOpt$getResultData()
• PortOpt$getLoosenedConstraints()
• PortOpt$getMaxPosition()
• PortOpt$getMaxOrder()
• PortOpt$summaryDf()
• PortOpt$print()
• PortOpt$clone()
Method new(): Create a new PortOpt object.
Usage:
PortOpt$new(config, input_data)
Arguments:
config An object of class list or character. If the value passed is a character vector, it
should be of length 1 and specify the path to a yaml configuration file that contains the
object’s configuration info. If the value passed is of class list(), the list should contain the
object’s configuration info in list form (e.g., the return value of calling yaml.load_file on
the configuration file).
input_data A data.frame that contains all necessary input for the optimization.
If the top-level configuration item price_var is not set, prices will be expected in the
ref_price column of input_data.
Returns: A new PortOpt object.
Examples:
library(dplyr)
data(sample_secref)
data(sample_inputs)
data(sample_pricing)
# Construct optimization input for one day from sample data. The columns
# of the input data must match the input configuration.
optim_input <-
inner_join(sample_inputs, sample_pricing,
by = c("id", "date")) %>%
left_join(sample_secref, by = "id") %>%
filter(date %in% as.Date("2020-06-01")) %>%
mutate(ref_price = price_unadj,
shares_strategy_1 = 0)
opt <-
PortOpt$new(config = example_strategy_config(),
input_data = optim_input)
# The problem is not solved until the \code{solve} method is called
# explicitly.
opt$solve()
Method setVerbose(): Set the verbose flag to control the amount of informational output.
Usage:
PortOpt$setVerbose(verbose)
Arguments:
verbose Logical flag indicating whether to be verbose or not.
Returns: No return value, called for side effects.
Method addConstraints(): Add optimization constraints.
Usage:
PortOpt$addConstraints(constraint_matrix, dir, rhs, name)
Arguments:
constraint_matrix Matrix with one row per constraint and (S + 1) × N columns, where S is the
number of strategies and N is the number of stocks.
The variables in the optimization are
x_{1,1}, x_{2,1}, ..., x_{N,1},
x_{1,2}, x_{2,2}, ..., x_{N,2},
...,
x_{1,S}, x_{2,S}, ..., x_{N,S},
y_1, ..., y_N.
The first N × S variables are the individual strategy trades. Variable x_{i,s} represents the
signed trade for stock i in strategy s. The following N auxiliary variables y_1, ..., y_N
represent the absolute value of the net trade in each stock. So for a stock i, we have:
y_i = sum over s of |x_{i,s}|
dir Vector of class character of length nrow(constraint_matrix) that specifies the direction
of the constraints. All elements must be one of ">=", "==", or "<=".
rhs Vector of class numeric of length nrow(constraint_matrix) that specifies the bounds of
the constraints.
name Character vector of length 1 that specifies a name for the set of constraints that are being
created.
Returns: No return value, called for side effects.
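As an illustrative sketch of the column layout described above (not taken from the package
manual), the call below assumes a single strategy (S = 1) and the optim_input and opt objects
from the PortOpt$new example, and caps the absolute net trade in every stock at 10,000 in the
optimization's trade units; the constraint name is made up:
# Columns 1..N are the strategy trades x_{i,1}; columns N+1..2N are the
# auxiliary |net trade| variables y_i, so each row selects one y_i
N <- length(unique(optim_input$id))
cm <- cbind(matrix(0, nrow = N, ncol = N), diag(N))
opt$addConstraints(constraint_matrix = cm,
                   dir = rep("<=", N),
                   rhs = rep(10000, N),
                   name = "max_abs_net_trade")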
Method getConstraintMatrix(): Constraint matrix access.
Usage:
PortOpt$getConstraintMatrix()
Returns: The optimization’s constraint matrix.
Method getConstraintMeta(): Provide high-level constraint information.
Usage:
PortOpt$getConstraintMeta()
Returns: A data frame that contains constraint metadata, such as current constraint value and
whether a constraint is currently within bounds, for all single-row constraints. Net trade
constraints and constraints that involve net trade variables are explicitly excluded.
Method solve(): Solve the optimization. After running solve(), results can be retrieved using
getResultData().
Usage:
PortOpt$solve()
Returns: No return value, called for side effects.
Method getResultData(): Get optimization result.
Usage:
PortOpt$getResultData()
Returns: A data frame that contains the number of shares and the net market value of the trades
at the strategy and joint (net) level for each stock in the optimization’s input.
Method getLoosenedConstraints(): Provide information about any constraints that were
loosened in order to solve the optimization.
Usage:
PortOpt$getLoosenedConstraints()
Returns: Object of class list where keys are the names of the loosened constraints and values
are how much they were loosened toward current values. Values are expressed as (current
constraint value - loosened constraint value) / (current constraint value - violated constraint
value). A value of 0 means a constraint was loosened 100% and is not binding.
Method getMaxPosition(): Provide information about the maximum position size allowed for
long and short positions.
Usage:
PortOpt$getMaxPosition()
Returns: An object of class data.frame that contains the limits on size for long and short
positions for each strategy and security. The columns in the data frame are:
id Security identifier.
strategy Strategy name.
max_pos_lmv Maximum net market value for a long position.
max_pos_smv Maximum net market value for a short position.
Method getMaxOrder(): Provide information about the maximum order size allowed for each
security and strategy.
Usage:
PortOpt$getMaxOrder()
Returns: An object of class data.frame that contains the limit on order size for each strategy
and security. The columns in the data frame are:
id Security identifier.
strategy Strategy name.
max_order_gmv Maximum gross market value allowed for an order.
Method summaryDf(): Provide aggregate level optimization information if the problem has been
solved.
Usage:
PortOpt$summaryDf()
Returns: A data frame with one row per strategy, including the joint (net) level, and columns
for starting and ending market values and factor exposure values.
Method print(): Print summary information.
Usage:
PortOpt$print()
Returns: No return value, called for side effects.
Method clone(): The objects of this class are cloneable with this method.
Usage:
PortOpt$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
## ------------------------------------------------
## Method `PortOpt$new`
## ------------------------------------------------
library(dplyr)
data(sample_secref)
data(sample_inputs)
data(sample_pricing)
# Construct optimization input for one day from sample data. The columns
# of the input data must match the input configuration.
optim_input <-
inner_join(sample_inputs, sample_pricing,
by = c("id", "date")) %>%
left_join(sample_secref, by = "id") %>%
filter(date %in% as.Date("2020-06-01")) %>%
mutate(ref_price = price_unadj,
shares_strategy_1 = 0)
opt <-
PortOpt$new(config = example_strategy_config(),
input_data = optim_input)
# The problem is not solved until the \code{solve} method is called
# explicitly.
opt$solve()
sample_inputs Sample security inputs for examples and testing
Description
A dataset containing sample security input data for 492 securities and 65 weekdays, from 2020-06-
01 to 2020-08-31. Data items include average trading dollar volume, market cap, and normalized
size and value factors. The pricing data used to construct the dataset was downloaded using the
Tiingo Stock API and is used with permission. Fundamental data items were downloaded from
EDGAR.
Usage
data(sample_inputs)
Format
A data frame with 31980 rows and 7 variables:
date Input date. It is assumed that the input data for day X is known at the beginning of day X
(e.g., the data is as-of the previous day’s close).
id Security identifier.
rc_vol Average dollar trading volume for the security over the past 20 trading days.
market_cap Market capitalization, in dollars. The shares outstanding value used to calculate mar-
ket cap is the latest value available at the beginning of the month.
book_to_price Ratio of total equity to market cap. The stockholders’ equity value used to calculate
book to price is the latest value available at the beginning of the month.
size Market cap factor normalized to be N(0,1) for each day.
value Book to price factor normalized to be N(0,1) for each day.
Details
Data for most members of the S&P 500 are present. Some securities have been omitted due to data
processing complexities. For example, securities for companies with multiple share classes have
been omitted in the current version.
Values for shares outstanding and stockholders’ equity downloaded from EDGAR may be inaccu-
rate due to XBRL parsing issues.
Full code for reconstructing the dataset can be found in the pystrand repository.
sample_pricing Sample security pricing data for examples and testing
Description
A dataset containing sample security pricing data for 492 securities and 65 weekdays, from 2020-
06-01 to 2020-08-31. This data was downloaded using the Tiingo Stock API and is redistributed
with permission.
Usage
data(sample_pricing)
Format
A data frame with 31980 rows and 8 variables:
date Pricing date.
id Security identifier.
price_unadj The unadjusted price of the security.
prior_close_unadj The unadjusted prior closing price of the security.
dividend_unadj The dividend for the security on an unadjusted basis, if any.
distribution_unadj The distribution (e.g., spin-off) for the security on an unadjusted basis (note
that there is no spin-off information in this dataset, so all values are zero).
volume Trading volume for the security, in shares.
adjustment_ratio The adjustment ratio for the security. For example, AAPL has an adjustment
ratio of 0.25 to account for its 4:1 split on 2020-08-31.
Details
Full code for reconstructing the dataset can be found in the pystrand repository.
sample_secref Sample security reference data for examples and testing
Description
A dataset containing sample reference data for the securities of 492 large companies. All securities
in the dataset were in the S&P 500 for most or all of the period June-August 2020.
Usage
data(sample_secref)
Format
A data frame with 492 rows and 4 variables:
id Unique security identifier (the security’s ticker).
name Company name.
symbol Human-readable symbol for display and reporting purposes. In the case of this dataset it is
the same as the id variable.
sector GICS sector for the company according to the Wikipedia page List of S&P 500 companies.
show_best_worst Show Best/Worst Performers
Description
Build a flextable object showing a Simulation’s best and worst performers
Usage
show_best_worst(sim)
Arguments
sim A Simulation object to show the best and worst performers for
show_config Show Strategy Configuration
Description
Build a flextable object showing a Simulation’s configuration
Usage
show_config(sim)
Arguments
sim A Simulation object to show the configuration for
show_constraints Show Strategy Constraints
Description
Build a flextable object showing a Simulation’s risk constraints
Usage
show_constraints(sim)
Arguments
sim A Simulation object to show the configuration for
show_monthly_returns Show monthly returns
Description
Build a flextable object that shows a simulation’s return by month by formatting the output of
‘Simulation$overallReturnsByMonthDf‘.
Usage
show_monthly_returns(sim)
Arguments
sim A Simulation object with results to display
show_stats Show Overall Stats Table
Description
Build a flextable object showing a Simulation’s overall statistics
Usage
show_stats(sim)
Arguments
sim A Simulation object to show the statistics for
Simulation Simulation class
Description
Class for running a simulation and getting results.
Details
The Simulation class is used to set up and run a daily simulation over a particular period. Portfolio
construction parameters and other simulator settings can be configured in a yaml file that is passed
to the object’s constructor. See vignette("strand") for information on configuration file setup.
Methods
Public methods:
• Simulation$new()
• Simulation$setVerbose()
• Simulation$setShinyCallback()
• Simulation$getSecurityReference()
• Simulation$run()
• Simulation$getSimDates()
• Simulation$getSimSummary()
• Simulation$getSimDetail()
• Simulation$getPositionSummary()
• Simulation$getInputStats()
• Simulation$getLooseningInfo()
• Simulation$getOptimizationSummary()
• Simulation$getExposures()
• Simulation$getDelistings()
• Simulation$getSingleStrategySummaryDf()
• Simulation$plotPerformance()
• Simulation$plotContribution()
• Simulation$plotMarketValue()
• Simulation$plotCategoryExposure()
• Simulation$plotFactorExposure()
• Simulation$plotNumPositions()
• Simulation$plotTurnover()
• Simulation$plotUniverseSize()
• Simulation$plotNonInvestablePct()
• Simulation$overallStatsDf()
• Simulation$overallReturnsByMonthDf()
• Simulation$print()
• Simulation$writeFeather()
• Simulation$readFeather()
• Simulation$getConfig()
• Simulation$writeReport()
• Simulation$clone()
Method new(): Create a new Simulation object.
Usage:
Simulation$new(
config = NULL,
raw_input_data = NULL,
input_dates = NULL,
raw_pricing_data = NULL,
security_reference_data = NULL,
delisting_data = NULL
)
Arguments:
config An object of class list or character, or NULL. If the value passed is a character vector,
it should be of length 1 and specify the path to a yaml configuration file that contains the
object’s configuration info. If the value passed is of class list(), the list should contain the
object’s configuration info in list form (e.g., the return value of calling yaml.load_file
on the configuration file). If the value passed is NULL, then there will be no configuration
information associated with the simulation and it will not be possible to call the run method.
Setting config = NULL is useful when creating simulation objects into which results will be
loaded with readFeather.
raw_input_data A data frame that contains all of the input data (for all periods) for the simu-
lation. The data frame must have a date column. Data supplied using this parameter will be
used if the configuration option simulator/input_data/type is set to object. Defaults
to NULL.
input_dates Vector of class Date that specifies when input data should be updated. If data is
being supplied using the raw_input_data parameter, then input_dates defaults to the set of
dates present in this data.
raw_pricing_data A data frame that contains all of the pricing data (for all periods) for the sim-
ulation. The data frame must have a date column. Data supplied using this parameter will
only be used if the configuration option simulator/pricing_data/type is set to object.
Defaults to NULL.
security_reference_data A data frame that contains reference data on the securities in the
simulation, including any categories that are used in portfolio construction constraints. Note
that the simulator will throw an error if there are input data records for which there is no
entry in the security reference. Data supplied using this parameter will only be used if the
configuration option simulator/secref_data/type is set to object. Defaults to NULL.
delisting_data A data frame that contains delisting dates and associated returns. It must
contain three columns: id (character), delisting_date (Date), and delisting_return (numeric).
The date in the delisting_date column is the day on which a stock will be removed
from the simulation portfolio. It is typically the day after the last day of trading. The
delisting_return column reflects what, if any, P&L should be recorded on the delisting date.
A delisting_return of -1 means that the shares were deemed worthless. The delisting return
is multiplied by the starting net market value of the position to determine P&L for the
delisted position on the delisting date. Note that the portfolio optimization does not include
stocks that are being removed due to delisting. Data supplied using this parameter will only
be used if the configuration option simulator/delisting_data/type is set to object.
Defaults to NULL.
Returns: A new Simulation object.
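As an illustrative sketch of the delisting_data argument described above (identifier and date
invented), a minimal input could be built as follows, assuming the configuration option
simulator/delisting_data/type is set to object:
# One delisting: shares deemed worthless on the delisting date
delistings <- data.frame(id = "XYZ",
                         delisting_date = as.Date("2020-07-15"),
                         delisting_return = -1,
                         stringsAsFactors = FALSE)
# Pass it to the constructor via delisting_data = delistings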
Method setVerbose(): Set the verbose flag to control info output.
Usage:
Simulation$setVerbose(verbose)
Arguments:
verbose Logical flag indicating whether to be verbose or not.
Returns: No return value, called for side effects.
Method setShinyCallback(): Set the callback function for updating progress when running a
simulation in shiny.
Usage:
Simulation$setShinyCallback(callback)
Arguments:
callback A function suitable for updating a shiny Progress object. It must have two parame-
ters: value, indicating the progress amount, and detail, a text string for display
on the progress bar.
Returns: No return value, called for side effects.
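A hedged sketch of a compatible callback (the body here just prints; in a real shiny app it would
typically forward value and detail to a shiny Progress object):
progress_cb <- function(value, detail) {
  # Report the simulator's progress amount and status text
  message("progress: ", value, " - ", detail)
}
sim$setShinyCallback(progress_cb)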
Method getSecurityReference(): Get security reference information.
Usage:
Simulation$getSecurityReference()
Returns: An object of class data.frame that contains the security reference data for the simu-
lation.
Method run(): Run the simulation.
Usage:
Simulation$run()
Returns: No return value, called for side effects.
Method getSimDates(): Get a list of all dates for the simulation.
Usage:
Simulation$getSimDates()
Returns: A vector of class Date over which the simulation currently iterates: all weekdays
between the ’from’ and ’to’ dates in the simulation’s config.
Method getSimSummary(): Get summary information.
Usage:
Simulation$getSimSummary(strategy_name = NULL)
Arguments:
strategy_name Character vector of length 1 that specifies the strategy for which to get detail
data. If NULL data for all strategies is returned. Defaults to NULL.
Returns: An object of class data.frame that contains summary data for the simulation, by
period, at the joint and strategy level. The data frame contains the following columns:
strategy Strategy name, or ’joint’ for the aggregate strategy.
sim_date Date of the summary data.
market_fill_nmv Total net market value of fills that do not net down across strategies.
transfer_fill_nmv Total net market value of fills that represent "internal transfers", i.e., fills in
one strategy that net down with fills in another. Note that at the joint level this column by
definition is 0.
market_order_gmv Total gross market value of orders that do not net down across strategies.
market_fill_gmv Total gross market value of fills that do not net down across strategies.
transfer_fill_gmv Total gross market value of fills that represent "internal transfers", i.e., fills
in one strategy that net down with fills in another.
start_nmv Total net market value of all positions at the start of the period.
start_lmv Total net market value of all long positions at the start of the period.
start_smv Total net market value of all short positions at the start of the period.
end_nmv Total net market value of all positions at the end of the period.
end_gmv Total gross market value of all positions at the end of the period.
end_lmv Total net market value of all long positions at the end of the period.
end_smv Total net market value of all short positions at the end of the period.
end_num Total number of positions at the end of the period.
end_num_long Total number of long positions at the end of the period.
end_num_short Total number of short positions at the end of the period.
position_pnl The total difference between the end and start market value of positions.
trading_pnl The total difference between the market value of trades at the benchmark price
and at the end price. Note: currently assuming benchmark price is the closing price, so
trading P&L is zero.
gross_pnl Total P&L gross of costs, calculated as position_pnl + trading_pnl.
trade_costs Total trade costs (slippage).
financing_costs Total financing/borrow costs.
net_pnl Total P&L net of costs, calculated as gross_pnl - trade_costs - financing_costs.
fill_rate_pct Total fill rate across all market orders, calculated as 100 * market_fill_gmv / mar-
ket_order_gmv.
num_investable Number of investable securities (size of universe).
Method getSimDetail(): Get detail information.
Usage:
Simulation$getSimDetail(
sim_date = NULL,
strategy_name = NULL,
security_id = NULL,
columns = NULL
)
Arguments:
sim_date Vector of length 1 of class Date or character that specifies the period for which to get
detail information. If NULL then data from all periods is returned. Defaults to NULL.
strategy_name Character vector of length 1 that specifies the strategy for which to get detail
data. If NULL data for all strategies is returned. Defaults to NULL.
security_id Character vector of length 1 that specifies the security for which to get detail
data. If NULL data for all securities is returned. Defaults to NULL.
columns Vector of class character specifying the columns to return. This parameter can be
useful when dealing with very large detail datasets.
Returns: An object of class data.frame that contains security-level detail data for the simula-
tion for the desired strategies, securities, dates, and columns. Available columns include:
id Security identifier.
strategy Strategy name, or ’joint’ for the aggregate strategy.
sim_date Date to which the data pertains.
shares Shares at the start of the period.
int_shares Shares at the start of the period that net down with positions in other strategies.
ext_shares Shares at the start of the period that do not net down with positions in other strate-
gies.
order_shares Order, in shares.
market_order_shares Order that does not net down with orders in other strategies, in shares.
transfer_order_shares Order that nets down with orders in other strategies, in shares.
fill_shares Fill, in shares.
market_fill_shares Fill that does not net down with fills in other strategies, in shares.
transfer_fill_shares Fill that nets down with fills in other strategies, in shares.
end_shares Shares at the end of the period.
end_int_shares Shares at the end of the period that net down with positions in other strategies.
end_ext_shares Shares at the end of the period that do not net down with positions in other
strategies.
start_price Price for the security at the beginning of the period.
end_price Price for the security at the end of the period.
dividend Dividend for the security, if any, for the period.
distribution Distribution (e.g., spin-off) for the security, if any, for the period.
investable Logical indicating whether the security is part of the investable universe. The value
of the flag is set to TRUE if the security has not been delisted and satisfies the universe
criterion provided (if any) in the simulator/universe configuration option.
delisting Logical indicating whether a position in the security was removed due to delisting.
If delisting is set to TRUE, the gross_pnl and net_pnl columns will contain the P&L due
to delisting, if any. P&L due to delisting is calculated as the delisting return times the
start_nmv of the position.
position_pnl Position P&L, calculated as shares * (end_price + dividend + distribution - start_price)
trading_pnl The difference between the market value of trades at the benchmark price and at
the end price. Note: currently assuming benchmark price is the closing price, so trading
P&L is zero.
trade_costs Trade costs, calculated as a fixed percentage (set in the simulation configuration)
of the notional of the market trade (valued at the close).
financing_costs Financing cost for the position, calculated as a fixed percentage (set in the
simulation configuration) of the notional of the starting value of the portfolio’s external po-
sitions. External positions are positions held on the street and are recorded in the ext_shares
column.
gross_pnl Gross P&L, calculated as position_pnl + trading_pnl.
net_pnl Net P&L, calculated as gross_pnl - trade_costs - financing_costs.
market_order_nmv Net market value of the order that does not net down with orders in other
strategies.
market_order_gmv Gross market value of the order that does not net down with orders in other
strategies.
market_fill_nmv Net market value of the fill that does not net down with orders in other strate-
gies.
market_fill_gmv Gross market value of the fill that does not net down with orders in other
strategies.
transfer_fill_nmv Net market value of the fill that nets down with fills in other strategies.
transfer_fill_gmv Gross market value of the fill that nets down with fills in other strategies.
start_nmv Net market value of the position at the start of the period.
end_nmv Net market value of the position at the end of the period.
end_gmv Gross market value of the position at the end of the period.
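For example, an illustrative sketch of pulling a filtered slice of detail data, assuming the sim
object from the package examples and that detail data is present in the results:
detail <- sim$getSimDetail(sim_date = "2020-06-01",
                           strategy_name = "strategy_1",
                           columns = c("id", "end_shares", "end_nmv", "net_pnl"))
head(detail)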
Method getPositionSummary(): Get summary information by security. This method can be
used, for example, to calculate the biggest winners and losers over the course of the simulation.
Usage:
Simulation$getPositionSummary(strategy_name = NULL)
Arguments:
strategy_name Character vector of length 1 that specifies the strategy for which to get detail
data. If NULL data for all strategies is returned. Defaults to NULL.
Returns: An object of class data.frame that contains summary information aggregated by
security. The data frame contains the following columns:
id Security identifier.
strategy Strategy name, or ’joint’ for the aggregate strategy.
gross_pnl Gross P&L for the position over the entire simulation.
net_pnl Net P&L for the position over the entire simulation.
average_market_value Average net market value of the position over days in the simulation
where the position was not flat.
total_trading Total gross market value of trades for the security.
trade_costs Total cost of trades for the security over the entire simulation.
financing_costs Total cost of financing for the position over the entire simulation.
days_in_portfolio Total number of days there was a position in the security in the portfolio
over the entire simulation.
Method getInputStats(): Get input statistics.
Usage:
Simulation$getInputStats()
Returns: An object of class data.frame that contains statistics on select columns of input data.
Statistics are tracked for the columns listed in the configuration variable simulator/input_data/track_metadata.
The data frame contains the following columns:
period Period to which statistics pertain.
input_rows Total number of rows of input data, including rows carried forward from the pre-
vious period.
cf_rows Total number of rows carried forward from the previous period.
num_na_column Number of NA values in column. This measure appears for each element of
track_metadata.
cor_column Period-over-period correlation for column. This measure appears for each element
of track_metadata.
Method getLooseningInfo(): Get loosening information.
Usage:
Simulation$getLooseningInfo()
Returns: An object of class data.frame that contains, for each period, which constraints were
loosened in order to solve the portfolio optimization problem, if any. The data frame contains
the following columns:
date Date for which the constraint was loosened.
constraint_name Name of the constraint that was loosened.
pct_loosened Percentage by which the constraint was loosened, where 100 means loosened
fully (i.e., the constraint is effectively removed).
Method getOptimizationSummary(): Get optimization summary information.
Usage:
Simulation$getOptimizationSummary()
Returns: An object of class data.frame that contains optimization summary information, such
as starting and ending factor constraint values, at the strategy and joint level. The data frame
contains the following columns:
strategy Strategy name, or ’joint’ for the aggregate strategy.
sim_date Date to which the data pertains.
order_gmv Total gross market value of orders generated by the optimization.
start_smv Total net market value of short positions at the start of the optimization.
start_lmv Total net market value of long positions at the start of the optimization.
end_smv Total net market value of short positions at the end of the optimization.
end_lmv Total net market value of long positions at the end of the optimization.
start_factor Total net exposure to factor at the start of the optimization, for each factor con-
straint.
end_factor Total net exposure to factor at the end of the optimization, for each factor con-
straint.
Method getExposures(): Get end-of-period exposure information.
Usage:
Simulation$getExposures(type = "net")
Arguments:
type Vector of length 1 that may be one of "net", "long", "short", and "gross".
Returns: An object of class data.frame that contains end-of-period exposure information
for the simulation portfolio. The units of the exposures are portfolio weight relative to strat-
egy_capital (i.e., net market value of exposure divided by strategy capital). The data frame
contains the following columns:
strategy Strategy name, or ’joint’ for the aggregate strategy.
sim_date Date of the exposure data.
category_level Exposure to level within category, for all levels of all category constraints, at
the end of the period.
factor Exposure to factor, for all factor constraints, at the end of the period.
Method getDelistings(): Get information on positions removed due to delisting.
Usage:
Simulation$getDelistings()
Returns: An object of class data.frame that contains a row for each position that is removed
from the simulation portfolio due to a delisting. Each row contains the size of the position on
the day on which it was removed from the portfolio.
Method getSingleStrategySummaryDf(): Get summary information for a single strategy suit-
able for plotting input.
Usage:
Simulation$getSingleStrategySummaryDf(
strategy_name = "joint",
include_zero_row = TRUE
)
Arguments:
strategy_name Strategy for which to return summary data.
include_zero_row Logical flag indicating whether to prepend a row to the summary data with
starting values at zero. Defaults to TRUE.
Returns: A data frame that contains summary information for the desired strategy, as well as
columns for cumulative net and gross total return, calculated as pnl divided by ending gross
market value.
Method plotPerformance(): Draw a plot of cumulative gross and net return by date.
Usage:
Simulation$plotPerformance(strategy_name = "joint")
Arguments:
strategy_name Character vector of length 1 specifying the strategy for the plot. Defaults to
"joint".
Method plotContribution(): Draw a plot of contribution to net return on GMV for levels of
a specified category.
Usage:
Simulation$plotContribution(category_var, strategy_name = "joint")
Arguments:
category_var Plot performance contribution for the levels of category_var. category_var
must be present in the simulation’s security reference, and detail data must be present in the
object’s result data.
strategy_name Character vector of length 1 specifying the strategy for the plot. Defaults to
"joint".
Method plotMarketValue(): Draw a plot of total gross, long, short, and net market value by
date.
Usage:
Simulation$plotMarketValue(strategy_name = "joint")
Arguments:
strategy_name Character vector of length 1 specifying the strategy for the plot. Defaults to
"joint".
Method plotCategoryExposure(): Draw a plot of exposure to all levels in a category by date.
Usage:
Simulation$plotCategoryExposure(in_var, strategy_name = "joint")
Arguments:
in_var Category for which exposures are plotted. In order to plot exposures for category
in_var, we must have run the simulation with in_var in the config setting simulator/calculate_exposures/cate
strategy_name Character vector of length 1 specifying the strategy for the plot. Defaults to
"joint".
Method plotFactorExposure(): Draw a plot of exposure to factors by date.
Usage:
Simulation$plotFactorExposure(in_var, strategy_name = "joint")
Arguments:
in_var Factors for which exposures are plotted.
strategy_name Character vector of length 1 specifying the strategy for the plot. Defaults to
"joint".
Method plotNumPositions(): Draw a plot of number of long and short positions by date.
Usage:
Simulation$plotNumPositions(strategy_name = "joint")
Arguments:
strategy_name Character vector of length 1 specifying the strategy for the plot. Defaults to
"joint".
Method plotTurnover(): Draw a plot of portfolio turnover by date.
Usage:
Simulation$plotTurnover(strategy_name = "joint")
Arguments:
strategy_name Character vector of length 1 specifying the strategy for the plot. Defaults to
"joint".
Method plotUniverseSize(): Draw a plot of the universe size, or number of investable stocks,
over time.
Usage:
Simulation$plotUniverseSize(strategy_name = "joint")
Arguments:
strategy_name Character vector of length 1 specifying the strategy for the plot. Defaults to
joint.
Method plotNonInvestablePct(): Draw a plot of the percentage of portfolio GMV held in
non-investable stocks (e.g., stocks that do not satisfy universe criteria) for a given strategy. Note
that this plot requires detail data.
Usage:
Simulation$plotNonInvestablePct(strategy_name = "joint")
Arguments:
strategy_name Character vector of length 1 specifying the strategy for the plot. Defaults to
"joint".
Method overallStatsDf(): Calculate overall simulation summary statistics, such as total P&L,
Sharpe, average market values and counts, etc.
Usage:
Simulation$overallStatsDf()
Returns: A data frame that contains summary statistics, suitable for reporting.
Method overallReturnsByMonthDf(): Calculate return for each month and summary statistics
for each year, such as total return and annualized Sharpe. Return in data frame format suitable for
reporting.
Usage:
Simulation$overallReturnsByMonthDf()
Returns: The data frame contains one row for each calendar year in the simulation, and up to
seventeen columns: one column for year, one column for each calendar month, and columns
for the year’s total return, annualized return, annualized volatility, and annualized Sharpe. Total
return is the sum of daily net returns. Annualized return is the mean net return times 252. Annu-
alized volatility is the standard deviation of net return times the square root of 252. Annualized
Sharpe is the ratio of annualized return to annualized volatility. All returns are in percent.
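Stated as a small sketch (not package code, and with invented return values), the conventions
above amount to:
daily_ret <- c(0.12, -0.05, 0.08) # hypothetical daily net returns, in percent
ann_return <- mean(daily_ret) * 252
ann_vol <- sd(daily_ret) * sqrt(252)
ann_sharpe <- ann_return / ann_vol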
Method print(): Print overall simulation statistics.
Usage:
Simulation$print()
Method writeFeather(): Write the data in the object to feather files.
Usage:
Simulation$writeFeather(out_loc)
Arguments:
out_loc Directory in which output files should be created.
Returns: No return value, called for side effects.
Method readFeather(): Load files created with writeFeather into the object. Note that
because detail data is not re-split by period, it will not be possible to use the sim_date parameter
when calling getSimDetail on the populated object.
Usage:
Simulation$readFeather(in_loc)
Arguments:
in_loc Directory that contains files to be loaded.
Returns: No return value, called for side effects.
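A minimal sketch of a write/read round trip, assuming a previously run sim object and a writable
output directory (the directory name is made up):
sim$writeFeather("sim_results")
# config is not needed when results will be loaded from disk
sim2 <- Simulation$new(config = NULL)
sim2$readFeather("sim_results")
sim2$overallStatsDf()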
Method getConfig(): Get the object’s configuration information.
Usage:
Simulation$getConfig()
Returns: Object of class list that contains the simulation’s configuration information.
Method writeReport(): Write an html document of simulation results.
Usage:
Simulation$writeReport(
out_dir,
out_file,
out_fmt = "html",
contrib_vars = NULL
)
Arguments:
out_dir Directory in which output files should be created
out_file File name for output
out_fmt Format in which output files should be created. The default is html and that is currently
the only option.
contrib_vars Security reference variables for which to plot return contribution.
res The object of class ’Simulation’ which we want to write the report about.
Method clone(): The objects of this class are cloneable with this method.
Usage:
Simulation$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone. |
ex_datacube | hex | Erlang |
ExDatacube
===
Communication wrapper for the DataCube API.
The DataCube API has three major sections:
* Vehicles ([`ExDatacube.Veiculos`](ExDatacube.Veiculos.html));
* Registrations ([`ExDatacube.Cadastros`](ExDatacube.Cadastros.html));
* CNH, driver's licenses ([`ExDatacube.CNH`](ExDatacube.CNH.html)).
Each of these sections is implemented independently, so its configuration also needs to be provided independently.
Shared options
---
All functions that communicate with the API share the options below. Besides passing
the options to the functions, they can also be defined globally through the application
configuration:
```
config :ex_datacube, auth_token: "token"
config :ex_datacube, ExDatacube.Veiculos,
  auth_token: "token",
  adaptador: ExDatacube.Veiculos.Adaptores.Default
```
* `:auth_token` — API authentication token, which can be provided in three ways, with the following precedence:
+ in the options of any call;
+ global configuration in the API context (`config :ex_datacube, API, ...`);
+ global configuration (`config :ex_datacube, ...`).
* `:adaptador` — Adapter to be used for the calls. By default, the adapter used is `ModuloAPI.Adaptores.Default`, which communicates with the production API.
The test adapter `ModuloAPI.Adaptores.Stub` is also available
and can be used with libraries such as Mox for testing.
* `:receive_timeout` — request timeout. Default: 1 minute.
Summary
===
[Types](#types)
---
[shared_opts()](#t:shared_opts/0)
[Vehicles API](#api-veículos)
---
[consulta_nacional_agregados(placa, opts \\ [])](#consulta_nacional_agregados/2)
Returns the result of the vehicle lookup from the aggregates endpoint.
[consulta_nacional_completa(placa, opts \\ [])](#consulta_nacional_completa/2)
Returns the result of the full vehicle lookup.
[consulta_nacional_simples_v2(placa, opts \\ [])](#consulta_nacional_simples_v2/2)
Returns the result of the simplified vehicle lookup, v2.
This query includes renavam, chassis, owner, and vehicle registration data.
[consulta_nacional_simples_v3(placa, opts \\ [])](#consulta_nacional_simples_v3/2)
Returns the result of the simplified vehicle lookup, v3.
This query includes renavam, chassis, and vehicle registration data.
[CNH API](#api-cnh)
---
[consulta_nacional_cnh(cpf, opts \\ [])](#consulta_nacional_cnh/2)
Returns a driver's CNH from the national database.
[Registrations API](#api-cadastros)
---
[consulta_dados_cnpj(cnpj, opts \\ [])](#consulta_dados_cnpj/2)
Returns data for the company identified by `cnpj`.
Types
===
Vehicles API
===
CNH API
===
Registrations API
===
ExDatacube.API
===
Module for communicating with the Datacube API.
Summary
===
[Types](#types)
---
[auth_token()](#t:auth_token/0)
If an API authentication token is provided in the options, it is included in the request body when no token is present in the supplied parameter map.
[decode()](#t:decode/0)
Decoding function for API responses. Default [`Jason.decode!/1`](https://hexdocs.pm/jason/1.3.0/Jason.html#decode!/1)
[encode()](#t:encode/0)
Encoding function for the parameters as x-www-form-urlencoded. Default [`URI.encode_query/1`](https://hexdocs.pm/elixir/URI.html#encode_query/1)
[error()](#t:error/0)
[network_error_type()](#t:network_error_type/0)
Errors arising from communication with the server.
[opts()](#t:opts/0)
[server_error_type()](#t:server_error_type/0)
Errors originating from the API server (translation of HTTP status codes)
[Functions](#functions)
---
[base_url()](#base_url/0)
Base URL of the API.
[post(path, params, opts \\ [])](#post/3)
Makes a POST call to the API endpoint `path`. Converts `params` to `x-www-form-urlencoded`.
[url(path)](#url/1)
Concatenates `path` with the API base URL.
Types
===
Functions
===
ExDatacube.API.Resposta
===
Standard response returned by the consultasdeveiculos API
Summary
===
[Types](#types)
---
[error_message()](#t:error_message/0)
API error message when status is `false`
[status()](#t:status/0)
Indicates the status of the request. If `status` is `false`, there was an error with the request; in that case, details can be accessed in the `msg` variable.
[t()](#t:t/0)
[Functions](#functions)
---
[changeset(struct, params)](#changeset/2)
[new(params)](#new/1)
Types
===
Functions
===
ExDatacube.CNH behaviour
===
Defines a `behaviour` with the CNH API calls.
TODO: Implement adapters.
Summary
===
[Types](#types)
---
[cpf()](#t:cpf/0)
Driver's CPF
[Callbacks](#callbacks)
---
[consulta_nacional_cnh(cpf, shared_opts)](#c:consulta_nacional_cnh/2)
Returns a driver's CNH from the national database.
Types
===
Callbacks
===
ExDatacube.Cadastros behaviour
===
Defines a `behaviour` with the Registrations (Cadastros) API calls.
TODO: Implement adapters.
Summary
===
[Types](#types)
---
[cnpj()](#t:cnpj/0)
Company CNPJ
[Callbacks](#callbacks)
---
[consulta_dados_cnpj(cnpj, shared_opts)](#c:consulta_dados_cnpj/2)
Returns data for the company identified by `cnpj`.
Types
===
Callbacks
===
ExDatacube.Veiculos behaviour
===
Defines a `behaviour` with the vehicle information API calls.
Summary
===
[Types](#types)
---
[adaptador()](#t:adaptador/0)
Adapter to be used to communicate with the API.
[placa()](#t:placa/0)
License plate of the vehicle to look up
[Callbacks](#callbacks)
---
[consulta_nacional_agregados(placa, shared_opts)](#c:consulta_nacional_agregados/2)
Returns the result of the simplified (no owner) and cheaper vehicle lookup; note that the Renavam information may not be returned by this query.
[consulta_nacional_completa(placa, shared_opts)](#c:consulta_nacional_completa/2)
Returns the result of the full vehicle lookup.
[consulta_nacional_simples_v2(placa, shared_opts)](#c:consulta_nacional_simples_v2/2)
Returns the result of the simplified vehicle lookup
[consulta_nacional_simples_v3(placa, shared_opts)](#c:consulta_nacional_simples_v3/2)
Returns the result of the simplified vehicle lookup, V3 (no owner information)
Types
===
Callbacks
===
ExDatacube.Veiculos.Adaptores.Default
===
Implements the vehicles API behaviour, communicating with the production API.
ExDatacube.Veiculos.Adaptores.Stub
===
Implements a stub for the vehicles API behaviour, returning dummy data.
ExDatacube.Veiculos.Veiculo
===
Vehicle type returned by queries
Summary
===
[Types](#types)
---
[ano()](#t:ano/0)
Year as a string.
[cnpj()](#t:cnpj/0)
[cpf()](#t:cpf/0)
[restricao()](#t:restricao/0)
Description of a possible restriction applied to the vehicle
[t()](#t:t/0)
[Functions](#functions)
---
[new(params)](#new/1)
Creates a new vehicle from the given `params`.
Types
===
Functions
===
ExDatacube.Veiculos.Veiculo.ComunicadoVenda
===
Represents a sale notice (comunicado de venda).
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[changeset(struct, params)](#changeset/2)
Types
===
Functions
===
ExDatacube.Veiculos.Veiculo.FipePossivel
===
Represents the `FipePossivel` type.
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[changeset(struct, params)](#changeset/2)
Types
===
Functions
===
ExDatacube.Veiculos.Veiculo.Gravame
===
Represents a lien (gravame).
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[changeset(struct, params)](#changeset/2)
Types
===
Functions
===
ExDatacube.Veiculos.Veiculo.Renavam
===
Represents a renavam. Includes validation and cleaning functions, in addition to implementing the [`Ecto.Type`](https://hexdocs.pm/ecto/3.8.3/Ecto.Type.html) behaviour.
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[clean(renavam)](#clean/1)
Cleans a renavam, keeping only digits.
[format(renavam)](#format/1)
Cleans and pads with leading zeros, always returning 11 characters.
Types
===
Functions
===
ExDatacube.Veiculos.Veiculo.UF
===
Represents a Brazilian state (UF). Includes validation and cleaning functions, in addition to implementing the [`Ecto.Type`](https://hexdocs.pm/ecto/3.8.3/Ecto.Type.html) behaviour.
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[clean(arg1)](#clean/1)
[map_state_to_initials(arg1)](#map_state_to_initials/1)
Types
===
Functions
=== |
after-effects-expressions-guide | readthedoc | SQL | Use the After Effects expression elements along with standard JavaScript elements to write your expressions. You can use the Expression Language menu at any time to insert methods and attributes into an expression, and you can use the pick whip at any time to insert properties.
If an argument description contains an equal sign (=) and a value (such as t=time or width=.2), then the argument uses the included default value if you don’t specify a different value.
Some argument descriptions include a number in square brackets—this number indicates the dimension of the expected property or Array.
Some return-value descriptions include a number in square brackets—this number specifies the dimension of the returned property or Array. If a specific dimension is not included, the dimension of the returned Array depends on the dimension of the input.
The W3Schools JavaScript reference website provides information for the standard JavaScript language, including pages for the JavaScript Math and String objects.
What’s new and changed for expressions?
## After Effects 17.7 (Feb 2021)¶
Fixed: An issue where expression edits made in the Graph Editor were not applied consistently.
## After Effects 17.6 (Jan 2021)¶
Fixed: An issue that could cause an expression to be replaced instead of appending when using expression or property pick-whip.
## After Effects 17.1.2 (Jul 2020)¶
Fixed: An issue where Markers could not be referenced by name in the JavaScript Expressions Engine.
## After Effects 17.1 (May 19 2020)¶
Fixed: An issue with Expression editor to auto-complete ‘timeToFrames’ function.
## After Effects 17.0.5 (Mar 2020)¶
Fixed: An issue where the Link Focus to Layer command produced an expression that did not work with the JavaScript expression engine.
## After Effects 17.0.2 (Jan 2020)¶
Fixed: An issue where wrong line numbers would be displayed related to errors in JavaScript expressions.
## After Effects 17.0 (Jan 24 2020)¶
Implemented Dropdown Menu Expression Control
*
Expression Editor improvements:
You can now use the new scrolling functionality to prevent the scroll from adjusting incorrectly when the box is resized by typing the return character.
*
Prevent numbers from matching in an autocomplete list if the variable begins with a number. Smarter autocomplete prevents from overriding closing brackets and quotes.
*
You can now scale font size for Hi-DPI displays.
*
Graph editor now commits changes in preferences for all the open graph editors.
*
If you enable syntax highlight, the folding icon buttons in the UI now respect the default and background color, or the line numbers color and background color.
*
Expression performance improvements:
After Effects now attempts to detect an expression that does not change throughout a comp and calculates the expression only once. Load your favorite expression-filled comp and experience the improved performance.
*
Any expression using posterizeTime() now calculates only once for the entire comp, not on every frame.
*
Added: Extended expressions access to Text properties.
Added: Text.Font…
*
Added: Source Text
*
Added: Text Style
## After Effects 16.1.3 (Sep 2019)¶
Fixed: Indentation of curly braces on new lines could be incorrect in the Expressions editor.
## After Effects 16.1.2 (June 2019)¶
Fixed: After Effects crashes when you close a project that has an expression containing an error.
*
Fixed: Expression error messages could be truncated in the error ribbon if there were multiple lines of error text to show.
*
Fixed: The property this_Layer had stopped working when using the Legacy ExtendScript expression engine.
*
Fixed: Crash when switching the project level expression engine from JavaScript to Legacy ExtendScript.
*
Fixed: Crash with expressions that contain calls to Date.toLocaleString().
*
Fixed: Crash when editing expressions in the Graph Editor expression field when AutoComplete is disabled.
## After Effects 16.1 (CC 19) (Apr 2 2019)¶
Implemented new expression editor
*
Fixed: The JavaScript expressions engine does not generate the same random number results as the Legacy ExtendScript engine.
*
Fixed: When an expression references the name of a layer in a string or in a Source Text property, the name of the layer is not returned. Instead, it returns [Object].
*
Fixed: The sampleImage() expression method returns the wrong value if the post-expression value of the property is read by a ScriptUI panel.
*
Fixed: Applying the createPath() expression via the Expression Language menu auto-fills the parameter as deprecated snake case (is_Closed) instead of camel case (isClosed).
*
Fixed: Renaming an effect that is referenced by an expression causes the expression to incorrectly update references to that effect properties when those properties have the same name as the effect.
*
Fixed: The Link Focus Distance to Layer, Link Focus Distance to Point of Interest, Create Stereo 3D Rig, and Create Orbit Null commands create expressions that are incompatible with the JavaScript expression engine.
*
Fixed: Specific complex, multi-composition expressions cause fast flickering of the expression error warning banner and icons. Note that to fix this, there is a small slowdown in expression evaluation speed for these expressions.
## After Effects 16.0 (CC 19) (Oct 15 2018)¶
Implemented new Javascript engine
*
Added: hexToRgb
*
Added: marker protectedRegion property
## After Effects 15.1.2 (Jul 16 2018)¶
Fixed: If your project contains multiple master properties by the same name, the expressions that refer to the master properties evaluate incorrectly.
*
Fixed: The Property Link pick whip incorrectly writes a self-referential expression for the other selected properties.
## After Effects 15.1 (Apr 3 2018)¶
Added: Property Link pick whip
*
Added: Support for custom expression function libraries
*
Added: Expression access to Project
Added: Project.fullPath
*
Added: Project.bitsPerChannel
*
Added: Project.linearBlending
## After Effects 15.0 (CC) (Oct 18 2017)¶
Added: Expression access to data in JSON files
Added: footage sourceText attribute
*
Added: footage sourceData attribute
*
Added: footage dataValue method
*
Added: footage dataKeyCount method
*
Added: footage dataKeyTimes method
*
Added: footage dataKeyValues method
*
Added: Expression access to path points on masks, Bezier shapes, and brush strokes
Added: path points method
*
Added: path inTangents method
*
Added: path outTangents method
*
Added: path isClosed method
*
Added: path pointOnPath method
*
Added: path tangentOnPath method
*
Added: path normalOnPath method
*
Added: path createPath method
## After Effects 13.6 (CC 2015) (Nov 30 2015)¶
Improved performance of expressions on time-remapped layers. This also reduces rendering time for audio on time-remapped layers with expressions.
*
Fixed: Changing the source text of a text layer no longer causes expressions to fail when the name of the text layer was referenced.
*
Fixed: After Effects no longer crashes when the graph editor is displayed while processing a time remapping expression.
## After Effects 13.5 (CC 2015) (Jun 15 2015)¶
More efficient expression evaluation
*
Added: Expression warning banner
## After Effects 13.2 (CC 2014.2) (Dec 16 2014)¶
Added: sourceRectAtTime() method
*
Fixed: sampleImage() in an expression no longer disables multiprocessing
## After Effects 12.1 (CC) (Sep 8 2013)¶
Added iris and highlight properties for camera layers to the expression language menu
*
Added: Camera.irisShape
*
Added: Camera.irisRotation
*
Added: Camera.irisRoundness
*
Added: Camera.irisAspectRatio
*
Added: Camera.irisDiffractionFringe
*
Added: Camera.highlightGain
*
Added: Camera.highlightThreshold
*
Added: Camera.highlightSaturation
## After Effects 10.5 (CS5.5) (Apr 11 2011)¶
Added: Footage.ntscDropFrame
*
Added: ntscDropFrame argument to timeToCurrentFormat()
*
Added: Layer.sourceTime()
## After Effects 5.5 (Jan 7 2002)¶
Added: Looping via expressions
*
Added: Expression controllers
## After Effects 5.0 (Apr 2001)¶
Expressions first added
The user ‘Beaver’ posted 5 Expressions that will change your life on the Mograph forums.
<NAME> provides example expressions and tutorials for learning how to work with expressions on his MotionScript website. For example, Dan provides an excellent page about collision detection.
<NAME> provides a tutorial and example project on their site that show how to use expressions to make one layer repel others in a natural-seeming manner.
The AE Enhancers forum provides many examples and much useful information about expressions, as well as scripts and animation presets. In this post on the AE Enhancers forum, <NAME> provides a tutorial and example project that show how to use expressions to animate several layers in a swarm.
<NAME> provides an example on Rick’s site that demonstrates rolling a square object along a floor so that the sides stay in contact with the floor plane.
<NAME> provides a video tutorial on the Creative COW website that demonstrates how to use expressions and parenting to relate the rotation of a set of wheels to the horizontal movement of a vehicle.
<NAME> provides an example project on chriszwar.com for automatically arranging still images or videos into a grid (like a video wall). You can easily adjust position and spacing with sliders that are connected to a system of expressions. There are three compositions in the project—one for stills, one for videos, and one to create an auto-storyboard in which a video is sampled at user-defined intervals and aligned into a grid.
JJ Gifford’s website provides several example projects that demonstrate how to use expressions.
Maltaannon (<NAME>, Jr.) provides a video tutorial on maltaanon.com that shows how to use expressions to create a volume meter using the results of the Convert Audio To Keyframes command.
Vector Math functions are global methods that perform operations on arrays, treating them as mathematical vectors. Unlike built-in JavaScript methods, such as `Math.sin` , these methods are not used with the Math prefix. Unless otherwise specified, Vector Math methods are lenient about dimensions and return a value that is the dimension of the largest input Array object, filling in missing elements with zeros. For example, the expression
```
add([10, 20], [1, 2, 3])
```
returns `[11, 22, 3]` .
Note
JJ Gifford’s website provides explanations and examples that show how to use simple geometry and trigonometry with expressions.
## add(
Adds two vectors.
## sub(
Subtracts two vectors.
## mul(
Multiplies every element of the vector by the amount.
## div(
Divides every element of the vector by the amount.
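For example, following the definitions above (the values are illustrative only), the expression
```
mul([10, 20], 2)
```
returns `[20, 40]` , and
```
div([10, 20], 2)
```
returns `[5, 10]` .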
## clamp(
`value` , `limit1` , `limit2` )¶
Description
The value of each component of `value` is constrained to fall between the values of the corresponding values of `limit1` and `limit2` .
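For example (illustrative values only), the expression
```
clamp([5, 50, 500], 10, 100)
```
returns `[10, 50, 100]` , because each component is constrained to the `10–100` range.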
Parameters
## dot(
Returns the dot (inner) product of the vector arguments.
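For example (illustrative values only), the expression
```
dot([1, 2, 3], [4, 5, 6])
```
returns `32` .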
## cross(
`vec1` , `vec2` )¶
Description
Returns the vector cross product of `vec1` and `vec2` . Refer to a math reference or JavaScript guide for more information.
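For example (illustrative values only), the expression
```
cross([1, 0, 0], [0, 1, 0])
```
returns `[0, 0, 1]` .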
Parameters
Array (2- or 3-dimensional)
## normalize(
`vec` )¶
Description
Normalizes the vector so that its length is `1.0` . Using the normalize method is a short way of performing the operation
```
div(vec, length(vec))
```
.
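For example (illustrative values only), the expression
```
normalize([3, 4])
```
returns `[0.6, 0.8]` .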
Parameters
## length(
`vec` )¶
Description
Returns the length of vector `vec` .
Parameters
## length(
`point1` , `point2` )¶
Description
Returns the distance between two points. The `point2` argument is optional. For example,
```
length(point1, point2)
```
is the same as
```
length(sub(point1, point2))
```
.
For example, add this expression to the Focus Distance property of a camera to lock the focal plane to the camera’s point of interest so that the point of interest is in focus:
```
length(position, pointOfInterest)
```
## lookAt(
`fromPoint` , `atPoint` )¶
Description
The argument `fromPoint` is the location in world space of the layer you want to orient. The argument `atPoint` is the point in world space you want to point the layer at. The return value can be used as an expression for the Orientation property, making the z-axis of the layer point at atPoint.
This method is especially useful for cameras and lights. If you use this expression on a camera, turn off auto-orientation.
For example, this expression on the Orientation property of a spot light makes the light point at the anchor point of layer number 1 in the same composition:
```
lookAt(position, thisComp.layer(1).position)
```
The wiggle method—which is used to randomly vary a property value—is in the Property attributes and methods category. See Property attributes and methods.
## seedRandom(
`offset` , `timeless=false` )¶
Description
The random and gaussRandom methods use a seed value that controls the sequence of numbers. By default, the seed is computed as a function of a unique layer identifier, the property within the layer, the current time, and an offset value of `0` . Call seedRandom to set the offset to something other than 0 to create a different random sequence. Use true for the timeless argument to not use the current time as input to the random seed. Using true for the timeless argument allows you to generate a random number that doesn’t vary depending on the time of evaluation. The offset value, but not the timeless value, is also used to control the initial value of the wiggle function.
For example, this expression on the Opacity property sets the Opacity value to a random value that does not vary with time:
```
seedRandom(123456, true);
random()*100
```
The multiplication by `100` in this example converts the value in the range `0–1` returned by the random method into a `number` in the range `0–100` ; this range is more typically useful for the Opacity property, which has values from `0%` to `100%` .
Parameters
None
## random()¶
Description
Returns a random number in the range `0–1` .
Note
In After Effects CC and CS6, the behavior of random() is changed to be more random when layer IDs are close together. The wiggle() expression is not affected.
## random(
`maxValOrArray` )¶
Description
If `maxValOrArray` is a `Number` , this method returns a number in the range from `0` to `maxValOrArray` . If `maxValOrArray` is an `Array` , this method returns an Array with the same dimension as `maxValOrArray` , with each component ranging from `0` to the corresponding component of `maxValOrArray` .
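For example (illustrative values only), the expression
```
random(100)
```
returns a number in the range `0–100` , and
```
random([100, 200])
```
returns an `Array` whose first value is in the range `0–100` and whose second value is in the range `0–200` .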
Parameters
## random(
`minValOrArray` , `maxValOrArray` )¶
Description
If `minValOrArray` and `maxValOrArray` are `Numbers` , this method returns a number in the range from `minValOrArray` to `maxValOrArray` . If the arguments are `Arrays` , this method returns an `Array` with the same dimension as the argument with the greater dimension, with each component in the range from the corresponding component of `minValOrArray` to the corresponding component of `maxValOrArray` . For example, the expression
```
random([100, 200], [300, 400])
```
returns an `Array` whose first value is in the range `100–300` and whose second value is in the range `200–400` . If the dimensions of the two input Arrays don’t match, higher-dimension values of the shorter Array are filled out with zeros.
Parameters
## gaussRandom()¶
Description
The results have a Gaussian (bell-shaped) distribution. Approximately `90%` of the results are in the range `0–1` , and the remaining `10%` are outside this range.
Type
## gaussRandom(
`maxValOrArray` )¶
Description
When `maxValOrArray` is a `Number` , this method returns a random number. Approximately `90%` of the results are in the `0` to `maxValOrArray` range, and the remaining `10%` are outside this range. When `maxValOrArray` is an `Array` , this method returns an Array of random values, with the same dimension as `maxValOrArray` . `90%` of the values are in the range from `0` to `maxValOrArray` , and the remaining `10%` are outside this range.
The results have a Gaussian (bell-shaped) distribution.
## gaussRandom(
`minValOrArray` , `maxValOrArray` )¶
Description
If `minValOrArray` and `maxValOrArray` are `Numbers` , this method returns a random number. Approximately `90%` of the results are in the range from `minValOrArray` to `maxValOrArray` , and the remaining `10%` are outside this range. If the arguments are `Arrays` , this method returns an `Array` of random numbers with the same dimension as the argument with the greater dimension. For each component, approximately
`90%` of the results are in the range from the corresponding component of `minValOrArray` to the corresponding component of `maxValOrArray` , and the remaining `10%` are outside this range.
The results have a Gaussian (bell-shaped) distribution.
## noise(
`valOrArray` )¶
Description
Returns a number in the range from `-1` to `1` . The noise is not actually random; it is based on Perlin noise, which means that the return values for two input values that are near one another tend to be near one another. This type of noise is useful when you want a sequence of seemingly random numbers that don’t vary wildly from one to the other—as is usually the case when animating any apparently random natural motion.
Example:
```
rotation + 360*noise(time)
```
## Project.fullPath¶
The platform-specific absolute file path, including the project file name. If the project has not been saved, it returns an empty string.
Example:
`thisProject.fullPath`
Type
## Project.bitsPerChannel¶
The color depth of the project in bits per channel (bpc), as set in Project Settings > Color Management. One of 8, 16, or 32. Equivalent to the scripting project attribute app.project.bitsPerChannel.
```
thisProject.bitsPerChannel
```
## Project.linearBlending¶
The state of the Blend Colors Using 1.0 Gamma option in Project Settings > Color Management. Equivalent to the scripting project attribute app.project.linearBlending.
```
thisProject.linearBlending
```
## Comp.layer(
`index` )¶
Description
Retrieves the layer by number (order in the Timeline panel).
Example:
`thisComp.layer(3)`
Parameters
## Comp.layer(
`name` )¶
Description
Retrieves the layer by name. Names are matched according to layer name, or source name if there is no layer name. If duplicate names exist, After Effects uses the first (topmost) one in the Timeline panel.
```
thisComp.layer("Solid 1")
```
## Comp.layer(
`otherLayer` , `relIndex` )¶
Description
Retrieves the layer that is relIndex layers above or below otherLayer. For example,
```
thisComp.layer(thisLayer, 1).active
```
returns true if the next layer down in the Timeline panel is active.
Parameters
## Comp.layerByComment(
`comment` )¶
Description
Retrieves a layer by matching the comment parameter to the value in the layer’s Comment column. The matches are simple text matches. They will match partial words, and are case sensitive. Matching does not appear to use regular expressions or wildcards. If duplicate comments exist, After Effects uses the first (topmost) one in the Timeline panel.
```
thisComp.layerByComment("Control") //note this will match a layer with a comment "Controller" or "Motion Control"
```
## Comp.marker¶
You cannot access a composition marker by marker number. If you have a project created in a previous version of After Effects that uses composition marker numbers in expressions, you must change those calls to use marker.key(name) instead. Because the default name of a composition marker is a number, converting the reference to use the name is often just a matter of surrounding the number with quotation marks.
MarkerProperty
## Comp.marker.key(
`index` )¶
Description
Returns the MarkerKey object of the marker with the specified index. The index refers to the order of the marker in composition time, not to the name of the marker.
For example, this expression returns the time of the first composition marker:
```
thisComp.marker.key(1).time
```
## Comp.marker.key(
`name` )¶
Description
Returns the MarkerKey object of the marker with the specified name. The name value is the name of the marker, as typed in the comment field in the marker dialog box, for example, marker.key(“1”). For a composition marker, the default name is a number. If more than one marker in the composition has the same name, this method returns the marker that occurs first in time (in composition time). The value for a marker key is a String, not a Number.
For example, this expression returns the time of the composition marker with the name “0”:
```
thisComp.marker.key("0").time
```
## Comp.marker.nearestKey(
Returns the marker that is nearest in time to t.
For example, this expression returns the time of the composition marker nearest to the time of 1 second:
```
thisComp.marker.nearestKey(1).time
```
This expression returns the time of the composition marker nearest to the current time:
```
thisComp.marker.nearestKey(time).time
```
## Comp.marker.numKeys¶
Returns the total number of composition markers in the composition.
## Comp.numLayers¶
Returns the number of layers in the composition.
## Comp.activeCamera¶
Returns the Camera object for the camera through which the composition is rendered at the current frame. This camera is not necessarily the camera through which you are looking in the Composition panel.
Camera
## Comp.width¶
Returns the composition width, in pixels. Apply the following expression to the Position property of a layer to center the layer in the composition frame: [thisComp.width/2, thisComp.height/2]
## Comp.height¶
Returns the composition height, in pixels.
## Comp.duration¶
Returns the composition duration, in seconds.
## Comp.ntscDropFrame¶
Returns true if the timecode is in drop-frame format.
Available in After Effects CS5.5 and later.
## Comp.displayStartTime¶
Returns the composition start time, in seconds.
## Comp.frameDuration¶
Returns the duration of a frame, in seconds.
## Comp.shutterAngle¶
Returns the shutter-angle value of the composition, in degrees.
## Comp.shutterPhase¶
Returns the shutter phase of the composition, in degrees.
## Comp.bgColor¶
Returns the background color of the composition.
## Comp.pixelAspect¶
Returns the pixel aspect ratio of the composition.
## Comp.name¶
Returns the name of the composition.
Description
To use a footage item from the Project panel as an object in an expression, use the global footage method, as in `footage("file_name")` . You can also access a footage object using the source attribute on a layer whose source is a footage item.
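For example, assuming the Project panel contains a footage item named background.mov (a hypothetical name used only for illustration), this expression returns its duration in seconds:
```
footage("background.mov").duration
```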
## Footage.width¶
## Footage.height¶
## Footage.duration¶
Returns the duration of the footage item, in seconds.
## Footage.frameDuration¶
Returns the duration of a frame in the footage item, in seconds.
## Footage.ntscDropFrame¶
Returns true if the timecode is in drop-frame format. (After Effects CS5.5 and later.)
## Footage.pixelAspect¶
Returns the pixel aspect ratio of the footage item.
## Footage.name¶
Returns the name of the footage item as shown in the Project panel.
## Footage.sourceText¶
Returns the contents of a JSON file as a string.
The `eval()` method can be used to convert the string to an array of sourceData objects, identical to the results of the Footage.sourceData attribute, from which the individual data streams can be referenced as hierarchal attributes of the data.
For example:
```
var myData = eval(footage("sample.json").sourceText);
myData.sampleValue;
```
String, the contents of the JSON file; read-only.
## Footage.sourceData¶
Returns the data of a JSON file as an array of sourceData objects.
The structure of the JSON file will determine the size and complexity of the array.
Individual data streams can be referenced as hierarchal attributes of the data.
For example, given a data stream named “Color”, the following will return the value of “Color” from the first data object:
```
footage("sample.json").sourceData[0].Color
```
Typical use is to assign a JSON file’s `sourceData` to a variable, and then reference the desired data stream. For example:
```
var myData = footage("sample.json").sourceData;
myData[0].Color;
```
An array of sourceData objects; read-only.
## Footage.dataValue(
Returns the value of a specified static or dynamic data stream in an MGJSON file.
For example, to return data of the first child:
```
footage("sample.mgjson").dataValue([0])
```
Or to return data of the first child in the second group:
```
footage("sample.mgjson").dataValue([1][0])
```
The value of the data stream.
## Footage.dataKeyCount(
Returns the number of samples in a specified dynamic data stream in an MGJSON file.
For example, to return the count of samples for the first child:
```
footage("sample.mgjson").dataKeyCount([0])
```
Or to return the count of samples for the second group:
```
footage("sample.mgjson").dataKeyCount([1][0])
```
The number of samples in the dynamic data stream.
## Footage.dataKeyTimes(
Returns the times of the samples in a specified dynamic data stream in an MGJSON file.
For example:
```
footage("sample.mgjson").dataKeyTimes([0], 1, 3)
```
Array of numbers representing the sample times.
## Footage.dataKeyValues(
Returns the values of the samples in a specified dynamic data stream in an MGJSON file.
For example:
```
footage("sample.mgjson").dataKeyValues([0], 1, 3)
```
Array of numbers representing the sample values.
Camera objects have the same attributes and methods as Layer objects, except for:
## Camera.pointOfInterest¶
Returns the point of interest values of a camera in world space.
Array (3 dimensional)
## Camera.zoom¶
Returns the zoom values of a camera in pixels.
Here’s an expression for the Scale property of a layer that maintains the relative size of the layer in frame while changing the z position (depth) of a layer or the Zoom value of a camera:
```
cam = thisComp.activeCamera;
distance = length(sub(position, cam.position));
scale * distance / cam.zoom;
```
## Camera.depthOfField¶
Description
Returns `1` if the Depth Of Field property of a camera is on, or returns `0` if the Depth Of Field property is off.
Type
## Camera.focusDistance¶
Returns the focus distance value of a camera, in pixels.
## Camera.aperture¶
Returns the aperture value of a camera, in pixels.
## Camera.blurLevel¶
Returns the blur level value of a camera as a percentage.
## Camera.irisShape¶
Returns the iris shape value from 1-10, corresponding to the selected dropdown value.
## Camera.irisRotation¶
Returns the iris rotation value, in degrees.
## Camera.irisRoundness¶
Returns the camera iris roundness value as a percentage.
## Camera.irisAspectRatio¶
## Camera.irisDiffractionFringe¶
## Camera.highlightGain¶
Returns the camera highlight gain, from 1 to 100.
## Camera.highlightThreshold¶
Returns the camera highlight threshold.
In an 8-bit comp, this value ranges from 0 to 100
*
In a 16-bit comp, this value ranges from 0 to 32768
*
In a 32-bit comp, this value ranges from 0 to 1.0
## Camera.highlightSaturation¶
Returns the camera highlight saturation, from 1 to 100.
## Camera.active¶
Description
Returns `true` if the camera:
is the active camera for the composition at the current time (the video switch for the camera layer is on)
*
the current time is in the range from the in point of the camera layer to the out point of the camera layer
*
and it is the first (topmost) such camera layer listed in the timeline panel
Returns `false` otherwise.
Type
Light objects have the same attributes and methods as Layer objects, except for:
<NAME> provides an instructional article and sample project on his omino pixel blog that shows how to use expressions with lights.
## Light.pointOfInterest¶
Returns the point of interest values for a light in world space.
## Light.intensity¶
Returns the intensity values of a light as a percentage.
## Light.color¶
Returns the color value of a light.
## Light.coneAngle¶
Returns the cone angle of a light, in degrees.
## Light.coneFeather¶
Returns the cone feather value of a light as a percentage.
## Light.shadowDarkness¶
Returns the shadow darkness value of a light as a percentage.
## Light.shadowDiffusion¶
Returns the shadow diffusion value of a light, in pixels.
All properties in the Timeline are organized into groups, which share some attributes of properties like `name` and `propertyIndex` . Groups can have a fixed number of properties (e.g. an individual effect whose properties don’t change) or a variable number of properties (e.g. the Effects group itself, which can have any number of effects within it).
* Top-level groups in a Layer:
*
Motion Trackers
*
Text
*
Contents
*
Masks
*
Effects
*
Transform
*
Layer Styles
*
Geometry Options
*
Material Options
*
Audio
*
Data
*
Essential Properties
*
* Nested groups
*
Individual effects
*
Individual masks
*
Shape groups
*
Text Animators
## numProperties¶
Returns the number of properties or groups directly within a group. This does not include properties nested inside child groups.
Tip
Find the number of effects applied to a layer with
```
thisLayer("ADBE Effect Parade").numProperties
```
using the match name to remain language-agnostic.
Type
## propertyGroup(
`countUp=1` )¶
Description
Returns a higher-level property group relative to the property group on which the method is called. See propertyGroup(countUp=1) for additional details.
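For example, written on a property inside an effect (a sketch assuming the default group nesting, where the effect is the immediate parent group), this expression returns the name of the containing effect:
```
propertyGroup(1).name
```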
Group
## propertyIndex¶
Returns the index of a property group relative to other properties or groups in its property group.
## name¶
Returns the name of the property group.
When you access a Key object, you can get time, index, and value properties from it. For example, the following expression gives you the value of the third Position keyframe: position.key(3).value.
The following expression, when written on an Opacity property with keyframes, ignores the keyframe values and uses only the placement of the keyframes in time to determine where a flash should occur:
```
d = Math.abs(time - nearestKey(time).time);
easeOut(d, 0, .1, 100, 0)
```
## Key.value¶
Returns the value of the keyframe.
## Key.time¶
Returns the time of the keyframe.
## Key.index¶
Returns the index of the keyframe.
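For example, written on a property that has keyframes, this expression returns the index of the keyframe nearest to the current time:
```
nearestKey(time).index
```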
For After Effects CC and CS6, in the Expression Language menu, the “Layer Sub-objects”, “Layer General”, “Layer Properties”, “Layer 3D”, and “Layer Space Transforms” categories have been arranged into a “Layer” submenu.
## Layer.source¶
Returns the source Comp or source Footage object for the layer. Default time is adjusted to the time in the source.
```
source.layer(1).position
```
Comp or Footage
## Layer.sourceTime(
`t=time` )¶
Description
Returns the layer source corresponding to time `t` .
Note
After Effects CS5.5 and later
## Layer.sourceRectAtTime(
```
t = time, includeExtents = false
```
)¶
Description
Returns a JavaScript object with four attributes:
```
[top, left, width, height]
```
Extents apply only to shape layers and paragraph text layers.
Shape layer extents increase the size of the layer bounds as necessary.
Paragraph text layers return the bounds of the paragraph box.
After Effects 13.2 and later. Paragraph text extents added in After Effects 15.1.
```
myTextLayer.sourceRectAtTime().width
```
## Layer.effect(
`name` )¶
Description
After Effects finds the effect by its name in the Effect Controls panel. The name can be the default name or a user-defined name. If multiple effects have the same name, the effect closest to the top of the Effect Controls panel is used.
```
effect("Fast Blur")("Blurriness")
```
## Layer.effect(
`index` )¶
Description
After Effects finds the effect by its index in the Effect Controls panel, starting at `1` and counting from the top.
Parameters
## Layer.mask(
`name` )¶
Description
The name can be the default name or a user-defined name. If multiple masks have the same name, the first (topmost) mask is used.
Example:
`mask("Mask 1")`
Parameters
## Layer.mask(
`index` )¶
Description
After Effects finds the mask by its index in the Timeline panel, starting at `1` and counting from the top.
Parameters
When you add masks, effects, paint, or text to a layer, After Effects adds new properties to the Timeline panel. There are too many of these properties to list here, so use the pick whip to learn the syntax for referring to them in your expressions.
## Layer.anchorPoint¶
Returns the anchor point value of the layer in the coordinate system of the layer (layer space).
## Layer.position¶
Returns the position value of the layer, in world space if the layer has no parent. If the layer has a parent, it returns the position value of the layer in the coordinate system of the parent layer (in the layer space of the parent layer).
## Layer.scale¶
Returns the scale value of the layer, expressed as a percentage.
## Layer.rotation¶
Returns the rotation value of the layer in degrees. For a 3D layer, it returns the z rotation value in degrees.
## Layer.opacity¶
Returns the opacity value for the layer, expressed as a percentage.
## Layer.audioLevels¶
Returns the value of the Audio Levels property of the layer, in decibels. This value is a 2D value; the first value represents the left audio channel, and the second value represents the right. The value is not the amplitude of the audio track of the source material. Instead, it is the value of the Audio Levels property, which may be affected by keyframes.
Array of Numbers (2-dimensional)
## Layer.timeRemap¶
Returns the value of the Time Remap property, in seconds, if Time Remap is enabled.
## Layer.marker.key(
`index` )¶
Description
Returns the MarkerKey object of the layer marker with the specified index.
## Layer.marker.key(
`name` )¶
Description
Returns the MarkerKey object of the layer marker with the specified name. The name value is the name of the marker, as typed in the comment field in the marker dialog box, for example, `marker.key("ch1")` . If more than one marker on the layer has the same name, this method returns the marker that occurs first in time (in layer time). The value for a marker key is a `String` , not a `Number` . This expression on a property ramps the value of the property from `0` to `100` between two markers identified by name:
```
m1 = marker.key("Start").time;
m2 = marker.key("End").time;
linear(time, m1, m2, 0, 100);
```
## Layer.marker.nearestKey(
Returns the layer marker that is nearest in time to t.
For example, this expression returns the time of the marker on the layer nearest to the time of `1` second:
```
marker.nearestKey(1).time
```
This expression returns the time of the marker on the layer nearest to the current time:
```
marker.nearestKey(time).time
```
## Layer.marker.numKeys¶
Returns the total number of markers on the layer.
## Layer.name¶
Returns the name of the layer.
## Layer.orientation¶
Returns the 3D orientation value, in degrees, for a 3D layer.
## Layer.rotationX¶
Returns the x rotation value, in degrees, for a 3D layer.
## Layer.rotationY¶
Returns the y rotation value, in degrees, for a 3D layer.
## Layer.rotationZ¶
Returns the z rotation value, in degrees, for a 3D layer.
## Layer.lightTransmission¶
Returns the value of the Light Transmission property for a 3D layer.
## Layer.castsShadows¶
## Layer.acceptsShadows¶
## Layer.acceptsLights¶
Description
Returns a value of `1` if the layer accepts lights.
Type
## Layer.ambient¶
Returns the ambient component value as a percentage.
## Layer.diffuse¶
Returns the diffuse component value as a percentage.
## Layer.specular¶
Returns the specular component value as a percentage.
## Layer.shininess¶
Returns the shininess component value as a percentage.
## Layer.metal¶
Returns the metal component value as a percentage.
This category holds generic text-related entries for text layers.
## Text.sourceText¶
Returns the text content of a text layer.
As of After Effects 17.0, this property returns the Source Text object to access text style properties. If no style properties are specified, this returns the text content as expected.
String of text content, or Source Text (AE 17.0+)
## Text.Font…¶
Launches a dialog window for the user to specify a font name and weight.
Upon selection, the internal font name is injected into the expression editor as a string.
These functions are accessible from Text.sourceText after AE 17.0.
## SourceText.style¶
Description
Returns the text style object for a given `sourceText` property.
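For example, a sketch that assumes the returned style object exposes a readable fontSize attribute (mirroring the setFontSize setter shown in the createStyle example below):
```
text.sourceText.style.fontSize
```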
Type
## SourceText.getStyleAt(
`charIndex` , `t = time` )¶
Description
This function returns the style value of a particular character at a specific time.
In case the style is keyframed and changes over time, use the second `time` parameter to specify the target time to get the style at.
Note
Using SourceText.style is the same as using
```
text.sourceText.getStyleAt(0,0)
```
For example, to get the style of the first character at the beginning of the timeline:
```
text.sourceText.getStyleAt(0,0);
```
## SourceText.createStyle()¶
Used to initialize an empty Text Style object in which you’d manually bake in specific values.
For example, to create a new style with font size 300 and the font Impact:
```
text.sourceText.createStyle().setFontSize(300).setFont("Impact");
```
None.
RPyGeo | cran | R | Package ‘RPyGeo’
October 12, 2022
Type Package
Title ArcGIS Geoprocessing via Python
Version 1.0.0
Date 2018-11-12
Description Provides access to ArcGIS geoprocessing tools by building an
interface between R and the ArcPy Python side-package via the
reticulate package.
URL https://github.com/fapola/RPyGeo
BugReports https://github.com/fapola/RPyGeo/issues
License GPL-3
LazyData TRUE
Imports reticulate (>= 1.2), sf, raster, tools, stringr, utils,
rmarkdown, magrittr, stats, purrr
SystemRequirements Python (>= 2.6.0), ArcGIS (>= 10.0)
RoxygenNote 6.1.1
Suggests testthat, knitr, spData, rstudioapi, bookdown
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre],
<NAME> [aut],
<NAME> [aut],
<NAME> [ctb] (<https://orcid.org/0000-0001-7834-4717>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2018-11-14 11:00:11 UTC
R topics documented:
RPyGeo-packag... 2
rpygeo_build_en... 3
rpygeo_hel... 4
rpygeo_loa... 5
rpygeo_sav... 7
rpygeo_searc... 8
%rpygeo_-... 9
%rpygeo_+... 10
%rpygeo_/... 11
%rpygeo_*... 12
RPyGeo-package RPyGeo: ArcGIS Geoprocessing in R via Python
Description
Provide access to (virtually any) ArcGIS geoprocessing tool from within R by running Python
geoprocessing without writing Python code or touching ArcGIS.
Details
The package utilizes the ArcPy Python site-package or the ArcGIS API in order to access ArcGIS
functionality. The function rpygeo_build_env can be applied to generate an ArcPy or arcgis ob-
ject.
Author(s)
Maintainer: <NAME> <<EMAIL>>
Authors:
• <NAME> <<EMAIL>>
• <NAME> <<EMAIL>>
Other contributors:
• <NAME> (0000-0001-7834-4717) [contributor]
See Also
Useful links:
• https://github.com/fapola/RPyGeo
• Report bugs at https://github.com/fapola/RPyGeo/issues
Examples
# load the ArcPy module related to ArcGIS Pro (and save it as a R
# object called "arcpy_m") in R and also set the overwrite parameter
# to FALSE and add some extensions. Note that we do not have to set the path
# because the Python version is located in the default location
# (C:/Program Files/ArcGIS/Pro/bin/Python/envs/arcgispro-py3/)in this example.
## Not run: arcpy <- rpygeo_build_env(overwrite = TRUE,
extensions = c("3d", "Spatial", "na"),
pro = TRUE)
## End(Not run)
# Suppose we want to calculate the slope of a Digital Elevation Model.
# It is possible to get the description of any ArcPy function as a R list:
## Not run: py_function_docs("arcpy$Slope_3d")
# Now we can run our computation:
## Not run: arcpy$Slope_3d(in_raster = "dem.tif", out_raster = "slope.tif")
rpygeo_build_env Initialize ArcPy site-package in R
Description
Initialises the Python ArcPy site-package in R via the reticulate package. Additionally, environ-
ment settings and extensions are configured.
Usage
rpygeo_build_env(path = NULL, overwrite = TRUE, extensions = NULL,
x64 = FALSE, pro = FALSE, arcgisAPI = FALSE, workspace = NULL,
scratch_workspace = NULL)
Arguments
path Full path to folder containing Python version which is linked to the ArcPy site-
package. If left empty, the function looks for python.exe in the most likely
location (C:/Python27/). It is also possible to provide a path to the ArcGIS API
for Python here. In order to do so you need to provide the path to the python
anaconda library were the arcgis package is installed. Additionally arcgisAPI
must be set to true.
overwrite If TRUE (default), existing ArcGIS datasets can be overwritten (does not work
while using ArcGIS API for Python).
extensions Optional character vector listing ArcGIS extension that should be enabled (does
not work while using ArcGIS API for Python)
x64 Logical (default: FALSE). Determines if path search should look for 64 bit Python
ArcPy version in default folder (C:/Python27)
pro Logical (default: FALSE). If set to TRUE, rpygeo_build_env tries to find the Python
version to use in the default ArcGIS Pro location (C:/Program Files/ArcGIS/Pro/bin/Python/envs/arcgispro-py3/).
arcgisAPI Logical (default: FALSE). Must be set to TRUE in order to use the ArcGIS API.
This is the only option for working with the RPyGeo package under a Linux operating
system.
workspace Path of ArcGIS workspace in which to perform the geoprocessing (does not
work while using ArcGIS API for Python).
scratch_workspace
Path to ArcGIS scratch workspace in which to store temporary files (does not
work while using ArcGIS API for Python). If NULL a folder named scratch is cre-
ated inside the workspace folder or on the same directory level as the workspace
file geodatabase.
Value
Returns ArcPy or ArcGIS modules in R
Author(s)
<NAME>, <NAME>
Examples
## Not run:
# Load ArcPy side-package of ArcGIS Pro with 3D and Spatial Analysis extension.
# Set environment setting 'overwrite' to TRUE.
# Note that no path parameter is necessary because Python is located in the
# default location.
arcpy <- rpygeo_build_env(overwrite = TRUE,
extensions = c("3d", "Spatial"),
pro = TRUE)
## End(Not run)
# Load the ArcPy module when your Python version is located in a different
# folder
rpygeo_help Get help file for ArcPy function
Description
This function opens the help file for an ArcPy function in the viewer panel or, if not available, in the browser.
Usage
rpygeo_help(arcpy_function)
Arguments
arcpy_function ArcPy module with function or class
Author(s)
<NAME>
Examples
## Not run:
# Load the ArcPy module and build environment
arcpy <- rpygeo_build_env(overwrite = TRUE, workspace = tempdir())
# Open help file
rpygeo_help(arcpy$Slope_3d)
## End(Not run)
rpygeo_load Load output of ArcPy functions into R session
Description
This function loads the output of an ArcPy function into the R session. Raster files are loaded as
raster objects and vector files as sf objects.
Usage
rpygeo_load(data)
Arguments
data reticulate object or filename of the ArcPy function output
Details
Currently files and datasets stored in file geodatabases are supported.
Supported file formats:
• Tagged Image File Format (.tif)
• Erdas Imagine Images (.img)
• Esri Arc/Info Binary Grid (.adf)
• Esri ASCII Raster (.asc)
• Esri Shapefiles (.shp)
Supported datasets:
• Feature Class
• Raster Dataset
Esri has not released an API for raster datasets in file geodatabases. rpygeo_load converts a raster
dataset to a temporary ASCII raster first and then loads it into the R session. Be aware that this can
take a long time for large raster datasets.
This function can be used with the %>% operator from the dplyr package. The %>% operator forwards
the reticulate object from the ArcPy function to rpygeo_load (s. Example 1). If used without
the %>% operator, a reticulate object can be specified for the data parameter (s. Example 2). It is
also possible to use the filename of the ArcPy function output (s. Example 3). For Arc/Info Binary
Grids the data parameter is just the name of the directory, which contains the adf files.
Value
raster or sf object
Author(s)
<NAME>
Examples
## Not run:
# Load packages
library(RPyGeo)
library(magrittr)
library(RQGIS)
library(spData)
# Get data
data(dem, package = "RQGIS")
data(nz, package = "spData")
# Write data to disk
writeRaster(dem, file.path(tempdir(), "dem.tif"), format = "GTiff")
st_write(nz, file.path(tempdir(), "nz.shp"))
# Load the ArcPy module and build environment
arcpy <- rpygeo_build_env(overwrite = TRUE, workspace = tempdir())
# Create a slope raster and load it into the R session (Example 1)
slope <-
arcpy$Slope_3d(in_raster = "dem.tif", out_raster = "slope.tif") %>%
rpygeo_load()
# Create a aspect raster and load it into the R session (Example 2)
ras_aspect <- arcpy$sa$Aspect(in_raster = "dem.tif")
rpygeo_load(ras_aspect)
# Convert elevation raster to polygon shapefile and load it into R session (Example 3)
arcpy$RasterToPolygon_conversion("dem.tif", "elev.shp")
rpygeo_load("elev.shp")
## End(Not run)
rpygeo_save Save temporary raster to workspace
Description
This function saves a temporary raster as a permanent raster to the workspace.
Usage
rpygeo_save(data, filename)
Arguments
data reticulate object or full path of the ArcPy function output
filename Filename with extension or without extension if the workspace is file geodatabase
Details
Some ArcPy functions have no parameter to specify an output raster. Instead they return a raster
object, and a temporary raster is saved to the scratch workspace. This function writes the temporary
raster as a permanent raster to the workspace.
How the file is written depends on the workspace and scratch workspace environment settings.
• Workspace and scratch workspace are directories: Raster is loaded with the raster package
and is written to workspace directory. The file format is inferred from the file extension in the
filename parameter.
• Workspace and scratch workspace are file geodatabases: Raster is copied to workspace file
geodatabase. No file extension necessary for the filename parameter.
• Workspace is file geodatabase and scratch workspace is directory: Raster is copied to workspace
file geodatabase. No file extension necessary for the filename parameter.
• Workspace is directory and scratch workspace is file geodatabase: Raster is exported to
workspace directory. The filename parameter is ignored due to restrictions in the arcpy.RasterToOtherFormat_conversion
function. If the automatically generated filename already exists, a number is appended to the
end of the filename.
Author(s)
<NAME>
Examples
## Not run:
# Load packages
library(RPyGeo)
library(RQGIS)
library(magrittr)
# Get data
data(dem, package = "RQGIS")
# Load the ArcPy module and build environment
arcpy <- rpygeo_build_env(overwrite = TRUE, workspace = tempdir())
# Write raster to workspace directory
writeRaster(dem, file.path(tempdir(), "dem.tif"), format = "GTiff")
# Calculate temporary aspect file and save to workspace
arcpy$sa$Aspect(in_raster = "dem.tif") %>%
rpygeo_save("aspect.tif")
## End(Not run)
rpygeo_search Search for ArcPy functions and classes
Description
Search for ArcPy functions and classes with a character string or regular expression.
Usage
rpygeo_search(search_term = NULL)
Arguments
search_term Search term. Regular expressions are possible.
Details
The list members are referenced by the ArcPy module names. Each member contains a character
vector of matching ArcPy functions and classes. Except for the main module, functions and classes
have to be accessed by their module names (s. examples).
Value
Named list of character vectors of matching ArcPy functions and classes
Author(s)
<NAME>
Examples
## Not run:
# Load packages
library(RPyGeo)
library(magrittr)
library(RQGIS)
# Get data
data(dem, package = "RQGIS")
# Write data to disk
writeRaster(dem, file.path(tempdir(), "dem.tif"), format = "GTiff")
# Load the ArcPy module and build environment
arcpy <- rpygeo_build_env(overwrite = TRUE,
workspace = tempdir(),
extensions = "Spatial")
# Search for ArcPy functions, which contain the term slope
rpygeo_search("slope")
#> $toolbox
#> [1] "Slope_3d" "SurfaceSlope_3d"
#>
#> $main
#> [1] "Slope_3d" "SurfaceSlope_3d"
#>
#> $sa
#> [1] "Slope"
#>
#> $ddd
#> [1] "Slope" "SurfaceSlope"
# Run function from sa module
arcpy$sa$Slope(in_raster="dem.tif")
# Run function from main module
arcpy$Slope_3d(in_raster="dem.tif")
## End(Not run)
%rpygeo_-% Subtraction operator
Description
Subtraction operator for map algebra. The Spatial Analyst extension is required for map algebra.
Usage
raster_1 %rpygeo_-% raster_2
Arguments
raster_1 raster dataset or numeric
raster_2 raster dataset or numeric
Value
reticulate object
Author(s)
<NAME>
Examples
## Not run:
# Load the ArcPy module and build environment
arcpy <- rpygeo_build_env(overwrite = TRUE, workspace = "C:/workspace", extensions = "Spatial")
# Write raster to workspace directory
writeRaster(elev, "C:/workspace/elev.tif", format = "GTiff")
# Create raster object
ras <- arcpy$sa$Raster("elev.tif")
# Subtract raster from itself
ras %rpygeo_-% ras %>%
rpygeo_load()
## End(Not run)
%rpygeo_+% Addition operator
Description
Addition operator for map algebra. The Spatial Analyst extension is required for map algebra.
Usage
raster_1 %rpygeo_+% raster_2
Arguments
raster_1 raster dataset or numeric
raster_2 raster dataset or numeric
Value
reticulate object
Author(s)
<NAME>
Examples
## Not run:
# Load the ArcPy module and build environment
arcpy <- rpygeo_build_env(overwrite = TRUE, workspace = "C:/workspace", extensions = "Spatial")
# Write raster to workspace directory
writeRaster(elev, "C:/workspace/elev.tif", format = "GTiff")
# Create raster object
ras <- arcpy$sa$Raster("elev.tif")
# Add raster to itself
ras %rpygeo_+% ras %>%
rpygeo_load()
## End(Not run)
%rpygeo_/% Division operator
Description
Division operator for map algebra. The Spatial Analyst extension is required for map algebra.
Usage
raster_1 %rpygeo_/% raster_2
Arguments
raster_1 raster dataset or numeric
raster_2 raster dataset or numeric
Value
reticulate object
Author(s)
<NAME>
Examples
## Not run:
# Load the ArcPy module and build environment
arcpy <- rpygeo_build_env(overwrite = TRUE, workspace = "C:/workspace", extensions = "Spatial")
# Write raster to workspace directory
writeRaster(elev, "C:/workspace/elev.tif", format = "GTiff")
# Create raster object
ras <- arcpy$sa$Raster("elev.tif")
# Divide raster by itself
ras %rpygeo_/% ras %>%
rpygeo_load()
## End(Not run)
%rpygeo_*% Multiplication operator
Description
Multiplication operator for map algebra. The Spatial Analyst extension is required for map algebra.
Usage
raster_1 %rpygeo_*% raster_2
Arguments
raster_1 raster dataset or numeric
raster_2 raster dataset or numeric
Value
reticulate object
Author(s)
<NAME>
Examples
## Not run:
# Load the ArcPy module and build environment
arcpy <- rpygeo_build_env(overwrite = TRUE, workspace = "C:/workspace", extensions = "Spatial")
# Write raster to workspace directory
writeRaster(elev, "C:/workspace/elev.tif", format = "GTiff")
# Create raster object
ras <- arcpy$sa$Raster("elev.tif")
# Multiply raster by itself
ras %rpygeo_*% ras %>%
rpygeo_load()
## End(Not run)
SCpubr | cran | R | Package ‘SCpubr’
October 11, 2023
Type Package
Title Generate Publication Ready Visualizations of Single Cell
Transcriptomics Data
Version 2.0.2
Description A system that provides a streamlined way of generating
publication ready plots for known Single-Cell transcriptomics data in
a “publication ready” format. That is, the goal is to automatically
generate plots with the highest quality possible, that can be used
right away or with minimal modifications for a research article.
License GPL-3
URL https://github.com/enblacar/SCpubr/,
https://enblacar.github.io/SCpubr-book/
BugReports https://github.com/enblacar/SCpubr/issues/
Depends R (>= 4.0.0)
Suggests AnnotationDbi, assertthat, AUCell, circlize, cli, cluster,
clusterProfiler, colorspace, ComplexHeatmap, covr, decoupleR,
dplyr (>= 1.1.0), enrichplot, forcats, ggalluvial, ggbeeswarm,
ggdist, ggExtra, ggh4x, ggnewscale, ggplot2 (>= 3.4.0),
ggplotify, ggrastr, ggrepel, ggridges, ggsignif, graphics,
infercnv, knitr, labeling, magrittr, MASS, Matrix, methods,
Nebulosa, org.Hs.eg.db, patchwork, pheatmap, plyr, purrr, qpdf,
RColorBrewer, rjags, rlang, rmarkdown, scales, scattermore,
Seurat, SeuratObject, sf, stringr, svglite, testthat (>=
3.0.0), tibble, tidyr, UCell, viridis, withr
VignetteBuilder knitr
biocViews Software, SingleCell, Visualization
Config/testthat/edition 3
Encoding UTF-8
LazyData true
LazyDataCompression xz
RoxygenNote 7.2.1
NeedsCompilation no
Author <NAME> [cre, aut]
(<https://orcid.org/0000-0002-1208-1691>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-10-11 09:50:02 UTC
R topics documented:
do_AlluvialPlo... 3
do_BarPlo... 6
do_BeeSwarmPlo... 10
do_BoxPlo... 14
do_CellularStatesPlo... 18
do_ChordDiagramPlo... 23
do_ColorPalett... 26
do_CopyNumberVariantPlo... 29
do_CorrelationPlo... 33
do_DimPlo... 36
do_DotPlo... 41
do_EnrichmentHeatma... 46
do_ExpressionHeatma... 50
do_FeaturePlo... 54
do_FunctionalAnnotationPlo... 60
do_GeyserPlo... 63
do_GroupedGOTermPlo... 68
do_GroupwiseDEPlo... 71
do_NebulosaPlo... 74
do_PathwayActivityPlo... 77
do_RidgePlo... 81
do_TermEnrichmentPlo... 86
do_TFActivityPlo... 88
do_ViolinPlo... 92
do_VolcanoPlo... 96
human_chr_location... 98
package_repor... 99
do_AlluvialPlot Generate Alluvial plots.
Description
This function is based on the ggalluvial package. It allows you to generate alluvial plots from a
given Seurat object.
Usage
do_AlluvialPlot(
sample,
first_group,
last_group,
middle_groups = NULL,
colors.use = NULL,
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
font.size = 14,
font.type = "sans",
xlab = NULL,
ylab = "Number of cells",
repel = FALSE,
fill.by = last_group,
use_labels = FALSE,
stratum.color = "black",
stratum.fill = "white",
stratum.width = 1/3,
stratum.fill.conditional = FALSE,
use_geom_flow = FALSE,
alluvium.color = "white",
flow.color = "white",
flip = FALSE,
label.color = "black",
curve_type = "sigmoid",
use_viridis = FALSE,
viridis.palette = "G",
viridis.direction = -1,
sequential.palette = "YlGnBu",
sequential.direction = 1,
plot.grid = FALSE,
grid.color = "grey75",
grid.type = "dashed",
na.value = "white",
legend.position = "right",
legend.title = NULL,
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
first_group character | Categorical metadata variable. First group of nodes of the alluvial
plot.
last_group character | Categorical metadata variable. Last group of nodes of the alluvial
plot.
middle_groups character | Categorical metadata variable. Vector of groups of nodes of the
alluvial plot.
colors.use character | Named list of colors corresponding to the unique values in fill.by
(which defaults to last_group).
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
xlab, ylab character | Titles for the X and Y axis.
repel logical | Whether to repel the text labels.
fill.by character | One of first_group, middle_groups (one of the values, if multiple
mid_groups) or last_group. These values will be used to color the alluvium/flow.
use_labels logical | Whether to use labels instead of text for the stratum.
stratum.color, alluvium.color, flow.color
character | Color for the border of the alluvium (and flow) and stratum.
stratum.fill character | Color to fill the stratum.
stratum.width logical | Width of the stratum.
stratum.fill.conditional
logical | Whether to fill the stratum with the same colors as the alluvium/flow.
use_geom_flow logical | Whether to use geom_flow instead of geom_alluvium. Visual results
might differ.
flip logical | Whether to invert the axis of the displayed plot.
label.color character | Color for the text labels.
curve_type character | Type of curve used in geom_alluvium. One of:
• linear.
• cubic.
• quintic.
• sine.
• arctangent.
• sigmoid.
• xspline.
use_viridis logical | Whether to use viridis color scales.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
plot.grid logical | Whether to plot grid lines.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
grid.type character | One of the possible linetype options:
• blank.
• solid.
• dashed.
• dotted.
• dotdash.
• longdash.
• twodash.
na.value character | Color value for NA.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.title character | Title for the legend.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_AlluvialPlot", passive = TRUE)
message(value)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Compute basic sankey plot.
p <- SCpubr::do_AlluvialPlot(sample = sample,
first_group = "orig.ident",
last_group = "seurat_clusters")
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
do_BarPlot Create Bar Plots.
Description
Create Bar Plots.
Usage
do_BarPlot(
sample,
group.by,
order = FALSE,
add.n = FALSE,
add.n.face = "bold",
add.n.expand = c(0, 1.15),
add.n.size = 4,
order.by = NULL,
split.by = NULL,
facet.by = NULL,
position = "stack",
font.size = 14,
font.type = "sans",
legend.position = "bottom",
legend.title = NULL,
legend.ncol = NULL,
legend.nrow = NULL,
legend.byrow = FALSE,
axis.text.x.angle = 45,
xlab = NULL,
ylab = NULL,
colors.use = NULL,
flip = FALSE,
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
plot.grid = FALSE,
grid.color = "grey75",
grid.type = "dashed",
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain",
strip.text.face = "bold",
return_data = FALSE
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
group.by character | Metadata column to compute the counts of. Has to be either a
character or factor column.
order logical | Whether to order the results in descending order of counts.
add.n logical | Whether to add the total counts on top of each bar.
add.n.face character | Font face of the labels added by add.n.
add.n.expand numeric | Vector of two numerics representing the start and end of the scale.
Minimum should be 0 and max should be above 1. This basically expands the
Y axis so that the labels fit when flip = TRUE.
add.n.size numeric | Size of the labels.
order.by character | When split.by is used, value of group.by to reorder the columns
based on its value.
split.by character | Metadata column to split the values of group.by by. If not used,
defaults to the active idents.
facet.by character | Metadata column to gather the columns by. This is useful if you
have other overarching metadata.
position character | Position function from ggplot2. Either stack or fill.
• stack: Set the bars side by side, displaying the total number of counts.
Uses position_stack.
• fill: Set the bars on top of each other, displaying the proportion of counts
from the total that each group represents. Uses position_fill.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.title character | Title for the legend.
legend.ncol numeric | Number of columns in the legend.
legend.nrow numeric | Number of rows in the legend.
legend.byrow logical | Whether the legend is filled by row or not.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
xlab, ylab character | Titles for the X and Y axis.
colors.use named_vector | Named vector of valid color representations (either name of
HEX codes) with as many named colors as unique values of group.by. If group.by
is not provided, defaults to the unique values of Idents. If not provided, a color
scale will be set by default.
flip logical | Whether to invert the axis of the displayed plot.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
plot.grid logical | Whether to plot grid lines.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
grid.type character | One of the possible linetype options:
• blank.
• solid.
• dashed.
• dotted.
• dotdash.
• longdash.
• twodash.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
strip.text.face
character | Controls the style of the font for the strip text. One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
return_data logical | Returns a data.frame with the count and proportions displayed in the
plot.
Value
A ggplot2 object containing a Bar plot.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_BarPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Basic bar plot, horizontal.
p1 <- SCpubr::do_BarPlot(sample = sample,
group.by = "seurat_clusters",
legend.position = "none",
plot.title = "Number of cells per cluster")
# Split by a second variable.
sample$modified_orig.ident <- sample(x = c("Sample_A", "Sample_B", "Sample_C"),
size = ncol(sample),
replace = TRUE)
p <- SCpubr::do_BarPlot(sample,
group.by = "seurat_clusters",
split.by = "modified_orig.ident",
plot.title = "Number of cells per cluster in each sample",
position = "stack")
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
do_BeeSwarmPlot BeeSwarm plot.
Description
BeeSwarm plot.
Usage
do_BeeSwarmPlot(
sample,
feature_to_rank,
group.by = NULL,
assay = NULL,
reduction = NULL,
slot = NULL,
continuous_feature = FALSE,
order = FALSE,
colors.use = NULL,
legend.title = NULL,
legend.type = "colorbar",
legend.position = "bottom",
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.length = 20,
legend.width = 1,
legend.framecolor = "grey50",
legend.tickcolor = "white",
legend.ncol = NULL,
legend.icon.size = 4,
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
xlab = NULL,
ylab = NULL,
font.size = 14,
font.type = "sans",
remove_x_axis = FALSE,
remove_y_axis = FALSE,
flip = FALSE,
use_viridis = FALSE,
viridis.palette = "G",
viridis.direction = 1,
sequential.palette = "YlGnBu",
sequential.direction = 1,
verbose = TRUE,
raster = FALSE,
raster.dpi = 300,
plot_cell_borders = TRUE,
border.size = 1.5,
border.color = "black",
pt.size = 2,
min.cutoff = NA,
max.cutoff = NA,
na.value = "grey75",
number.breaks = 5,
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
feature_to_rank
character | Feature for which the cells are going to be ranked. Ideally, this
feature should be stored as a metadata column.
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
assay character | Assay to use. Defaults to the current assay.
reduction character | Reduction to use. Can be the canonical ones such as "umap", "pca",
or any custom ones, such as "diffusion". If you are unsure about which re-
ductions you have, use Seurat::Reductions(sample). Defaults to "umap" if
present or to the last computed reduction if the argument is not provided.
slot character | Data slot to use. Only one of: counts, data, scale.data. Defaults to
"data".
continuous_feature
logical | Whether the feature to rank and color by is continuous, e.g. an
enrichment score.
order logical | Whether to reorder the groups based on the median of the ranking.
colors.use named_vector | Named vector of valid color representations (either name or
HEX codes) with as many named colors as unique values of group.by. If group.by
is not provided, defaults to the unique values of Idents. If not provided, a color
scale will be set by default.
legend.title character | Title for the legend.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
legend.ncol numeric | Number of columns in the legend.
legend.icon.size
numeric | Size of the icons in legend.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
xlab, ylab character | Titles for the X and Y axis.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
remove_x_axis, remove_y_axis
logical | Remove X axis labels and ticks from the plot.
flip logical | Whether to invert the axis of the displayed plot.
use_viridis logical | Whether to use viridis color scales.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
verbose logical | Whether to show extra comments, warnings, etc.
raster logical | Whether to raster the resulting plot. This is recommendable if plotting
a lot of cells.
raster.dpi numeric | Pixel resolution for rasterized plots. Defaults to 1024. Only activates
on Seurat versions higher or equal than 4.1.0.
plot_cell_borders
logical | Whether to plot border around cells.
border.size numeric | Width of the border of the cells.
border.color character | Color for the border of the heatmap body.
pt.size numeric | Size of the dots.
min.cutoff, max.cutoff
numeric | Set the min/max ends of the color scale. Any cell/group with a value
lower than min.cutoff will turn into min.cutoff and any cell with a value higher
than max.cutoff will turn into max.cutoff. In FeaturePlots, provide as many
values as features. Use NAs to skip a feature.
na.value character | Color value for NA.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object containing a Bee Swarm plot.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_BeeSwarmPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Basic Bee Swarm plot - categorical coloring.
# This will color based on the unique values of seurat_clusters.
p <- SCpubr::do_BeeSwarmPlot(sample = sample,
feature_to_rank = "PC_1",
group.by = "seurat_clusters",
continuous_feature = FALSE)
# Basic Bee Swarm plot - continuous coloring.
# This will color based on the PC_1 values.
p <- SCpubr::do_BeeSwarmPlot(sample = sample,
feature_to_rank = "PC_1",
group.by = "seurat_clusters",
continuous_feature = TRUE)
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
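A hedged sketch of the min.cutoff/max.cutoff and viridis options for a continuous feature; the nCount_RNA column and the cutoff values are illustrative assumptions about the example object:
# Additional sketch: rank cells by a continuous metadata column and clip the
# color scale at illustrative cutoff values.
p <- SCpubr::do_BeeSwarmPlot(sample = sample,
feature_to_rank = "nCount_RNA",
group.by = "seurat_clusters",
continuous_feature = TRUE,
min.cutoff = 500,
max.cutoff = 5000,
use_viridis = TRUE,
viridis.palette = "G",
order = TRUE)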
do_BoxPlot Generate Box Plots.
Description
Generate Box Plots.
Usage
do_BoxPlot(
sample,
feature,
group.by = NULL,
split.by = NULL,
assay = NULL,
slot = "data",
font.size = 14,
font.type = "sans",
axis.text.x.angle = 45,
colors.use = NULL,
na.value = "grey75",
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
xlab = NULL,
ylab = NULL,
legend.title = NULL,
legend.title.position = "top",
legend.position = "bottom",
boxplot.line.color = "black",
outlier.color = "black",
outlier.alpha = 0.5,
boxplot.linewidth = 0.5,
boxplot.width = NULL,
plot.grid = TRUE,
grid.color = "grey75",
grid.type = "dashed",
flip = FALSE,
order = FALSE,
use_silhouette = FALSE,
use_test = FALSE,
comparisons = NULL,
test = "wilcox.test",
map_signif_level = TRUE,
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
feature character | Feature to represent.
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
split.by character | Secondary metadata variable to further group (split) the output by.
Has to be a character or factor column.
assay character | Assay to use. Defaults to the current assay.
slot character | Data slot to use. Only one of: counts, data, scale.data. Defaults to
"data".
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
colors.use named_vector | Named vector of valid color representations (either name or
HEX codes) with as many named colors as unique values of group.by. If group.by
is not provided, defaults to the unique values of Idents. If not provided, a color
scale will be set by default.
na.value character | Color value for NA.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
xlab, ylab character | Titles for the X and Y axis.
legend.title character | Title for the legend.
legend.title.position
character | Position for the title of the legend. One of:
• top: Top of the legend.
• bottom: Bottom of the legend.
• left: Left of the legend.
• right: Right of the legend.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
boxplot.line.color
character | Color of the borders of the boxplots if use_silhouette is FALSE.
outlier.color character | Color of the outlier dots.
outlier.alpha numeric | Alpha applied to the outliers.
boxplot.linewidth
numeric | Width of the lines in the boxplots. Also controls the lines of the tests
applied if use_test is set to true.
boxplot.width numeric | Width of the boxplots.
plot.grid logical | Whether to plot grid lines.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
grid.type character | One of the possible linetype options:
• blank.
• solid.
• dashed.
• dotted.
• dotdash.
• longdash.
• twodash.
flip logical | Whether to invert the axis of the displayed plot.
order logical | Whether to order the boxplots by average values. Can not be used
alongside split.by.
use_silhouette logical | Whether to color the borders of the boxplots instead of the inside area.
use_test logical | Whether to apply a statistical test to a given pair of elements. Can not
be used alongside split.by.
comparisons A list of length-2 vectors. The entries in the vector are either the names of 2
values on the x-axis or the 2 integers that correspond to the index of the columns
of interest.
test the name of the statistical test that is applied to the values of the 2 columns (e.g.
t.test, wilcox.test etc.). If you implement a custom test make sure that it
returns a list that has an entry called p.value.
map_signif_level
Boolean value, if the p-value are directly written as annotation or asterisks are
used instead. Alternatively one can provide a named numeric vector to create
custom mappings from p-values to annotation: For example: c("***"=0.001,
"**"=0.01, "*"=0.05). Alternatively, one can provide a function that takes a
numeric argument (the p-value) and returns a string.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_BoxPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Basic box plot.
p <- SCpubr::do_BoxPlot(sample = sample,
feature = "nCount_RNA")
p
# Use silhouette style.
p <- SCpubr::do_BoxPlot(sample = sample,
feature = "nCount_RNA",
use_silhouette = TRUE)
p
# Order by mean values.
p <- SCpubr::do_BoxPlot(sample = sample,
feature = "nCount_RNA",
order = TRUE)
p
# Apply second grouping.
sample$orig.ident <- ifelse(sample$seurat_clusters %in% c("0", "1", "2", "3"), "A", "B")
p <- SCpubr::do_BoxPlot(sample = sample,
feature = "nCount_RNA",
split.by = "orig.ident")
p
# Apply statistical tests.
p <- SCpubr::do_BoxPlot(sample = sample,
feature = "nCount_RNA",
group.by = "orig.ident",
use_test = TRUE,
comparisons = list(c("A", "B")))
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
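A hedged sketch of a custom significance mapping for map_signif_level, reusing the orig.ident grouping created in the examples above; the threshold values simply mirror the ones quoted in the argument description:
# Additional sketch: annotate the test with custom asterisk thresholds.
p <- SCpubr::do_BoxPlot(sample = sample,
feature = "nCount_RNA",
group.by = "orig.ident",
use_test = TRUE,
comparisons = list(c("A", "B")),
test = "wilcox.test",
map_signif_level = c("***" = 0.001, "**" = 0.01, "*" = 0.05))
p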
do_CellularStatesPlot Cellular States plot.
Description
This plot aims to show the relationships between distinct enrichment scores. If 3 variables are
provided, the relationship is between the Y axis and the dual X axis. If 4 variables are provided,
each corner of the plot represents how enriched the cells are in that given list. How to interpret this?
In a 3-variable plot, the Y axis represents a single variable: the higher the cells are on the Y axis, the
more enriched they are in that given variable. The X axis is a dual-parameter axis. Cells falling into
each extreme of the axis are highly enriched for either x1 or x2, while cells falling in between are
not enriched for either of the two.
the 4 given features. Cells will tend to locate in either of the four corners, but there will be cases
of cells locating mid-way between two given corners (enriched in both features) or in the middle of
the plot (not enriched for any).
Usage
do_CellularStatesPlot(
sample,
input_gene_list,
x1,
y1,
x2 = NULL,
y2 = NULL,
group.by = NULL,
colors.use = NULL,
legend.position = "bottom",
legend.icon.size = 4,
legend.ncol = NULL,
legend.nrow = NULL,
legend.byrow = FALSE,
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
font.size = 14,
font.type = "sans",
xlab = NULL,
ylab = NULL,
axis.ticks = TRUE,
axis.text = TRUE,
verbose = FALSE,
enforce_symmetry = FALSE,
plot_marginal_distributions = FALSE,
marginal.type = "density",
marginal.size = 5,
marginal.group = TRUE,
plot_cell_borders = TRUE,
plot_enrichment_scores = FALSE,
border.size = 2,
border.color = "black",
pt.size = 2,
raster = FALSE,
raster.dpi = 1024,
plot_features = FALSE,
features = NULL,
use_viridis = TRUE,
viridis.palette = "G",
viridis.direction = 1,
sequential.palette = "YlGnBu",
sequential.direction = -1,
nbin = 24,
ctrl = 100,
number.breaks = 5,
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
input_gene_list
named_list | Named list of lists of genes to be used as input.
x1 character | A name of a list from input_gene_list. First feature in the X axis.
Will go on the right side of the X axis if y2 is not provided and top-right quadrant
if provided.
y1 character | A name of a list from input_gene_list. First feature on the Y axis.
Will become the Y axis if y2 is not provided and bottom-right quadrant if pro-
vided.
x2 character | A name of a list from input_gene_list. Second feature on the X
axis. Will go on the left side of the X axis if y2 is not provided and top-left
quadrant if provided.
y2 character | A name of a list from input_gene_list. Second feature on the Y
axis. Will become the bottom-left quadrant if provided.
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
colors.use named_vector | Named vector of valid color representations (either name or
HEX codes) with as many named colors as unique values of group.by. If group.by
is not provided, defaults to the unique values of Idents. If not provided, a color
scale will be set by default.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.icon.size
numeric | Size of the icons in legend.
legend.ncol numeric | Number of columns in the legend.
legend.nrow numeric | Number of rows in the legend.
legend.byrow logical | Whether the legend is filled by row or not.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
xlab, ylab character | Titles for the X and Y axis.
axis.ticks logical | Whether to show axis ticks.
axis.text logical | Whether to show axis text.
verbose logical | Whether to show extra comments, warnings, etc.
enforce_symmetry
logical | Whether to enforce the plot to follow a symmetry (3 variables, the
X axis has 0 as center, 4 variables, all axis have the same range and the plot is
squared).
plot_marginal_distributions
logical | Whether to plot marginal distributions on the figure or not.
marginal.type character | One of:
• density: Compute density plots on the margins.
• histogram: Compute histograms on the margins.
• boxplot: Compute boxplot on the margins.
• violin: Compute violin plots on the margins.
• densigram: Compute densigram plots on the margins.
marginal.size numeric | Size ratio between the main and marginal plots. A value of 5 means
that the main plot is 5 times bigger than the marginal plots.
marginal.group logical | Whether to group the marginal distribution by group.by or current
identities.
plot_cell_borders
logical | Whether to plot border around cells.
plot_enrichment_scores
logical | Whether to report enrichment scores for the input lists as plots.
border.size numeric | Width of the border of the cells.
border.color character | Color for the border of the heatmap body.
pt.size numeric | Size of the dots.
raster logical | Whether to raster the resulting plot. This is recommendable if plotting
a lot of cells.
raster.dpi numeric | Pixel resolution for rasterized plots. Defaults to 1024. Only activates
on Seurat versions higher or equal than 4.1.0.
plot_features logical | Whether to also report any other feature onto the primary plot.
features character | Additional features to plot.
use_viridis logical | Whether to use viridis color scales.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
nbin numeric | Number of bins to use in AddModuleScore.
ctrl numeric | Number of genes in the control set to use in AddModuleScore.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Details
These plots are based on the following publications:
• <NAME> et al. An Integrative Model of Cellular States, Plasticity, and Genetics for
Glioblastoma. Cell 178, 835-849.e21 (2019). doi:10.1016/j.cell.2019.06.024
• <NAME>., <NAME>., <NAME>. et al. Single-cell RNA-seq supports a developmental
hierarchy in human oligodendroglioma. Nature 539, 309–313 (2016). doi:10.1038/nature20123
Value
A ggplot2 object containing a butterfly plot.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_CellularStatesPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Define some gene sets to query. It has to be a named list.
gene_set <- list("A" = rownames(sample)[1:10],
"B" = rownames(sample)[11:20],
"C" = rownames(sample)[21:30],
"D" = rownames(sample)[31:40])
# Using two variables: A scatter plot X vs Y.
p <- SCpubr::do_CellularStatesPlot(sample = sample,
input_gene_list = gene_set,
x1 = "A",
y1 = "B",
nbin = 1,
ctrl = 10)
p
# Using three variables. Figure from: https://www.nature.com/articles/nature20123.
p <- SCpubr::do_CellularStatesPlot(sample = sample,
input_gene_list = gene_set,
x1 = "A",
y1 = "B",
x2 = "C",
nbin = 1,
ctrl = 10)
p
# Using four variables. Figure from: https://pubmed.ncbi.nlm.nih.gov/31327527/
p <- SCpubr::do_CellularStatesPlot(sample = sample,
input_gene_list = gene_set,
x1 = "A",
y1 = "C",
x2 = "B",
y2 = "D",
nbin = 1,
ctrl = 10)
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
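A hedged sketch coloring the four-variable layout by a metadata column via group.by, reusing the illustrative gene sets defined above:
# Additional sketch: color cells by cluster in the four-variable layout.
p <- SCpubr::do_CellularStatesPlot(sample = sample,
input_gene_list = gene_set,
x1 = "A",
y1 = "C",
x2 = "B",
y2 = "D",
group.by = "seurat_clusters",
nbin = 1,
ctrl = 10)
p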
do_ChordDiagramPlot Generate a Chord diagram.
Description
Generate a Chord diagram.
Usage
do_ChordDiagramPlot(
sample = NULL,
from = NULL,
to = NULL,
colors.from = NULL,
colors.to = NULL,
big.gap = 10,
small.gap = 1,
link.border.color = NA,
link.border.width = 1,
highlight_group = NULL,
alpha.highlight = 25,
link.sort = NULL,
link.decreasing = TRUE,
z_index = FALSE,
self.link = 1,
symmetric = FALSE,
directional = 1,
direction.type = c("diffHeight", "arrows"),
link.arr.type = "big.arrow",
scale = FALSE,
alignment = "default",
annotationTrack = c("grid", "axis"),
padding_labels = 4,
...
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
from, to character | Categorical metadata variable to be used as origin and end points
of the interactions.
colors.from, colors.to
named_vector | Named vector of colors corresponding to the unique values of
"from" and "to".
big.gap numeric | Space between the groups in "from" and "to".
small.gap numeric | Space within the groups.
link.border.color
character | Color for the border of the links. NA = no color.
link.border.width
numeric | Width of the border line of the links.
highlight_group
character | A value from from that will be used to highlight only the links
coming from it.
alpha.highlight
numeric | A value between 00 (double digits) and 99 to set the alpha of the
highlighted links. Use "FF" for no transparency.
link.sort Passed to chordDiagramFromMatrix or chordDiagramFromDataFrame.
link.decreasing
Passed to chordDiagramFromMatrix or chordDiagramFromDataFrame.
z_index logical | Whether to bring the bigger links to the top.
self.link numeric | Behavior of the links. One of:
• 1: Prevents self linking.
• 2: Allows self linking.
symmetric Passed to chordDiagramFromMatrix.
directional numeric | Set the direction of the links. One of:
• 0: Non-directional data.
• 1: Links go from "from" to "to".
• -1: Links go from "to" to "from".
• 2: Links go in both directions.
direction.type character | How to display the directions. One of:
• diffHeight: Sets a line at the origin of the group showing how many
groups, and in which proportion, this group is linked to.
• arrows: Sets the connection as arrows.
• both: Sets up both behaviors. Use as: c("diffHeight", "arrows").
link.arr.type character | Sets the appearance of the arrows. One of:
• triangle: Arrow with a triangle tip at the end displayed on top of the link.
• big.arrow: The link itself ends in a triangle shape.
scale logical | Whether to put all nodes the same width.
alignment character | How to align the diagram. One of:
• default: Allows circlize to set up the plot as it sees fit.
• horizontal: Sets the break between "from" and "to" groups on the hori-
zontal axis.
• vertical: Sets the break between "from" and "to" groups on the vertical
axis.
annotationTrack
Passed to chordDiagramFromMatrix or chordDiagramFromDataFrame.
padding_labels numeric | Number of extra padding (white spaces) of the labels so that they do
not overlap with the scales.
... For internal use only.
Value
A circlize plot.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_ChordDiagramPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Basic chord diagram.
sample$assignment <- ifelse(sample$seurat_clusters %in% c("0", "4", "7"), "A", "B")
sample$assignment[sample$seurat_clusters %in% c("1", "2")] <- "C"
sample$assignment[sample$seurat_clusters %in% c("10", "5")] <- "D"
sample$assignment[sample$seurat_clusters %in% c("8", "9")] <- "E"
p <- SCpubr::do_ChordDiagramPlot(sample = sample,
from = "seurat_clusters",
to = "assignment")
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
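A hedged sketch of the highlight_group and alpha.highlight arguments, reusing the assignment column defined above; the highlighted cluster is illustrative:
# Additional sketch: highlight only the links that originate from cluster "0".
p <- SCpubr::do_ChordDiagramPlot(sample = sample,
from = "seurat_clusters",
to = "assignment",
highlight_group = "0",
alpha.highlight = 25,
link.arr.type = "big.arrow")
p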
do_ColorPalette Generate color scales based on a value.
Description
This function is an adaptation of the colortools package. As that package was removed from CRAN
on 23-06-2022, this utility function came into existence in order to cover the gap. It is, in essence,
an adaptation of the whole package into a single function. The original code, developed by <NAME>,
can be found at: https://github.com/gastonstat/colortools
Usage
do_ColorPalette(
colors.use,
n = 12,
opposite = FALSE,
adjacent = FALSE,
triadic = FALSE,
split_complementary = FALSE,
tetradic = FALSE,
square = FALSE,
complete_output = FALSE,
plot = FALSE,
font.size = 14,
font.type = "sans"
)
Arguments
colors.use character | One color upon which generate the color scale. Can be a name or
a HEX code.
n numeric | Number of colors to include in the color wheel. Use it when all other
options are FALSE; otherwise, it is set to 12.
opposite logical | Return the opposing color to the one provided.
adjacent logical | Return the adjacent colors to the one provided.
triadic logical | Return the triadic combination of colors to the one provided.
split_complementary
logical | Return the split complementary combination of colors to the one pro-
vided.
tetradic logical | Return the tetradic combination of colors to the one provided.
square logical | Return the square combination of colors to the one provided.
complete_output
logical | Runs all the previous options and returns all the outputs as a list that
contains all color vectors, all plots and a combined plot with everything.
plot logical | Whether to also return a plot displaying the values instead of a vector
with the color.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
Value
A character vector with the desired color scale.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_ColorPalette", passive = TRUE)
if (isTRUE(value)){
# Generate a color wheel based on a single value.
colors <- SCpubr::do_ColorPalette(colors.use = "steelblue")
p <- SCpubr::do_ColorPalette(colors.use = "steelblue",
plot = TRUE)
# Generate a pair of opposite colors based on a given one.
colors <- SCpubr::do_ColorPalette(colors.use = "steelblue",
opposite = TRUE)
p <- SCpubr::do_ColorPalette(colors.use = "steelblue",
opposite = TRUE,
plot = TRUE)
# Generate a trio of adjacent colors based on a given one.
colors <- SCpubr::do_ColorPalette(colors.use = "steelblue",
adjacent = TRUE)
p <- SCpubr::do_ColorPalette(colors.use = "steelblue",
adjacent = TRUE,
plot = TRUE)
# Generate a trio of triadic colors based on a given one.
colors <- SCpubr::do_ColorPalette(colors.use = "steelblue",
triadic = TRUE)
p <- SCpubr::do_ColorPalette(colors.use = "steelblue",
triadic = TRUE,
plot = TRUE)
# Generate a trio of split complementary colors based on a given one.
colors <- SCpubr::do_ColorPalette(colors.use = "steelblue",
split_complementary = TRUE)
p <- SCpubr::do_ColorPalette(colors.use = "steelblue",
split_complementary = TRUE,
plot = TRUE)
# Generate a group of tetradic colors based on a given one.
colors <- SCpubr::do_ColorPalette(colors.use = "steelblue",
tetradic = TRUE)
p <- SCpubr::do_ColorPalette(colors.use = "steelblue",
tetradic = TRUE,
plot = TRUE)
# Generate a group of square colors based on a given one.
colors <- SCpubr::do_ColorPalette(colors.use = "steelblue",
square = TRUE)
p <- SCpubr::do_ColorPalette(colors.use = "steelblue",
square = TRUE,
plot = TRUE)
# Retrieve the output of all options.
out <- SCpubr::do_ColorPalette(colors.use = "steelblue",
complete_output = TRUE)
## Retrieve the colors.
colors <- out$colors
## Retrieve the plots.
plots <- out$plots
## Retrieve a combined plot with all the options.
p <- out$combined_plot
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
do_CopyNumberVariantPlot
Display CNV scores from inferCNV as Feature Plots.
Description
Display CNV scores from inferCNV as Feature Plots.
Usage
do_CopyNumberVariantPlot(
sample,
infercnv_object,
chromosome_locations,
group.by = NULL,
using_metacells = FALSE,
metacell_mapping = NULL,
legend.type = "colorbar",
legend.position = "bottom",
legend.length = 20,
legend.width = 1,
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.framecolor = "grey50",
legend.tickcolor = "white",
font.size = 14,
pt.size = 1,
font.type = "sans",
axis.text.x.angle = 45,
enforce_symmetry = TRUE,
legend.title = NULL,
na.value = "grey75",
viridis.palette = "G",
viridis.direction = 1,
verbose = FALSE,
min.cutoff = NA,
max.cutoff = NA,
number.breaks = 5,
diverging.palette = "RdBu",
diverging.direction = -1,
sequential.palette = "YlGnBu",
sequential.direction = -1,
use_viridis = TRUE,
return_object = FALSE,
grid.color = "white",
border.color = "black",
flip = FALSE,
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
infercnv_object
infercnv | Output inferCNV object run on the same Seurat object.
chromosome_locations
tibble | Tibble containing the chromosome regions to use. Can be obtained
using utils::data("human_chr_locations", package = "SCpubr").
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
using_metacells
logical | Whether inferCNV was run using metacells or not.
metacell_mapping
named_vector | Vector or cell - metacell mapping.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
pt.size numeric | Size of the dots.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
enforce_symmetry
logical | Return a symmetrical plot axes-wise or continuous color scale-wise,
when applicable.
legend.title character | Title for the legend.
na.value character | Color value for NA.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
verbose logical | Whether to show extra comments, warnings, etc.
min.cutoff, max.cutoff
numeric | Set the min/max ends of the color scale. Any cell/group with a value
lower than min.cutoff will turn into min.cutoff and any cell with a value higher
than max.cutoff will turn into max.cutoff. In FeaturePlots, provide as many
values as features. Use NAs to skip a feature.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
diverging.palette
character | Type of symmetrical color palette to use. Out of the diverging
palettes defined in brewer.pal.
diverging.direction
numeric | Either 1 or -1. Direction of the diverging palette. This basically flips
the two ends.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
use_viridis logical | Whether to use viridis color scales.
return_object logical | Returns the Seurat object with the modifications performed in the
function. Normally, this contains a new assay with the data that can then be used
for any other visualization desired.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
border.color character | Color for the border of the heatmap body.
flip logical | Whether to invert the axis of the displayed plot.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A list containing Feature Plots for different chromosome regions and corresponding dot plots by
groups.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_CopyNumberVariantPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# This function expects that you have run inferCNV on your
# own and you have access to the output object.
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds",
package = "SCpubr"))
# Define your inferCNV object.
infercnv_object <- readRDS(system.file("extdata/infercnv_object_example.rds",
package = "SCpubr"))
# Get human chromosome locations.
chromosome_locations = SCpubr::human_chr_locations
# Compute for all chromosomes.
p <- SCpubr::do_CopyNumberVariantPlot(sample = sample,
infercnv_object = infercnv_object,
using_metacells = FALSE,
chromosome_locations = chromosome_locations)
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
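A hedged sketch of grouping the accompanying dot plots by a metadata column, assuming the same example objects loaded above:
# Additional sketch: group the per-chromosome dot plots by cluster.
p <- SCpubr::do_CopyNumberVariantPlot(sample = sample,
infercnv_object = infercnv_object,
chromosome_locations = chromosome_locations,
group.by = "seurat_clusters",
using_metacells = FALSE)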
do_CorrelationPlot Create correlation matrix heatmaps.
Description
Create correlation matrix heatmaps.
Usage
do_CorrelationPlot(
sample = NULL,
input_gene_list = NULL,
cluster = TRUE,
remove.diagonal = TRUE,
mode = "hvg",
assay = NULL,
group.by = NULL,
legend.title = "Pearson coef.",
enforce_symmetry = ifelse(mode == "hvg", TRUE, FALSE),
font.size = 14,
font.type = "sans",
na.value = "grey75",
legend.width = 1,
legend.length = 20,
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.framecolor = "grey50",
legend.tickcolor = "white",
legend.type = "colorbar",
legend.position = "bottom",
min.cutoff = NA,
max.cutoff = NA,
number.breaks = 5,
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
diverging.palette = "RdBu",
diverging.direction = -1,
use_viridis = FALSE,
viridis.palette = "G",
viridis.direction = -1,
sequential.palette = "YlGnBu",
sequential.direction = 1,
axis.text.x.angle = 45,
grid.color = "white",
border.color = "black",
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
input_gene_list
named_list | Named list of lists of genes to be used as input.
cluster logical | Whether to cluster the elements in the heatmap or not.
remove.diagonal
logical | Whether to convert the diagonal to NA. Normally this value would be 1,
heavily shifting the color scale.
mode character | Different types of correlation matrices can be computed. Right
now, the only possible value is "hvg", standing for Highly Variable Genes. The
sample is subset for the HVG and the data is re-scaled. Scale data is used for the
correlation.
assay character | Assay to use. Defaults to the current assay.
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
legend.title character | Title for the legend.
enforce_symmetry
logical | Return a symmetrical plot axes-wise or continuous color scale-wise,
when applicable.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
na.value character | Color value for NA.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
min.cutoff, max.cutoff
numeric | Set the min/max ends of the color scale. Any cell/group with a value
lower than min.cutoff will turn into min.cutoff and any cell with a value higher
than max.cutoff will turn into max.cutoff. In FeaturePlots, provide as many
values as features. Use NAs to skip a feature.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
diverging.palette
character | Type of symmetrical color palette to use. Out of the diverging
palettes defined in brewer.pal.
diverging.direction
numeric | Either 1 or -1. Direction of the diverging palette. This basically flips
the two ends.
use_viridis logical | Whether to use viridis color scales.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
border.color character | Color for the border of the heatmap body.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_CorrelationPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Default values.
p <- SCpubr::do_CorrelationPlot(sample = sample)
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
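A hedged sketch of clipping the color scale via min.cutoff/max.cutoff and grouping by a metadata column; the cutoff values are purely illustrative:
# Additional sketch: constrain the Pearson coefficient color scale.
p <- SCpubr::do_CorrelationPlot(sample = sample,
group.by = "seurat_clusters",
min.cutoff = -0.5,
max.cutoff = 0.5)
p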
do_DimPlot Wrapper for DimPlot.
Description
Wrapper for DimPlot.
Usage
do_DimPlot(
sample,
reduction = NULL,
group.by = NULL,
split.by = NULL,
colors.use = NULL,
shuffle = TRUE,
order = NULL,
raster = FALSE,
pt.size = 1,
label = FALSE,
label.color = "black",
label.fill = "white",
label.size = 4,
label.box = TRUE,
repel = FALSE,
cells.highlight = NULL,
idents.highlight = NULL,
idents.keep = NULL,
sizes.highlight = 1,
ncol = NULL,
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
legend.title = NULL,
legend.position = "bottom",
legend.title.position = "top",
legend.ncol = NULL,
legend.nrow = NULL,
legend.icon.size = 4,
legend.byrow = FALSE,
raster.dpi = 2048,
dims = c(1, 2),
font.size = 14,
font.type = "sans",
na.value = "grey75",
plot_cell_borders = TRUE,
border.size = 2,
border.color = "black",
border.density = 1,
plot_marginal_distributions = FALSE,
marginal.type = "density",
marginal.size = 5,
marginal.group = TRUE,
plot.axes = FALSE,
plot_density_contour = FALSE,
contour.position = "bottom",
contour.color = "grey90",
contour.lineend = "butt",
contour.linejoin = "round",
contour_expand_axes = 0.25,
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
reduction character | Reduction to use. Can be the canonical ones such as "umap", "pca",
or any custom ones, such as "diffusion". If you are unsure about which re-
ductions you have, use Seurat::Reductions(sample). Defaults to "umap" if
present or to the last computed reduction if the argument is not provided.
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
split.by character | Secondary metadata variable to further group (split) the output by.
Has to be a character or factor column.
colors.use named_vector | Named vector of valid color representations (either name or
HEX codes) with as many named colors as unique values of group.by. If group.by
is not provided, defaults to the unique values of Idents. If not provided, a color
scale will be set by default.
shuffle logical | Whether to shuffle the cells or not, so that they are not plotted cluster-
wise. Recommended.
order character | Vector of identities to be plotted. Either one with all identities or
just some, which will be plotted last.
raster logical | Whether to raster the resulting plot. This is recommendable if plotting
a lot of cells.
pt.size numeric | Size of the dots.
label logical | Whether to plot the cluster labels in the UMAP. The cluster labels will
have the same color as the cluster colors.
label.color character | Color of the labels in the plot.
label.fill character | Color to fill the labels. Has to be a single color, that will be used
for all labels. If NULL, the colors of the clusters will be used instead.
label.size numeric | Size of the labels in the plot.
label.box logical | Whether to plot the plot labels as geom_text (FALSE) or geom_label
(TRUE).
repel logical | Whether to repel the text labels.
cells.highlight, idents.highlight
character | Vector of cells/identities to focus on. The identities have to match
those in Seurat::Idents(sample). The rest of the cells will be grayed out.
Both parameters can be used at the same time.
idents.keep character | Vector of identities to keep. This will effectively set the rest of the
cells that do not match the identities provided to NA, therefore coloring them
according to na.value parameter.
sizes.highlight
numeric | Point size of highlighted cells using cells.highlight parameter.
ncol numeric | Number of columns used in the arrangement of the output plot using
"split.by" parameter.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
legend.title character | Title for the legend.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.title.position
character | Position for the title of the legend. One of:
• top: Top of the legend.
• bottom: Bottom of the legend.
• left: Left of the legend.
• right: Right of the legend.
legend.ncol numeric | Number of columns in the legend.
legend.nrow numeric | Number of rows in the legend.
legend.icon.size
numeric | Size of the icons in legend.
legend.byrow logical | Whether the legend is filled by row or not.
raster.dpi numeric | Pixel resolution for rasterized plots. Defaults to 1024. Only activates
on Seurat versions higher or equal than 4.1.0.
dims numeric | Vector of 2 numerics indicating the dimensions to plot out of the
selected reduction. Defaults to c(1, 2) if not specified.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
na.value character | Color value for NA.
plot_cell_borders
logical | Whether to plot border around cells.
border.size numeric | Width of the border of the cells.
border.color character | Color for the border of the heatmap body.
border.density numeric | Controls the number of cells used when plot_cell_borders = TRUE.
Value between 0 and 1. It computes a 2D kernel density and, based on this, cells
with a density below the specified quantile will be used to generate the cluster
contour. The lower this number, the fewer cells are selected, thus reducing the
overall size of the plot but also potentially preventing all the contours from being
properly drawn.
plot_marginal_distributions
logical | Whether to plot marginal distributions on the figure or not.
marginal.type character | One of:
• density: Compute density plots on the margins.
• histogram: Compute histograms on the margins.
• boxplot: Compute boxplot on the margins.
• violin: Compute violin plots on the margins.
• densigram: Compute densigram plots on the margins.
marginal.size numeric | Size ratio between the main and marginal plots. A value of 5 means
that the main plot is 5 times bigger than the marginal plots.
marginal.group logical | Whether to group the marginal distribution by group.by or current
identities.
plot.axes logical | Whether to plot axes or not.
plot_density_contour
logical | Whether to plot density contours in the UMAP.
contour.position
character | Whether to plot density contours on top or at the bottom of the
visualization layers, thus overlapping the clusters/cells or not.
contour.color character | Color of the density lines.
contour.lineend
character | Line end style (round, butt, square).
contour.linejoin
character | Line join style (round, mitre, bevel).
contour_expand_axes
numeric | To make the contours fit the plot, the limits of the X and Y axes are
expanded by a given percentage from the min and max values of each axis. This
controls that percentage.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object containing a DimPlot.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_DimPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Basic DimPlot.
p <- SCpubr::do_DimPlot(sample = sample)
# Restrict the amount of identities displayed.
p <- SCpubr::do_DimPlot(sample = sample,
idents.keep = c("1", "3", "5"))
# Group by another variable rather than `Seurat::Idents(sample)`
p <- SCpubr::do_DimPlot(sample = sample,
group.by = "seurat_clusters")
# Split the output in as many plots as unique identities.
p <- SCpubr::do_DimPlot(sample = sample,
split.by = "seurat_clusters")
# Highlight given identities
p <- SCpubr::do_DimPlot(sample,
idents.highlight = c("1", "3"))
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
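Two further hedged sketches of the labelling and marginal-distribution options described above, assuming the same example object:
# Additional sketch: label the clusters directly on the embedding.
p <- SCpubr::do_DimPlot(sample = sample,
label = TRUE,
repel = TRUE,
legend.position = "none")
# Additional sketch: add marginal density distributions to a basic DimPlot.
p <- SCpubr::do_DimPlot(sample = sample,
plot_marginal_distributions = TRUE,
marginal.type = "density")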
do_DotPlot This function is a wrapper for DotPlot. It provides most of its
functionalities while adding extra.
Description
This function is a wrapper for DotPlot. It provides most of its functionalities while adding extra.
Usage
do_DotPlot(
sample,
features,
assay = NULL,
group.by = NULL,
scale = FALSE,
legend.title = NULL,
legend.type = "colorbar",
legend.position = "bottom",
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.length = 20,
legend.width = 1,
legend.framecolor = "grey50",
legend.tickcolor = "white",
colors.use = NULL,
dot.scale = 6,
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
xlab = NULL,
ylab = NULL,
font.size = 14,
font.type = "sans",
cluster = FALSE,
flip = FALSE,
axis.text.x.angle = 45,
scale.by = "size",
use_viridis = FALSE,
viridis.palette = "G",
viridis.direction = -1,
sequential.palette = "YlGnBu",
sequential.direction = 1,
na.value = "grey75",
dot_border = TRUE,
plot.grid = TRUE,
grid.color = "grey75",
grid.type = "dashed",
number.breaks = 5,
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
features character | Features to represent.
assay character | Assay to use. Defaults to the current assay.
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
scale logical | Whether the data should be scaled or not. Non-scaled data allows for
comparison across genes. Scaled data allows for an easier comparison along the
same gene.
legend.title character | Title for the legend.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
colors.use named_vector | Named vector of valid color representations (either name or
HEX codes) with as many named colors as unique values of group.by. If group.by
is not provided, defaults to the unique values of Idents. If not provided, a color
scale will be set by default.
dot.scale numeric | Scale the size of the dots.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
xlab, ylab character | Titles for the X and Y axis.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
cluster logical | Whether to cluster the identities based on the expression of the fea-
tures.
flip logical | Whether to invert the axis of the displayed plot.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
scale.by character | How to scale the size of the dots. One of:
• radius: use radius aesthetic.
• size: use size aesthetic.
use_viridis logical | Whether to use viridis color scales.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
na.value character | Color value for NA.
dot_border logical | Whether to plot a border around dots.
plot.grid logical | Whether to plot grid lines.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
grid.type character | One of the possible linetype options:
• blank.
• solid.
• dashed.
• dotted.
• dotdash.
• longdash.
• twodash.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object containing a Dot Plot.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_DotPlot", passive = TRUE)
if (isTRUE(value)){
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Basic Dot plot.
p <- SCpubr::do_DotPlot(sample = sample,
features = "EPC1")
# Querying multiple features.
genes <- rownames(sample)[1:14]
p <- SCpubr::do_DotPlot(sample = sample,
features = genes)
# Inverting the axes.
p <- SCpubr::do_DotPlot(sample = sample,
features = genes,
cluster = TRUE,
plot.title = "Clustered",
flip = TRUE)
# Modifying default colors.
# Two colors to generate a gradient.
p <- SCpubr::do_DotPlot(sample = sample,
features = genes,
colors.use = c("#001219", "#e9d8a6"))
# Querying multiple features as a named list - splitting by each item in list.
# Genes have to be unique.
genes <- list("Naive CD4+ T" = rownames(sample)[1:2],
"EPC1+ Mono" = rownames(sample)[3:4],
"Memory CD4+" = rownames(sample)[5],
"B" = rownames(sample)[6],
"CD8+ T" = rownames(sample)[7],
"FCGR3A+ Mono" = rownames(sample)[8:9],
"NK" = rownames(sample)[10:11],
"DC" = rownames(sample)[12:13],
"Platelet" = rownames(sample)[14])
p <- SCpubr::do_DotPlot(sample = sample,
features = genes)
# Clustering the identities.
p <- SCpubr::do_DotPlot(sample = sample,
features = genes,
cluster = TRUE,
plot.title = "Clustered")
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
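A hedged sketch of the dot_border and scale.by options, reusing the gene list defined in the examples above:
# Additional sketch: borderless dots scaled by radius with a viridis scale.
p <- SCpubr::do_DotPlot(sample = sample,
features = genes,
dot_border = FALSE,
scale.by = "radius",
use_viridis = TRUE,
viridis.palette = "G")
p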
do_EnrichmentHeatmap Create enrichment scores heatmaps.
Description
This function computes the enrichment scores for the cells using AddModuleScore, then aggregates
the scores by the metadata variables provided by the user and displays them as a heatmap, computed
by Heatmap.
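A minimal hedged usage sketch; the gene sets and the small nbin/ctrl values are illustrative, matching the reduced example dataset used elsewhere in this manual:
# Minimal sketch, assuming the example Seurat object and suggested packages.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
gene_set <- list("A" = rownames(sample)[1:10],
"B" = rownames(sample)[11:20])
p <- SCpubr::do_EnrichmentHeatmap(sample = sample,
input_gene_list = gene_set,
group.by = "seurat_clusters",
nbin = 1,
ctrl = 10)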
Usage
do_EnrichmentHeatmap(
sample,
input_gene_list,
features.order = NULL,
groups.order = NULL,
cluster = TRUE,
scale_scores = TRUE,
assay = NULL,
slot = NULL,
reduction = NULL,
group.by = NULL,
verbose = FALSE,
na.value = "grey75",
legend.position = "bottom",
use_viridis = FALSE,
viridis.palette = "G",
viridis.direction = 1,
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.length = 20,
legend.width = 1,
legend.framecolor = "grey50",
legend.tickcolor = "white",
legend.type = "colorbar",
font.size = 14,
font.type = "sans",
axis.text.x.angle = 45,
enforce_symmetry = FALSE,
nbin = 24,
ctrl = 100,
flavor = "Seurat",
legend.title = NULL,
ncores = 1,
storeRanks = TRUE,
min.cutoff = NA,
max.cutoff = NA,
pt.size = 1,
plot_cell_borders = TRUE,
border.size = 2,
return_object = FALSE,
number.breaks = 5,
sequential.palette = "YlGnBu",
diverging.palette = "RdBu",
diverging.direction = -1,
sequential.direction = 1,
flip = FALSE,
grid.color = "white",
border.color = "black",
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
input_gene_list
named_list | Named list of lists of genes to be used as input.
features.order character | Should the gene sets be ordered in a specific way? Provide it as a
vector of characters with the same names as the names of the gene sets.
groups.order named_list | Should the groups in the heatmaps be ordered in a specific way?
Provide it as a named list (as many lists as values in group.by) with the order
for each of the elements in the groups.
cluster logical | Whether to perform clustering of rows and columns.
scale_scores logical | Whether to transform the scores to a range of 0-1 for plotting.
assay character | Assay to use. Defaults to the current assay.
slot character | Data slot to use. Only one of: counts, data, scale.data. Defaults to
"data".
reduction character | Reduction to use. Can be the canonical ones such as "umap", "pca",
or any custom ones, such as "diffusion". If you are unsure about which re-
ductions you have, use Seurat::Reductions(sample). Defaults to "umap" if
present or to the last computed reduction if the argument is not provided.
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
verbose logical | Whether to show extra comments, warnings, etc.
na.value character | Color value for NA.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
use_viridis logical | Whether to use viridis color scales.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
enforce_symmetry
logical | Whether the geyser and feature plots have a symmetrical color scale.
nbin numeric | Number of bins to use in AddModuleScore.
ctrl numeric | Number of genes in the control set to use in AddModuleScore.
flavor character | One of: Seurat, UCell. Compute the enrichment scores using Ad-
dModuleScore or AddModuleScore_UCell.
legend.title character | Title for the legend.
ncores numeric | Number of cores used to run UCell scoring.
storeRanks logical | Whether to store the ranks for faster UCell scoring computations.
Might require large amounts of RAM.
min.cutoff, max.cutoff
numeric | Set the min/max ends of the color scale. Any cell/group with a value
lower than min.cutoff will turn into min.cutoff and any cell with a value higher
than max.cutoff will turn into max.cutoff. In FeaturePlots, provide as many
values as features. Use NAs to skip a feature.
pt.size numeric | Size of the dots.
plot_cell_borders
logical | Whether to plot a border around cells.
border.size numeric | Width of the border of the cells.
return_object logical | Return the Seurat object with the enrichment scores stored.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
diverging.palette
character | Type of symmetrical color palette to use. Out of the diverging
palettes defined in brewer.pal.
diverging.direction
numeric | Either 1 or -1. Direction of the diverging palette. This basically flips
the two ends.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
flip logical | Whether to invert the axis of the displayed plot.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
border.color character | Color for the border of the heatmap body.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_EnrichmentHeatmap", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Genes have to be unique.
genes <- list("A" = rownames(sample)[1:5],
"B" = rownames(sample)[6:10],
"C" = rownames(sample)[11:15])
# Default parameters.
p <- SCpubr::do_EnrichmentHeatmap(sample = sample,
input_gene_list = genes,
nbin = 1,
ctrl = 10)
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
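As an additional, hedged sketch of the grouping and color-capping arguments described above (group.by, min.cutoff, max.cutoff), reusing the example object and gene list; "seurat_clusters" is an assumption about the example object's metadata:
# Hedged sketch: aggregate the scores by an assumed metadata column and cap
# the color scale. Assumes `sample` and `genes` from the example above.
p <- SCpubr::do_EnrichmentHeatmap(sample = sample,
                                  input_gene_list = genes,
                                  group.by = "seurat_clusters",
                                  min.cutoff = 0.1,
                                  max.cutoff = 0.9,
                                  nbin = 1,
                                  ctrl = 10)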
do_ExpressionHeatmap Create heatmaps of averaged expression by groups.
Description
This function generates a heatmap with averaged expression values by the unique groups of the
metadata variables provided by the user.
Usage
do_ExpressionHeatmap(
sample,
features,
group.by = NULL,
assay = NULL,
cluster = TRUE,
features.order = NULL,
groups.order = NULL,
slot = "data",
legend.title = "Avg. Expression",
na.value = "grey75",
legend.position = "bottom",
legend.width = 1,
legend.length = 20,
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.framecolor = "grey50",
legend.tickcolor = "white",
legend.type = "colorbar",
font.size = 14,
font.type = "sans",
axis.text.x.angle = 45,
enforce_symmetry = FALSE,
min.cutoff = NA,
max.cutoff = NA,
diverging.palette = "RdBu",
diverging.direction = -1,
sequential.palette = "YlGnBu",
sequential.direction = 1,
number.breaks = 5,
use_viridis = FALSE,
viridis.palette = "G",
viridis.direction = -1,
flip = FALSE,
grid.color = "white",
border.color = "black",
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
features character | Features to represent.
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
assay character | Assay to use. Defaults to the current assay.
cluster logical | Whether to perform clustering of rows and columns.
features.order character | Should the gene sets be ordered in a specific way? Provide it as a
vector of characters with the same names as the names of the gene sets.
groups.order named_list | Should the groups in the heatmaps be ordered in a specific way?
Provide it as a named list (as many lists as values in group.by) with the order
for each of the elements in the groups.
slot character | Data slot to use. Only one of: counts, data, scale.data. Defaults to
"data".
legend.title character | Title for the legend.
na.value character | Color value for NA.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
enforce_symmetry
logical | Return a symmetrical plot axes-wise or continuous color scale-wise,
when applicable.
min.cutoff, max.cutoff
numeric | Set the min/max ends of the color scale. Any cell/group with a value
lower than min.cutoff will turn into min.cutoff and any cell with a value higher
than max.cutoff will turn into max.cutoff. In FeaturePlots, provide as many
values as features. Use NAs to skip a feature.
diverging.palette
character | Type of symmetrical color palette to use. Out of the diverging
palettes defined in brewer.pal.
diverging.direction
numeric | Either 1 or -1. Direction of the diverging palette. This basically flips
the two ends.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
use_viridis logical | Whether to use viridis color scales.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
flip logical | Whether to invert the axis of the displayed plot.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
border.color character | Color for the border of the heatmap body.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_ExpressionHeatmap", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Define list of genes.
genes <- rownames(sample)[1:10]
# Default parameters.
p <- SCpubr::do_ExpressionHeatmap(sample = sample,
features = genes,
viridis.direction = -1)
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
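A further hedged sketch of the grouping and layout arguments described above (group.by, flip), assuming the same example object and that it carries a "seurat_clusters" metadata column:
# Hedged sketch: group the averaged expression by an assumed metadata column
# and flip the heatmap axes. Assumes `sample` and `genes` from the example above.
p <- SCpubr::do_ExpressionHeatmap(sample = sample,
                                  features = genes,
                                  group.by = "seurat_clusters",
                                  flip = TRUE)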
do_FeaturePlot Wrapper for FeaturePlot.
Description
Wrapper for FeaturePlot.
Usage
do_FeaturePlot(
sample,
features,
assay = NULL,
reduction = NULL,
slot = NULL,
order = FALSE,
group.by = NULL,
group.by.colors.use = NULL,
group.by.legend = NULL,
group.by.show.dots = TRUE,
group.by.dot.size = 8,
group.by.cell_borders = FALSE,
group.by.cell_borders.alpha = 0.1,
split.by = NULL,
idents.keep = NULL,
cells.highlight = NULL,
idents.highlight = NULL,
dims = c(1, 2),
enforce_symmetry = FALSE,
symmetry.type = "absolute",
symmetry.center = NA,
pt.size = 1,
font.size = 14,
font.type = "sans",
legend.title = NULL,
legend.type = "colorbar",
legend.position = "bottom",
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.length = 20,
legend.width = 1,
legend.framecolor = "grey50",
legend.tickcolor = "white",
legend.ncol = NULL,
legend.nrow = NULL,
legend.byrow = FALSE,
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
individual.titles = NULL,
individual.subtitles = NULL,
individual.captions = NULL,
ncol = NULL,
use_viridis = FALSE,
viridis.palette = "G",
viridis.direction = 1,
raster = FALSE,
raster.dpi = 1024,
plot_cell_borders = TRUE,
border.size = 2,
border.color = "black",
border.density = 1,
na.value = "grey75",
verbose = TRUE,
plot.axes = FALSE,
min.cutoff = rep(NA, length(features)),
max.cutoff = rep(NA, length(features)),
plot_density_contour = FALSE,
contour.position = "bottom",
contour.color = "grey90",
contour.lineend = "butt",
contour.linejoin = "round",
contour_expand_axes = 0.25,
label = FALSE,
label.color = "black",
label.size = 4,
number.breaks = 5,
diverging.palette = "RdBu",
diverging.direction = -1,
sequential.palette = "YlGnBu",
sequential.direction = 1,
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
features character | Features to represent.
assay character | Assay to use. Defaults to the current assay.
reduction character | Reduction to use. Can be the canonical ones such as "umap", "pca",
or any custom ones, such as "diffusion". If you are unsure about which re-
ductions you have, use Seurat::Reductions(sample). Defaults to "umap" if
present or to the last computed reduction if the argument is not provided.
slot character | Data slot to use. Only one of: counts, data, scale.data. Defaults to
"data".
order logical | Whether to order the cells based on expression.
group.by character | Metadata variable based on which cells are grouped. This will ef-
fectively introduce a big dot in the center of each cluster, colored using a categor-
ical color scale or with the values provided by the user in group.by.colors.use.
It will also display a legend.
group.by.colors.use
character | Colors to use for the group dots.
group.by.legend
character | Title for the legend when group.by is used. Use NA to disable it
and NULL to use the default column title provided in group.by.
group.by.show.dots
logical | Controls whether to place the dots in the middle of the groups.
group.by.dot.size
numeric | Size of the dots placed in the middle of the groups.
group.by.cell_borders
logical | Plots another border around the cells displaying the same color code
of the dots displayed with group.by. Legend is shown always with alpha = 1
regardless of the alpha settings.
group.by.cell_borders.alpha
numeric | Controls the transparency of the new borders drawn by group.by.cell_borders.
split.by character | Secondary metadata variable to further group (split) the output by.
Has to be a character or factor column.
idents.keep character | Vector of identities to plot. The gradient scale will also be subset
to only the values of such identities.
cells.highlight, idents.highlight
character | Vector of cells/identities to focus on. The identities have to match
those in Seurat::Idents(sample). The rest of the cells will be grayed out.
Both parameters can be used at the same time.
dims numeric | Vector of 2 numerics indicating the dimensions to plot out of the
selected reduction. Defaults to c(1, 2) if not specified.
enforce_symmetry
logical | Return a symmetrical plot axes-wise or continuous color scale-wise,
when applicable.
symmetry.type character | Type of symmetry to be enforced. One of:
• absolute: The highest absolute value will be taken into a account to gen-
erate the color scale. Works after min.cutoff and max.cutoff.
• centered: Centers the scale around the provided value in symmetry.center.
Works after min.cutoff and max.cutoff.
symmetry.center
numeric | Value upon which the scale will be centered.
pt.size numeric | Size of the dots.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
legend.title character | Title for the legend.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
legend.ncol numeric | Number of columns in the legend.
legend.nrow numeric | Number of rows in the legend.
legend.byrow logical | Whether the legend is filled by row or not.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
individual.titles, individual.subtitles, individual.captions
character | Titles, subtitles or captions for each feature, if needed. Either NULL or a
vector of the same length as features.
ncol numeric | Number of columns used in the arrangement of the output plot using
"split.by" parameter.
use_viridis logical | Whether to use viridis color scales.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
raster logical | Whether to raster the resulting plot. This is recommendable if plotting
a lot of cells.
raster.dpi numeric | Pixel resolution for rasterized plots. Defaults to 1024. Only activates
on Seurat versions higher or equal than 4.1.0.
plot_cell_borders
logical | Whether to plot a border around cells.
border.size numeric | Width of the border of the cells.
border.color character | Color for the border of the heatmap body.
border.density numeric | Controls the number of cells used when plot_cell_borders = TRUE.
Value between 0 and 1. It computes a 2D kernel density and based on this cells
that have a density below the specified quantile will be used to generate the clus-
ter contour. The lower this number, the fewer cells will be selected, thus reducing
the overall size of the plot but also potentially preventing all the contours from
being drawn properly.
na.value character | Color value for NA.
verbose logical | Whether to show extra comments, warnings, etc.
plot.axes logical | Whether to plot axes or not.
min.cutoff, max.cutoff
numeric | Set the min/max ends of the color scale. Any cell/group with a value
lower than min.cutoff will turn into min.cutoff and any cell with a value higher
than max.cutoff will turn into max.cutoff. In FeaturePlots, provide as many
values as features. Use NAs to skip a feature.
plot_density_contour
logical | Whether to plot density contours in the UMAP.
contour.position
character | Whether to plot density contours on top or at the bottom of the
visualization layers, thus overlapping the clusters/cells or not.
contour.color character | Color of the density lines.
contour.lineend
character | Line end style (round, butt, square).
contour.linejoin
character | Line join style (round, mitre, bevel).
contour_expand_axes
numeric | To make the contours fit the plot, the limits of the X and Y axis are
expanded by a given percentage from the min and max values for each axis. This
controls such percentage.
label logical | Whether to plot the cluster labels in the UMAP. The cluster labels will
have the same color as the cluster colors.
label.color character | Color of the labels in the plot.
label.size numeric | Size of the labels in the plot.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
diverging.palette
character | Type of symmetrical color palette to use. Out of the diverging
palettes defined in brewer.pal.
diverging.direction
numeric | Either 1 or -1. Direction of the diverging palette. This basically flips
the two ends.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object containing a Feature Plot.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_FeaturePlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Regular FeaturePlot.
p <- SCpubr::do_FeaturePlot(sample = sample,
features = "nCount_RNA")
# FeaturePlot with a subset of identities
# (in Seurat::Idents(sample)) maintaining the original UMAP shape.
idents.use <- levels(sample)[!(levels(sample) %in% c("2", "5", "8"))]
p <- SCpubr::do_FeaturePlot(sample = sample,
idents.highlight = idents.use,
features = c("EPC1"))
# Splitting the FeaturePlot by a variable and
# maintaining the color scale and the UMAP shape.
p <- SCpubr::do_FeaturePlot(sample = sample,
features = "EPC1",
split.by = "seurat_clusters")
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
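To illustrate the per-feature behaviour of min.cutoff and max.cutoff described above, here is a minimal, hedged sketch; "nCount_RNA" and "nFeature_RNA" are assumed to be metadata columns of the example object:
# Hedged sketch: provide one cutoff value per feature; NA skips the cutoff for
# that feature. Assumes `sample` from the examples above.
p <- SCpubr::do_FeaturePlot(sample = sample,
                            features = c("nCount_RNA", "nFeature_RNA"),
                            min.cutoff = c(NA, 200),
                            max.cutoff = c(10000, NA))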
do_FunctionalAnnotationPlot
Compute functional annotation plots using GO or KEGG ontologies
Description
Compute functional annotation plots using GO or KEGG ontologies
Usage
do_FunctionalAnnotationPlot(
genes,
org.db,
organism = "hsa",
database = "GO",
GO_ontology = "BP",
min.overlap = NULL,
p.adjust.cutoff = 0.05,
pAdjustMethod = "BH",
minGSSize = 10,
maxGSSize = 500,
font.size = 10,
font.type = "sans",
axis.text.x.angle = 45,
xlab = NULL,
ylab = NULL,
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
legend.type = "colorbar",
legend.position = "bottom",
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.length = 10,
legend.width = 1,
legend.framecolor = "grey50",
legend.tickcolor = "white",
number.breaks = 5,
return_matrix = FALSE,
grid.color = "white",
border.color = "black",
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
genes character | Vector of gene symbols to query for functional annotation.
org.db OrgDB | Database object to use for the query.
organism character | Supported KEGG organism.
database character | Database to run the analysis on. One of:
• GO.
• KEGG.
GO_ontology character | GO ontology to use. One of:
• BP: For Biological Process.
• MF: For Molecular Function.
• CC: For Cellular Component.
min.overlap numeric | Filter the output result to the terms which are supported by this many
genes.
p.adjust.cutoff
numeric | Significance cutoff used to filter non-significant terms.
pAdjustMethod character | Method to adjust for multiple testing. One of:
• holm.
• hochberg.
• hommel.
• bonferroni.
• BH.
• BY.
• fdr.
• none.
minGSSize numeric | Minimal size of genes annotated by Ontology term for testing.
maxGSSize numeric | Maximal size of genes annotated for testing.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
xlab, ylab character | Titles for the X and Y axis.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
return_matrix logical | Returns the matrices with the enriched Terms for further use.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
border.color character | Color for the border of the heatmap body.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A list containing a heatmap of the presence/absence of the genes in the enriched term, as well as a
bar plot, dot plot and tree plot of the enriched terms.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_FunctionalAnnotationPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Need to load this library or equivalent.
suppressMessages(library("org.Hs.eg.db"))
# Define list of genes to query.
genes.use <- c("CCR7", "CD14", "LYZ",
"S100A4", "MS4A1",
"MS4A7", "GNLY", "NKG7", "FCER1A",
"CST3", "PPBP")
# Compute the grouped GO terms.
out <- SCpubr::do_FunctionalAnnotationPlot(genes = genes.use,
org.db = org.Hs.eg.db)
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
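A hedged sketch of the database and filtering arguments described above (database, organism, p.adjust.cutoff, min.overlap), reusing the gene list from the example:
# Hedged sketch: run the annotation against KEGG instead of GO and relax the
# filtering. Assumes `genes.use` and org.Hs.eg.db from the example above.
out <- SCpubr::do_FunctionalAnnotationPlot(genes = genes.use,
                                           org.db = org.Hs.eg.db,
                                           database = "KEGG",
                                           organism = "hsa",
                                           p.adjust.cutoff = 0.1,
                                           min.overlap = 3)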
do_GeyserPlot Generate a Geyser plot.
Description
A Geyser plot is a custom plot in which continuous values are plotted on the Y axis, grouped by a
categorical value on the X axis. This is plotted as a dot plot, jittered so that the dots span all the way
to the other groups. On top of this, the mean and the 0.66 and 0.95 quantiles of the data are plotted,
depicting the overall distribution of the dots. The cells can, then, be colored by a continuous variable (same as Y
axis or different) or a categorical one (same as X axis or different).
Usage
do_GeyserPlot(
sample,
features,
assay = NULL,
slot = "data",
group.by = NULL,
split.by = NULL,
enforce_symmetry = FALSE,
scale_type = "continuous",
order = TRUE,
plot_cell_borders = TRUE,
jitter = 0.45,
pt.size = 1,
border.size = 2,
border.color = "black",
legend.position = "bottom",
legend.width = 1,
legend.length = 20,
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.framecolor = "grey50",
legend.tickcolor = "white",
legend.type = "colorbar",
font.size = 14,
font.type = "sans",
axis.text.x.angle = 45,
viridis.palette = "G",
viridis.direction = 1,
colors.use = NULL,
na.value = "grey75",
legend.ncol = NULL,
legend.nrow = NULL,
legend.icon.size = 4,
legend.byrow = FALSE,
legend.title = NULL,
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
xlab = "Groups",
ylab = feature,
flip = FALSE,
min.cutoff = rep(NA, length(features)),
max.cutoff = rep(NA, length(features)),
number.breaks = 5,
diverging.palette = "RdBu",
diverging.direction = -1,
sequential.palette = "YlGnBu",
sequential.direction = -1,
use_viridis = TRUE,
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
features character | Features to represent.
assay character | Assay to use. Defaults to the current assay.
slot character | Data slot to use. Only one of: counts, data, scale.data. Defaults to
"data".
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
split.by character | Secondary metadata variable to further group (split) the output by.
Has to be a character or factor column.
enforce_symmetry
logical | Return a symmetrical plot axes-wise or continuous color scale-wise,
when applicable.
scale_type character | Type of color scale to use. One of:
• categorical: Use a categorical color scale based on the values of "group.by".
• continuous: Use a continuous color scale based on the values of "feature".
order logical | Whether to order the groups by the median of the data (highest to
lowest).
plot_cell_borders
logical | Whether to plot a border around cells.
jitter numeric | Amount of jitter in the plot along the X axis. The lower the value, the
more compacted the dots are.
pt.size numeric | Size of the dots.
border.size numeric | Width of the border of the cells.
border.color character | Color for the border of the heatmap body.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
colors.use character | Named vector of colors to use. Has to match the unique values of
group.by when scale_type is set to categorical.
na.value character | Color value for NA.
legend.ncol numeric | Number of columns in the legend.
legend.nrow numeric | Number of rows in the legend.
legend.icon.size
numeric | Size of the icons in legend.
legend.byrow logical | Whether the legend is filled by row or not.
legend.title character | Title for the legend.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
xlab, ylab character | Titles for the X and Y axis.
flip logical | Whether to invert the axis of the displayed plot.
min.cutoff, max.cutoff
numeric | Set the min/max ends of the color scale. Any cell/group with a value
lower than min.cutoff will turn into min.cutoff and any cell with a value higher
than max.cutoff will turn into max.cutoff. In FeaturePlots, provide as many
values as features. Use NAs to skip a feature.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
diverging.palette
character | Type of symmetrical color palette to use. Out of the diverging
palettes defined in brewer.pal.
diverging.direction
numeric | Either 1 or -1. Direction of the diverging palette. This basically flips
the two ends.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
use_viridis logical | Whether to use viridis color scales.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Details
Special thanks to <NAME> for coming up with the name of the plot.
Value
Either a plot or a list of plots, depending on the number of features provided.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_GeyserPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Geyser plot with categorical color scale.
p <- SCpubr::do_GeyserPlot(sample = sample,
features = "nCount_RNA",
scale_type = "categorical")
p
# Geyser plot with continuous color scale.
p <- SCpubr::do_GeyserPlot(sample = sample,
features = "nCount_RNA",
scale_type = "continuous")
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
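As a further hedged sketch of the grouping and layout arguments described above (group.by, flip), assuming the example object has a "seurat_clusters" metadata column:
# Hedged sketch: group the geyser by an assumed metadata column and flip the axes.
# Assumes `sample` from the examples above.
p <- SCpubr::do_GeyserPlot(sample = sample,
                           features = "nCount_RNA",
                           group.by = "seurat_clusters",
                           scale_type = "continuous",
                           flip = TRUE)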
do_GroupedGOTermPlot Compute an overview of the GO terms associated with the input list of
genes.
Description
Compute an overview of the GO terms associated with the input list of genes.
Usage
do_GroupedGOTermPlot(
genes,
org.db,
levels.use = NULL,
GO_ontology = "BP",
min.overlap = 3,
flip = TRUE,
colors.use = c(Present = "#1e3d59", Absent = "#bccbcd"),
legend.position = "bottom",
reverse.levels = TRUE,
axis.text.x.angle = 45,
font.size = 10,
font.type = "sans",
plot.title = paste0("GO | ", GO_ontology),
plot.subtitle = NULL,
plot.caption = NULL,
verbose = FALSE,
return_matrices = FALSE,
grid.color = "white",
border.color = "black",
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
genes character | Vector of gene symbols to query for functional annotation.
org.db OrgDB | Database object to use for the query.
levels.use numeric | Vector of numerics corresponding to the GO ontology levels to plot.
If NULL will compute all recursively until there are no results.
GO_ontology character | GO ontology to use. One of:
• BP: For Biological Process.
• MF: For Molecular Function.
• CC: For Cellular Component.
min.overlap numeric | Filter the output result to the terms which are supported by this many
genes.
flip logical | Whether to invert the axis of the displayed plot.
colors.use character | Named vector with two colors assigned to the names Present and
Absent.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
reverse.levels logical | Whether to place the higher levels first when computing the joint
heatmap.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
verbose logical | Whether to show extra comments, warnings, etc.
return_matrices
logical | Returns the matrices of grouped GO terms.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
border.color character | Color for the border of the heatmap body.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A list containing all the matrices for the respective GO levels and all the individual and combined
heatmaps.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_GroupedGOTermPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Need to load this library or equivalent.
suppressMessages(library("org.Hs.eg.db"))
# Define list of genes to query.
genes.use <- c("CCR7", "CD14", "LYZ",
"S100A4", "MS4A1",
"MS4A7", "GNLY", "NKG7", "FCER1A",
"CST3", "PPBP")
# Compute the grouped GO terms.
out <- SCpubr::do_GroupedGOTermPlot(genes = genes.use,
org.db = org.Hs.eg.db)
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
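A hedged sketch of the ontology and level-selection arguments described above (GO_ontology, levels.use), reusing the same gene list:
# Hedged sketch: query Molecular Function terms and restrict to the first three
# GO levels. Assumes `genes.use` and org.Hs.eg.db from the example above.
out <- SCpubr::do_GroupedGOTermPlot(genes = genes.use,
                                    org.db = org.Hs.eg.db,
                                    GO_ontology = "MF",
                                    levels.use = c(1, 2, 3))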
do_GroupwiseDEPlot Compute a heatmap with the results of a group-wise DE analysis.
Description
Compute a heatmap with the results of a group-wise DE analysis.
Usage
do_GroupwiseDEPlot(
sample,
de_genes,
group.by = NULL,
number.breaks = 5,
top_genes = 5,
use_viridis = FALSE,
viridis.direction = -1,
viridis.palette.pvalue = "C",
viridis.palette.logfc = "E",
viridis.palette.expression = "G",
sequential.direction = 1,
sequential.palette.pvalue = "YlGn",
sequential.palette.logfc = "YlOrRd",
sequential.palette.expression = "YlGnBu",
assay = NULL,
slot = "data",
legend.position = "bottom",
legend.width = 1,
legend.length = 20,
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.framecolor = "grey50",
legend.tickcolor = "white",
legend.type = "colorbar",
font.size = 14,
font.type = "sans",
axis.text.x.angle = 45,
min.cutoff = NA,
max.cutoff = NA,
na.value = "grey75",
grid.color = "white",
border.color = "black",
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
de_genes tibble | DE genes matrix resulting from running Seurat::FindAllMarkers().
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
top_genes numeric | Top N differentially expressed (DE) genes by group to retrieve.
use_viridis logical | Whether to use viridis color scales.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
viridis.palette.pvalue, viridis.palette.logfc, viridis.palette.expression
character | Viridis color palettes for the p-value, logfc and expression heatmaps.
A capital letter from A to H or the scale name as in scale_fill_viridis.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
sequential.palette.pvalue, sequential.palette.expression, sequential.palette.logfc
character | Sequential palettes for p-value, logfc and expression heatmaps.
Type of sequential color palette to use. Out of the sequential palettes defined
in brewer.pal.
assay character | Assay to use. Defaults to the current assay.
slot character | Data slot to use. Only one of: counts, data, scale.data. Defaults to
"data".
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
min.cutoff, max.cutoff
numeric | Set the min/max ends of the color scale. Any cell/group with a value
lower than min.cutoff will turn into min.cutoff and any cell with a value higher
than max.cutoff will turn into max.cutoff. In FeaturePlots, provide as many
values as features. Use NAs to skip a feature.
na.value character | Color value for NA.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
border.color character | Color for the border of the heatmap body.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A heatmap composed of 3 main panels: -log10(adjusted p-value), log2(FC) and mean expression
by cluster.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_GroupwiseDEPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Compute DE genes and transform to a tibble.
de_genes <- readRDS(system.file("extdata/de_genes_example.rds", package = "SCpubr"))
# Default output.
p <- SCpubr::do_GroupwiseDEPlot(sample = sample,
de_genes = de_genes)
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
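A hedged sketch of the top_genes and color-palette arguments described above, reusing the example objects:
# Hedged sketch: retrieve more DE genes per group and switch the heatmaps to
# viridis palettes. Assumes `sample` and `de_genes` from the example above.
p <- SCpubr::do_GroupwiseDEPlot(sample = sample,
                                de_genes = de_genes,
                                top_genes = 10,
                                use_viridis = TRUE)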
do_NebulosaPlot Wrapper for Nebulosa::plot_density in Seurat.
Description
Wrapper for Nebulosa::plot_density in Seurat.
Usage
do_NebulosaPlot(
sample,
features,
slot = NULL,
dims = c(1, 2),
pt.size = 1,
reduction = NULL,
combine = TRUE,
method = c("ks", "wkde"),
joint = FALSE,
return_only_joint = FALSE,
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
legend.type = "colorbar",
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.length = 20,
legend.width = 1,
legend.framecolor = "grey50",
legend.tickcolor = "white",
font.size = 14,
font.type = "sans",
legend.position = "bottom",
plot_cell_borders = TRUE,
border.size = 2,
border.color = "black",
viridis.palette = "G",
viridis.direction = 1,
verbose = TRUE,
na.value = "grey75",
plot.axes = FALSE,
number.breaks = 5,
use_viridis = FALSE,
sequential.palette = "YlGnBu",
sequential.direction = 1,
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
features character | Features to represent.
slot character | Data slot to use. Only one of: counts, data, scale.data. Defaults to
"data".
dims numeric | Vector of 2 numerics indicating the dimensions to plot out of the
selected reduction. Defaults to c(1, 2) if not specified.
pt.size numeric | Size of the dots.
reduction character | Reduction to use. Can be the canonical ones such as "umap", "pca",
or any custom ones, such as "diffusion". If you are unsure about which re-
ductions you have, use Seurat::Reductions(sample). Defaults to "umap" if
present or to the last computed reduction if the argument is not provided.
combine logical | Whether to create a single plot out of multiple features.
method Kernel density estimation method:
• ks: Computes density using the kde function from the ks package.
• wkde: Computes density using a modified version of the kde2d function
from the MASS package to allow weights. Bandwidth selection from the ks
package is used instead.
joint logical | Whether to plot different features as joint density.
return_only_joint
logical | Whether to only return the joint density panel.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
plot_cell_borders
logical | Whether to plot a border around cells.
border.size numeric | Width of the border of the cells.
border.color character | Color for the border of the heatmap body.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
verbose logical | Whether to show extra comments, warnings, etc.
na.value character | Color value for NA.
plot.axes logical | Whether to plot axes or not.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
use_viridis logical | Whether to use viridis color scales.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object containing a Nebulosa plot.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_NebulosaPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Basic Nebulosa plot.
p <- SCpubr::do_NebulosaPlot(sample = sample,
features = "EPC1")
# Compute joint density.
p <- SCpubr::do_NebulosaPlot(sample = sample,
features = c("EPC1", "TOX2"),
joint = TRUE)
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
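To illustrate return_only_joint as described above, a minimal hedged sketch reusing the same features:
# Hedged sketch: compute the joint density of two features and keep only the
# joint panel. Assumes `sample` from the example above.
p <- SCpubr::do_NebulosaPlot(sample = sample,
                             features = c("EPC1", "TOX2"),
                             joint = TRUE,
                             return_only_joint = TRUE)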
do_PathwayActivityPlot
Plot Pathway Activities from decoupleR using Progeny prior knowl-
edge.
Description
Plot Pathway Activities from decoupleR using Progeny prior knowledge.
Usage
do_PathwayActivityPlot(
sample,
activities,
group.by = NULL,
split.by = NULL,
slot = "scale.data",
statistic = "norm_wmean",
pt.size = 1,
border.size = 2,
na.value = "grey75",
legend.position = "bottom",
legend.width = 1,
legend.length = 20,
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.framecolor = "grey50",
legend.tickcolor = "white",
legend.type = "colorbar",
font.size = 14,
font.type = "sans",
axis.text.x.angle = 45,
enforce_symmetry = TRUE,
min.cutoff = NA,
max.cutoff = NA,
number.breaks = 5,
diverging.palette = "RdBu",
diverging.direction = -1,
use_viridis = FALSE,
viridis.palette = "G",
viridis.direction = -1,
sequential.palette = "YlGnBu",
sequential.direction = 1,
flip = FALSE,
return_object = FALSE,
grid.color = "white",
border.color = "black",
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
activities tibble | Result of running decoupleR method with progeny regulon prior knowl-
edge.
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
split.by character | Secondary metadata variable to further group (split) the output by.
Has to be a character or factor column.
slot character | Data slot to use. Only one of: counts, data, scale.data. Defaults to
"data".
statistic character | DecoupleR statistic to use. One of:
• wmean: For weighted mean.
• norm_wmean: For normalized weighted mean.
• corr_wmean: For corrected weighted mean.
pt.size numeric | Size of the dots.
border.size numeric | Width of the border of the cells.
na.value character | Color value for NA.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
enforce_symmetry
logical | Return a symmetrical plot axes-wise or continuous color scale-wise,
when applicable.
min.cutoff, max.cutoff
numeric | Set the min/max ends of the color scale. Any cell/group with a value
lower than min.cutoff will turn into min.cutoff and any cell with a value higher
than max.cutoff will turn into max.cutoff. In FeaturePlots, provide as many
values as features. Use NAs to skip a feature.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
diverging.palette
character | Type of symmetrical color palette to use. Out of the diverging
palettes defined in brewer.pal.
diverging.direction
numeric | Either 1 or -1. Direction of the diverging palette. This basically flips
the two ends.
use_viridis logical | Whether to use viridis color scales.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
flip logical | Whether to invert the axis of the displayed plot.
return_object logical | Returns the Seurat object with the modifications performed in the
function. Normally, this contains a new assay with the data that can then be used
for any other visualization desired.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
border.color character | Color for the border of the heatmap body.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_PathwayActivityPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds",
package = "SCpubr"))
# Define your activities object.
progeny_activities <- readRDS(system.file("extdata/progeny_activities_example.rds",
package = "SCpubr"))
# General heatmap.
out <- SCpubr::do_PathwayActivityPlot(sample = sample,
activities = progeny_activities)
p <- out$heatmaps$average_scores
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
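A hedged sketch of the statistic and grouping arguments described above, reusing the example objects; "seurat_clusters" is assumed to be a valid metadata column of the example object:
# Hedged sketch: use a different decoupleR statistic and group the output by an
# assumed metadata column. Assumes `sample` and `progeny_activities` from the
# example above.
out <- SCpubr::do_PathwayActivityPlot(sample = sample,
                                      activities = progeny_activities,
                                      statistic = "corr_wmean",
                                      group.by = "seurat_clusters")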
do_RidgePlot Create ridge plots.
Description
This function computes ridge plots based on the ggridges package.
Usage
do_RidgePlot(
sample,
feature,
group.by = NULL,
split.by = NULL,
assay = "SCT",
slot = "data",
continuous_scale = FALSE,
legend.title = NULL,
legend.ncol = NULL,
legend.nrow = NULL,
legend.byrow = FALSE,
legend.position = NULL,
legend.width = 1,
legend.length = 20,
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.framecolor = "grey50",
legend.tickcolor = "white",
legend.type = "colorbar",
colors.use = NULL,
font.size = 14,
font.type = "sans",
axis.text.x.angle = 45,
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
xlab = NULL,
ylab = NULL,
compute_quantiles = FALSE,
compute_custom_quantiles = FALSE,
quantiles = c(0.25, 0.5, 0.75),
compute_distribution_tails = FALSE,
prob_tails = 0.025,
color_by_probabilities = FALSE,
use_viridis = TRUE,
viridis.palette = "G",
viridis.direction = 1,
sequential.palette = "YlGnBu",
sequential.direction = 1,
plot.grid = TRUE,
grid.color = "grey75",
grid.type = "dashed",
flip = FALSE,
number.breaks = 5,
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
feature character | Feature to represent.
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
split.by character | Secondary metadata variable to further group (split) the output by.
Has to be a character or factor column.
assay character | Assay to use. Defaults to the current assay.
slot character | Data slot to use. Only one of: counts, data, scale.data. Defaults to
"data".
continuous_scale
logical | Whether to color the ridges depending on a categorical or continuous
scale.
legend.title character | Title for the legend.
legend.ncol numeric | Number of columns in the legend.
legend.nrow numeric | Number of rows in the legend.
legend.byrow logical | Whether the legend is filled by row or not.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
colors.use character | Named vector of colors to use. Has to match the unique values of
group.by or color.by (if used) when scale_type is set to categorical.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
xlab, ylab character | Titles for the X and Y axis.
compute_quantiles
logical | Whether to compute quantiles of the distribution and color the ridge
plots by them.
compute_custom_quantiles
logical | Whether to compute custom quantiles.
quantiles numeric | Numeric vector of quantiles.
compute_distribution_tails
logical | Whether to compute distribution tails and color them.
prob_tails numeric | The accumulated probability that the tails should contain.
color_by_probabilities
logical | Whether to color the ridges depending on the probability.
use_viridis logical | Whether to use viridis color scales.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
plot.grid logical | Whether to plot grid lines.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
grid.type character | One of the possible linetype options:
• blank.
• solid.
• dashed.
• dotted.
• dotdash.
• longdash.
• twodash.
flip logical | Whether to invert the axis of the displayed plot.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_RidgePlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Compute the most basic ridge plot.
p <- SCpubr::do_RidgePlot(sample = sample,
feature = "nFeature_RNA")
p
# Use continuous color scale.
p <- SCpubr::do_RidgePlot(sample = sample,
feature = "nFeature_RNA",
continuous_scale = TRUE,
viridis.direction = 1)
p
# Draw quantiles of the distribution.
p <- SCpubr::do_RidgePlot(sample = sample,
feature = "nFeature_RNA",
continuous_scale = TRUE,
compute_quantiles = TRUE,
compute_custom_quantiles = TRUE)
p
# Draw probability tails.
p <- SCpubr::do_RidgePlot(sample = sample,
feature = "nFeature_RNA",
continuous_scale = TRUE,
compute_quantiles = TRUE,
compute_distribution_tails = TRUE)
p
# Color the ridges by probabilities.
p <- SCpubr::do_RidgePlot(sample = sample,
feature = "nFeature_RNA",
continuous_scale = TRUE,
compute_quantiles = TRUE,
color_by_probabilities = TRUE)
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
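A further hedged sketch (not part of the original manual): grouping the ridges by a metadata column via group.by; the column name "seurat_clusters" is an assumption about the example object.
# Hedged sketch: group ridges by an assumed metadata column.
p <- SCpubr::do_RidgePlot(sample = sample,
                          feature = "nFeature_RNA",
                          group.by = "seurat_clusters")
p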
do_TermEnrichmentPlot Display the enriched terms for a given list of genes.
Description
Display the enriched terms for a given list of genes.
Usage
do_TermEnrichmentPlot(
enriched_terms,
nchar_wrap = 20,
nterms = 10,
font.size = 14,
font.type = "sans",
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
legend.position = "bottom",
legend.type = "colorbar",
colors.use = NULL,
text_labels_size = 4,
legend.length = 30,
legend.width = 1,
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.framecolor = "grey50",
legend.tickcolor = "white",
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
enriched_terms list | List containing the output(s) of running Enrichr.
nchar_wrap numeric | Number of characters to use as a limit to wrap the term names. The
higher this value, the longer the lines would be for each term in the plots. De-
faults to 60.
nterms numeric | Number of terms to report for each database. Terms are arranged by
adjusted p-value and selected from lowest to highest. Defaults to 5.
• Enrichr.
• FlyEnrichr.
• WormEnrichr.
• YeastEnrichr.
• FishEnrichr.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
colors.use character | Character vector of 2 colors (low and high ends of the color scale)
to generate the gradient.
text_labels_size
numeric | Controls how big or small labels are in the plot.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object with enriched terms.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_TermEnrichmentPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your enriched terms.
enriched_terms <- readRDS(system.file("extdata/enriched_terms_example.rds", package = "SCpubr"))
enriched_terms$GO_Cellular_Component_2021 <- NULL
enriched_terms$Azimuth_Cell_Types_2021 <- NULL
# Default plot.
p <- SCpubr::do_TermEnrichmentPlot(enriched_terms = enriched_terms)
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
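A hedged sketch (not part of the original manual) of tuning the reporting arguments documented above; the values are arbitrary illustrations.
# Hedged sketch: report more terms per database and wrap names later.
p <- SCpubr::do_TermEnrichmentPlot(enriched_terms = enriched_terms,
                                   nterms = 15,
                                   nchar_wrap = 40)
p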
do_TFActivityPlot Plot TF Activities from decoupleR using Dorothea prior knowledge.
Description
Plot TF Activities from decoupleR using Dorothea prior knowledge.
Usage
do_TFActivityPlot(
sample,
activities,
n_tfs = 25,
slot = "scale.data",
statistic = "norm_wmean",
tfs.use = NULL,
group.by = NULL,
split.by = NULL,
na.value = "grey75",
legend.position = "bottom",
legend.width = 1,
legend.length = 20,
legend.framewidth = 0.5,
legend.tickwidth = 0.5,
legend.framecolor = "grey50",
legend.tickcolor = "white",
legend.type = "colorbar",
font.size = 14,
font.type = "sans",
axis.text.x.angle = 45,
enforce_symmetry = TRUE,
diverging.palette = "RdBu",
diverging.direction = -1,
use_viridis = FALSE,
viridis.palette = "G",
viridis.direction = -1,
sequential.palette = "YlGnBu",
sequential.direction = 1,
min.cutoff = NA,
max.cutoff = NA,
number.breaks = 5,
flip = FALSE,
return_object = FALSE,
grid.color = "white",
border.color = "black",
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
activities tibble | Result of running decoupleR method with dorothea regulon prior knowl-
edge.
n_tfs numeric | Number of top regulons to consider for downstream analysis.
slot character | Data slot to use. Only one of: counts, data, scale.data. Defaults to
"scale.data".
statistic character | DecoupleR statistic to use. One of:
• wmean: For weighted mean.
• norm_wmean: For normalized weighted mean.
• corr_wmean: For corrected weighted mean.
tfs.use character | Restrict the analysis to given regulons.
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
split.by character | Secondary metadata variable to further group (split) the output by.
Has to be a character or factor column.
na.value character | Color value for NA.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
legend.length, legend.width
numeric | Length and width of the legend. Will adjust automatically depending
on legend side.
legend.framewidth, legend.tickwidth
numeric | Width of the lines of the box in the legend.
legend.framecolor
character | Color of the lines of the box in the legend.
legend.tickcolor
character | Color of the ticks of the box in the legend.
legend.type character | Type of legend to display. One of:
• normal: Default legend displayed by ggplot2.
• colorbar: Redefined colorbar legend, using guide_colorbar.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
enforce_symmetry
logical | Whether the geyser and feature plot has a symmetrical color scale.
diverging.palette
character | Type of symmetrical color palette to use. Out of the diverging
palettes defined in brewer.pal.
diverging.direction
numeric | Either 1 or -1. Direction of the diverging palette. This basically flips
the two ends.
use_viridis logical | Whether to use viridis color scales.
viridis.palette
character | A capital letter from A to H or the scale name as in scale_fill_viridis.
viridis.direction
numeric | Either 1 or -1. Controls how the gradient of viridis scale is formed.
sequential.palette
character | Type of sequential color palette to use. Out of the sequential
palettes defined in brewer.pal.
sequential.direction
numeric | Direction of the sequential color scale. Either 1 or -1.
min.cutoff, max.cutoff
numeric | Set the min/max ends of the color scale. Any cell/group with a value
lower than min.cutoff will turn into min.cutoff and any cell with a value higher
than max.cutoff will turn into max.cutoff. In FeaturePlots, provide as many
values as features. Use NAs to skip a feature.
number.breaks numeric | Controls the number of breaks in continuous color scales of ggplot2-
based plots.
flip logical | Whether to invert the axis of the displayed plot.
return_object logical | Returns the Seurat object with the modifications performed in the
function. Normally, this contains a new assay with the data that can then be used
for any other visualization desired.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
border.color character | Color for the border of the heatmap body.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_TFActivityPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds",
package = "SCpubr"))
# Define your activities object.
dorothea_activities <- readRDS(system.file("extdata/dorothea_activities_example.rds",
package = "SCpubr"))
# General heatmap.
out <- SCpubr::do_TFActivityPlot(sample = sample,
activities = dorothea_activities)
p <- out$heatmaps$average_scores
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
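A hedged sketch (not part of the original manual): restricting the analysis to fewer regulons and capping the color scale with min.cutoff/max.cutoff; the cutoff values are arbitrary illustrations.
# Hedged sketch: top 10 regulons with a capped, symmetric color scale.
out <- SCpubr::do_TFActivityPlot(sample = sample,
                                 activities = dorothea_activities,
                                 n_tfs = 10,
                                 min.cutoff = -2,
                                 max.cutoff = 2)
p <- out$heatmaps$average_scores
p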
do_ViolinPlot Wrapper for VlnPlot.
Description
Wrapper for VlnPlot.
Usage
do_ViolinPlot(
sample,
features,
assay = NULL,
slot = NULL,
group.by = NULL,
split.by = NULL,
colors.use = NULL,
pt.size = 0,
line_width = 0.5,
y_cut = rep(NA, length(features)),
plot_boxplot = TRUE,
boxplot_width = 0.2,
legend.position = "bottom",
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
xlab = rep(NA, length(features)),
ylab = rep(NA, length(features)),
font.size = 14,
font.type = "sans",
axis.text.x.angle = 45,
plot.grid = TRUE,
grid.color = "grey75",
grid.type = "dashed",
flip = FALSE,
ncol = NULL,
share.y.lims = FALSE,
legend.title = NULL,
legend.title.position = "top",
legend.ncol = NULL,
legend.nrow = NULL,
legend.byrow = FALSE,
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
features character | Features to represent.
assay character | Assay to use. Defaults to the current assay.
slot character | Data slot to use. Only one of: counts, data, scale.data. Defaults to
"data".
group.by character | Metadata variable to group the output by. Has to be a character or
factor column.
split.by character | Secondary metadata variable to further group (split) the output by.
Has to be a character or factor column.
colors.use named_vector | Named vector of valid color representations (either names or
HEX codes) with as many named colors as unique values of group.by. If group.by
is not provided, defaults to the unique values of Idents. If not provided, a color
scale will be set by default.
pt.size numeric | Size of points in the Violin plot.
line_width numeric | Width of the lines drawn in the plot. Defaults to 0.5.
y_cut numeric | Vector with the values in which the Violins should be cut. Only works
for one feature.
plot_boxplot logical | Whether to plot a Box plot inside the violin or not.
boxplot_width numeric | Width of the boxplots. Defaults to 0.2.
legend.position
character | Position of the legend in the plot. One of:
• top: Top of the figure.
• bottom: Bottom of the figure.
• left: Left of the figure.
• right: Right of the figure.
• none: No legend is displayed.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
xlab, ylab character | Titles for the X and Y axis.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
axis.text.x.angle
numeric | Degree to rotate the X labels. One of: 0, 45, 90.
plot.grid logical | Whether to plot grid lines.
grid.color character | Color of the grid in the plot. In heatmaps, color of the border of the
cells.
grid.type character | One of the possible linetype options:
• blank.
• solid.
• dashed.
• dotted.
• dotdash.
• longdash.
• twodash.
flip logical | Whether to invert the axis of the displayed plot.
ncol numeric | Number of columns used in the arrangement of the output plot when
using the split.by parameter.
share.y.lims logical | When querying multiple features, force the Y axis of all of them
to be on the same range of values (this being the max and min of all features
combined).
legend.title character | Title for the legend.
legend.title.position
character | Position for the title of the legend. One of:
• top: Top of the legend.
• bottom: Bottom of the legend.
• left: Left of the legend.
• right: Right of the legend.
legend.ncol numeric | Number of columns in the legend.
legend.nrow numeric | Number of rows in the legend.
legend.byrow logical | Whether the legend is filled by row or not.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A ggplot2 object containing a Violin Plot.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_ViolinPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Basic violin plot.
p <- SCpubr::do_ViolinPlot(sample = sample,
features = "nCount_RNA")
p
# Remove the box plots.
p <- SCpubr::do_ViolinPlot(sample = sample,
features = "nCount_RNA",
plot_boxplot = FALSE)
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
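A hedged sketch (not part of the original manual): adding a horizontal cut line with y_cut; the cut value is an arbitrary illustration and assumes the example object defined above.
# Hedged sketch: draw a cut line at an arbitrary value of the feature.
p <- SCpubr::do_ViolinPlot(sample = sample,
                           features = "nCount_RNA",
                           y_cut = 5000)
p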
do_VolcanoPlot Compute a Volcano plot out of DE genes.
Description
Compute a Volcano plot out of DE genes.
Usage
do_VolcanoPlot(
sample,
de_genes,
pval_cutoff = 0.05,
FC_cutoff = 2,
pt.size = 2,
border.size = 1.5,
border.color = "black",
font.size = 14,
font.type = "sans",
plot.title = NULL,
plot.subtitle = NULL,
plot.caption = NULL,
plot_lines = TRUE,
line_color = "grey75",
line_size = 0.5,
add_gene_tags = TRUE,
order_tags_by = "both",
n_genes = 5,
use_labels = FALSE,
colors.use = "steelblue",
plot.title.face = "bold",
plot.subtitle.face = "plain",
plot.caption.face = "italic",
axis.title.face = "bold",
axis.text.face = "plain",
legend.title.face = "bold",
legend.text.face = "plain"
)
Arguments
sample Seurat | A Seurat object, generated by CreateSeuratObject.
de_genes tibble | Output of Seurat::FindMarkers().
pval_cutoff numeric | Cutoff for the p-value.
FC_cutoff numeric | Cutoff for the avg_log2FC.
pt.size numeric | Size of the dots.
border.size numeric | Width of the border of the cells.
border.color character | Color for the border of the heatmap body.
font.size numeric | Overall font size of the plot. All plot elements will have a size rela-
tionship with this font size.
font.type character | Base font family for the plot. One of:
• mono: Mono spaced font.
• serif: Serif font family.
• sans: Default font family.
plot.title, plot.subtitle, plot.caption
character | Title, subtitle or caption to use in the plot.
plot_lines logical | Whether to plot the division lines.
line_color character | Color for the lines.
line_size numeric | Size of the lines in the plot.
add_gene_tags logical | Whether to plot the top genes.
order_tags_by character | Either "both", "pvalue" or "logfc".
n_genes numeric | Number of top genes in each side to plot.
use_labels logical | Whether to use labels instead of text for the tags.
colors.use character | Color to generate a tetradic color scale with.
plot.title.face, plot.subtitle.face, plot.caption.face, axis.title.face, axis.text.face, legend.title.face, legend.text.face
character | Controls the style of the font for the corresponding theme element.
One of:
• plain: For normal text.
• italic: For text in italic.
• bold: For text in bold.
• bold.italic: For text both in italic and bold.
Value
A volcano plot as a ggplot2 object.
Examples
# Check Suggests.
value <- SCpubr:::check_suggests(function_name = "do_VolcanoPlot", passive = TRUE)
if (isTRUE(value)){
# Consult the full documentation in https://enblacar.github.io/SCpubr-book/
# Define your Seurat object.
sample <- readRDS(system.file("extdata/seurat_dataset_example.rds", package = "SCpubr"))
# Retrieve DE genes.
de_genes <- readRDS(system.file("extdata/de_genes_example.rds", package = "SCpubr"))
# Generate a volcano plot.
p <- SCpubr::do_VolcanoPlot(sample = sample,
de_genes = de_genes)
p
} else if (base::isFALSE(value)){
message("This function can not be used without its suggested packages.")
message("Check out which ones are needed using `SCpubr::state_dependencies()`.")
}
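A hedged sketch (not part of the original manual): tagging more genes per side and ordering the tags by p-value only, using the arguments documented above.
# Hedged sketch: top 10 genes per side, ordered by p-value.
p <- SCpubr::do_VolcanoPlot(sample = sample,
                            de_genes = de_genes,
                            n_genes = 10,
                            order_tags_by = "pvalue")
p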
human_chr_locations Chromosome arm locations for human genome GRCh38.
Description
A tibble containing the chromosome, arm and start and end coordinates.
Usage
data(human_chr_locations)
Format
A tibble with 48 rows and 4 columns:
chr Chromosome.
arm Chromosome arm.
start Start coordinates.
end End coordinates.
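A hedged sketch (not part of the original manual) of loading and inspecting the dataset:
# Hedged sketch: load the tibble and inspect its first rows.
data(human_chr_locations, package = "SCpubr")
head(human_chr_locations)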
package_report Generate a status report of SCpubr and its dependencies.
Description
This function generates a summary report of the installation status of SCpubr, which packages are
still missing and which functions can or can not currently be used.
Usage
package_report(startup = FALSE, extended = FALSE)
Arguments
startup logical | Whether the message should be displayed at startup and therefore also
contain welcoming messages and tips. If FALSE, only the report itself will be
printed.
extended logical | Whether the message should also include installed packages, current
and available version, and which SCpubr functions can be used with the cur-
rently installed packages.
Value
None
Examples
# Print a package report.
SCpubr::package_report(startup = FALSE, extended = FALSE) |
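A hedged sketch (not part of the original manual): the extended report, which additionally lists installed packages, their versions and which functions can currently be used.
# Hedged sketch: extended status report.
SCpubr::package_report(startup = FALSE, extended = TRUE)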